With GbE, one can simply overprovision the LAN/CAN network and eliminate congestion. Voilà! No customer complaints about network speed. This approach isn’t feasible, however, in the WAN. The following figure is intended simply to provide a visual appreciation of various "pipe" speeds. Notice that you can’t detect the dot representing our 56 kilobits per second or slower modem speeds.
In May 1998, BellCore (now Telcordia) provided me the following figure, top, as a means to think about the various WAN protocol-stack options and how things might evolve as the IP revolution unfolds. One uses this chart by drawing a vertical line down the page, and that will show how applications (at the top) get to the physical layer (bottom) and the protocol options in between. Note that Ethernet isn’t shown.
The next figure summarizes the transport-network-layer protocol options carriers use today. Most WAN carriers use networks that are basically ATM over SONET over glass (fiber-optics) using WDM or DWDM. With this transport network, the carrier can provision for voice traffic or video, or can carry IP. For Internet service providers who want best-effort IP packets carried as inexpensively as possible, the ATM layer serves no purpose; they prefer packet on SONET.
|Transport-network layer alternative as of May 1998.|
|The WAN "club sandwich." Today's picture: packet on SONET is more efficient than IP or ATM. POS features reduced equipment, costs, operation and maintenance.|
So the WAN "club sandwich" of protocol today is generally IP/ATM/SONET/DWDM. Each protocol represents a layer of equipment, operations and maintenance costs, and expertise that must be on board – and the layers slightly overlap functionally with resulting competition. In the IP-centric core network of the future, the "sandwich" is too thick to chew; the question is which of these layers will survive and which will slowly disappear.
This is one of the most interesting questions in networking today, so we’ll take some time to explore the options. In reality, nobody today truly knows with certainty since market forces, transition issues, technological evolution and other factors will play in the eventual answer. The solution "ain’t baked yet." But, in a nutshell, here’s the issue. Most observers believe that IP will hold as the convergence layer for all applications as described previously. Also, as mentioned earlier, DWDM is pure gold in terms of bandwidth in the physical layer and therefore is here to stay. Thus, the true question is what, if anything, will ride between IP and DWDM.
The following figure shows one option for the WAN solution called multiprotocol label switching. MPLS was actually born back when routing was slow; MPLS was going to place a fixed-length label in front of each IP packet which specified the next switch in its path – much like how ATM works. Each switch would merely examine the MPLS label, replace it with the next label from the routing table and push the relabeled packet out the right port. No processing required on any data beyond the MPLS label itself, which implies it would be done in hardware (very fast and cheap).
|The MPLS option. The IETF standard is replacing ATM protocol for traffic engineering on IP flows. The largest IP infrastructure in the world is migrating from ATM to MPLS. New ISPs with fiber are not deploying ATM in the core. So the first collapsed layer in the core Internet infrastructure will be ATM.|
But fast ASIC routing is now available, which undermines some of the motivation for MPLS. Also, various MPLS implementations are exhibiting some interoperability problems today. Nevertheless, MPLS can give IP traffic QoS properties in the Internet’s core, and MPLS is being used by some carriers today in lieu of ATM for traffic engineering on IP flows.
It’s also worth noting that the MPLS label is the same size as the ATM VCI/VPI identifier (ATM address labels), which implies existing ATM switches could be reprogrammed (software changes) to become MPLS label-switching routers until they were later replaced with cheaper, faster, true LSRs. The issue is really how widespread MPLS usage will become and whether it will persist. It was, however, being adopted rapidly by carriers for the Internet’s core as of late 2000.
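The label-swap forwarding described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation; the table entries and packet representation are invented for the example. The point is that the lookup keys only on the incoming port and label, and no bytes beyond the label are ever examined.

```python
# Hypothetical sketch of MPLS label swapping (all names illustrative).
# Each LSR keeps a table keyed on (incoming port, incoming label); the
# lookup yields the outgoing port and the label to write in its place.
# The payload rides through untouched, which is why the operation maps
# so naturally onto hardware.

label_table = {
    # (in_port, in_label): (out_port, out_label)
    (1, 100): (3, 205),
    (2, 117): (3, 205),   # two flows can merge onto one downstream label
    (1, 42):  (4, 9),
}

def forward(in_port, packet):
    """Swap the label and return (out_port, relabeled packet)."""
    label, payload = packet               # packet = (label, opaque payload)
    out_port, out_label = label_table[(in_port, label)]
    return out_port, (out_label, payload)

out_port, pkt = forward(1, (100, b"IP packet bytes"))
```

Because the swap is a single exact-match lookup rather than a longest-prefix IP route computation, it was expected to be far cheaper in silicon.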
GbE: surprise contender
In May 1999, the IEEE LAN-standards committee met for the first time to develop 10-GbE standards. Guess who attended this LAN-standards committee for the first time? WAN carriers. They’re interested in this standard because 10 gbps is very close (but not identical) to the OC-192 payload rate (9.58464 gbps) available on their WAN SONET networks.
Obviously, if the IEEE standard for 10 GbE matched OC-192 speeds, then GbE coming from an enterprise 10-GbE switch could slide directly onto the WAN carrier’s OC-192 SONET net with no buffers or speed mismatch. The list at left shows more of the logic driving consideration of GbE as a WAN technology, which it definitely wasn’t two years ago.
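The speed mismatch is simple arithmetic. The sketch below, a back-of-envelope calculation rather than anything from a standard's text, derives the figures: an OC-n line runs at n × 51.84 Mbps, and the usable STS-192c payload (after transport and path overhead plus fixed stuff) works out to 26/27 of the line rate, which is where the 9.58464-gbps figure comes from.

```python
# Back-of-envelope SONET arithmetic behind the 10-GbE discussion.

STS1_MBPS = 51.84                 # base SONET rate (OC-1/STS-1)

def oc_line_rate_mbps(n):
    """Line rate of an OC-n signal: n times the STS-1 rate."""
    return n * STS1_MBPS

oc192_line = oc_line_rate_mbps(192)      # 9953.28 Mbps on the wire
oc192_payload = oc192_line * 26 / 27     # 9584.64 Mbps usable payload
ten_gbe = 10000.0                        # nominal 10-GbE MAC rate

# A full-rate 10-GbE stream overruns the OC-192 payload by ~415 Mbps,
# hence the carriers' interest in the exact rate the IEEE would pick.
mismatch_mbps = ten_gbe - oc192_payload
```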
Ethernet is now beginning to push upward from the LAN/CAN environment, where it dominates today, to take over the metropolitan-area network and eventually become a WAN option. As GbE continues to grow in sales and volume, it’s driving down the cost of gigabit fiber-optic interfaces (GBICs) faster than any other technology.
The next-to-last bullet in the above list deserves special comment. The IEEE 802.1p standard, which defines CoS for Ethernet, specifies the "code point" and priority for each traffic type. Thus the CoS field is interoperable between various vendors’ GbE equipment and therefore between different carriers’ networks. This gives Ethernet an advantage over IETF’s differentiated-services solution, since the "code point" isn’t defined in the diffserv specification.
Also, note that industry hasn’t worked out agreements on how to relate GbE CoS markings to diffserv or to ATM QoS options. This isn’t good from the perspective of getting on with QoS features for IP traffic running over various technologies below it. The defined CoS code point in the GbE standard sidesteps these problems neatly if Ethernet is run from end-to-end.
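To make the 802.1p mechanism concrete, here is a small sketch of the 3-bit priority field carried in the 802.1Q VLAN tag. The traffic-type names follow the commonly cited IEEE user-priority classes; the packing helper is illustrative, not production code.

```python
# The 16-bit 802.1Q tag-control field carries a 3-bit 802.1p priority,
# a 1-bit CFI flag, and a 12-bit VLAN ID. The priority values below
# follow the commonly cited IEEE user-priority classes.

PRIORITY_CLASSES = {
    7: "network control",
    6: "voice",
    5: "video",
    4: "controlled load",
    3: "excellent effort",
    0: "best effort",
    2: "spare",
    1: "background",
}

def tag_control_info(priority, vlan_id, cfi=0):
    """Pack the 16-bit TCI: 3-bit priority, 1-bit CFI, 12-bit VID."""
    assert 0 <= priority <= 7 and 0 <= vlan_id <= 0xFFF
    return (priority << 13) | (cfi << 12) | vlan_id

def priority_of(tci):
    """Recover the 802.1p priority from a packed TCI."""
    return tci >> 13

tci = tag_control_info(priority=6, vlan_id=42)   # a voice frame on VLAN 42
```

Because every compliant switch extracts the same three bits the same way, the marking survives vendor and carrier boundaries, which is the interoperability point made above.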
The following figure summarizes what we’ve been discussing. Again, IP/ATM/SONET/DWDM is the stack today. It’s likely these four protocols will converge over time into fewer protocols dealing with the lower-layer physical domain as well as with network-layer issues. IP will remain the protocol layer for application convergence. It’s evolving to take on different properties necessary to carry multimedia traffic properly, but it will stick.
Next-generation networks: key attributes
In the list at left, I’ve tried to capture some key attributes of the emerging next generation of networks based upon previous information. IP, not ATM, will be the convergence layer for applications. The WAN core will be shared, packet-based and all optical. In fact, optical switching was introduced during Summer 2000, ahead of the schedule WAN carriers had projected only a year earlier.
The "last mile" problem still isn’t solved in the sense of a clear solution emerging (don’t take your eye off the wireless option). And there’s a new style of network management emerging called policy-based networking or directory-enabled networking.
There are new standards (DEN, common open-policy server) under development for this Holy Grail of network managers. In PBN the network manager enters a series of policies about network use (what traffic is top priority, next priority, how much bandwidth is allowed for hypertext-transfer protocol traffic, how much bandwidth is reserved for voice, who is an authorized user and what are his privileges at all times of network use). These policies are stored in a policy directory and deconflicted. Then they’re translated into specific instructions and distributed to all network elements (called policy-enforcement points and a policy director), where they’re automatically implemented. PBN will be necessary to reach the converged, multimedia IP network of the future.
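The policy flow just described can be sketched in miniature: policies entered once, deconflicted in the directory, then translated into per-device instructions and pushed to enforcement points. Everything here, the policy fields, device names and instruction format, is invented for illustration; real DEN/COPS systems are far richer.

```python
# Hypothetical sketch of the policy-based-networking flow: enter
# policies, deconflict them, translate to per-device instructions.
# All field names and devices are illustrative.

policies = [
    {"traffic": "voice", "priority": 1, "bandwidth_pct": 20},
    {"traffic": "http",  "priority": 3, "bandwidth_pct": 30},
    {"traffic": "bulk",  "priority": 5, "bandwidth_pct": 10},
]

def deconflict(policies):
    """Reject duplicate priorities and oversubscribed bandwidth."""
    prios = [p["priority"] for p in policies]
    assert len(prios) == len(set(prios)), "duplicate priority"
    assert sum(p["bandwidth_pct"] for p in policies) <= 100, "oversubscribed"
    return sorted(policies, key=lambda p: p["priority"])

def distribute(policies, enforcement_points):
    """Translate each policy into queue instructions per device."""
    return {
        dev: [{"queue": p["priority"], "match": p["traffic"],
               "rate_pct": p["bandwidth_pct"]} for p in policies]
        for dev in enforcement_points
    }

configs = distribute(deconflict(policies), ["router-1", "switch-7"])
```

The appeal for the network manager is exactly this shape: the intent is stated once, centrally, and the per-element configuration falls out mechanically.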
IP revolution – more impacts
|The IP revolution has already had other major impacts on industry, which are captured in the list at left. Yes, voice over IP has some quality problems which must be solved before it becomes universally accepted, but don’t believe for a minute that it isn’t already a factor.|
Why ATM will/did die
Original source: Dave Passmore, Business Communications Review
By now you’ve no doubt discerned that ATM is facing some challenges in the networking industry. Although ATM is far from dead, and the long-term evolution of the WAN protocol stack is undecided, various sources offer many reasons why they believe ATM is doomed in the long run. The list at left is a compilation of arguments I’ve collected on this point. Permit me to focus on only a few key points from the list.
ATM’s complexity compared to IP’s and Ethernet’s simplicity can’t be overemphasized for the DoD environment, where soldiers, sailors and airmen will have to keep the network operating on the battlefield. Now that Ethernet scales up to 10 gbps, ATM’s advantage is gone. In the soon-to-exist all-optical core network, individual packets/frames/cells may not be handled anyway due to limited time to process them at terabits/second speeds. Only lambdas may be switched in the core. Frames/cells/packets get processed at the edge and then routed into the appropriate lambda to traverse the network’s core. Thus scaling is a bogus argument for ATM at those speeds. In the Army, bandwidth on the battlefield is a precious commodity; squandering it on the ATM cell tax (requires a minimum of two OC-3s on an OC-48 link) seems wasteful.
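The "cell tax" is easy to quantify. Each 53-byte cell carries only 48 payload bytes, so cell headers alone consume 5/53 (about 9.4 percent) of a link, and AAL5 trailer bytes plus padding push the real overhead higher. The rough arithmetic below is my own illustration of the argument, not a figure from any standard.

```python
# Rough arithmetic behind the ATM "cell tax" argument.
import math

CELL, PAYLOAD, AAL5_TRAILER = 53, 48, 8   # bytes

def cells_for(pdu_bytes):
    """Cells needed for one AAL5-framed packet (padded to 48-byte cells)."""
    return math.ceil((pdu_bytes + AAL5_TRAILER) / PAYLOAD)

def overhead_fraction(pdu_bytes):
    """Fraction of wire bytes that are not user payload."""
    wire = cells_for(pdu_bytes) * CELL
    return 1 - pdu_bytes / wire

# A 40-byte TCP ACK occupies a whole cell: roughly 25% overhead.
ack_tax = overhead_fraction(40)

# On an OC-48 (~2488 mbps), the bare header tax is ~235 mbps; with
# AAL5 framing and padding the total loss approaches the capacity of
# two OC-3s (~311 mbps), as noted above.
oc48_mbps = 2488.32
header_tax_mbps = oc48_mbps * (CELL - PAYLOAD) / CELL
```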
Finally, nobody we’ve talked to can SAR IP packets or Ethernet frames into and out of ATM cells at speeds above OC-12 (622 mbps). At great expense, in a laboratory environment only, higher speeds can be demonstrated. But in real, operational equipment, the SAR function stops at OC-12.
The IP revolution is pushing us into a next-generation of networks. DoD/DA need to respond appropriately to take advantage of what the commercial-off-the-shelf world is producing so quickly. The list at left attempts to draw some conclusions from the material presented thus far. Most are self-explanatory.
"Free and infinite bandwidth over time" deserves comment. "Over time" could be in the range of eight to 10 years. DWDM is producing a bandwidth glut within fiber optics, and switching technology must now catch up before we see the gains. Optical switching seems to be the answer. The drive is already on today to reach ubiquitous connectivity, with every cellphone and personal digital assistant on the planet wirelessly operating off the net.
By "network will be the computer," I mean that virtually all computing will be done over the network. The web browser has emerged as the universal user interface (tremendous potential savings to DoD in training costs if we adopt this!). As described earlier, we in the Army have already adopted GbE with Layer 4/3 switching as our preferred LAN/CAN or installation solution.
This article’s core message is that convergence onto a multimedia, IP-centric, integrated network is beginning within the IT and telecommunications industry. This represents a total, seamless, IT solution for DoD C4ISR needs which provides interoperability, bandwidth-on-demand, lower costs through converged data and voice networks, and integrated functionality never before possible.
So what should we do? The list at left provides a series of specific recommendations geared to the material and conclusions presented previously. There are probably others that need to be added. But what I’d like to address now is a related but non-technical issue that must be solved, or else we’ll fail at taking advantage of what the COTS world is laying at our feet.
The problem is our current structured programmatic and acquisition processes. If we continue to take years to define operational requirements in detail, then go through multiyear processes to select a single vendor to address these detailed requirements, and then wait even more years while the chosen vendor builds the solution, we’re guaranteed to never be able to keep up with the pace of COTS evolution today. Things are moving too fast; we must find ways to be more agile and flexible in acquiring COTS technology. GbE didn’t exist two years ago, yet today it blows away competing technology because it’s faster, simpler and cheaper than anything that existed just two years ago.
Soldiers, sailors and airmen in the field know and keep up with IT. They’ll throw back at us solutions that aren’t state-of-the-art because they know better. So the true challenge is acquisition reform to allow us to be responsive to the fluid and fast-paced change we clearly see in the COTS IT world.
I suggest the answer lies in having programs similar to the Army’s CUITN program, which embraced GbE as soon as it appeared. CUITN has a visionary goal rather than concrete, detailed requirements for Army installations. The program constantly evaluates the latest equipment solutions coming from industry that might support that vision, and selects the vendor equipment that evaluates as the best choice for each individual site at that instant in time.
These types of programs don’t have concrete start-and-end dates. The idea is one of continuous improvement (sounds like the old total-quality-management story, doesn’t it?) in consonance with industry evolution. Legacy systems have to be taken into consideration as we advance, and interoperability has to be preserved, but these are solvable problems.
The IP revolution is leading us into the next generation of networks that hold the promise of actually delivering what we’ve always sought for DoD’s C4ISR needs. Embracing and leveraging these emerging solutions is critical to success for the digitized battlefield, Force XXI and the network-centric warfight we all know is coming faster than we want.
This article’s contents were first presented in mid-1999, and the presentation was given to many audiences in 1999-2000. As a result, DoD’s JTA was adapted to allow GbE solutions. The Army’s command-information officer modified policy to permit GbE solutions on Army installations.
Coincidentally, a Defense Science Board report on DoD communications released March 13, 2000, included eight recommendations which align with conclusions and recommendations presented here. Specifically, the DSB report included:
|Recommendation II – DoD vision for the global information grid. The task force recommends establishment of a DoD vision, policy and requirements for an integrated, common-user, QoS-based, DoD-wide virtual intranet. (A virtual intranet is a virtual private network, secured by cryptographic means, that operates with service guarantees over the commercial Internet.)|
|Recommendation III – standards-based GIG. The task force recommends developing policy and requirements for a commercial-standards-based, common-user, QoS-based, DoD-wide virtual intranet using IP as the convergence layer. The panel also recommends that the assistant secretary of defense for command, control, communications and intelligence implement a process to reduce JTA standards and protocols to a minimum essential set that, at its core, should be predominately commercial.|
The task force also recommends that ASD/C3I and the undersecretary of defense for acquisition, technology and logistics establish a policy and review process that requires all DoD information and communication systems to adhere to commercial IP naming and addressing conventions, and that the Joint Chiefs of Staff establish the requirement that all DoD communication systems be able to interpret and route IP datagrams.
Dr. Gentry is senior technical director and chief engineer at Army Signal Command, Fort Huachuca, Ariz.
Army Communicator is part of Regimental Division, a division of Office Chief of Signal.