Next-generation networks and the Defense Department's command, control, communications, computers, intelligence, surveillance and reconnaissance abilities

by Michael Gentry

For the past several years, the strategic direction for telecommunications in the commercial sector as well as in the Defense Department and Department of the Army could be summarized by a single acronym: ATM (asynchronous-transfer mode). The Army's common-user installation-transport network (CUITN) program has been fielding an ATM backbone to our installations (essentially campuses) since the early 1990s, with Ethernet to the desktop. The Defense Information Systems Agency has also been fielding a wide-area network based on ATM and the synchronous optical network for several years.

We were right to pursue ATM. Until recently, if you wanted a backbone solution for your campus-area network exceeding 100 megabits per second, there was really only one technology that scaled to higher speeds: ATM. Moreover, ATM offered the quality-of-service features necessary to converge multiple traffic types (voice, video and data) at the ATM layer, as well as scalability.

Today we're at the beginning of a revolution in telecommunications that revolves around the Internet. In particular, Internet protocols are beginning to dominate telecommunications solutions. We're at a special point in information-technology history where market forces and technology factors are combining to fashion a future that's both clear and undeniable. This change has many implications for DoD's command, control, communications, computers, intelligence, surveillance and reconnaissance solutions. The potential for a single, integrated, multimedia, bandwidth-on-demand, global network is truly upon us.

This article outlines the forces driving this "New World" and where it will lead us. The figure below will show you at a glance why we’re experiencing the Internet-protocol revolution. Today we’re just past that very special place in the history of electronic communications where the volume of data traffic now exceeds the volume of voice traffic in the world’s networks. The IP revolution will carry us from this Old World to a New World of telecommunications.

Figure: The IP revolution – networks in transition (Old World vs. New World of networks).

The Holy Grail of networking is, and always has been, to build a single, integrated (converged) voice, video and data (multimedia) network. The 1980s solution was the integrated-services digital network (often referred to as "I smell dollars now" in those days). In the 1990s, industry moved on to another solution for the converged multimedia network: ATM.

Today, for reasons we'll discuss shortly, the IT industry is shifting to a new solution that's embodied in the expression "everything over IP and IP over everything." We use "everything over IP" to connote the fact that every application from voice to video to data can be carried over an IP network like the Internet, and "IP over everything" to connote that IP itself can be carried over virtually any underlying transport.

So this is the story of why the IP revolution is happening and how our networks will transition to the next generation of networks.

C4ISR requirements

At the conceptual or notional level, DoD’s C4ISR requirements are essentially identical to the requirements industry is building the Internet to support (see following figure). What DoD needs is a single, integrated network that will allow any authorized user to securely access any service or data from any location at any point in time. That’s precisely what commercial industry is trying to deliver with the Internet.

Figure: General description of C4ISR requirements – any authorized user should be able to securely access any service from any location at any time.

Are we there yet? No, but we’re advancing rapidly on the target. In this article, I’ll outline that direction as gleaned from dozens of executive-briefing conferences with industry and much independent reading and research.

The bottom line is that the commercial sector is building, in the Internet, the very solution DoD needs for its C4ISR requirements. Certainly DoD has special requirements (for example, security) that can't yet be fully satisfied by Internet technologies, but remember that "better is the enemy of good enough." My thesis is that DoD should leverage the enormous advancements and economies the Internet offers by going with the flow – not chasing the last 10 percent of our requirements so hard that we fail to satisfy the mainline 90 percent that can be easily, quickly, simply and inexpensively satisfied by the Internet revolution.

Disruptive technologies and trends

Disruptive technologies

1. Semiconductor technology
Moore's Law fuels price/performance improvements
Packet switching, now built on microprocessor technology, is replacing circuit switching
Voice and video also become packetized data
"Systems on a chip" network elements
The existing public switched telephone network wasn't designed for data traffic (IP packets)
IP networks have a cost advantage
IP transcends traditional networking boundaries: data vs. telephone, carrier vs. enterprise

2. Optical-transmission breakthrough technology (WDM)
Capacity of a single fiber doubles every 12 (or fewer) months
More lambdas per fiber and more bits per second per lambda
Today, 6.4 terabits per second in a single fiber
New carriers built on DWDM and IP technologies
DWDM + IP = win

3. Wireless capacity
Doubles every nine months in a given volume of air
Intelligent antennas, advanced signal processing and receivers
Wireless loops become the preferred choice for network access: narrowband access in developing countries and broadband access in developed countries
Driven by higher capacity and lower costs, including lower labor and initial costs
Result: by 2005, networks with 100 to 250 times the capacity of today's networks at the same cost

Next I'll discuss three technological trends and three market trends that, combined, are key drivers of the IP revolution and clearly point toward the next generation of networks. These are technologies and market trends that disrupt past solutions.

The first disruptive technology (see list at left) is the well-known "Moore's Law," which describes the relentless advancement of semiconductor technology and drives price/performance improvements of roughly double the speed for the same price every 18 months. This "law" has been around for many years. (Haven't we all bought five or six personal computers trying to keep up?) So what's new here? Simple – semiconductor advancements are now being applied to communications products, changing the price/performance ratios of different networking technologies.

The next disruptive technology is literally a breakthrough technology called wavelength-division multiplexing and now dense WDM. The simple idea here is that instead of sending a single optical signal down an optical fiber, we now send multiple frequencies or lambdas ("colors") down each fiber and ramp up the bandwidth of each lambda. As a result, the capacity of a single optical fiber is doubling every 12 months or less – even faster than semiconductor technology (see following figure).

Some companies view the combination of IP with DWDM as a winning hand. DWDM promises to ultimately deliver virtually unlimited bandwidth in the carrier networks. Today it's literally the case that faster DWDM technology is sitting on vendors' shelves, waiting for demand to arrive that will justify deployment.

To complete the disruptive-technology list, wireless capacity is doubling about every nine months in a given volume of air. Intelligent antennas, advanced signal processors and adaptive receivers are the biggest drivers of these advancements. Wireless is becoming the preferred access technology for narrowband use in developing countries, which lack wired infrastructure to start with, and increasingly for broadband access in developed countries.

Figure: Disruptive technologies and trends impacting industry – optical-transmission trends.
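
It's worth pausing to compound those three doubling periods – 18 months for semiconductors, 12 months for fiber capacity and nine months for wireless. The minimal sketch below uses the doubling periods from the lists above; the five-year horizon is just an illustrative assumption.

```python
# Compound growth implied by the doubling periods cited above:
# semiconductors ~18 months, fiber capacity (DWDM) ~12 months, wireless ~9 months.
# The five-year horizon is an illustrative assumption, not a figure from the article.

doubling_months = {
    "semiconductor": 18,
    "optical fiber (DWDM)": 12,
    "wireless": 9,
}
horizon_months = 60  # five years

for tech, period in doubling_months.items():
    growth = 2 ** (horizon_months / period)
    print(f"{tech:>22}: ~{growth:,.0f}x capacity in {horizon_months // 12} years")

# Roughly: semiconductor ~10x, optical fiber ~32x, wireless ~102x over five years,
# which lines up with the sidebar's "100 to 250 times the capacity by 2005."
```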

Trends impacting industry

1. Worldwide web

Web and Netscape Navigator have changed everything
The 80/20 local/remote traffic rule is now 2/98
IP is the only data protocol that matters anymore
E-commerce is imperative for a successful enterprise
Innovation is centered on IP => new applications and content
One prediction: by 2003, Internet traffic will consume more than 90 percent of the world's bandwidth
The client = web browser; the server = website

2. GbE and switching routers (Layer 4/3 switches)

Change campus network design
Routing went ASIC
Ethernet got: GbE speed; QoS at Layer 4 and VLANs; trunking; switching routers running 10-gbps line rates deployed this year

Result: Ethernet holds 98 percent of the LAN market as of 1999; the backbone campus/LAN solution = GbE and Layer 3/4 switches

3. Converged networks
Drive for converged (voice, data and video) networks that are IP-centric
A converged network eliminates the operational overhead of dual networks – more competitive in a deregulated marketplace
Packet networks "fill in the blanks" and thus carry about three times the traffic of a circuit-switched network in the same bandwidth (better bandwidth usage; see the sketch following this list)
Lower costs for packet-network components
Packets route themselves; no costly point-to-point redundancy ("clouds" vs. "strings")
Industry focus and momentum are now behind IP and convergence

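The sidebar's claim that packet networks "fill in the blanks" and carry roughly three times the traffic of a circuit-switched network deserves a quick illustration. The sketch below is a back-of-the-envelope model only; the one-third voice-activity factor is an illustrative assumption chosen to match that rough 3-to-1 figure, not a measurement.

```python
# "Fill in the blanks": a circuit-switched trunk reserves a full channel per call
# even though a talker is silent much of the time, while a packet network lets
# other traffic use those idle gaps. The 1/3 voice-activity factor below is an
# illustrative assumption chosen to match the article's rough 3-to-1 claim.

TRUNK_CHANNELS = 24     # e.g., a T1-sized voice trunk: 24 channels
VOICE_ACTIVITY = 1 / 3  # assumed fraction of time a talker is actually talking

# Circuit switching: 24 calls, but only about a third of the reserved
# capacity carries actual speech at any moment.
useful_circuit_load = TRUNK_CHANNELS * VOICE_ACTIVITY

# Packet switching: the same trunk can be filled with packets (speech bursts
# plus other traffic), so nearly all of its capacity carries useful traffic.
useful_packet_load = TRUNK_CHANNELS * 1.0

print(f"useful load, circuit-switched: {useful_circuit_load:.0f} channel-equivalents")
print(f"useful load, packet-switched:  {useful_packet_load:.0f} channel-equivalents")
print(f"advantage: ~{useful_packet_load / useful_circuit_load:.0f}x")
```
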
Now we shift to disruptive trends affecting our industry (see list at left). First is the worldwide web's development and growth using Mosaic, later Netscape Navigator, and now Internet Explorer and other web browsers. The web is changing our traffic flow from predominantly horizontal on the campus or installation to predominantly vertical, with attendant design ramifications. Here's the engine driving IP traffic's growth, and here's why voice traffic will represent less than 10 percent of the world's bandwidth consumption in a very few years. Client-server computing has now been essentially redefined as a web browser for the client and a website for the server (see top following figure).

Next, in the last two years alone, gigabit Ethernet and switching routers (Layer 4/3 switches) have emerged and redefined the local-area network/CAN solution. Routing, which was slow and expensive and therefore avoided until recently, is now done with application-specific integrated circuits at wire speed (see second following figure). Ethernet has been revived by new Institute of Electrical and Electronics Engineers specifications, which gave it 1,000-mbps speed (1 gigabit per second); class-of-service features; and trunking, or link aggregation, which allows one to bundle multiple GbE links into higher-speed paths.
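
Trunking deserves a brief illustration, since it's what turns several GbE links into one higher-speed path. A common approach is to hash each conversation's addresses to pick a member link, so packets of one flow stay in order on one link while different flows spread across the bundle. A minimal sketch of the idea follows; the hash function and addresses are illustrative, not any particular vendor's implementation.

```python
from zlib import crc32

def pick_member_link(src_ip: str, dst_ip: str, num_links: int) -> int:
    """Hash a flow's endpoints to choose one physical link in the bundle.

    Keeping the choice per-flow rather than per-packet avoids reordering
    packets within a conversation; the CRC32 hash here is illustrative only.
    """
    return crc32(f"{src_ip}->{dst_ip}".encode()) % num_links

# Example: three GbE links trunked into one roughly 3-gbps logical path.
NUM_LINKS = 3
flows = [
    ("10.1.1.5", "10.2.2.9"),
    ("10.1.1.6", "10.2.2.9"),
    ("10.1.1.7", "10.2.3.1"),
    ("10.1.1.5", "10.2.3.1"),
]
for src, dst in flows:
    print(f"{src} -> {dst} travels on member link {pick_member_link(src, dst, NUM_LINKS)}")
```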

The mantra of today’s GbE firms is "wire speed and non-blocking or else it’s broken!" Army testing has verified that indeed GbE switches are that fast. As a consequence of these developments, GbE has now replaced ATM as the preferred backbone solution for the CAN or installation network. Here’s why: GbE is faster, cheaper and simpler than ATM. That’s a winning combination in today’s competitive marketplace.
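
"Wire speed" is a demanding claim at gigabit rates. Using standard Ethernet framing figures, a wire-speed gigabit port has to make a forwarding decision roughly every 672 nanoseconds, which is exactly why ASIC-based switching matters. A quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope: forwarding rate needed for wire-speed gigabit Ethernet.
# Standard Ethernet framing figures: 64-byte minimum frame, 8-byte preamble/
# start-of-frame delimiter, 12-byte interframe gap.

LINE_RATE_BPS = 1_000_000_000            # 1 gbps
WIRE_BYTES_PER_MIN_FRAME = 64 + 8 + 12   # frame + preamble/SFD + interframe gap
bits_per_frame = WIRE_BYTES_PER_MIN_FRAME * 8   # 672 bits on the wire

frames_per_second = LINE_RATE_BPS / bits_per_frame
print(f"~{frames_per_second:,.0f} minimum-size frames per second per gigabit port")
print(f"i.e., one forwarding decision roughly every {1e9 / frames_per_second:.0f} ns")
# ~1,488,095 frames/second -- a decision about every 672 nanoseconds
```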

The last disruptive trend is the commercial sector's drive toward convergence around IP-centric networking. The Holy Grail is once again in sight. Check out the ATM Forum website and compare it with the Internet Engineering Task Force site, and you'll see that industry momentum and focus are clearly now on IP and convergence vice ATM.

Let me point out that there are two major technical challenges to the IP-centric next-generation network: QoS and transitioning to the New World. Although these challenges are clearly not yet fully solved, industry momentum is certainly pushing towards their resolution at full speed.

Figure: Trends impacting industry – the worldwide web.
Figure: Trends impacting industry – gigabit Ethernet.

Time for a brief summary before we plow ahead. For years, we believed that converged multimedia networking would be done using ATM technology. As I said earlier, we were right to pursue ATM. Until ASIC routing and GbE switches came along, nothing could match ATM's speed because ATM switched in hardware while routing was done in software. But the Internet, WWW and technology have changed things, and now convergence of multimedia applications can be done at the IP layer (Layer 3) vs. the ATM layer (Layer 2).

Remember that GbE with Layer 4/3 switching can be done at "wire speed and non-blocking or else it’s broken." Moreover, the marketplace and industry are going with that solution. The following figure lays out this "everything over IP and IP over everything" concept using the classical OSI seven-layer model of communications protocols.

Figure: The classical OSI seven-layer model of communications protocols.
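
The figure's layering can also be read as simple nesting: voice, video and data ride above IP, and IP rides over whatever link technology is handy. A minimal sketch of that encapsulation idea follows; the application and link names are illustrative placeholders only.

```python
# "Everything over IP and IP over everything" expressed as nested encapsulation.
# The application and link-layer names below are illustrative placeholders.

def encapsulate(application: str, payload: str, link: str) -> dict:
    """Wrap an application payload in an IP packet, then in some link layer."""
    ip_packet = {"layer": "IP (Layer 3)", "application": application, "payload": payload}
    return {"layer": f"{link} (Layers 2/1)", "carries": ip_packet}

examples = [
    ("voice call", "packetized speech samples", "Ethernet"),
    ("video teleconference", "compressed video frames", "SONET"),
    ("web transaction", "HTTP request", "DWDM lambda"),
]
for app, payload, link in examples:
    print(encapsulate(app, payload, link))
```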

In the next sections of this article, I'll discuss in more detail why Ethernet has displaced ATM for the CAN backbone solution and what issues surround the WAN protocol evolution from here on out. The principal point to bear in mind, however, is that we're now simply looking for efficient ways to carry the mushrooming volume of IP traffic, which represents everything from voice to data with video mixed in.

The GbE case

"For $12,000 I’ll give you a GbE solution so fast that you can’t buy a picture of an ATM solution at any price that will match its speed!"

This was the astounding statement we heard in January 1999 while visiting a firm that specialized in GbE gear. At that point, we were spending millions of dollars on ATM solutions for our installations. So I pried into the basis of this statement. The following figure is the result.

Figure: Simplified comparison of GbE vs. ATM on the campus/LAN. GbE/IP switching dominates the campus/LAN today.

First, this isn't a total system-cost comparison, because it makes a simplifying assumption: whether you go GbE or ATM, you start with a chassis and dual power supplies, and those are a push in terms of cost. Technology-cost differences arise when you begin to populate the chassis with interface cards to actually move data.

To explain that firm's assertion: in 1999 a GbE interface card ran about $2,000 a pop, and you'd need six of them (three per switch) to build a three-gbps link between two nodes or switches. Total cost = $12,000.

You trunk the three links to get a throughput of about three gbps of pure Ethernet traffic. In the ATM case, OC-48 (2.4 gbps) is the closest match; OC-48 interface cards ran about $84,000 each. You only needed two, so total cost = $168,000.

Actual throughput would be significantly lower than the nominal 2.4 gbps, since the traffic to be carried is IP in Ethernet frames and ATM requires those frames to be segmented into ATM cells, then reassembled upon leaving the ATM "cloud." Segmentation and reassembly drops throughput to something typically around 1.9 gbps, which doesn't match the three gbps on the GbE side.
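
The arithmetic behind the comparison is simple enough to lay out directly. The sketch below uses the January 1999 card prices, card counts and the roughly 1.9-gbps effective ATM throughput cited above; the 48-of-53-byte ATM cell format is standard and is one contributor to that throughput loss.

```python
# GbE vs. ATM backbone link, using the January 1999 figures cited in the text.

# GbE side: six interface cards (three per switch) trunked into a ~3-gbps path.
gbe_cards, gbe_card_cost = 6, 2_000
gbe_cost = gbe_cards * gbe_card_cost        # $12,000
gbe_gbps = 3.0

# ATM side: two OC-48 cards; nominal 2.4 gbps, ~1.9 gbps effective after
# segmentation and reassembly of the Ethernet/IP frames into cells.
atm_cards, atm_card_cost = 2, 84_000
atm_cost = atm_cards * atm_card_cost        # $168,000
atm_effective_gbps = 1.9

# One contributor to that gap: each 53-byte ATM cell carries only 48 payload bytes.
cell_payload_efficiency = 48 / 53           # ~90.6% before AAL5 overhead and padding

print(f"GbE: ${gbe_cost:,} for ~{gbe_gbps} gbps "
      f"(${gbe_cost / gbe_gbps:,.0f} per gbps)")
print(f"ATM: ${atm_cost:,} for ~{atm_effective_gbps} gbps effective "
      f"(${atm_cost / atm_effective_gbps:,.0f} per gbps)")
print(f"ATM cell payload efficiency: {cell_payload_efficiency:.1%}")
```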

Information Systems Engineering Command’s Technology Integration Center testing

October 1998-present
Clearly demonstrated ease of installation, setup and use – GbE is user-friendly
Showed GbE superior to ATM in IP multicast
Troubleshooting easier
Protocols less complex – points to easier interoperability
Applications run better; higher throughput

The next size up in ATM gear is OC-192 (roughly 10 gbps). For the campus network, nobody made OC-192 ATM equipment, so you couldn't buy it. Thus you couldn't buy a picture of ATM gear to match the $12,000 GbE solution.

Prices of these interface cards have dropped since January 1999, but GbE interface cards are still comparatively cheap.

Subsequent testing by the Army (see list at left) verified that GbE is indeed as fast as advertised, cheap and much simpler to design, install, make operational and maintain. Faster, cheaper and simpler. One almost never finds this combination in new technology. Consequently, we began to institutionalize the change from an ATM backbone to a GbE backbone for our installations.

The following figure, top, shows our first installation (Fort Carson, Colo.) designed with GbE/IP switching. Total cost for switches was $2 million. This design provides a backbone that supports 10 mbps per desktop on the installation.

The next figure shows the same installation designed with our old ATM solution. Switching costs = $3.8 million, yet this design only supports about 1.25 mbps per desktop. To match the 10 mbps per desktop of the GbE design, ATM switching costs would rise to about $25 million. The GbE solution is dramatically cheaper (and faster and simpler too).

Figure: Fort Carson GbE/IP design with switched 10-mbps, 100-mbps and 1,000-mbps service.
Figure: Fort Carson ATM design for the same network.
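
Normalizing those Fort Carson figures to per-desktop bandwidth makes the gap even plainer. A quick sketch using only the switch costs and per-desktop speeds quoted above:

```python
# Fort Carson backbone switching costs, normalized to per-desktop bandwidth,
# using the figures quoted in the text (same installation, same desktop count).

gbe_cost, gbe_mbps_per_desktop = 2_000_000, 10      # GbE/IP design
atm_cost, atm_mbps_per_desktop = 3_800_000, 1.25    # ATM design as priced
atm_cost_to_match = 25_000_000                      # ATM scaled to ~10 mbps/desktop

print(f"GbE: ${gbe_cost / gbe_mbps_per_desktop:,.0f} in switching per mbps of desktop bandwidth")
print(f"ATM: ${atm_cost / atm_mbps_per_desktop:,.0f} in switching per mbps of desktop bandwidth")
print(f"ATM at 10 mbps/desktop would cost roughly {atm_cost_to_match / gbe_cost:.1f}x the GbE design")
```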

Aided by our push, DoD has since adjusted the Joint Technical Architecture to accommodate GbE as a standard solution, and the Army has adjusted the CUITN solution to use GbE rather than ATM in the backbone, with impressive results so far.

The other LAN/CAN technologies, Token Ring and the fiber-distributed data interface, are also dead in terms of growth (too slow, and they can't scale). Although one can find isolated examples where a user chooses ATM for his CAN backbone today, basically the game is over: Ethernet won the LAN/CAN networking battle as of late 2000. Today, with GbE installed, the network is no longer the bottleneck in the installation IT solution; the servers can't keep up with GbE network speeds. For once, processing is the bottleneck and not the network. This may not last long, but it's definitely different.
