WAN Technologies

By JoAnne Woodcock

Chapter 8 from Step Up to Networking, published by Microsoft Press

WANs are all about exchanging information across wide geographic areas. They are also, as you can probably gather from reading about the Internet, about scalability—the ability to grow to accommodate the number of users on the network, as well as to accommodate the demands those users place on network facilities. Although the nature of a WAN—a network reliant on communications for covering sometimes vast distances—generally dictates slower throughput, longer delays, and a greater number of errors than typically occur on a LAN, a WAN is also the fastest, most effective means of transferring computer-based information currently available.


The Way of a WAN

To at least some extent, WANs are defined by their methods of transmitting data packets. True, the means of communication must be in place. True, too, the networks making up the WAN must be up and running. And the administrators of the network must be able to monitor traffic, plan for growth, and alleviate bottlenecks. But in the end, part of what makes a WAN a WAN is its ability to ship packets of data from one place to another, over whatever infrastructure is in place. It is up to the WAN to move those packets quickly and without error, delivering them and the data they contain in exactly the same condition they left the sender, even if they must pass through numerous intervening networks to reach their destination.

Picture, for a moment, a large network with many subnetworks, each of which has many individual users. To the users, this large network is (or should be) transparent—so smoothly functioning that it is invisible. After all, they neither know nor care whether the information they need is on server A or server B, whether the person with whom they want to communicate is in city X or city Y, or whether the underlying network runs this protocol or that one. They know only that they want the network to work, and that they want their information needs satisfied accurately, efficiently, and as quickly as possible.

Now picture the same situation from the network's point of view. It "sees" hundreds, thousands, and possibly even tens of thousands of network computers or terminals and myriad servers of all kinds—print, file, mail, and even servers offering Internet access—not to mention different types of computers, gateways, routers, and communications devices. In theory, any one of these devices could communicate with, or transmit information through, any other device. Any PC, for instance, could decide to access any of the servers on the network, no matter whether that server is in the same building or in an office in another country. To complicate matters even more, two PCs might try to access the same server, and even the same resource, at the same time. And of course, the chance that only one node anywhere on the network is active at any given time is minuscule, even in the coldest, darkest hours of the night.

So, in both theory and practice, this widespread network ends up interconnecting thousands or hundreds of thousands of individual network "dots," connecting them temporarily but on demand. How can it go about the business of shuffling data ranging from quick e-mails to large (in terms of bytes) documents and even larger graphic images, sound files, and so on, when the possible interconnections between and among nodes would make a bowl of spaghetti look well organized by comparison? The solution is in the routing, which involves several different switching technologies.

Switching of any type involves moving something through a series of intermediate steps, or segments, rather than moving it directly from start point to end point. Trains, for example, can be switched from track to track, rather than run on a single, uninterrupted piece of track, and still reach their intended destination. Switching in networks works in somewhat the same way: Instead of relying on a permanent connection between source and destination, network switching relies on series of temporary connections that relay messages from station to station. Switching serves the same purpose as the direct connection, but it uses transmission resources more efficiently.
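
The relay idea above can be sketched in a few lines of Python. This is a toy model, not a real protocol: the place names and the next-hop table are invented, and each dictionary lookup stands in for one temporary station-to-station connection.

```python
# A toy sketch of the switching idea: rather than a single, uninterrupted
# line from start point to end point, a transmission is relayed across a
# series of temporary station-to-station segments. All names are invented.

NEXT_HOP = {"Seattle": "Denver", "Denver": "Chicago", "Chicago": "Boston"}

def switch(start, end):
    """Relay from `start` to `end`, one segment at a time."""
    path = [start]
    while path[-1] != end:
        path.append(NEXT_HOP[path[-1]])  # each hop is a temporary connection
    return path

path = switch("Seattle", "Boston")
```

The transmission still reaches its destination, but no single end-to-end line ever had to be reserved for it.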

WANs (and LANs, including Ethernet and Token Ring) rely primarily on packet switching, but they also make use of circuit switching, message switching, and the relatively recent, high-speed packet-switching technology known as cell relay.

Circuit Switching

Circuit switching involves creating a direct physical connection between sender and receiver, a connection that lasts as long as the two parties need to communicate. In order for this to happen, of course, the connection must be set up before any communication can occur. Once the connection is made, however, the sender and receiver can count on "owning" the bandwidth allotted to them for as long as they remain connected.

Although both the sender and receiver must abide by the same data transfer speed, circuit switching does allow for a fixed (and rapid) rate of transmission. The primary drawback to circuit switching is the fact that any unused bandwidth remains exactly that: unused. Because the connection is reserved only for the two communicating parties, that unused bandwidth cannot be "borrowed" for any other transmission.

The most common form of circuit switching happens in that most familiar of networks, the telephone system, but circuit switching is used in some data networks as well. Currently available ISDN lines, also known as narrowband ISDN, and the form of T1 known as switched T1 are both examples of circuit-switched communications technologies.

Message Switching

Unlike circuit switching, message switching does not involve a direct physical connection between sender and receiver. When a network relies on message switching, the sender can fire off a transmission—after addressing it appropriately—whenever it wants. That message is then routed through intermediate stations or, possibly, to a central network computer. Along the way, each intermediary accepts the entire message, scrutinizes the address, and then forwards the message to the next party, which can be another intermediary or the destination node.

What's especially notable about message-switching networks, and indeed happens to be one of their defining features, is that the intermediaries aren't required to forward messages immediately. Instead, they can hold messages before sending them on to their next destination. This is one of the advantages of message switching. Because the intermediate stations can wait for an opportunity to transmit, the network can avoid, or at least reduce, heavy traffic periods, and it has some control over the efficient use of communication lines.
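
The store-and-forward behavior just described can be sketched as follows. This is a hypothetical model (class and field names are invented): an intermediary accepts entire messages, holds them in a queue, and forwards them only when the outgoing line is free.

```python
from collections import deque

class Intermediary:
    """A message-switching station that stores messages before forwarding."""
    def __init__(self, name):
        self.name = name
        self.held = deque()  # complete messages waiting for a free line

    def accept(self, message):
        # The entire message is accepted and stored, not relayed piecemeal.
        self.held.append(message)

    def forward(self, next_hop):
        # When the line becomes available, forward held messages in order.
        while self.held:
            next_hop.accept(self.held.popleft())

class Destination:
    def __init__(self):
        self.received = []
    def accept(self, message):
        self.received.append(message)

hub = Intermediary("hub")
dest = Destination()
hub.accept(("to:dest", "first e-mail"))
hub.accept(("to:dest", "second e-mail"))
# The line was busy, so both messages waited at the hub. Now it frees up:
hub.forward(dest)
```

Because the hub can delay forwarding, it can wait out a period of heavy traffic instead of adding to it.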


Packet Switching

Packet switching, although it is also involved in routing data within and between LANs such as Ethernet and Token Ring, is the backbone of WAN routing. It's not the highway on which the data packets travel, but it is the dispatching system and, to some extent, the cargo containers that carry the data from place to place. In a sense, packet switching is the Federal Express or United Parcel Service of a WAN.

In packet switching, all transmissions are broken into units called packets, each of which contains addressing information that identifies both the source and destination nodes. These packets are then routed through various intermediaries, known as Packet Switching Exchanges (PSEs), until they reach their destination. At each stop along the way, the intermediary inspects the packet's destination address, consults a routing table, and forwards the packet at the highest possible speed to the next link in the chain leading to the recipient.

As they travel from link to link, packets are often carried on what are known as virtual circuits—temporary allocations of bandwidth over which the sending and receiving stations communicate after agreeing on certain "ground rules," including packet size, flow control, and error control. Thus, unlike circuit switching, packet switching typically does not tie up a line indefinitely for the benefit of sender and receiver. Transmissions require only the bandwidth needed for forwarding any given packet, and because packet switching is also based on multiplexing messages, many transmissions can be interleaved on the same networking medium at the same time.
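
The two steps just described—breaking a transmission into addressed packets, and forwarding each one by consulting a routing table—can be sketched like this. The topology, host names, and packet fields are all invented for illustration, and the packet size is artificially tiny so the example stays readable.

```python
PACKET_SIZE = 4  # real packet sizes are far larger; tiny here for clarity

def packetize(source, destination, data):
    """Split `data` into packets that each carry source and destination."""
    return [
        {"src": source, "dst": destination, "seq": i // PACKET_SIZE,
         "payload": data[i:i + PACKET_SIZE]}
        for i in range(0, len(data), PACKET_SIZE)
    ]

# Each Packet Switching Exchange's routing table: destination -> next hop.
ROUTING = {
    "PSE-1": {"host-B": "PSE-2"},
    "PSE-2": {"host-B": "host-B"},
}

def route(packet, start):
    """Forward one packet hop by hop until it reaches its destination."""
    hops, node = [start], start
    while node != packet["dst"]:
        node = ROUTING[node][packet["dst"]]  # inspect address, consult table
        hops.append(node)
    return hops

packets = packetize("host-A", "host-B", "HELLO WAN")
paths = [route(p, "PSE-1") for p in packets]
```

Each packet is dispatched independently; no line is tied up for the whole conversation.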

Connectionless and Connection-Oriented Services

So packet-switched networks transfer data over variable routes in little bundles called packets. But how do these networks actually make the connection between the sender and the recipient? The sender can't just assume that a transmitted packet will eventually find its way to the correct destination. There has to be some kind of connection—some kind of link between the sender and the recipient. That link can be based on either connectionless or connection-oriented services, depending on the type of packet-switching network involved.

  • In a (so to speak) connectionless "connection," an actual communications link isn't established between sender and recipient before packets can be transmitted. Each transmitted packet is considered an independent unit, unrelated to any other. As a result, the packets making up a complete message can be routed over different paths to reach their destination.

  • In a connection-oriented service, the communications link is made before any packets are transmitted. Because the link is established before transmission begins, the packets comprising a message all follow the same route to their destination. In establishing the link between sender and recipient, a connection-oriented service can make use of either switched virtual circuits (SVCs) or permanent virtual circuits (PVCs):

    • Using a switched virtual circuit is comparable to calling someone on the telephone. The caller connects to the called computer, they exchange information, and then they terminate the connection.

    • Using a permanent virtual circuit, on the other hand, is more like relying on a leased line. The line remains available for use at all times, even when no transmissions are passing through it.
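
The contrast between the two kinds of service can be sketched in a toy model. The two-route topology is invented; the point is only that a connectionless service may route each packet over either path, while a connection-oriented service picks the route once, up front.

```python
import random

ROUTES = [["A", "X", "B"], ["A", "Y", "B"]]  # two possible paths from A to B

def send_connectionless(packets, rng):
    # Each packet is an independent unit: the network may route it either way.
    return [rng.choice(ROUTES) for _ in packets]

def send_connection_oriented(packets, rng):
    # The route is established before transmission; every packet follows it.
    circuit = rng.choice(ROUTES)
    return [circuit for _ in packets]

rng = random.Random(0)
datagram_paths = send_connectionless(range(5), rng)
circuit_paths = send_connection_oriented(range(5), rng)
```

All five connection-oriented packets share one route; the connectionless packets are free to scatter across both.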

Types of Packet-Switching Networks

As you've seen, packet-based data transfer is what defines a packet-switching network. But—to confuse the issue a bit—referring to a packet-switching network is a little like referring to tail-wagging canines as dogs. Sure, they're dogs. But any given dog can also be a collie or a German shepherd or a poodle. Similarly, a packet-switching network might be, for example, an X.25 network, a frame relay network, an ATM (Asynchronous Transfer Mode) network, an SMDS (Switched Multimegabit Data Service), and so on.

X.25 packet-switching networks

Originating in the 1970s, X.25 is a connection-oriented, packet-switching protocol, originally based on the use of ordinary analog telephone lines, that has remained a standard in networking for about twenty years. Computers on an X.25 network carry on full-duplex communication, which begins when one computer contacts the other and the called computer responds by accepting the call.

Although X.25 is a packet-switching protocol, its concern is not with the way packets are routed from switch to switch between networks, but with defining the means by which sending and receiving computers (known as DTEs) interface with the communications devices (DCEs) through which the transmissions actually flow. X.25 has no control over the actual path taken by the packets making up any particular transmission, and as a result the packets exchanged between X.25 networks are often shown as entering a cloud at the beginning of the route and exiting the cloud at the end.


A recommendation of the ITU (formerly the CCITT), X.25 relates to the lowest three network layers—physical, data link, and network—in the ISO reference model:

  • At the lowest (physical) layer, X.25 specifies the means—electrical, mechanical, and so on—by which communication takes place over the physical media. At this level, X.25 covers standards such as RS-232, the ITU's V.24 specification for international connections, and the ITU's V.35 recommendation for high-speed modem signaling over multiple telephone circuits.

  • At the next (data link) level, X.25 covers the link access protocol, known as LAPB (Link Access Protocol, Balanced), that defines how packets are framed. LAPB ensures that two communicating devices can establish an error-free connection.

  • At the highest level (in terms of X.25), the network layer, the X.25 protocol covers packet formats and the routing and multiplexing of transmissions between the communicating devices.

On an X.25 network, transmissions are typically broken into 128-byte packets. They can, however, be as small as 64 bytes or as large as 4096 bytes.

DTEs and DCEs As already mentioned, the sending and receiving computers on an X.25 network are not known as computers, hosts, gateways, or nodes. They are DTEs. In X.25 parlance, DTEs are devices that pass packets to DCEs for forwarding through the links that make up a WAN. DTEs thus sit at the two ends of a network connection; in contrast, DCEs sit at the two ends of a communications circuit, as shown in the following illustration.


PADs So far so good. But since packets are as important to a packet-switching network as atoms are to matter, what about the devices that create and reassemble the packets themselves? In some cases, such as an X.25 gateway computer (the DTE) that sits between a LAN and the WAN, the gateway takes care of packetizing. In other cases, as with an ordinary PC (another type of DTE), the job is handled by a device known as a packet assembler and disassembler, or PAD. In this case, the PAD sits between the computer and the network, packetizing data before transmission and, when all packets have been received, reconstituting the original message by putting the packets back together in the correct order.

Is this work difficult? Well, to a human it might be, because packets are sent along the best possible route available at the time they are forwarded. Thus, it's quite possible for the packets representing a single message to travel over different links and to arrive at their destination out of order. Considering the amount of traffic flowing over a WAN, and considering the possible number of transmitting and receiving nodes, it would seem that the job of reconstructing any given message represents a Herculean task. Well, to people, it probably does. To a PAD, it does not. Putting Humpty Dumpty back together again is all in a day's work for the PAD. It does such work over and over again.
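
The PAD's reassembly job can be sketched in a few lines. The packet fields here are invented for illustration; the essential trick is simply that each packet carries a sequence number, so packets that arrived out of order over different links can be sorted back into place.

```python
def reassemble(packets):
    """Rebuild the original message from packets that arrived out of order."""
    ordered = sorted(packets, key=lambda p: p["seq"])  # sequence numbers
    return "".join(p["payload"] for p in ordered)

# Three packets of one message, having traveled different links and
# arrived at the destination out of order:
arrived = [
    {"seq": 2, "payload": "mpty"},
    {"seq": 0, "payload": "Hum"},
    {"seq": 1, "payload": "pty Du"},
]
message = reassemble(arrived)
```

Putting Humpty Dumpty back together is, as the text says, all in a day's work.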

Frame relay

Frame relay is a newer, faster, and less cumbersome form of packet switching than X.25. Often referred to as a fast packet switching technology, frame relay transfers variable-length packets up to 4 KB in size at 56 Kbps or at T1/E1 (1.544/2.048 Mbps) speeds over permanent virtual circuits.

Operating only at the data link layer, frame relay outpaces the X.25 protocol by stripping away much of the "accounting" overhead, such as error correction and network flow control, that is needed in an X.25 environment. Why is this? Because frame relay, unlike X.25 with its early reliance on often unreliable telephone connections, was designed to take advantage of newer digital transmission capabilities, such as fiberoptic cable and ISDN. These offer reliability and lowered error rates and thus make the types of checking and monitoring mechanisms in X.25 unnecessary.

For example, frame relay does include a means of detecting corrupted transmissions through a cyclic redundancy check, or CRC, which can detect whether any bits in the transmission have changed between the source and destination. But it does not include any facilities for error correction. Similarly, because it can depend on other, higher-layer protocols to worry about ensuring that the sender does not overwhelm the recipient with too much data too soon, frame relay is content to simply include a means of responding to "too much traffic right now" messages from the network.
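
The detect-but-don't-correct role of the CRC can be illustrated with Python's standard `zlib.crc32` (a CRC-32 rather than the CRC-16 frame relay actually uses, but the principle is the same): the sender appends a checksum, and the receiver recomputes it and simply discards the frame on a mismatch, leaving any retransmission to higher layers.

```python
import zlib

def make_frame(payload: bytes):
    """Sender side: attach a checksum computed over the payload."""
    return payload, zlib.crc32(payload)

def check_frame(payload: bytes, crc: int) -> bool:
    """Receiver side: True only if no bits changed in transit."""
    return zlib.crc32(payload) == crc

payload, crc = make_frame(b"frame relay data")
ok = check_frame(payload, crc)                        # intact frame
corrupted = check_frame(b"frame relay dat\x00", crc)  # one byte changed
```

Detection is cheap; it is correction and retransmission that frame relay deliberately leaves to someone else.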

In addition, because frame relay operates over permanent virtual circuits (PVCs), transmissions follow a known path and there is no need for the transmitting devices to figure out which route is best to use at a particular time. They don't really have a choice, because the routes used in frame relay are based on PVCs known as Data Link Connection Identifiers, or DLCIs. Although a frame relay network can include a number of DLCIs, each must be associated permanently with a particular route to a particular destination.


Also adding to the speed equation is the fact that the devices on a frame relay network do not have to worry about the possibility of having to repackage and/or reassemble frames as they travel. In essence, frame relay provides end-to-end service over a known—and fast—digital communications route, and it relies heavily on the reliability afforded by the digital technologies on which it depends. Like X.25, however, frame relay is based on the transmission of variable-length packets, and it defines the interface between DTEs and DCEs. It is also based on multiplexing a number of (virtual) circuits on a single communications line.

So how, exactly, does frame relay work? Like X.25, frame relay switches rely on addressing information in each frame header to determine where packets are to be sent. The network transfers these packets at a predetermined rate that it assumes allows for free flow of information during normal operations.

Although frame relay networks do not themselves take on the task of controlling the flow of frames through the network, they do rely on special bits in the frame headers that enable them to address congestion. The first response to congestion is to request the sending application to "cool it" a little and slow its transmission speed; the second involves discarding frames flagged as lower-priority deliveries, and thus essentially reducing congestion by throwing away some of the cargo.
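
The second response—throwing away some of the cargo—can be sketched as follows. The frame layout here is invented shorthand for the real header bit (known as the discard eligibility, or DE, bit) that flags a frame as a lower-priority delivery.

```python
def relieve_congestion(frames, congested):
    """When congested, drop the frames flagged as discard-eligible first."""
    if not congested:
        return frames
    return [f for f in frames if not f["discard_eligible"]]

frames = [
    {"id": 1, "discard_eligible": False},
    {"id": 2, "discard_eligible": True},   # lower-priority cargo
    {"id": 3, "discard_eligible": False},
]
kept = relieve_congestion(frames, congested=True)
```

The network stays responsive for high-priority traffic at the cost of the flagged frames, whose recovery is again left to higher-layer protocols.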

Frame relay networks connecting LANs to a WAN rely, of course, on routers and switching equipment capable of providing appropriate frame-relay interfaces.


ATM

You know you're focused on networks when ATM no longer translates as "Automated Teller Machine" but instead makes you immediately think "Asynchronous Transfer Mode." All right. So what is Asynchronous Transfer Mode, and what is it good for?

To begin with, ATM is a transport method capable of delivering not only data but also voice and video simultaneously, and over the same communications lines. Generally considered the wave of the immediate future in terms of increasing both LAN and WAN capabilities, ATM is a connection-oriented networking technology, closely tied to the ITU's recommendation on broadband ISDN (BISDN) released in 1988.

What ATM is good for is high-speed LAN and WAN networking over a range of media types, from the traditional coaxial cable, twisted pair, and fiberoptic to communications services of the future, including Fibre Channel, FDDI, and SONET (described in later sections of this chapter).

Although ATM sounds like a dream, it's not. It's here, at least in large part.

Cell relay ATM, like X.25 and frame relay, is based on packet switching. Unlike both X.25 and frame relay, however, ATM relies on cell relay, a high-speed transmission method based on fixed-size units (tiny ones only 53 bytes long) that are known as cells and that are multiplexed onto the carrier.


Because uniformly sized cells travel faster and can be routed faster than variable-length packets, they are one reason—though certainly not the only one—that ATM is so fast. Transmission speeds are commonly 56 Kbps to 1.544 Mbps, but the ITU has also defined ATM speeds as high as 622 Mbps (over fiberoptic cable).

How it works Imagine a "universal" machine—one that can take in any materials, whether they are delivered sporadically or in a constant stream, and turn those materials into lookalike packages. That's basically how ATM works at the intake end. It takes in streams of data, voice, video…whatever…and packages the contents in uniform 53-byte cells. At the output end, ATM sends its cells out onto a WAN in a steady stream for delivery, as shown in Figure 8-1.

That all seems simple enough, but now take a look at the "magic" of ATM in a little more technical detail.

Figure 8-1: ATM breaks data streams into fixed-size cells and delivers them over a WAN. (The "converter" here is not a real ATM switch—it's meant to suggest a hopper or funnel into which the various data streams flow…just an attempt to lighten things up, but the concept is accurate.)

To begin with, remember that ATM is designed to satisfy the need to deliver multimedia. Well, multimedia covers a number of different types of information that have different characteristics and are handled differently, both by the devices that work with them and by higher-level networking protocols. Yet, in order to make use of ATM, something must interface with the different devices and must package their different types of data in ATM cells for transport. That something is an ATM-capable node that handles the conversions specified in the three-layer ATM model shown in the following illustration:


These are the layers and what they do:

  • The topmost layer, the ATM Adaptation Layer (AAL), sits between what you might consider "ATM proper" and the higher-level network devices and protocols that send and receive the different types of information over the ATM network. AAL, as the adaptation in its name suggests, mediates between the ATM layer and higher-level protocols, remodeling the services of one so that they fit the services of the other. It's a fascinating "place," in that AAL takes in all the different forms of data (audio, video, data frames) and hands the data over to comparable AAL services (audio, video, data frames) that repackage the information into 48-byte payloads before passing them along to the ATM layer for further grooming.

  • The ATM layer attaches headers to the ATM payloads. That might seem simple enough, but the header does not simply say, "this is a cell." Part of the header includes information that identifies the paths and circuits over which those cells will travel and so enables ATM switches and routers to deliver the cells accurately to their intended destinations. The ATM layer also multiplexes the cells for transmission before passing them to the physical layer. This layer, as you can see, has a big job to do. It's somewhat reminiscent of a busy airport, railroad station…or maybe a large department store during the holiday season.

  • The physical layer, the lowest layer, corresponds to the physical layer in the ISO/OSI Reference Model. As in the OSI model, it is concerned with moving information—in this case, the 53-byte ATM cells—into the communications medium. As already mentioned, this medium can be any of a number of different physical transports, including the fiberoptics-based SONET (Synchronous Optical NETwork), a T1 or E1 line, or even a modem. The medium and the message in this case are clearly separable, because ATM is a transport method and is independent of the transmission medium over which the messages travel.
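
The segmentation performed across these layers can be sketched numerically: the AAL hands over 48-byte payloads, and the ATM layer prepends a 5-byte header, yielding fixed 53-byte cells. The header contents below are a simplification for illustration, not the real VPI/VCI bit layout.

```python
PAYLOAD_SIZE = 48  # bytes per cell payload (AAL output)
HEADER_SIZE = 5    # bytes per cell header (ATM layer)

def segment(stream: bytes, vpi: int, vci: int):
    """Break `stream` into 53-byte cells: 5-byte header + 48-byte payload."""
    cells = []
    for i in range(0, len(stream), PAYLOAD_SIZE):
        # Pad the final, partial payload up to the fixed cell size.
        payload = stream[i:i + PAYLOAD_SIZE].ljust(PAYLOAD_SIZE, b"\x00")
        # Simplified header carrying path/circuit identifiers:
        header = bytes([vpi, vci >> 8, vci & 0xFF, 0, 0])
        cells.append(header + payload)
    return cells

cells = segment(b"x" * 100, vpi=1, vci=42)
```

Every cell is exactly 53 bytes, which is precisely what lets ATM switches process them at uniform, predictable speed.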

So what happens after ATM filters information down through the AAL, ATM, and physical layers? Once the physical layer sends the cells on their way, they travel to their destinations over connections that might switch them from one circuit to another. Along the way, the switches and routers work to maintain connections that provide the network with at least the minimum bandwidth necessary to provide users with the quality of service (QOS) guaranteed them.

When the cells arrive at their destinations, they go through the reverse of the sending process. The ATM layer forwards the cells to the appropriate services (voice, data, video, and so on) in the AAL, where the cell contents are converted back to their original form, everything is checked to be sure it arrived correctly, and the "reconstituted" information is delivered to the receiving device.

Availability So ATM is a wonderful means of transmitting all kinds of information at high speed. It is reliable, flexible, and scalable, and it is fast in part because it leaves error checking and correction to higher-level protocols. It can interface with both narrowband and broadband networks, and it is especially suitable for use in a network backbone.

Is there a downside? Well, yes. To begin with, ATM networks must be made up of ATM-compatible devices, and they are both expensive and not yet widely available. In addition, there is a chicken-or-egg dilemma facing serious ATM deployment: businesses are not likely to incur the expense of investing in ATM-capable equipment if ATM services are not readily available through communications carriers over a wide area, yet carriers are reluctant to invest in ATM networking solutions if there is not enough demand for the service.

Eventually, no doubt, ATM will win over both carriers and users, and the world will be treated to extremely fast, reliable ATM delivery. Until then, ATM continues to mature, especially with the help of an organization known as the ATM Forum—a group of vendors and other interested parties working together to develop standards, provide information, and generally encourage the development of ATM-related technology. As time passes, ATM is expected to build up a complete head of steam and begin to fulfill its promise. Certainly, with increasing reliance on networking and growing demand for faster and more sophisticated methods of delivering multimedia, there's a place for this technology.

And that, in a nutshell, is ATM. However, before leaving the subject, it's worth taking a quick look at broadband ISDN, another immature but promising technology, and the one for which the ATM layers were defined.

BISDN BISDN is next-generation ISDN, a technology that can deliver all kinds of information over the network. In BISDN terms, this information is divided into two basic categories, interactive services and distributed (or distribution) services.

  • Interactive services include you-and-me types of transactions, such as videoconferencing, messaging, and information retrieval.

  • Distributed services include you-to-me types of information that are either delivered or broadcast to the recipient. These services are further divided into those that the recipient controls (for example, e-mail, video telephony, and telex) and those that the recipient cannot control other than by refusing to "tune in" (for example, audio and television broadcasts).

But, you might think, current narrowband ISDN is also capable of delivering data, voice, video, and sound, so what's the difference? The difference is in the method of delivery. Narrowband ISDN transmissions are based on time division multiplexing (TDM), which uses timing as the key to interleaving multiple transmissions onto a single signal. In contrast, BISDN uses ATM, with its packet switching and its little 53-byte cells, for delivery.
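
The TDM half of that contrast can be sketched in a few lines. This toy model (queue contents invented) shows TDM's defining cost: each source owns a fixed slot in every round, so an idle source still consumes its slot.

```python
def tdm_multiplex(sources, rounds):
    """Interleave one slot per source per round; idle sources send filler."""
    signal = []
    for _ in range(rounds):
        for queue in sources:
            # A source with nothing to send still occupies its time slot.
            signal.append(queue.pop(0) if queue else "idle")
    return signal

voice = ["v1", "v2"]
video = ["f1"]
data  = ["d1", "d2", "d3"]
signal = tdm_multiplex([voice, video, data], rounds=3)
```

ATM's cell-based delivery avoids those wasted "idle" slots: cells go onto the carrier only when a source actually has something to send.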

Thus, ATM defines BISDN, or at least the part of it concerned with delivering the goods. In a sense, BISDN is comparable to a catalog shopping service that delivers everything from food to clothing, and ATM is like the boxes in which those products are packaged and delivered.

Developing Technologies

ATM is but one example of an advanced technology; it is here, though not yet widely available. Is it the only one to choose from? No, there are others. One, FDDI, is well known and used in both LANs and WANs. Two others are SONET, another developing technology, and SMDS, which is available through some carriers. All three—FDDI, SONET, and SMDS—tie in with ATM, at least in the sense of being high-speed networking technologies that are recommended by the ATM Forum as interfaces for ATM networks.

All three of the networks described in the following sections are, of course, designed for speed, speed, and more speed. Along with reliability, of course….


FDDI

FDDI, variously pronounced either "fiddy" or "eff-dee-dee-eye," is short for Fiber Distributed Data Interface. As you've no doubt guessed, it's based on fiberoptic transmission. It's also based on a ring topology and token passing. It's advanced technology, yes, in the form of token ring over fiber.

FDDI was developed for two primary reasons: to support and help extend the capabilities of older LANs, such as Ethernet and Token Ring, and to provide a reliable infrastructure for businesses moving even mission-critical applications to networks. Based on a standard produced by an ANSI committee known as X3T9.5, the FDDI specification was released in 1986—a relatively long time ago in networking terms.

Although FDDI isn't really a WAN technology (its rings are limited to a maximum length of 100 kilometers, or 62 miles), the ground it can cover does make it suitable for use as a backbone connecting a number of smaller LANs, and it can provide the core of a network as large as a Metropolitan Area Network (MAN). In that sense, FDDI is more than LAN but less than WAN. In addition, because FDDI transfers information extremely quickly (100 Mbps), it is often used to connect high-end devices, such as mainframes, minicomputers, and peripherals, or to connect high-performance devices within a LAN. Engineering or video/graphics workstations, for instance, benefit from FDDI because they need considerable bandwidth in order to transfer large amounts of data at satisfactorily high speeds.

As its name indicates, FDDI was developed around the idea of using optical cable. This is, in fact, the type of cable used, especially when high-speed transmission is needed over relatively long distances (2000 to 10,000 meters, or roughly 1 to 6 miles). However, over shorter distances (about 100 meters, or 330 feet), FDDI can also be implemented on less expensive copper cable. In all, FDDI supports four different types of cable:

  • Multimode fiberoptic cable. This type of cable can be used over a maximum of 2000 meters and uses LEDs as a light source.

  • Single mode fiberoptic cable. This can be used over a maximum of 10,000 meters and uses lasers as a light source. Single mode cable is thinner at the core than multimode, but it provides greater bandwidth because of the way the light impulse travels through the cable.

  • Category 5 Unshielded Twisted Pair copper wiring. This cable contains eight wires and, like the next category, can be used over distances up to 30 meters.

  • IBM Type 1 Shielded Twisted Pair copper wiring. This is a shielded cable that contains two pairs of twisted wires, with each pair also shielded.

FDDI topology and fault tolerance

FDDI topology and operation are similar to Token Ring, except (there's always an exception, is there not?) that FDDI is primarily based on optical transmission. In addition, FDDI is characterized by two counter-rotating rings (known as a dual-ring topology).


Why two rings? The second one is there mostly for insurance. Normally in a FDDI network, one ring (known as the primary ring) actually carries the tokens and data, and the secondary ring remains idle and is used as a backup for fault tolerance—insurance. Because the secondary ring is available if needed, whenever a nonfunctioning node causes a break in the primary ring, traffic can "wrap" around the problem node and continue carrying data, only in the opposite direction and on the secondary ring. That way, even if a node goes down, the network continues to function.
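
The wrap can be sketched with a toy model of the dual rings. This is an illustration only, for a single failed node: the ring is rotated so the failed node's neighbors sit at the two ends of the surviving chain, traffic runs out along the primary ring, wraps, and returns in the opposite direction on the secondary ring.

```python
def wrap(ring, failed):
    """Return the closed loop traffic follows after one node fails."""
    i = ring.index(failed)
    chain = ring[i + 1:] + ring[:i]  # surviving nodes, in link order
    # Out along the primary ring, wrap at the far end, and come back on
    # the secondary ring (interior nodes are therefore visited twice):
    return chain + chain[-2:0:-1]

nodes = ["A", "B", "C", "D"]
path = wrap(nodes, failed="C")  # node C goes down
```

Every surviving node still appears on the loop, so the network keeps functioning despite the break.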

Of course, it is also possible for two nodes to fail. When this happens, the wrap at both locations effectively segments the one ring into two separate, noncommunicating rings. To avoid this potentially serious problem, FDDI networks can rely on bypass devices known as concentrators. These concentrators resemble hubs or MAUs in that multiple nodes plug into them. They are also able to isolate any failed nodes, while keeping the network traffic flowing.
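
The wrap behavior can be sketched as a small simulation. The ring model below is purely illustrative (node names and the function are assumptions, not FDDI machinery), but it shows why one failure leaves a single wrapped ring while two failures split the network into separate, noncommunicating segments.

```python
def wrapped_segments(ring, failed):
    """Partition a ring of node names around the failed ones.
    Each contiguous run of healthy nodes wraps into its own ring."""
    n = len(ring)
    fail_idx = [i for i, node in enumerate(ring) if node in failed]
    if not fail_idx:
        return [list(ring)]          # no failure: one intact ring
    segments = []
    for k, start in enumerate(fail_idx):
        end = fail_idx[(k + 1) % len(fail_idx)]
        seg = []
        i = (start + 1) % n          # walk the ring between two failures
        while i != end:
            seg.append(ring[i])
            i = (i + 1) % n
        if seg:
            segments.append(seg)
    return segments

ring = ["A", "B", "C", "D", "E", "F"]
print(wrapped_segments(ring, {"C"}))       # [['D', 'E', 'F', 'A', 'B']]
print(wrapped_segments(ring, {"C", "F"}))  # [['D', 'E'], ['A', 'B']]
```

One failed node leaves the other five nodes in a single wrapped ring; two failed nodes produce two isolated rings, which is exactly the problem concentrators exist to prevent.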


Sometimes, however, both rings are used for data. In this case, the data travels in one direction (clockwise) on one ring, and in the other direction (counterclockwise) on the other ring. Using both rings to carry data means that twice as many frames can circulate at the same time and, therefore, the speed of the network can double—from 100 Mbps to 200 Mbps.

FDDI token passing

Token passing on a FDDI network works much the way it does on a Token Ring network. That is, nodes pass a token around the ring, and only the node with the token is allowed to transmit a frame. There is a twist to this, however, that's related to FDDI's fault tolerance. When a node on the ring detects a problem, it doesn't simply sit around and say, "gee, I can't pass the token along, I guess I'll just hang onto it." Instead, it generates a frame known as a beacon and sends it on to the network. As neighboring nodes detect the beacon, they too begin to transmit beacons, and so it goes around the ring. When the node that started the process eventually receives its own beacon back—usually after the network has switched to the secondary ring—it then assumes that the problem has been isolated or resolved, generates a new token, and starts the ball rolling once again.
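
The beacon sequence just described can be traced step by step. This sketch is a simplification (the function and its messages are illustrative, not the actual FDDI state machine), but it captures the order of events: detect, beacon, propagate, and finally regenerate the token.

```python
def beacon_trace(ring, detector):
    """Trace the beacon process sketched above: the node that detects
    the fault beacons, each downstream node repeats the beacon, and
    when the detector hears its own beacon back it issues a new token."""
    n = len(ring)
    start = ring.index(detector)
    events = [f"{detector} detects fault, transmits beacon"]
    # The beacon travels around the (now wrapped) ring, node by node.
    for step in range(1, n):
        node = ring[(start + step) % n]
        events.append(f"{node} receives beacon, transmits beacon")
    events.append(f"{detector} receives own beacon, generates new token")
    return events

for line in beacon_trace(["A", "B", "C", "D"], "B"):
    print(line)
```

The key point the trace makes visible: the originator treats the return of its own beacon as proof that the ring is whole again, and only then puts a fresh token into circulation.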

Structure of a FDDI network

A FDDI network, as already mentioned, cannot include rings longer than 100 kilometers apiece. Another restriction on a FDDI network is that it cannot support more than 500 nodes per ring. Although the overall network topology must conform to a logical ring, the network doesn't actually have to look like a circle. It can include stars connected to hubs or concentrators, and it can even include trees—collections of hubs connected in a hierarchy. As long as the stars and trees connect in a logical ring, the FDDI network is happy.
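
The two hard limits mentioned here, 100 kilometers of fiber and 500 nodes per ring, lend themselves to a simple design check. The function name and report format below are assumptions for illustration.

```python
# FDDI design limits described above.
MAX_RING_KM = 100
MAX_NODES = 500

def validate_fddi_ring(total_km, node_count):
    """Return a list of constraint violations (empty if the design fits)."""
    problems = []
    if total_km > MAX_RING_KM:
        problems.append(f"ring length {total_km} km exceeds {MAX_RING_KM} km")
    if node_count > MAX_NODES:
        problems.append(f"{node_count} nodes exceeds the {MAX_NODES}-node limit")
    return problems

print(validate_fddi_ring(80, 200))    # [] -> within limits
print(validate_fddi_ring(120, 600))   # two violations reported
```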

The nodes that connect to the network come in two varieties, depending on how they are attached to the FDDI ring. One variety, called a single attachment station, or SAS, connects to a concentrator and, through it, to the primary ring. Because an SAS connects to a concentrator, the latter device can isolate the node from the rest of the ring if the node happens to fail.

The second type of node, called a dual attachment station, or DAS, has two connections to the network. These can link it either to another node and a concentrator or—if their operation is critical to the network—to two concentrators, one of which serves as a backup in case the other fails. This type of two-concentrator connection for a single resource, such as a mission-critical server, is known as dual homing and is used to provide the most fail-safe backup mechanism possible.

In sum: FDDI is a high-speed, high-bandwidth network based on optical transmissions. It is relatively expensive to implement, although the cost can be held down by the mixing of fiberoptic with copper cabling. Because it has been around for a few years, however, it has been fine-tuned to a high level of stability. It is most often used as a network backbone, for connecting high-end computers (mainframes, minicomputers, and peripherals), and for LANs connecting high-performance engineering, graphics, and other workstations that demand rapid transfer of large amounts of data.


SONET, or Synchronous Optical NETwork, is an ANSI standard for the transmission of different types of information—data, voice, video—over the optical (fiberoptic) cables widely used by long-distance carriers. Designed to provide communications carriers with a standard interface for connecting optical networks, SONET was formulated by an organization known as the Exchange Carriers Standards Association (ECSA) and later incorporated into an ITU recommendation known as Synchronous Digital Hierarchy, or SDH.

Today, apart from relatively small differences, SONET and SDH are equivalent—SONET in North America and Japan, and SDH in Europe. Together, they represent a global standard for digital networks that enables transmission systems around the world to connect through optical media. SONET is comparable to a standard that ensures that train tracks, regardless of manufacturer, follow the same design specifications and therefore can interconnect to allow trains to pass over them freely and without problem.

Originally designed in the mid-1980s, SONET works at the physical layer and is concerned with the details related to framing, multiplexing, managing, and transmitting information synchronously over optical media. In essence, SONET specifies a standard means for multiplexing a number of slower signals onto a larger, faster one for transmission.

In relation to this multiplexing capability, two signal definitions lie at the heart of the SONET standard:

  • Optical carrier (OC) levels, which are used by fiberoptic media and which translate roughly to speed and carrying capacity

  • Synchronous Transport Signals (STS), which are the electrical equivalents of OC levels and are used by non-fiber media

So what does that mean? Well, let's back up a little. SONET is an optical transport, true. But remember that it is a long-distance transport. Although transmissions flow through the SONET system in optical form, they do not begin and end that way. Transmissions are multiplexed onto the SONET optical medium, but they come from—and go to—other, electrically based, types of digital transport such as T1. In this, it helps to think of SONET as something like the Mississippi River, and of the channels it connects to as tributaries that flow into and out of it. (In SONET terminology, those channels actually are called tributaries, so the analogy is reasonably accurate.) The following illustration shows basically what happens during a SONET transmission:


Because SONET is a synchronous transport, the signals it works with are tied to timing, and the various transmission speeds it handles are based on multiples of a single base signal rate known as STS-1 (Synchronous Transport Signal level-1) and its optical equivalent, OC-1. This base rate operates at 51.84 Mbps. That sounds really fast, and it begins to show why SONET is seen as a desirable transport method, but remember—51.84 Mbps is just the base signal rate. Higher-level SONET rates really fly. The next step up, for instance, is STS-3 (equivalent to OC-3), which multiplexes three STS-1 signals onto a single stream and operates at three times the base signal rate—155.520 Mbps. And there's more. STS-12 (OC-12) operates at 12 times the base signal, which works out to 622.08 Mbps. And at the top end, there's STS-48 (OC-48), with a defined transmission speed of 2.488 Gbps (that's gigabits per second).
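
Because every STS-n/OC-n level is simply n multiplexed STS-1 signals, the whole rate hierarchy falls out of one multiplication. A quick sketch:

```python
STS1_MBPS = 51.84   # the SONET base signal rate (STS-1 / OC-1)

def sonet_rate_mbps(level):
    """Rate of STS-n / OC-n: n multiplexed STS-1 signals."""
    return level * STS1_MBPS

for level in (1, 3, 12, 48):
    print(f"STS-{level} / OC-{level}: {sonet_rate_mbps(level):.3f} Mbps")
```

Running this reproduces the figures in the text: 51.840, 155.520, and 622.080 Mbps, and 2488.320 Mbps (about 2.488 Gbps) for STS-48.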

How it works

As you can see from the preceding illustration, SONET converts electrical (STS) signals to optical (OC) levels for transport. It also "unconverts" them (OC to STS) at the point where the transmissions leave the SONET media for further travel on whatever carrier takes them the rest of the way to their destination. How this all happens is both impressive and intriguing.

To start off with, SONET is not a single, very long piece of optical fiber. (Of course not—that would mean one piece of cable stretching around the world….) Along the way from source to destination, a transmission can pass through more than one intermediate multiplexer, as well as through switches, routers, and repeaters for boosting the signal. Different parts of this route are given different SONET names:

  • A section is a single length of fiberoptic cable.

  • A line is any segment of the path that runs between two multiplexers.

  • A path is the complete route between the source multiplexer (where signals from tributaries are combined) and the destination multiplexer (where the signals are demultiplexed so they can be sent on their way).

The transmissions themselves are made up of 810-byte frames that are sent out at the rate of 8000 per second. These frames contain not only data but also a number of bytes related to overhead—monitoring, management, and so on. To an interested bystander, there are two especially remarkable aspects to the way these frames are managed:

  • First, they pour out in a steady stream, whether or not they contain any information. In other words, they are like freight cars on an endless train. If some data happens to arrive at the time SONET is putting a frame together, that data gets popped into the frame—the freight car is loaded. If no data arrives, the frame leaves the "station" empty.

  • Second, because SONET is a synchronous transport, each frame contains a field called a pointer that indicates where the actual data in the frame begins. This pointer is necessary because timing is such an important part of SONET transmission, but the network itself cannot assume that the arriving data streams are synchronized to the same clock. (That would, in fact, be impossible.) Instead, SONET allows for a certain amount of variation in timing and uses a pointer to ensure that the beginning of the data payload is clearly marked for retrieval at its destination.
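
It is worth pausing to check the arithmetic behind those frames: 810 bytes sent 8000 times a second is precisely the STS-1 base rate quoted earlier.

```python
FRAME_BYTES = 810
FRAMES_PER_SECOND = 8000

# 810 bytes * 8 bits/byte * 8000 frames/s = 51,840,000 bits/s = 51.84 Mbps,
# exactly the STS-1 / OC-1 base signal rate.
bits_per_second = FRAME_BYTES * 8 * FRAMES_PER_SECOND
print(bits_per_second / 1_000_000)   # 51.84
```

In other words, the frame size and frame rate are not arbitrary; together they define the base signal on which the entire SONET hierarchy is built.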

Protocol layers in the SONET standard

In doing all of the work of organizing, multiplexing, transmitting, and routing frames, SONET relies on four protocol layers, each of which handles one aspect of the entire transmission. These layers and what they do are:

  • The photonic layer, which converts signals between electrical and optical form

  • The section layer, which creates the frames and takes care of monitoring for errors in transmission

  • The line layer, which is in charge of multiplexing, synchronizing, and demultiplexing

  • The path layer, which is concerned with getting the frame from source to destination

There are many more technical details involved in the definition of a SONET network, but these are the basics, and they should help you understand at least roughly how SONET works. Perhaps the most important lesson to carry away from this is the realization that SONET represents a fast, reliable transport for developing or future WAN technologies, including BISDN (and, by extension, ATM).


And finally, you come to SMDS, more lengthily known as Switched Multimegabit Data Service. SMDS is a broadband public networking service offered by communications carriers as a means for businesses to connect LANs in separate locations. It is a connectionless, packet-switched technology designed to provide businesses with a less expensive means of linking networks than through the use of dedicated leased lines. Besides reducing cost, SMDS is notable for being well suited to the type of "bursty" traffic characteristic of LAN (or LAN-to-LAN) communications. In other words, it does the job when it's needed.

Because SMDS is connectionless, it is available when and as needed, rather than being "on" at all times. It is also a fast technology, transmitting at speeds of 1 Mbps to (in the United States) 45 Mbps. The basis of an SMDS connection is a network address structured like a telephone number, complete with country code and area code as well as the local number. This address is assigned by the carrier and is used to connect LAN with LAN. A group address can also be used to broadcast information to a number of different LANs at the same time.


Users who need to transfer information to one or more LANs simply select the appropriate addresses in order to indicate where the information is to be delivered. SMDS takes it from there and makes a "best effort" to deliver the packets to their destinations. It does not check for errors in transmission, nor does it make an attempt at flow control. Those tasks are left to the communicating LANs.

The packets transferred through SMDS are simple, variable-length affairs containing the source and destination addresses and up to 9188 bytes of data. These packets are routed individually and can contain data in whatever form the sending LAN works with—Ethernet packet, Token Ring packet, and so on. SMDS essentially just passes the information from one place to the other and doesn't deal with the form or format of the data. In other words, SMDS acts somewhat like a courier service—it picks up and delivers but does not concern itself with the contents of its packages.
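
The courier-service idea can be sketched as follows. This is not the SMDS wire format—the structure, names, and sample addresses are all illustrative—but it shows the two things SMDS does care about (addresses and the 9188-byte payload ceiling) and the one thing it does not (the contents).

```python
MAX_SMDS_PAYLOAD = 9188   # maximum data bytes per SMDS packet, as above

def make_smds_packet(source, destination, payload):
    """Build an illustrative SMDS-style packet: addresses plus an opaque
    payload. SMDS does not inspect, error-check, or reformat the payload;
    it simply carries whatever the sending LAN hands it."""
    if len(payload) > MAX_SMDS_PAYLOAD:
        raise ValueError(f"payload exceeds {MAX_SMDS_PAYLOAD} bytes")
    return {"source": source, "destination": destination, "data": payload}

# Telephone-style addresses here are made up for the example.
pkt = make_smds_packet("+1 425 555 0100", "+1 206 555 0199",
                       b"an Ethernet or Token Ring frame, verbatim")
print(len(pkt["data"]))
```

Note that the payload is just bytes: an Ethernet frame, a Token Ring frame, whatever the sending LAN uses. The size check is the only inspection the sketch performs, mirroring SMDS's hands-off, best-effort delivery.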

About the Author

JoAnne Woodcock is the author of several popular computer books, including Understanding Groupware in the Enterprise, The Ultimate Microsoft Windows 95 Book, The Ultimate MS-DOS Book, and PCs for Beginners, all published by Microsoft Press. She is also a contributor to the Microsoft Press Computer Dictionary.

Copyright © 1999 by Microsoft Corporation

We at Microsoft Corporation hope that the information in this work is valuable to you. Your use of the information contained in this work, however, is at your sole risk. All information in this work is provided "as-is", without any warranty, whether express or implied, of its accuracy, completeness, fitness for a particular purpose, title or non-infringement, and none of the third-party products or information mentioned in the work are authored, recommended, supported or guaranteed by Microsoft Corporation. Microsoft Corporation shall not be liable for any damages you may sustain by using this information, whether direct, indirect, special, incidental or consequential, even if it has been advised of the possibility of such damages. All prices for products mentioned in this document are subject to change without notice. International rights = English only.

