Chapter 6 - Unifying the Subnets of the Sample Network

On This Page

Focus on Signaled QoS
Large Routed Networks - Diffserv
Switched Local Area Networks - 802
ATM Networks
Small Routed Networks
Small Office and Home Networks

In this section, we will discuss QoS mechanisms in each of the subnetworks comprising the sample network and how they are integrated with the global QoS mechanisms of the end-to-end network.

Focus on Signaled QoS

As discussed previously, providing end-to-end guarantees requires coordination of resource allocation across all subnets on the end-to-end path. Top-down provisioning is adequate for providing low quality guarantees. To the extent that top-down provisioning management systems are able to integrate information regarding network topology, current resource usage in various parts of the network and fine-grain classification information, the quality of the guarantees provided can be improved. However, for any persistent conversation, host-based signaling provides information to the network. This information can be used to improve the quality of guarantees provided even further. For this reason (and due to the general end-to-end focus of this whitepaper), the following discussion will tend to focus on signaled QoS mechanisms. These mechanisms can be superimposed on the background of a top-down provisioned approach, so long as the network administrator enforces the separation of resource pools as described earlier.

Large Routed Networks - Diffserv

We'll start with the large routed networks shown at the center of the sample network. These represent large provider networks such as those of Internet Service Providers (ISPs). These networks are generally constructed with many large routers that are interconnected by high speed, wide area links. These routers typically carry traffic from thousands (if not millions) of simultaneous conversations. The overhead of providing per-conversation traffic handling or of listening to per-conversation signaling in these networks is prohibitive. However, from previous discussion we also know that using signaling and per-conversation QoS mechanisms can provide high quality guarantees most efficiently. Given that it is necessary to support high quality as well as low quality guarantees in this network, we are faced with a choice between incurring signaling and per-conversation overhead or accepting that the network will be operated inefficiently.

Due to the large amount of traffic aggregated in these networks, traffic patterns are relatively predictable and variance in load over time at any device is relatively small. In this case, minor over-provisioning (slight inefficiency) can yield a major improvement in the quality of guarantees that can be offered. It follows that, in general, in large subnetworks, it is preferable to incur minor inefficiencies rather than to incur the overhead of dense signaling and per-conversation QoS mechanisms. Diffserv is ideally suited to this tradeoff as it does not inherently rely on signaling and it handles traffic in aggregate. However, in practical terms, in order to support high quality guarantees through a diffserv network, some minimal signaling overhead must be incurred. The strategy we describe for supporting QoS in large networks is, therefore, based on diffserv style aggregate traffic handling, coupled with sparse processing of signaling messages when high quality guarantees are required.

Diffserv Aggregate Traffic Handling

Diffserv is implemented by supporting aggregate traffic handling mechanisms known as per-hop-behaviors (PHBs) in network devices. Packets entering the diffserv network are marked with diffserv codepoints (DSCPs) which invoke particular PHBs in the network devices. Currently defined PHBs include expedited forwarding (EF), and assured forwarding (AF). The EF PHB offers low latency, and is intended to provide virtual leased line (VLL) service. VLL service offers high quality guarantees and emulates conventional leased line services. The AF PHB offers a range of service qualities, generally lower than EF supported services but higher than traditional best-effort services. The AF PHB uses a group of twelve DSCPs specifying one of four relative priorities and one of three drop-precedence levels within each priority.
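The standard codepoints follow a simple pattern. The following is a minimal Python sketch that computes them from the published definitions (EF is DSCP 46 per RFC 3246; the AF codepoint for class x, drop precedence y is 8x + 2y per RFC 2597):

```python
# EF PHB codepoint (binary 101110), used for virtual leased line service.
EF_DSCP = 0b101110  # 46

def af_dscp(af_class: int, drop_precedence: int) -> int:
    """Return the DSCP for AF<class><precedence>, e.g. AF11 -> 10."""
    if not (1 <= af_class <= 4 and 1 <= drop_precedence <= 3):
        raise ValueError("AF classes are 1-4, drop precedences 1-3")
    return 8 * af_class + 2 * drop_precedence

# The full AF group: four classes x three drop precedences = twelve DSCPs.
AF_GROUP = {f"AF{c}{d}": af_dscp(c, d)
            for c in range(1, 5) for d in range(1, 4)}
```

For example, AF11 maps to DSCP 10 and AF41 to DSCP 34, matching the twelve-codepoint group described above.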

Service Level Agreements

In diffserv terms, the quality guarantees offered by the diffserv network are reflected at the edge of the network in the form of service level agreements (SLA). SLAs specify the parameters of a service that can be invoked by particular DSCPs and the amount or rate of traffic that the provider agrees to carry at the specified service level. Traffic submitted in excess of the negotiated rate is subjected to some alternative treatment, also specified in the SLA. SLAs may offer one or more service levels.

Functionality at the Edge of the Diffserv Network

Minimal diffserv functionality requires that the customer mark traffic submitted to the provider's network with the appropriate DSCP and that the provider police submitted traffic on a per-customer, per-DSCP basis. The provider must police to verify conformance to the SLA, thereby limiting the resources consumed by the customer's traffic in the provider's network. Excess traffic is typically delayed, discarded, or re-marked to a less valuable DSCP. To prevent excess traffic from being arbitrarily penalized in the diffserv network, the customer may shape submitted traffic to assure that it conforms to the SLA.
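Per-customer, per-DSCP policing of this kind is commonly described in terms of a token bucket. The following is a minimal sketch, assuming the SLA is modeled simply as a sustained rate and a burst size (both illustrative parameters, not drawn from any particular SLA):

```python
import time

class TokenBucketPolicer:
    """Illustrative policer for one (customer, DSCP) pair.

    Tokens accumulate at the SLA's sustained rate up to the burst size;
    a packet conforms if enough tokens are available to cover it.
    """

    def __init__(self, rate_bytes_per_sec: float, burst_bytes: float):
        self.rate = rate_bytes_per_sec
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def conforms(self, packet_len: int) -> bool:
        """True if the packet is within the SLA. Non-conforming traffic
        would be delayed, discarded, or re-marked to a lesser DSCP."""
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_len <= self.tokens:
            self.tokens -= packet_len
            return True
        return False
```

With a 1500-byte burst, a first 1500-byte packet conforms and a second one submitted immediately afterwards does not, illustrating how the bucket enforces the negotiated rate.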

In certain cases, the provider may offer value-added services such as marking or shaping traffic on behalf of the customer. Traffic may be marked or shaped on an aggregate level or at finer granularities in order to provide a level of traffic isolation that suits the customer's requirements. These services are referred to as provider marking or provider shaping.

Many interesting issues arise regarding the implementation of policing, marking and shaping functionality. These are beyond the scope of this document.

Provisioning the Diffserv Network

Provisioning of the diffserv network includes (in order of increasingly dynamic tasks):

  • Selection of network equipment

  • Selection of interfaces and interface capacity

  • Topology determination

  • Selection of enabled PHBs

  • Determination of DSCP to PHB mappings

  • Determination of queuing parameters associated with each PHB

These provisioning tasks determine the aggregate capacity of the provider's network, across all customers. As a result of such provisioning, the network provider effectively divides network resources into the various resource pools (described earlier) serving different qualities of guarantees.

Configuration of the Diffserv Network

We use the term configuration to refer to more dynamic tasks that affect per-customer resource allocation. This configuration includes (in order of increasingly dynamic tasks):

  • Configuring per customer, per-service level policing parameters at the network ingress.

  • Configuring value-added services such as provider marking or provider shaping at the network ingress.

The first of these tasks is quite different from the second. In the first, the provider configures the minimal information necessary to protect the provider's resources per the terms of the SLA. This includes classification criteria sufficient to recognize the originating customer and DSCP of each submitted packet and the corresponding per-customer, per-DSCP aggregate resource limits. The second task pertains to the configuration of information that determines which subset of the customer's traffic gains access to the aggregate resources available to the customer at each service level. The provider has no direct interest in how aggregate resources are divvied up among customer flows (so long as aggregate resource consumption is not being exceeded). This is actually a matter of internal customer policy. Any enforcement of internal customer policy should, from the provider’s perspective, be considered a value-added service.

Note that the first configuration task is relatively static as it changes only with the SLA (on the order of once per-month, per-customer). The second may be far more dynamic.

Configuration of Value-Added Services
Customers purchase aggregate capacities from providers at different service levels. It is in the customer's interest to assure that these resources are being used in an effective manner. When the customer relies on a provider's value-added services to mark and possibly shape customer traffic flows, the customer is also relying on the provider to determine the allocation of negotiated resources among individual customer traffic flows. In this case, it is important that the customer is able to effectively communicate to the provider the appropriate value-added configuration information.

Such information tends to be more dynamic and more voluminous than the simpler per-customer, per-service level configuration information (summarized in the basic SLA). As a result, the typical mechanisms by which SLA configuration information is communicated (e.g. monthly phone calls between a representative of the customer and a representative of the provider) tend to be unsuitable for communication of value-added configuration information. The difficulties in communicating value-added configuration information to the provider suggest that it is preferable for the customer to mark and shape traffic directly, eliminating the need for the provider to configure value-added parameters.

Using RSVP for Admission to the Diffserv Network

The customer should mark and shape traffic such that the volume of traffic marked for any particular service level is consistent with the resources available per the SLA and the customer's expectation regarding quality of guarantee. For example, consider the IP telephony example described earlier. If the SLA provides sufficient capacity to carry 10 IP telephony calls, the customer should avoid marking traffic from more than 10 simultaneous telephony sessions for the low latency service level. In addition, the customer should assure that high value resources are used subject to some policy that defines the relative importance of different users and/or applications. RSVP signaling between hosts in the customer's network and admission control agents at the edges of the provider's network can be used to achieve both these goals.

Let's look at how admission control can be applied at an ingress point to a provider's network. Either the provider's ingress router or the customer's egress router (or both) can be configured to act as the admission control agent. The router acting as admission control agent should be configured to listen to per-conversation RSVP signaling. (Routers within the diffserv network are not required to listen to RSVP signaling. Instead, they pass RSVP signaling messages transparently.) In addition, it should be configured with the per-service level capacities available to the customer, per the SLA. It is also necessary for the router to understand the mapping from the intserv service level requested in RSVP requests to the corresponding diffserv service level (as described earlier). Now, when an RSVP request is issued for data that will traverse the provider's network, it will arrive at the router serving as the admission control agent. The router has sufficient information to inspect the resources requested and to map the requested service level to the corresponding service level in the SLA. If the resources requested are available per the SLA, then the router admits the reservation request by allowing the RSVP request to pass unhindered. If resources are not available, the router rejects the request by blocking the RSVP request and returning an error.
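The admission decision just described can be sketched as follows. The service names, the intserv-to-diffserv mapping, and the SLA figures here are illustrative assumptions, not part of any standard or specific SLA:

```python
# Hypothetical default mapping from requested intserv service levels
# to diffserv service levels in the SLA.
INTSERV_TO_DIFFSERV = {
    "guaranteed": "EF",         # low latency / virtual leased line
    "controlled-load": "AF1x",  # assured forwarding
}

class AdmissionControlAgent:
    """Edge router tracking per-conversation RSVP requests against an SLA."""

    def __init__(self, sla_capacity_kbps: dict):
        # Per-service-level capacity available to the customer, per the SLA.
        self.capacity = dict(sla_capacity_kbps)
        self.in_use = {level: 0 for level in sla_capacity_kbps}

    def handle_resv(self, intserv_service: str, rate_kbps: int) -> bool:
        """Admit (let the RSVP message pass unhindered) or reject
        (block the message and return an error) a reservation."""
        level = INTSERV_TO_DIFFSERV[intserv_service]
        if self.in_use[level] + rate_kbps <= self.capacity[level]:
            self.in_use[level] += rate_kbps
            return True   # admitted
        return False      # rejected

# If the SLA accommodates ten 64-kbps telephony calls, the eleventh
# reservation request is rejected.
agent = AdmissionControlAgent({"EF": 640, "AF1x": 2000})
calls = [agent.handle_resv("guaranteed", 64) for _ in range(11)]
```

This mirrors the IP telephony example above: the first ten requests are admitted and the eleventh is blocked.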

In this mode, the router that is the admission control agent listens to per-conversation RSVP requests for the sake of tracking the customer's resource usage against the SLA. The router does not necessarily apply any per-conversation traffic handling. In the case that the admission control agent is the diffserv provider's ingress router, it uses diffserv aggregate traffic handling. Further, the router does not enforce any per-conversation admission control. Instead, it is the responsibility of the customer to make use of the admission control information provided by the edge device and to apply the appropriate marking and policing internally. Typically, well-behaved transmitters will respond by marking packets sent on admitted flows with the DSCP that maps to the requested service level. Upstream senders should also refrain from marking traffic corresponding to rejected conversations. Alternatively, the sender may:

  • Mark for a lesser DSCP.

  • Refrain from sending traffic on the conversation altogether.

  • Reduce its rate to a rate deemed admissible by the edge device.

Note that admission control agents may return a DCLASS object upstream in response to RSVP signaling requests. This object informs upstream senders of the appropriate DSCP to be marked in packets transmitted on the corresponding flow (thereby overriding the default mapping). In a subsequent section we will discuss in further detail how end systems and/or upstream devices mark DSCPs based on the results of RSVP signaling.

RSVP signaling can also be used to enforce customer policies that determine which users and/or applications are entitled to use resources in the provider's network. This can be accomplished by configuring the customer's egress router to listen to RSVP signaling and to forward the policy objects contained in these messages (which identify the sending user and application) to a policy decision point.

Later in this whitepaper, we will discuss how Microsoft components can be used to provide the admission control functionality described in this section.

Dynamic SLAs and RSVP Signaling

In the previous section, we described the use of RSVP signaling to provide admission control to a diffserv network that provides static SLAs. In the near term, diffserv network providers are expected to be able to provide only static SLAs. This is because the existing QoS provisioning tools themselves are top-down and relatively static.

In the future, we can expect to see increasing demand for dynamic SLAs. Dynamic SLAs are preferable as they enable the provider to respond to changing resource demands from customers, thereby improving the quality/efficiency product of the diffserv network. This is particularly important when high quality guarantees are to be offered. However, dynamic SLAs require that the provider be able to re-provision the network core dynamically. Such re-provisioning is more complex than static provisioning. It also carries associated overhead and potential security problems. Nonetheless, these are not insurmountable problems and the potential reward in terms of improved quality/efficiency product is significant.

There are a number of mechanisms by which dynamic SLAs may be provided. Each of these requires a relatively dynamic QoS signaling protocol between the customer network and the provider network. The protocol must provide a means by which the customer can request changes in the SLA and must result in any necessary re-provisioning of the provider's network (or refusal of the request). An obvious choice for this protocol is RSVP.

Recall that hosts will typically generate per-conversation RSVP signaling when high quality guarantees are required. We've already seen how this signaling can be used to provide admission control against static SLAs. We can leverage RSVP signaling further to assist in actual re-provisioning of the diffserv network itself. We discuss methods for doing so in the following paragraphs. These methods enable providers to optimize their networks for specific tradeoffs between the quality/efficiency product of the networks and the overhead they are willing to incur.

Triggering Re-Provisioning Based on Per-Conversation Signaling
As in the case of static SLAs, the network administrator configures the ingress router at the edge of the diffserv network to listen to per-conversation RSVP signaling and configures the devices in the core of the network to ignore the per-conversation messages flowing through them. The ingress router tracks the cumulative resources requested from customers at each intserv service level. As these reach high or low water marks, the ingress router triggers re-provisioning in the diffserv core, as appropriate.
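The watermark-based triggering described above can be sketched as follows. The thresholds and the shape of the re-provisioning signal are illustrative assumptions:

```python
class IngressWatermarkTracker:
    """Ingress router tracking cumulative requested resources for one
    intserv service level, triggering core re-provisioning at watermarks."""

    def __init__(self, provisioned_kbps: int,
                 high_fraction: float = 0.8, low_fraction: float = 0.3):
        self.high = high_fraction * provisioned_kbps   # high water mark
        self.low = low_fraction * provisioned_kbps     # low water mark
        self.requested = 0

    def update(self, delta_kbps: int):
        """Record an admitted (positive) or torn-down (negative) request;
        return a re-provisioning trigger for the core, or None."""
        self.requested += delta_kbps
        if self.requested > self.high:
            return "provision-up"    # request more capacity in the core
        if self.requested < self.low:
            return "provision-down"  # release capacity back to the core
        return None
```

For instance, with 1000 kbps provisioned, cumulative requests climbing past 800 kbps would trigger an upward re-provisioning of the core, and requests falling below 300 kbps would trigger a release.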

Re-Provisioning the Core
Dynamic internal re-provisioning may be effected by various mechanisms. One such mechanism is via use of a bandwidth broker. The bandwidth broker is a hypothetical device, which has knowledge of the provider's network topology and current resource usage and is able to effect re-provisioning of the network to accommodate changes in resource requirements (or to refuse such changes). A more practical re-provisioning mechanism uses RSVP signaling internal to the diffserv network. The network administrator may configure strategic devices within the diffserv network to process either per-conversation or aggregate RSVP signaling. These devices in effect comprise a distributed bandwidth broker.

Note that, regardless of the use of per-flow or aggregate RSVP signaling for admission control and re-provisioning of the diffserv network, the actual traffic handling in a diffserv network is always aggregate, by definition.

Processing RSVP Signaling Messages in the Core
In processing RSVP signaling messages in the core, the network administrator is again faced with a variety of options. The lowest overhead option is to use edge devices that generate aggregate RSVP messages to re-provision major paths in the diffserv network, in response to changing demands from the periphery (signaled in the form of per-conversation or aggregate RSVP signaling messages). Devices at strategic locations within the diffserv network would process these messages. The network administrator can improve the quality/efficiency product of the diffserv network by enabling these devices more densely, or alternatively, can reduce the QoS overhead in the diffserv network by enabling these devices more sparsely.

If the network administrator is willing to incur the associated overhead, the administrator may choose to simply process per-conversation RSVP signaling in the core of the network (as opposed to aggregating it into aggregate signaling at the edges). Again, the administrator is faced with the choice of how densely or sparsely to enable these devices, selecting the appropriate tradeoff between quality/efficiency product and overhead.

Provisioning for High Quality Guarantees

As we have shown, to provide high quality guarantees in an efficient manner requires good knowledge of traffic patterns in a network and an awareness of the volume of traffic that will be arriving at each network device for each service level. Since diffserv networks tend to be large, and variance in traffic patterns can be relatively low, it is feasible to offer some medium-quality guarantees while incurring only low losses in efficiency (section 6.2). However, in order to offer high quality guarantees, it is necessary to strictly control the amount of traffic claiming high quality treatment that arrives at each location in the network.

One mechanism for doing this along specific routes in the network is to statically provision the capacities of high priority queues in various devices to accommodate high quality guarantees for a limited amount of traffic. In order to prevent rogue high priority marked traffic from claiming excessive resources along these routes (or other routes), it is necessary to strictly police the volume of traffic marked for high priority queues, throughout the network. Using this approach, it is possible to offer high quality guarantees at the network edges for a limited volume of traffic traversing a known route through the network. These guarantees are typically reflected in an SLA by specifying the egress point(s) of the traffic that can be accommodated at high service levels. (The customer should also expect to be policed based on these egress points.) This approach assumes that routes in the diffserv network can be reasonably well determined based on the traffic's ingress and egress points.

The mechanism discussed in the previous paragraph is consistent with the provisioning of static SLAs. A more dynamic mechanism for offering high quality guarantees is to respond to a customer's signaling requesting high quality guarantees. In this approach, the total capacity available in various devices for high quality guarantees is still statically provisioned, but is available to be shared among all customers in response to changing demand. By listening to (and responding to) per-conversation RSVP requests from customers (at least at strategic branch points), the provider can offer topology-aware admission control and high quality guarantees without predetermining the routes available to specific customers.

Emerging Diffserv Networks

For the near future however, we are unlikely to see extensive participation in per-conversation signaling by devices in diffserv networks. As a result, we are likely to see diffserv services offered as illustrated below:

In this diagram, we see a number of end customer networks, interconnected by transit networks. The customer networks can all communicate with each other using the basic best-effort service which exists today. Those that are interconnected by diffserv-enabled transit networks benefit from the low and medium quality QoS guarantees offered by these networks. Overlaid on top of the QoS enabled transit networks, we also see several provisioned QoS 'trunks' that offer high quality guarantees between a statically provisioned, limited set of endpoints (indicated by the heavy line). These form a sort of QoS VPN (virtual private network). Low and medium quality QoS guarantees will dominate the transit networks, with high quality QoS guarantees offered on specific routes, on a limited basis.

Switched Local Area Networks - 802

In this section, we'll discuss switched 802 networks. These are representative of many corporate or campus networks in which some number of hosts, ranging from the members of a small workgroup to an entire building or campus, are served by a number of interconnected switches. In larger campuses, switches may be grouped into subnetworks that are interconnected by layer 3 routers. We will focus initially on QoS mechanisms within the scope of a single switched subnetwork. Later, we will discuss QoS issues related to the interconnection of these subnetworks.

The discussions regarding the application of diffserv in large routed networks can be readily applied to many instances of switched networks. We observed that in large routed networks, small inefficiencies could result in significant quality gains due to the low variance of traffic patterns in the network. In switched networks, we can also accept some degree of inefficiency since local area switched resources tend to be quite inexpensive. It may also be true that switched networks support a large number of simultaneous users and that therefore the variance in traffic patterns is small. However, while this may be true near the core of certain very large switched networks, it is not true near the edges of these networks, where some relatively small number of hosts are attached to each switch. Nonetheless, for existing applications, the bandwidth available near the edges of switched networks tends to be significantly higher than the bandwidth demanded by the hosts, rendering efficiency of resource usage unimportant.

Given that efficiency is of secondary concern in these switched networks, we find that these networks can provide relatively high quality guarantees using relatively low-overhead QoS mechanisms. In particular, we find that aggregate traffic handling mechanisms tend to provide reasonable QoS on switched networks. To the extent that we wish to extract higher quality/efficiency products from these networks, we may combine the aggregate traffic handling mechanisms with some degree of signaling processing.

In its use of QoS mechanisms, the switched network is analogous to the large routed network. Whereas the large routed network uses diffserv as an aggregate form of traffic handling, the switched network uses 802.1p as its aggregate form of traffic handling. While the large routed network appoints some number of routers near its edge as a minimal set of admission control agents, the switched network typically uses some number of SBM-capable switches as its admission control agents. Since the 802 network is analogous to the diffserv network, many of the considerations and issues discussed in the context of the diffserv network apply to the 802 network. In the following sections we revisit some of those considerations and issues and note differences between the two network types.

802.1p Aggregate Traffic Handling

Modern LAN switches provide multiple forwarding queues on each interface. These effectively provide different per-hop behaviors. A particular forwarding queue is selected in each device by the 802.1p tag included in the MAC header of packets submitted to the switch. The 802.1p tag carries one of eight priority values, corresponding to one of eight possible service levels in the network. The scope of these tags is the 802 subnet in which they are generated. 802.1p tags are not carried across layer 3 devices such as routers, but instead are dropped at the edge of the 802 network. As such, they are not carried across the routed networks illustrated at the center of the sample network illustrated previously.
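Concretely, the 802.1p priority travels in the 16-bit Tag Control Information (TCI) field of the 802.1Q VLAN tag, alongside the VLAN ID. A small sketch of composing that field (the priority value chosen in the usage example is illustrative):

```python
def build_tci(priority: int, vlan_id: int, dei: int = 0) -> int:
    """Compose the 16-bit 802.1Q Tag Control Information field:
    3-bit priority (the 802.1p tag, one of eight values), one
    drop-eligible/CFI bit, and a 12-bit VLAN ID."""
    if not (0 <= priority <= 7):
        raise ValueError("802.1p priority is a 3-bit value (0-7)")
    if not (0 <= vlan_id < 4096):
        raise ValueError("VLAN ID is a 12-bit value")
    return (priority << 13) | ((dei & 1) << 12) | vlan_id

# Example: priority 5 on VLAN 100 yields TCI 0xA064.
tci = build_tci(5, 100)
```

Because the tag lives in the MAC header, it is meaningful only within the 802 subnet, which is why routers strip it at the layer 3 boundary as noted above.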

Marking 802.1p Tags

As is the case with DSCPs, 802.1p tags can be generated either by the host transmitting a packet or by routers or switches in the network through which packets are carried. In either case, the device generating the tag may select a tag based on top-down provisioned criteria or, alternatively, may do so based on participation in RSVP signaling (or both). In the top-down provisioning model, some device near the edge of the 802 cloud (host, switch or router) would be configured with classification criteria (by which packets would be identified as belonging to a certain flow) and the corresponding tag. This mechanism inherits the common problems associated with top-down provisioning, namely, that the quality/efficiency product of the network is limited. In the alternate model, hosts generate RSVP signaling describing the traffic they will be sending and its requirements from the network. Hosts or network devices then use the results of this signaling to determine how to tag packets on particular flows. This mechanism supports a greater quality/efficiency product.

Certain applications will not generate signaling. As a result, it is likely that some combination of top-down provisioned and signaling-based mechanisms will be used to effect packet marking. As has been discussed previously, this requires the network administrator to consider the 802 network resources to be divided into pools. The set of tags allowed by top-down provisioning should not claim resources from the same pool as those tags that are allowed as a result of signaling.

Using RSVP Signaling for Admission to the 802 Network

RSVP signaling may be used in various forms for admission to 802 networks. In the simplest case, RSVP signaling is not actually processed by any device within the layer 2 subnetwork. Rather, devices sending into the network apply admission control by admitting or rejecting RSVP requests up to a provisioned limit. This is analogous to the example presented in the first paragraph of section 6.2.6, in which routers at the edges of a diffserv network are provisioned with a static SLA and admit or reject RSVP requests up to the limits specified in the SLA.

From a practical viewpoint, this approach is not really suitable for 802 networks. The primary reason is that 802 networks tend to be less formally provisioned than diffserv clouds (in part because bandwidth tends to be cheaper in the local area than in the wide area). The diffserv model presented assumes that the static SLA provisioned at ingress points to the diffserv network is reasonably reliable. The diffserv provider has incentive to carefully provision the network and to provide reliable SLAs because money changes hands based on the reliability of these SLAs. In addition, ingress and egress points to the diffserv network tend to be limited in number and carefully controlled. In layer 2 networks, the addition of ingress points is trivial and tends to happen more frequently than in a routed network. These concerns are particularly applicable in the common case of 802 networks that support large numbers of directly attached hosts. In this case, an SLA would be implicit for each host capable of transmitting into the 802 cloud.

The Role of the SBM in Providing Admission Control to 802 Networks

The SBM is a device capable of participating in an extended form of RSVP signaling that is suitable for shared networks. The SBM protocol can be enabled on devices in the 802 network at various densities, considering the same tradeoffs that result from enabling RSVP admission control agents in a diffserv network at various densities. At the lowest density, the network administrator may choose to enable a single switch in the core of the layer 2 network to act as the admission control agent for the entire layer 2 network. In this case, this device is the designated SBM (DSBM). At the other extreme, the network administrator may choose to enable every switch in the 802 network to act as an admission control agent. In this case, the DSBM election protocol will result in the division of the 802 network into a number of managed segments, each managed by a DSBM. The denser the distribution of DSBMs, the higher the overhead associated with processing signaling messages, and the higher the quality/efficiency product that can be expected from the 802 subnetwork. The sparser the distribution, the lower the overhead and the lower the quality/efficiency product.

Mapping Intserv Requests to 802 Aggregate Service Levels

Admission control to the 802 network in response to signaling requests relies on a mapping of requested intserv service levels to the appropriate 802.1p tag. As is the case with mapping intserv to diffserv, a simple default mapping is assumed. DSBMs are able to override this default by appending a TCLASS object to RSVP RESV messages flowing through the DSBM en route upstream. The TCLASS object is analogous to the DCLASS object described in section 6.2.6 and informs upstream devices of the 802.1p tag which should be used to mark packets sent on the admitted flow.
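The default mapping and its TCLASS override can be sketched as follows. The specific priority values in the table are illustrative assumptions; deployments may use a different default:

```python
# Hypothetical default mapping from intserv service levels to
# 802.1p user-priority values (one of eight possible tags).
DEFAULT_INTSERV_TO_8021P = {
    "guaranteed": 5,        # latency-sensitive traffic
    "controlled-load": 3,   # better than best-effort
    "best-effort": 0,
}

def tag_for_flow(intserv_service: str, tclass_override=None) -> int:
    """Return the 802.1p tag to mark on packets of an admitted flow.
    A TCLASS object returned by the DSBM overrides the default mapping."""
    if tclass_override is not None:
        return tclass_override
    return DEFAULT_INTSERV_TO_8021P[intserv_service]
```

For example, a guaranteed-service flow would default to priority 5, unless the DSBM appended a TCLASS object directing the sender to use some other tag.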

Beyond Aggregate Admission Control

Because SBMs are able to insert themselves in the RSVP control path it is possible for layer 2 devices to provide QoS functionality beyond the aggregate traffic handling and admission control described. SBMs can actually install aggregate or per-flow policers and finer-grain traffic handling, in response to RSVP signaling, thereby offering increased quality/efficiency product from the 802 subnetwork. However, because the incentive to achieve optimal efficiency in these networks is not high, it is unlikely that network administrators will choose to incur the associated overhead.

Behavior Expected When Sending onto 802 Shared Subnets

When an 802 subnet is managed by one or more DSBMs, the existence of the DSBM is advertised by the periodic transmission of I_AM_DSBM messages. Senders on shared subnets are expected to detect the presence of a DSBM by listening for these messages. When an SBM is detected, senders are expected to divert RSVP signaling messages to the DSBM, rather than to the next layer 3 hop to which the message would otherwise be directed. This is required in order for the DSBM to be able to manage resources on the shared subnet. This functionality is referred to as SBM Client functionality. In addition, senders are expected not to tag packets for 802.1p prioritization unless such tagging has been approved in response to signaling (see section 8.3).

The restrictions described so far prevent hosts from marking traffic without policy approval, but impose no restrictions on the transmission of unmarked (best-effort) traffic. So long as devices in the network are capable of traffic isolation (by the use of dedicated switch ports and separation by tag or mark), there is no need to prevent senders from sending best-effort traffic. However, under certain conditions, network administrators may wish to limit any traffic sent by the host without network approval. To this end, DSBMs may be configured to advertise a NonResvSendLimit on the managed subnet. This value specifies the maximum rate at which hosts may send in the absence of an approved reservation. See section
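Enforcement of a NonResvSendLimit on a host's unreserved traffic amounts to simple token-bucket policing. A minimal sketch follows; the rate and bucket values used are illustrative, not drawn from any specification.

```python
# Sketch: limiting unreserved (best-effort) traffic to an advertised
# NonResvSendLimit with a token bucket. Parameters are hypothetical.

class NonResvLimiter:
    def __init__(self, rate_bps, bucket_bytes):
        self.rate = rate_bps / 8.0      # refill rate in bytes per second
        self.depth = bucket_bytes       # maximum burst, in bytes
        self.tokens = bucket_bytes
        self.last = 0.0

    def allow(self, pkt_bytes, now):
        """True if an unreserved packet conforms to the send limit."""
        self.tokens = min(self.depth,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True
        return False
```

A host honoring the advertised limit would hold (or send as excess) any packet for which allow() returns False.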

In order to maintain control of network resources, it is required that all senders sending onto a shared subnet implement full SBM client functionality. Senders not implementing this functionality should be isolated on separate subnets.

ATM Networks

ATM technology can be considered in the context of several types of subnetworks. For example, many providers offer large ATM-based networks. In addition, ATM may be used as a campus backbone technology. The first example corresponds to the large routed networks illustrated at the center of the sample network. The second corresponds to the smaller ATM network illustrated in the customer domain at the lower-left corner of the sample network. When considered in the context of large provider networks, it is unlikely that ATM will be exposed directly to the customer as the QoS interface to the provider's network. It is more likely that ATM will be used to provision the provider's network such that it is able to present a more abstract QoS interface, such as diffserv. One reason for this is that the scalability issues that apply to supporting per-conversation traffic handling in the form of per-conversation RSVP apply equally to ATM. Large providers will not want to track per-conversation ATM VCs on behalf of customers. Instead, they are likely to provide VCs or VPs on a per-customer, per-aggregate service level basis.

ATM Per-Conversation or Aggregate Traffic Handling

In large provider networks, ATM VCs or VPs will likely be used as an aggregate traffic handling mechanism. Greater flexibility is possible when considering the use of ATM to provide QoS in smaller campus backbone type environments, where scalability is less of a concern. In these environments, the network administrator may map per-conversation intserv service requests to individual VCs. This approach is the current best practice recommended by the ISSLL working group of the IETF. It applies to switched VC environments, including LANE (ATM LAN emulation). Alternatively, the network administrator may choose to provision VCs or virtual paths (VPs) to carry multiple conversations requiring the same service level, thereby providing aggregate traffic handling.

ATM Edge Devices

ATM edge devices may provide varying degrees of QoS support. Regardless of the specific mechanism used, the edge device must address the fundamental problem of determining which traffic should be directed to which VC/VP. Several options are described below.

Dedicated Per-Conversation VCs
This mode of operation offers the highest quality/efficiency product from the ATM network but carries a cost in overhead. In this mode, it is necessary for an ATM edge device to initiate user network interface (UNI) signaling to establish a VC with the appropriate QoS parameters for each conversation. Although this could be done implicitly, based on the arrival of packets corresponding to new conversations and a marked DSCP and/or 802.1p tag, there would be little point in doing so4. If per-conversation VCs are to be established, then the edge device should do so in response to explicit RSVP signaling. In this case, the edge device would have to appear as a layer 3 RSVP-aware hop or, alternatively, as a DSBM. If the edge device separates one IP subnet from another, it should behave as a layer 3 RSVP-aware routing hop. In the case of a mixed layer 2 subnet (in which both ATM and non-ATM segments exist in the same IP subnet), the edge device would intercept RSVP messages in its capacity as DSBM.

In either case, VCs are established in response to RSVP signaling. A mapping from intserv service type and intserv quantifiable parameters to ATM service types and quantifiable parameters is defined by the ISSLL working group of the IETF. In this example, admission control at the RSVP level simply reflects the results of lower level UNI signaling.
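A rough sketch of such a mapping follows, in the spirit of the ISSLL recommendations (guaranteed service to CBR, controlled-load to nrt-VBR). The details of real UNI traffic descriptors are elided, and the alternate service-type choices noted in the comments are equally valid.

```python
# Illustrative intserv-to-ATM service type mapping for per-conversation
# VCs. These are common choices, not the only valid mappings.

INTSERV_TO_ATM = {
    "guaranteed": "CBR",           # rt-VBR is also a common choice
    "controlled-load": "nrt-VBR",  # ABR with a minimum cell rate also fits
    "best-effort": "UBR",
}

def vc_request(intserv_service, rate_bps):
    """Build a highly simplified UNI signaling request for a new VC.

    Real UNI signaling carries full ATM traffic descriptors; here only
    a peak cell rate derived from the intserv rate is sketched, assuming
    53-byte cells carrying 48 bytes of payload each.
    """
    pcr_cells = (rate_bps / 8.0) / 48.0  # payload bytes/s -> cells/s
    return {"service": INTSERV_TO_ATM[intserv_service],
            "peak_cell_rate": pcr_cells}
```

Admission control at the RSVP level would then simply report whether the corresponding UNI request succeeded.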

Aggregate Per-Service Level VCs
This mode of operation offers a lower quality/efficiency product, but at significantly reduced overhead. Aggregate traffic handling in an ATM subnetwork is similar but not equivalent to aggregate traffic handling in a diffserv or an 802.1p subnetwork. Diffserv and 802.1p subnetworks offer aggregate traffic handling in the form of disjoint PHBs (or priority queues) that are invoked by the arrival of a packet with the appropriate mark or tag. ATM subnetworks, on the other hand, offer aggregate traffic handling by establishing a VC of the appropriate ATM service type. In an ATM subnetwork, it is necessary to determine when to establish VCs, between which endpoints to establish them, and for how much capacity. This is similar to the diffserv network-provisioning problem discussed in section 6.2.4, but somewhat more complicated, because VCs must be established between specific pairs of endpoints whereas diffserv PHBs are provisioned at individual nodes.

One approach is to establish a mesh of permanent virtual circuits (PVCs) at network provisioning time. The PVC mesh can then be used to provide the equivalent of SLAs at the edges of the ATM network. Edge devices admit RSVP requests subject to these SLAs. Another alternative is to allow aggregate VCs to be established and torn down based on demand. Either approach can be applied to signaled flows as well as to non-signaled flows. In the case of signaled flows, this mode is similar to the mode of operation described earlier. In the case of non-signaled flows, VCs would be established on demand (as gauged by the number of packets submitted for a specific service level to a specific destination). In the first case, packets are routed to a VC based on the intserv service type requested in the signaling messages for the associated flow. In the second case, packets are routed to a VC based on a mapping from DSCP, 802.1p or pre-provisioned classification criteria5.
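The demand-driven alternative can be sketched as follows. The trigger threshold and all names are illustrative assumptions; a real edge device would also tear down idle VCs, which is omitted here.

```python
# Sketch of demand-driven aggregate VC management: a VC toward a given
# (destination, service level) pair is established once enough packets
# for that pair have been seen. The trigger threshold is arbitrary.

VC_TRIGGER_PACKETS = 10

class AggregateVcManager:
    def __init__(self):
        self.counts = {}  # (dest, service) -> packets seen so far
        self.vcs = set()  # established aggregate VCs

    def on_packet(self, dest, service_level):
        """Return the VC key to use, or None while still unestablished."""
        key = (dest, service_level)
        if key in self.vcs:
            return key
        self.counts[key] = self.counts.get(key, 0) + 1
        if self.counts[key] >= VC_TRIGGER_PACKETS:
            self.vcs.add(key)  # stand-in for UNI signaling to set up the VC
            return key
        return None
```

Until the VC exists, packets would be carried on a default (for example, UBR) path; afterwards, all traffic for that destination and service level shares the aggregate VC.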

Small Routed Networks

Our sample network illustrates a small routed network in the top-right customer network. Small routed networks can be operated as diffserv provider networks (in which case, many of the considerations discussed in the context of large diffserv provider networks apply). However, these networks may also be operated as per-conversation RSVP/intserv networks. Since these networks are smaller than the large provider networks discussed in the context of diffserv, the tradeoffs are somewhat different. Specifically, the number of conversations tends to be smaller, reducing the concerns regarding QoS overhead. In addition, the gain of over-provisioning may not be as high as it is in the large provider networks, due to the increased variance in resource usage. Therefore, efficiency might be more of a concern in these networks, arguing for support of a signaled QoS approach.

Hybrid of Signaled Per-Conversation and Aggregate QoS

A signaling-only approach precludes QoS for traffic generated by non-signaling applications. Therefore, such routed networks are likely to be operated using both signaled and provisioned QoS, just as the larger provider networks are. In the smaller networks, we are likely to see devices enabled to process RSVP signaling in greater densities than in the provider networks. In addition, these devices will be configured to provide per-conversation traffic handling (based on the signaled 5-tuple) as well as aggregate traffic handling (based on DSCP). Routers that are not enabled to process RSVP signaling will behave just as the routers in the core of the diffserv network, handling traffic based on DSCP exclusively. Thus, just as resource pools in the large networks are separated between signaled and non-signaled traffic (by separation of DSCPs), they will be separated in the smaller routed networks. Such hybrid functionality poses some interesting administration challenges and router functional requirements.

Required Router Functionality
In these hybrid networks, routers that are signaling-enabled are required to identify traffic that should be treated on a per-conversation basis as well as traffic that should be treated on an aggregate basis. These routers will classify arriving packets in a hierarchical manner. First, packets that match a signaled 5-tuple will be directed to the corresponding per-conversation traffic handling mechanism. Traffic that does not match a signaled 5-tuple will be treated according to the DSCP marked in the submitted packet, re-marked based on configured classification criteria, or treated as best-effort. How this traffic is treated at different routers in the small routed network depends largely on the location of the device relative to trust boundaries and on the capabilities of hosts in the network.
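The hierarchical classification just described might be sketched as follows; the names, queue labels, and DSCP value are illustrative.

```python
# Sketch of hierarchical classification in a hybrid router: an exact
# match on a signaled 5-tuple wins; otherwise the packet falls through
# to DSCP-based aggregate handling, and finally to best-effort.

BEST_EFFORT = "best-effort"

class HybridClassifier:
    def __init__(self):
        self.reservations = {}   # 5-tuple -> per-conversation queue
        self.dscp_handlers = {}  # DSCP -> aggregate behavior

    def add_reservation(self, five_tuple, queue):
        """Install per-conversation handling for a signaled flow."""
        self.reservations[five_tuple] = queue

    def classify(self, five_tuple, dscp):
        # 1. Per-conversation treatment for signaled flows.
        if five_tuple in self.reservations:
            return self.reservations[five_tuple]
        # 2. Aggregate treatment based on the marked DSCP.
        if dscp in self.dscp_handlers:
            return self.dscp_handlers[dscp]
        # 3. Default treatment.
        return BEST_EFFORT
```

Re-marking at trust boundaries (not shown) would amount to rewriting the DSCP before step 2 is applied.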

Trust Boundaries
In smaller networks operated on behalf of a single administrative domain, trust boundaries tend to be less distinct than in the larger provider networks. In the larger networks, all customers submit traffic at well-defined ingress points, subject to SLAs. This is where money changes hands. Ingress devices to provider networks are either configured to re-mark all traffic based on provisioned classification information, or to trust marked traffic but police it to the per-service-level aggregate limits negotiated in the SLA. In the smaller networks, under a single administrative domain, real money does not change hands within the network and policies tend to be more trusting. For example, routers in the engineering department may trust DSCPs marked in all submitted packets. Routers in the marketing department may do the same. Only routers at which traffic from multiple departments merges would enforce a version of an internal SLA. Enforcement of the SLA would apply to traffic handled in aggregate. Traffic handled based on per-conversation reservations would be policed based on signaled per-conversation parameters.

Host Capabilities
Routers in the small routed network can be used to separate hosts of varying capabilities. (Note that similar considerations apply to smart switches in 802 LANs supporting 802.1p.) As QoS functionality is rolled out, we can expect to see networks supporting hosts that:

  1. Provide no QoS functionality

  2. Mark DSCPs without signaling

  3. Signal and mark DSCPs based on the results of signaling

If hosts with varying levels of capability are all supported by the same router, then that router must use fairly complex classification policies to recognize traffic sourced by the different types of hosts and to apply the appropriate marking and policing. Specifically, traffic for which signaling requests were generated should be policed based on the 5-tuple (unless the router is configured for aggregate traffic handling, in which case traffic should be policed based on DSCP). Traffic from hosts trusted to mark their own DSCP should be verified. Traffic from these hosts must be separated from traffic originating from hosts that are not trusted or not capable of marking their own DSCP.

Router marking and policing requirements can be simplified by separating different sets of hosts behind different routers (or switches with similar capabilities). In such a scenario, hosts providing no QoS functionality would be isolated behind routers that are configured to mark DSCPs on their behalf. QoS capable hosts would be placed behind routers that trust but verify marked DSCPs or respond to signaling requests.

Small Office and Home Networks

In the sample network diagram, we showed a couple of subnetworks as hosts, connected to the large provider network via a slow dial-up link. These can be considered to be small office or home PCs or networks (SOHO networks) connected to their ISP via a 56 Kbps modem link. From the perspective of the large provider network, the SOHO network is just another customer network, albeit a very small one. As such, much of the previous discussion regarding boundary functionality between providers and customers applies here. Beyond this however, the interface between the provider and the customer may be unique in that it may be a slow interface.

Aggregate Traffic Handling

For the foreseeable future, providers are unlikely to support signaling from SOHO network customers. Instead, they are likely to provide QoS by negotiating static SLAs with these customers, which will allow them to submit traffic marked for two or more aggregate service levels. Hosts in the customer networks may still generate RSVP signaling and may mark packets based on the results of this signaling. However, the provider will be unlikely to participate in the signaling process.

ISSLOW
Slow links present problems when they are required to carry both interactive audio traffic and data traffic. For example, a 1500-byte data packet submitted to a 28.8 Kbps link will occupy the link for almost half a second. Any audio packets that need to be sent after a data packet has been submitted to the link are subjected to severe latencies. ISSLOW functionality on a transmitting interface fragments the larger data packets, allowing audio packets to be interspersed, thereby largely eliminating the latency problem. This is particularly useful for e-commerce applications in which a customer may be, for example, perusing catalog images over the web while speaking with a sales representative. It is also useful in peer-to-peer video-conferencing scenarios. To be useful, ISSLOW must be supported at least on the provider's sending interface and ideally on the customer's as well. ISSLOW can be invoked in response to RSVP signaling from the customer, or based on heuristics. An example of such a heuristic would be the detection of a conversation that carries audio-size packets (28 - 128 bytes) at typical audio rates (6 Kbps - 64 Kbps). Detection of such a conversation would cause other traffic to be fragmented.
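The latency figures above can be checked with a little arithmetic; the link rate and fragment size below are illustrative choices, not values mandated by ISSLOW.

```python
# Worked example: serialization delay of a data packet on a slow link,
# with and without ISSLOW-style fragmentation.

def serialization_delay_ms(num_bytes, link_bps):
    """Time to clock num_bytes onto a link of link_bps, in milliseconds."""
    return num_bytes * 8 * 1000.0 / link_bps

# A full 1500-byte data packet on a 28.8 Kbps modem link:
full = serialization_delay_ms(1500, 28800)   # ~417 ms

# With the data packet fragmented into (say) 128-byte pieces, an audio
# packet waits at most one fragment time before it can be interleaved:
frag = serialization_delay_ms(128, 28800)    # ~36 ms
```

The worst-case wait for an audio packet thus drops from hundreds of milliseconds to a few tens of milliseconds, at the cost of the per-fragment framing overhead.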

ISSLOW fragmentation is based on the relatively common PPP multilink protocol. Because it fragments at the link layer, it imposes relatively low overhead.

1 When lower quality guarantees are expected, then the constraints can be relaxed accordingly.
2 In a sense, even static SLAs make use of a signaling protocol between customer and provider. In this case, the protocol consists of periodic change order requests (typically in the form of a phone call) from customer to provider to modify parameters of the SLA. The management burden associated with these requests may be significant, especially if services such as provider marking are involved. These requests may be followed by a lengthy period of negotiation and internal re-provisioning before the modified SLA terms are actually available to the customer.
3 Per-hop behavior is a term borrowed from diffserv and should be used carefully when applied to switches. Commonly, the different queues in 802.1p switches are related by strict priority. However, other behaviors may be implemented.
4 Presumably, the goal of per-conversation VCs (as opposed to aggregate VCs) is good traffic isolation based on the resource requirements of each flow. However, in this case, the requirements of the flow are only indicated via an aggregate service level (in the form of DSCP or 802.1p tag). Therefore, the edge device would not know the appropriate parameters to use in establishing a dedicated VC.
5 Note that mappings from DSCP (or 802.1p) to ATM service type are implied by the existence of mappings from intserv service types to each of these. In other words, we assume that intserv is a unifying abstraction for service types. Thus, any layer two medium-specific set of services should have a corresponding mapping from intserv services. This mapping can then be used to deduce mappings from one layer two medium to another. Thus, if there exist N interesting media and associated sets of services, only N mappings are required, rather than N squared mappings.