A Short Overview of QoS Mechanisms and Their Interoperation
On This Page
2 The Benefits of QoS
3 How QoS Works
4 QoS Technologies
5 Qualities of Guarantees and the Quality/Efficiency Product
6 Policy and Policy Servers
During the past several years, numerous mechanisms have surfaced for providing quality of service (QoS) networks. The ultimate goal of these mechanisms is to provide improved network "service" to the applications at the edges of the network. In this white paper, we briefly discuss the benefits of QoS in general. We then discuss available QoS mechanisms and how they interoperate. Part 2 of this document, Microsoft's QoS Components, discusses Microsoft-specific QoS mechanisms and is available separately.
2 The Benefits of QoS
Recent years have witnessed rapid growth in computer network traffic. Network administrators scramble to keep pace with increasing demand by continually adding capacity, yet network customers are often dissatisfied with the network's performance. The growing popularity of a new breed of resource-hungry multimedia applications promises to exacerbate this condition. QoS mechanisms give the network administrator a set of tools for managing the use of network resources in a controlled and efficient manner, improving service to mission-critical applications and users while stemming the rate at which capacity must be increased. In other words, QoS can help improve service to network users while reducing the cost of providing that service.
2.1 Specific Examples
The following paragraphs describe several specific examples of benefits that can be expected as a result of QoS deployment.
Improved Performance of Mission Critical Applications Over WAN Links
Applications such as SAP and PeopleSoft are often used to provide mission-critical services across wide area intranets. WAN links are particularly susceptible to congestion, and the resulting sluggish application responses or session timeouts can be costly. QoS enables the network administrator to prioritize mission-critical traffic so that it is insulated from congestion on WAN links, at minimal cost to less important, competing applications. The QoS remedy is analogous to providing commuter lanes on busy highways: mission-critical traffic is directed to these "lanes."
Controlling the Impact of Multimedia Traffic on the Network
Multimedia streaming applications such as Windows Media™ Technologies, NetMeeting® conferencing software, RealAudio, and applications based on TAPI 3.0 are gaining popularity among network users. These generate large volumes of UDP traffic. This traffic is not network friendly in the sense that it does not "back-off" in the face of congestion. Because of the potential impact of this type of traffic on network resources, network administrators are prohibiting or limiting the deployment of multimedia applications on their networks. QoS mechanisms enable the network administrator to control the impact of these applications on the network.
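One common way to bound the impact of non-adaptive UDP traffic is to police it with a token bucket: packets are forwarded only while tokens (accumulated at a configured rate, up to a burst limit) are available. The sketch below is illustrative only; the class name and the rate and burst parameters are not taken from any particular product.

```python
import time

class TokenBucket:
    """Minimal token-bucket policer: admits packets while tokens last.

    rate_bps and burst_bytes are illustrative parameters.
    """
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0          # refill rate in bytes per second
        self.capacity = burst_bytes         # maximum bucket depth
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def admit(self, packet_len):
        # Refill tokens for the time elapsed since the last packet.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_len:
            self.tokens -= packet_len
            return True                     # forward the packet
        return False                        # drop (or remark) the packet

# A 1 Mbit/s policer with an 8 KB burst allowance:
policer = TokenBucket(rate_bps=1_000_000, burst_bytes=8192)
```

Traffic that stays within the configured rate passes unhindered; a burst that exhausts the bucket is dropped or remarked, protecting competing traffic on the link.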
In the previous example, we discussed the use of QoS to control the impact of streaming media applications on network resources without concern for the service actually provided to the multimedia application. QoS can be applied to actually guarantee specific service quality to certain streaming media applications. QoS in this case enables the true convergence of multimedia and data networks. The benefits of such convergence include (among others) useable IP telephony with the commensurate cost savings.
3 How QoS Works
Applications generate traffic at varying rates and generally require that the network be able to carry traffic at the rate at which they generate it. In addition, applications are more or less tolerant of traffic delays in the network and of variation in traffic delay. Certain applications can tolerate some degree of traffic loss, while others cannot. If infinite network resources were available, then all application traffic could be carried at the application's required rate, with zero latency, and zero packet loss. However, network resources are not infinite. As a result, there are parts of the network in which resources are unable to meet demand.
Networks are built by concatenating network devices such as switches and routers. These forward traffic among themselves using interfaces. If the rate at which traffic arrives at an interface exceeds the rate at which that interface can forward traffic to the next device, then congestion occurs. Thus, the capacity of an interface to forward traffic is a fundamental network resource. QoS mechanisms work by allotting this resource preferentially to certain traffic over other traffic.
In order to do so, it is first necessary to identify different traffic. Traffic arriving at network devices is separated into distinct flows via the process of packet classification. Traffic from each flow is then directed to a corresponding queue on the forwarding interface. Queues on each interface are serviced according to some algorithm. The queue-servicing algorithm determines the rate at which traffic from each queue is forwarded, thereby determining the resources that are allotted to each queue and to the corresponding flows. Thus, in order to provide network QoS, it is necessary to provision or configure the following in network devices:
Classification information by which devices separate traffic into flows.
Queues and queue servicing algorithms that handle traffic from the separate flows.
We will refer to these jointly as traffic handling mechanisms. In isolation, traffic handling mechanisms are not useful. They must be provisioned or configured across many devices in a coordinated manner that provides useful end-to-end services across a network. To provide useful services, therefore, requires both traffic handling mechanisms and provisioning and configuration mechanisms.
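The two traffic handling mechanisms just described, classification and queue servicing, can be sketched together in a few lines. This is a simplified illustration, not any device's actual implementation; the flow names, the classification rule (UDP to port 5060), and the weights are assumptions chosen for the example.

```python
from collections import deque

# Queues on a forwarding interface, and weights for a simple
# weighted round-robin servicing algorithm: the "priority" queue is
# allotted three packets per round to "best_effort"'s one.
QUEUES = {"priority": deque(), "best_effort": deque()}
WEIGHTS = {"priority": 3, "best_effort": 1}

def classify(packet):
    # A trivial classification rule, assumed for illustration:
    # UDP traffic to port 5060 is treated as priority traffic.
    if packet["proto"] == "udp" and packet["dst_port"] == 5060:
        return "priority"
    return "best_effort"

def enqueue(packet):
    # Direct each arriving packet to the queue for its flow.
    QUEUES[classify(packet)].append(packet)

def service_one_round():
    """One round of weighted round-robin over the interface's queues."""
    forwarded = []
    for name, weight in WEIGHTS.items():
        for _ in range(weight):
            if QUEUES[name]:
                forwarded.append(QUEUES[name].popleft())
    return forwarded
```

The servicing weights determine the share of the interface's forwarding capacity allotted to each queue, and thereby to the flows classified into it.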
4 QoS Technologies
In the following sections, we review the important traffic handling mechanisms and the important provisioning and configuration mechanisms that are used to provide QoS.
4.1 Traffic Handling Mechanisms
Many traffic handling mechanisms are available. In this section, we focus on several key mechanisms, including differentiated services (diffserv), 802.1p, integrated services (intserv), ATM, and ISSLOW. Note that traffic handling mechanisms can be categorized as per-conversation mechanisms or aggregate mechanisms. Per-conversation mechanisms treat each traffic flow of each conversation in isolation. Aggregate mechanisms group many traffic flows into a single aggregate class. The distinction is analogous to the handling of airline passengers. Typically, passengers are "marked" as first class, business class or coach class. All passengers of the same class are handled together. This is aggregate handling. Per-conversation handling is analogous to providing a dedicated airplane to each passenger — luxurious but expensive.
4.1.1 Differentiated Services (Diffserv)
Diffserv is an aggregate traffic handling mechanism suitable for use in large routed networks. These networks may carry many thousands of conversations, so it is not practical to handle traffic on a per-conversation basis. Diffserv defines a field in packets' IP headers, called the diffserv codepoint (DSCP)1. Hosts or routers sending traffic into a diffserv network mark each transmitted packet with a DSCP value. Routers within the diffserv network use the DSCP to classify packets and apply specific queuing behavior based on the results of the classification. Traffic from many flows having similar QoS requirements is marked with the same DSCP, thus aggregating the flows to a common queue or scheduling behavior.
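Since the DSCP is the six high-order bits of the octet formerly used for TOS and IP precedence, marking and classifying reduce to simple bit operations. The sketch below assumes the Expedited Forwarding codepoint (46) merely as an example value.

```python
def mark_dscp(tos_byte: int, dscp: int) -> int:
    """Write a 6-bit DSCP into the high-order bits of the former TOS octet."""
    assert 0 <= dscp < 64              # the DSCP is six bits wide
    return (dscp << 2) | (tos_byte & 0x03)   # preserve the two low-order bits

def read_dscp(tos_byte: int) -> int:
    """Recover the DSCP a diffserv router would classify on."""
    return tos_byte >> 2

# Example: mark a packet with Expedited Forwarding (codepoint 46).
tos = mark_dscp(0, 46)
```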
4.1.2 802.1p
802.1p is an aggregate traffic handling mechanism suitable for use in many local area networks (LANs). It defines a field in the media access control (MAC) header of Ethernet packets, which can carry one of eight priority values. Hosts or routers sending traffic into a LAN mark each transmitted packet with the appropriate priority value. LAN devices, such as switches, bridges, and hubs, are expected to treat the packets accordingly. The scope of the 802.1p priority mark is limited to the LAN.
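The 802.1p priority travels in the three high-order bits of the 16-bit Tag Control Information (TCI) field of the 802.1Q tag, alongside one flag bit and a 12-bit VLAN ID. The sketch below builds and parses a TCI value; the priority and VLAN numbers are illustrative.

```python
def build_tci(priority: int, flag: int, vlan_id: int) -> int:
    """Pack an 802.1Q Tag Control Information value:
    3 bits of priority, 1 flag bit, 12 bits of VLAN ID."""
    assert 0 <= priority < 8 and flag in (0, 1) and 0 <= vlan_id < 4096
    return (priority << 13) | (flag << 12) | vlan_id

def read_priority(tci: int) -> int:
    """Recover the 802.1p priority a switch would act on."""
    return tci >> 13

# Example: priority 5 (e.g. for video traffic) on VLAN 100.
tci = build_tci(priority=5, flag=0, vlan_id=100)
```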
4.1.3 Integrated Services (Intserv)
Intserv is a framework for defining services. As such, it implies a set of underlying traffic handling mechanisms. Intserv services are typically considered to be applied on a per-conversation basis. Intserv is typically, but not necessarily, associated with the RSVP signaling protocol (discussed subsequently under provisioning and configuration mechanisms).
4.1.4 ATM, ISSLOW and Others
ATM is a link layer technology that offers high quality traffic handling. ATM fragments packets into link layer cells, which are then queued and serviced using queue-servicing algorithms appropriate for one of several ATM services.
ISSLOW is a technique for fragmenting IP packets as they are transmitted over relatively slow speed links such as dial-up modems. When audio and data are mixed over these links, audio latencies may be significant, impacting the usability of the application. ISSLOW can be used to reduce audio latencies in these applications.
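The latency benefit is easy to quantify: a voice packet queued behind a large data packet must wait for that packet's full serialization time. The figures below assume a 28.8 kbit/s modem link and a 128-byte fragment size, both chosen only for illustration.

```python
LINK_BPS = 28_800          # assumed modem link speed, bits per second

def serialization_ms(nbytes: int) -> float:
    """Time to clock nbytes onto the link, in milliseconds."""
    return nbytes * 8 / LINK_BPS * 1000

# Without fragmentation, a 1500-byte data packet monopolizes the
# link for roughly 417 ms -- far too long for interactive audio.
delay_unfragmented = serialization_ms(1500)

# With ISSLOW-style fragmentation capping fragments at 128 bytes,
# a voice packet waits at most one fragment time, roughly 36 ms.
delay_fragmented = serialization_ms(128)
```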
Other traffic handling mechanisms have been defined for various media, including cable modems, hybrid fiber coax (HFC) plants, P1394, and so on. These may use low level, link-layer specific signaling mechanisms (ATM, for example, uses UNI signaling).
4.2 Provisioning and Configuration Mechanisms
To be effective in providing network QoS, the traffic handling mechanisms described above must be provisioned and configured consistently across multiple network devices. Provisioning and configuration mechanisms can be categorized as top-down or signaled.
4.2.1 Top-Down Provisioning
In top-down provisioning, a network management system is used to "push" traffic handling configuration to a set of network devices. Typically, queuing mechanisms are configured on device interfaces. Then classification criteria are configured to determine which packets are directed to different queues in the device. Classification criteria may classify packets based on the IP 5-tuple (source and destination IP addresses and ports and the IP protocol) or the DSCP and 802.1p aggregate "marks" in packet headers. Masked 5-tuples may be used. Classification criteria may specify only a subset of the IP 5-tuple, such as "all packets having a source IP address of 2.2.2.X," where "X" may be any value. If DSCP or 802.1p are specified as classification criteria, then it is necessary to "mark" the DSCP or 802.1p marks in packets somewhere upstream of the classifying device. This may be done by hosts or by network devices close to the edge of the network. In the latter case, the marking network devices would be configured to mark based on classification criteria of their own, typically, the 5-tuple (or some subset of it).
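A masked 5-tuple rule like "all packets having a source IP address of 2.2.2.X" can be sketched as a classification table in which unconstrained fields are left as wildcards. The rule table, queue names, and packet fields below are assumptions for illustration.

```python
import ipaddress

# Classification rules: (src net, dst net, src port, dst port, proto) -> queue.
# None means "any value" -- only the source network is constrained here,
# matching "all packets having a source IP address of 2.2.2.X".
RULES = [
    ((ipaddress.ip_network("2.2.2.0/24"), None, None, None, None), "priority"),
]

def classify_masked(packet):
    for (src_net, dst_net, sport, dport, proto), queue in RULES:
        if src_net is not None and ipaddress.ip_address(packet["src"]) not in src_net:
            continue
        if dst_net is not None and ipaddress.ip_address(packet["dst"]) not in dst_net:
            continue
        if sport is not None and packet["sport"] != sport:
            continue
        if dport is not None and packet["dport"] != dport:
            continue
        if proto is not None and packet["proto"] != proto:
            continue
        return queue
    return "best_effort"     # no rule matched
```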
4.2.1.1 Challenges in Top-Down Provisioning
Determining the appropriate classification criteria to use can be quite challenging. Network administrators would like to use QoS to assign resources to the traffic of certain applications or users, rather than to fields in packet headers such as IP addresses and ports. Top-down provisioning systems attempt to assist the network administrator by creating bindings between applications and IP ports and between users and IP addresses. Unfortunately, these bindings are often unreliable. Applications may use transient ports or may source multiple traffic flows (requiring differing QoS) on the same port. Users' IP addresses may change as a result of DHCP. Multi-user machines may use the same IP address for multiple users. IPSec encryption may encrypt IP ports, rendering them useless as classification criteria.
An additional challenge in top-down provisioning is the anticipation of traffic volumes at various nodes in the network. For example, a management system may be used to configure a low latency queue in each network device, with a capacity to handle ten simultaneous IP telephony sessions with some specified latency bound. Classification criteria are then configured in each device to direct IP telephony traffic to the low latency queues. This works so long as the telephony traffic arriving at each device is limited to ten sessions. However, if an eleventh telephony session is established which traverses one of the configured devices, it will congest the low latency queue, raising the latency above the configured bound. As a result, service will be compromised not only to the eleventh session but also to the ten existing sessions. This is due to the relatively static nature of top-down provisioning and the fact that the management system is not directly aware of current traffic patterns.
4.2.2 RSVP Signaling as a Configuration Mechanism
RSVP signaling may be used to complement top-down provisioning mechanisms. In this case, hosts generate signaling messages that describe the data traffic associated with a specific conversation. These messages flow along the same path that the data traffic would take through the network. RSVP messages offer the following information to the network:
What I am — originating application and sub-flow (such as print flow vs. time-critical transaction).
Who I am — authenticated user ID.
What I want — the type of QoS service needed.
How much I want — certain applications quantify their resource requirements precisely.
How I can be recognized — the 5-tuple classification criteria by which the data traffic can be recognized.
Where I am going — which network devices' resources will be impacted by the associated data traffic.
Such host-based signaling brings significant benefits to QoS management systems. One clear benefit of host-based signaling is that it provides robust bindings between classification information and users and applications. Beyond this, host-based signaling brings topology-aware dynamic admission control. This feature is key to solving the "eleventh telephony session" problem described previously. RSVP signaling delivers a message regarding required resources to devices along the data path. Therefore, RSVP-aware devices are able to dynamically evaluate the impact that associated data traffic would have on their resources and to notify upstream devices when they do not have the resources to handle additional traffic flows. In the case of the "eleventh telephony session," network devices would reject admission of the eleventh traffic flow to the low latency queue, thus protecting the ten existing sessions. It is important to note that host-based signaling does not defeat the network administrator's control over network resources. It merely offers information to the network that can be used to facilitate the management of network resources.
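The admission-control behavior described here can be sketched as a device that tracks admitted sessions against the capacity of its low-latency queue. This is a deliberately simplified model (real devices evaluate quantified resource requests, not a simple session count), and the class and session names are illustrative.

```python
class AdmissionControlAgent:
    """Simplified RSVP-aware device: admits sessions to a low-latency
    queue only while capacity remains, rejecting the rest."""

    def __init__(self, max_sessions: int):
        self.max_sessions = max_sessions   # capacity of the low-latency queue
        self.admitted = set()

    def handle_resv(self, session_id) -> bool:
        """Process a reservation request; True = admit, False = reject."""
        if len(self.admitted) < self.max_sessions:
            self.admitted.add(session_id)
            return True
        # In a real device, a reservation error would flow upstream here.
        return False

# The device from the example: provisioned for ten telephony sessions.
agent = AdmissionControlAgent(max_sessions=10)
results = [agent.handle_resv(f"call-{i}") for i in range(11)]
```

The first ten requests are admitted; the eleventh is rejected, preserving the latency bound for the ten existing sessions instead of degrading service for all eleven.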
5 Qualities of Guarantees and the Quality/Efficiency Product
Telephony traffic is characterized by the need for high quality guarantees: it has quantifiable requirements, and its value depends on these requirements being strictly met. High quality guarantees are typically required by multimedia applications, but not all applications require them. For example, client/server database transactions cannot precisely quantify their resource requirements and, as such, do not expect quantifiable guarantees. These applications can benefit from lower quality guarantees that promise to reduce latency but may not offer a strict latency bound.
One way to provide high quality guarantees is to significantly over-provision a network. For example, if the network devices described in the IP telephony example were provisioned to support all potential IP telephony sessions, the "eleventh telephony session" problem could have been avoided. However, if there are one thousand potential sessions but on the average only ten simultaneous sessions, it would be necessary to over-provision network devices by a factor of one hundred in order to support high quality guarantees. This is clearly an inefficient use of network resources. In general, there is a tradeoff between the ability of a network to offer high quality guarantees and the efficiency with which network resources can be used. We say that a network can be characterized by a constant quality/efficiency product (QE product). Offering higher quality guarantees requires a compromise in efficiency and vice versa.
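The over-provisioning arithmetic from the example above is worth making explicit:

```python
# Figures from the example: 1000 potential telephony sessions, but
# only 10 simultaneous sessions on average.
potential_sessions = 1000
average_simultaneous = 10

# Guaranteeing service without admission control means provisioning
# for the worst case -- a factor-of-100 over-provision ...
overprovision_factor = potential_sessions / average_simultaneous

# ... which leaves the reserved capacity only 1% utilized on average.
average_utilization = average_simultaneous / potential_sessions
```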
An alternate mechanism to provide high quality guarantees is to employ RSVP signaling, as described previously. By using RSVP signaling, network devices can be provisioned for the average expected load. In the rare occasion that load exceeds expectations, additional sessions will be rejected, but the integrity of the guarantees offered to existing sessions will be maintained. In effect, by employing RSVP signaling, we are able to raise the QE product of the network, simultaneously offering higher quality guarantees and using network resources more efficiently. In general, the more sophisticated a QoS mechanism, the more it stands to raise the QE product of a given network. It would seem to follow that all network devices should implement the most sophisticated QoS mechanisms available. However, QoS mechanisms come at a cost in increased overhead, associated with supporting the QoS mechanism itself. In the case of signaling, this overhead takes the form of processing resources in network devices. This leads us to a very important point — any QoS mechanism should be evaluated in terms of the benefit it brings in terms of increased QE product versus the cost it brings in terms of increased overhead.
The following table illustrates this concept in terms of actual QoS mechanisms:
Table rows correspond to increasing levels of sophistication in traffic handling mechanisms. Table columns correspond to increasing levels of sophistication in provisioning and configuration mechanisms. Note the top left cell, which represents no QoS mechanisms and offers a very poor QE product. An example of such a network is an over-provisioned LAN. At the other extreme, note the lower right cell, which represents a network in which every network element processes per-conversation RSVP signaling and applies per-conversation intserv traffic handling. Intermediate cells represent compromises between increased QE product and level of overhead. The cell representing a combination of per-conversation admission control with aggregate traffic handling is of particular interest.
This combination is illustrated in the following diagram:
This diagram illustrates a sending host on the far left. The host sends into a large routed network, towards a receiving host on the far right. Several routers are traversed in the routed network. These provide an aggregate form of traffic handling (such as diffserv). The ingress router to the routed network is appointed as an admission control agent. It processes per-conversation RSVP signaling messages from the sending host and determines whether or not to admit the signaled conversation's traffic to the high priority aggregate traffic handling queue in the routed network.
Note that although signaling messages traverse the network end-to-end, they are processed only in the hosts and in the router that is appointed as admission control agent for the routed network, as illustrated by the arrows. Routers in the core of the routed network apply aggregate traffic handling and do not process signaling messages. This model of per-conversation signaling at the edge of the network and aggregate traffic handling in the core, yields a good tradeoff between QE product and QoS related overhead. It is extensible to arbitrarily complex network topologies. In general, by enabling a higher density of admission control agents in a network, the QE product can be increased at the cost of increased overhead. We expect a similar approach to be used by ISPs to offer VPN based QoS services.
5.1 Simultaneous Use of Signaling and Top-Down Provisioning
Real networks are required to support a combination of application traffic requiring a range of quality guarantees. In order to be of use to the end customers, these guarantees need to be valid from one end of the network to the other. Parts of the network will be resource-constrained and will have to be provisioned efficiently. Other parts may be over-provisioned. In order to optimally support high and medium quality guarantees, signaling messages must be available in various parts of the network. To this end, hosts will generate signaling for a range of applications including both multimedia applications and qualitative mission-critical applications. Network administrators may then appoint admission control agents at the appropriate points in their network, based on the tradeoffs discussed previously.
There will also be applications for which signaling is less useful. In particular, it is inefficient to incur the overhead associated with signaling for applications that are not session-oriented and do not generate persistent traffic flows. Resources for these applications must be provisioned in a top-down manner. Thus, as QoS mechanisms are deployed, we will see a combination of signaling-based provisioning and top-down provisioning. Since both mechanisms allocate resources from the same network, they must be coordinated in some manner. This coordination point is the policy server.
6 Policy and Policy Servers
The industry standard QoS policy model defines policy enforcement points (PEPs) and policy decision points (PDPs). PEPs include routers, switches and other devices that are able to act as admission control agents. Typically, PEPs work together with PDPs to apply the network administrator's QoS policies. PDPs provide the higher layer intelligence required to process abstract policies. PDPs review RSVP signaling messages arriving at various PEPs and decide whether or not the corresponding traffic can be admitted. PDPs also use top-down provisioning to "push" to PEPs configuration information regarding non-signaled traffic flows. PDPs typically rely on a policy data store. This data store may take the form of a distributed directory.
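The PEP/PDP split can be sketched as follows. The policy entries, user names, and bandwidth limits are assumptions for illustration, and the in-process function call stands in for a real PEP-to-PDP protocol exchange (commonly COPS in deployments of this model).

```python
# The administrator's policy data store: in practice this might live
# in a distributed directory; here it is a simple mapping from user
# to the maximum bandwidth (bit/s) that user may reserve.
POLICY_STORE = {
    "alice": 256_000,
    "bob": 64_000,
}

def pdp_decide(user: str, requested_bps: int) -> bool:
    """Policy decision point: check a request against stored policy."""
    limit = POLICY_STORE.get(user, 0)   # unknown users get no resources
    return requested_bps <= limit

def pep_handle_resv(user: str, requested_bps: int) -> str:
    """Policy enforcement point: a router consults the PDP and either
    installs the reservation or rejects it."""
    return "install" if pdp_decide(user, requested_bps) else "reject"
```

The authenticated user identity carried in RSVP signaling messages is what makes this kind of per-user policy decision possible at the PDP.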
7 Summary
QoS mechanisms provide improved service to network users while enabling the network administrator to manage network resources efficiently. These mechanisms include both traffic handling mechanisms and provisioning and configuration mechanisms. Traffic handling mechanisms include queuing algorithms and packet classification. These may be applied to aggregates of traffic or to per-conversation traffic flows. Provisioning and configuration mechanisms may be top-down or may be host signaled. Top-down provisioning presents challenges in traffic classification and is generally insufficient to simultaneously offer high quality guarantees and efficient use of network resources (high quality/efficiency product). Host based signaling offers information to the network that significantly facilitates the association of network resources with specific users and applications and enables the network administrator to realize an improved QE product, as appropriate. In emerging QoS networks, we can expect to see a combination of signaled and top-down provisioned QoS mechanisms. Policy decision points provide unified management of these QoS mechanisms.
For More Information
For the latest information on Windows® 2000 Server, check out Microsoft TechNet or our Web site at http://www.microsoft.com/windows2000.
1. The DSCP is a six-bit field, spanning the bits formerly known as the type-of-service (TOS) and IP precedence fields.