Catalyst QoS Fundamentals

Figure 10-4 illustrates the queuing components of a QoS-enabled Cisco IOS–based Catalyst switch. The figure shows the classification that occurs on ingress packets. After the switch classifies a packet, the switch determines whether to place the packet into a queue or drop the packet. Without congestion avoidance, queuing mechanisms drop packets only when the corresponding queue is full.

Figure 10-4 Queuing Components

As illustrated in Figure 10-4, the queuing mechanism on Cisco IOS–based Catalyst switches has the following main components:

  • Classification

  • Marking

  • Traffic conditioning: policing and shaping

  • Congestion management

  • Congestion avoidance

On the current Catalyst family of switches listed in Table 10-1, classification defines an internal DSCP value. The internal DSCP value is the classification value the Catalyst switches use in determining the egress marking, output scheduling, policing, congestion management, and congestion avoidance behavior of frames as the frames traverse and exit the switch. Marking and policing may alter this internal DSCP. Figure 10-5 illustrates how Catalyst switches use internal DSCP for QoS.

Figure 10-5 Logical Depiction of Internal DSCP

Figure 10-5 is an abbreviated logical depiction of QoS packet handling in Catalyst switches because it does not illustrate mapping tables and other QoS features. Nonetheless, the figure illustrates the basic QoS packet handling of Catalyst switches.

Example 10-2 introduces QoS on a Catalyst switch. In this simple example, the intended purpose of QoS is to provide a higher degree of service to VoIP traffic between Cisco IP Phones. The Catalyst switch used in this example is a Catalyst 3550 running Cisco IOS. To apply differentiated service to VoIP traffic, the switch must classify voice frames with a DSCP value that is sufficient for applying high priority to the VoIP frames. By default, Cisco IP Phones mark voice frames with a DSCP value of 46; as a result, trusting is a valid option for applying high priority to voice frames.

A valid and simple method of classification for this example is to trust the DSCP value of ingress frames based on whether a Cisco IP Phone is attached to a Catalyst switch interface. The mls qos trust device cisco-phone interface configuration command accomplishes this configuration. Furthermore, the Cisco Catalyst 3550 family of switches uses four output queues. The default queue for frames with a DSCP value of 46 is queue 3. The Catalyst 3550 family of switches supports a priority queue option specifically for frames in queue 4. Therefore, the switch needs to map the respective CoS value of 5 (mapped from the internal DSCP of 46) to queue 4 for priority queuing of the voice frames. The Catalyst 3550 interface command priority-queue out enables the strict-priority queue on an interface such that the switch transmits frames out of queue 4 before servicing any other queue.

Example 10-2 illustrates the resulting configuration. Although this is a very simplistic example of classification and output scheduling, it provides a brief overview of how to apply QoS. The wrr-queue cos-map commands in Example 10-2 are explained later in this section.

Example 10-2 Basic Catalyst Switch QoS Configuration Applying Classification and Congestion Management

(text deleted)
!
mls qos
!
!
(text deleted)
!
interface FastEthernet0/2
 switchport mode access
 mls qos trust device cisco-phone
 wrr-queue cos-map 1 0 1
 wrr-queue cos-map 2 2 3
 wrr-queue cos-map 3 4 6 7
 wrr-queue cos-map 4 5 
 priority-queue out
 spanning-tree portfast
!
!
interface FastEthernet0/3
 switchport mode access
 mls qos trust device cisco-phone
 wrr-queue cos-map 1 0 1
 wrr-queue cos-map 2 2 3
 wrr-queue cos-map 3 4 6 7
 wrr-queue cos-map 4 5 
 priority-queue out
 spanning-tree portfast
!
(text deleted)
!

The following subsections discuss each QoS component on Catalyst switches for high-speed Ethernet interfaces. In addition, the remainder of this chapter focuses on the Catalyst switches listed in Table 10-1 running Cisco IOS. The only Cisco CatOS switch to support a wide range of QoS features is the Catalyst 6500 with an MSFC.

For a complete understanding and an overview of configuring QoS with Cisco switches running CatOS, refer to Cisco.com. In addition, the following sections on QoS components cover Cisco IOS QoS in general. Not all of the switches listed in Table 10-1 support all of the features discussed in this chapter.

Classification

Classification distinguishes a frame or packet by assigning it a specific priority based on predetermined criteria. In the case of Catalyst switches, classification determines the internal DSCP value of frames. Catalyst switches use this internal DSCP value for QoS packet handling, including policing and scheduling, as frames traverse the switch.

The first task of any QoS policy is to identify traffic that requires classification. With QoS enabled and no other QoS configurations, all Cisco routers and switches treat traffic with a default classification. With respect to DSCP values, the default classification for ingress frames is a DSCP value of 0. With the default QoS configuration, the Catalyst switches listed in Table 10-1 use an internal DSCP of 0 for ingress frames, regardless of the DSCP value in the ingress frame. An interface configured to treat all ingress frames with a DSCP of 0 is referred to as untrusted. The following subsection discusses trusted and untrusted interfaces in more detail. Figure 10-6 illustrates classification and marking in simplified form.

Figure 10-6 Representation of Classification and Marking

On Cisco Catalyst switches, the following methods of packet classification are available:

  • Per-interface trust modes

  • Per-interface manual classification using specific DSCP, IP Precedence, or CoS values

  • Per-packet based on access lists

  • Network-Based Application Recognition (NBAR)

In multilayer switched networks, always apply QoS classification as close to the edge as possible. This application allows for end-to-end QoS with ease of management. The following sections discuss these methods of classification. The first section, "Trust Boundaries and Configurations," covers classification based on interface trust modes, interface manual classification, and packet-based access control lists. Classification using NBAR is then covered in the "NBAR" section, followed by a section on classification with policy-based routing.

Trust Boundaries and Configurations

Trust configurations on Catalyst switches allow trusting for ingress classification. For example, when a switch configured for trusting DSCP receives a packet with a DSCP value of 46, the switch accepts the ingress DSCP of the frame and uses the DSCP value of 46 for internal DSCP. Despite the ingress classification configuration, the switch may alter the DSCP and CoS values of egress frames by policing and egress marking.

The Catalyst switches support trusting via DSCP, IP Precedence, or CoS values on ingress frames. When trusting CoS or IP Precedence, Catalyst switches map an ingress packet's CoS or IP Precedence to an internal DSCP value. Tables 10-4 and 10-5 illustrate the default mapping tables for CoS-to-DSCP and IP Precedence-to-DSCP, respectively. These mapping tables are configurable.

Table 10-4 Default CoS-to-DSCP Mapping Table

CoS     0    1    2    3    4    5    6    7
DSCP    0    8   16   24   32   40   48   56


Table 10-5 Default IP Precedence-to-DSCP Mapping Table

IP Precedence   0    1    2    3    4    5    6    7
DSCP            0    8   16   24   32   40   48   56


Figure 10-7 illustrates the Catalyst QoS trust concept using port trusting. When the Catalyst switch trusts CoS on ingress packets on a port basis, the switch maps the ingress value to the respective DSCP value in Table 10-4. When the ingress interface QoS configuration is untrusted, a switch uses 0 for the internal DSCP value for all ingress packets. Recall that switches use the internal DSCP values for policing, egress policies, congestion management, congestion avoidance, and the respective CoS and DSCP values of egress frames.

Figure 10-7 Catalyst QoS Trust Concept

To configure a Cisco IOS–based Catalyst switch for trusting using DSCP, CoS, or IP Precedence, use the following interface command:

mls qos trust [dscp | cos | ip-precedence]

Example 10-3 illustrates a sample configuration of trusting DSCP in Cisco IOS.

Example 10-3 Sample Interface Configuration for Trusting DSCP

(text deleted)
!
mls qos
!
(text deleted)
!
interface FastEthernet0/16
 switchport mode dynamic desirable
 mls qos trust dscp
!
(text deleted)
!

Because an internal DSCP value defines how the switches handle a packet internally, the CoS-to-DSCP and IP Precedence-to-DSCP mapping tables are configurable. To configure CoS-to-DSCP mapping, use the following command:

mls qos map cos-dscp values 

values represents eight DSCP values, separated by spaces, corresponding to the CoS values 0 through 7; valid values are from 0 to 63. Example 10-4 illustrates an example of configuring CoS-to-DSCP mapping. In this example, the mapping table maps the CoS value of 0 to 0, 1 to 8, 2 to 16, 3 to 26, 4 to 34, 5 to 46, 6 to 48, and 7 to 56.

Example 10-4 Sample CoS-to-DSCP Mapping Table Configuration

(text deleted)
!
mls qos
!
(text deleted)
mls qos map cos-dscp 0 8 16 26 34 46 48 56
(text deleted)
!

To configure IP Precedence-to-DSCP mapping, use the following command:

mls qos map ip-prec-dscp dscp-values

dscp-values represents DSCP values corresponding to IP Precedence values 0 through 7; valid values for dscp-values are from 0 to 63.
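As a hedged illustration paralleling Example 10-4 (the specific values are assumptions, not defaults), the following command maps IP Precedence values 0 through 7 to the DSCP values 0, 8, 16, 26, 34, 46, 48, and 56, respectively:

mls qos map ip-prec-dscp 0 8 16 26 34 46 48 56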

Furthermore, it is possible to map ingress DSCP values to different internal DSCP values using DSCP mutation or a policy map. Consult the configuration guide on Cisco.com for more details on configuring DSCP mutation.

Another method of trusting ingress frames is to trust DSCP or CoS based on whether the switch learns of a Cisco IP Phone attached to an interface through CDP. To configure this behavior of trusting ingress frames based on the switch learning that a Cisco IP Phone is attached, use the mls qos trust device cisco-phone interface configuration command in conjunction with the mls qos trust dscp or mls qos trust cos command.

Example 10-2 in the section "Catalyst QoS Fundamentals," earlier in this chapter, illustrates a sample configuration of trusting based on whether a Cisco IP Phone is attached to an interface.

The mls qos trust dscp and mls qos trust cos configuration options configure the switch to trust all ingress traffic on the interface, regardless of traffic type. To trust only particular traffic on ingress, use traffic classes. Traffic classes apply ACLs to specific QoS functions, such as classification, marking, and policing. To configure a traffic class on a Cisco IOS–based Catalyst switch, perform the following steps:

Step 1 Create a class map with a user-defined class-name. Optionally, specify the class map with the match-all or match-any option.

Switch(config)#class-map [match-any | match-all] class-name

Step 2 Configure the class-map clause. Class-map clauses include matching against ACLs, the input interface, specific IP values, and so on. Note that the CLI may display match clauses that the platform does not support.

Switch(config-cmap)#match ?
 access-group     Access group
 any         Any packets
 class-map      Class map
 destination-address Destination address
 input-interface   Select an input interface to match
 ip          IP specific values
 mpls         Multi Protocol Label Switching specific values
 not         Negate this match result
 protocol       Protocol
 source-address    Source address
 vlan         VLANs to match

Switch(config-cmap)#match access-group acl_number

Class maps only define traffic profiles. Policy maps apply class maps to QoS functions. To define a policy map and tie a class map to a policy map, perform the following steps:

Step 1 Create a policy map with a user-defined name.

Switch(config)#policy-map policy-name

Step 2 Apply the class-map clause.

Switch(config-pmap)#class class-map

Step 3 Configure policy map QoS actions.

Switch(config-pmap-c)#?
QoS policy-map class configuration commands:
 bandwidth Bandwidth
 exit    Exit from QoS class action configuration mode
 no     Negate or set default values of a command
 trust   Set trust value for the class
 <cr>
 police   Police
 set    Set QoS values

Switch(config-pmap-c)#bandwidth ?
 <8-2000000> Kilo Bits per second
 percent   % of Available Bandwidth

Switch(config-pmap-c)#trust ?
 cos      Trust COS in classified packets
 dscp      Trust DSCP in classified packets
 ip-precedence Trust IP precedence in classified packets
 <cr>

Switch(config-pmap-c)#set ?
 cos  Set IEEE 802.1Q/ISL class of service/user priority
 ip  Set IP specific values
 mpls Set MPLS specific values

Step 4 Apply policy maps to interfaces on an ingress or egress basis. Not all Catalyst switches support egress policies; for example, the Catalyst 6500 with a Supervisor Engine I or II does not support egress policing.

Switch(config)#interface {vlan vlan-id | {FastEthernet | GigabitEthernet} slot/interface | port-channel number}
Switch(config-if)#service-policy {input | output} policy-name

Step 5 Configure the ingress trust state of the applicable interface. The trust state preserves ingress DSCP, CoS, or IP Precedence values for application to the policy map.

Switch(config)#interface {vlan vlan-id | {FastEthernet | GigabitEthernet} slot/interface | port-channel number}
Switch(config-if)#mls qos trust [dscp | cos | ip-precedence]

Step 6 Enable QoS globally, if not previously configured.

Switch(config)#mls qos

NOTE

Always enable QoS globally, using the mls qos or qos global configuration command, so that the Catalyst switch acts on its QoS configurations.

This example used a Cisco Catalyst 3550 for demonstration. Different Catalyst switches support additional or fewer options for policy maps and class maps depending on platform and software version. Check the product configuration guides on Cisco.com for exact product support of policy map and class map options.

The Catalyst 4000 and 4500 families of switches running Cisco IOS do not prefix QoS configuration commands with the keyword mls. For example, to configure an interface to trust DSCP on the Catalyst 4000 and 4500 families of switches running Cisco IOS, the command is qos trust dscp instead of mls qos trust dscp.
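As a minimal sketch only (the interface is hypothetical, and supported options vary by supervisor engine and software version), an equivalent DSCP-trust configuration on a Catalyst 4500 running Cisco IOS might look like the following:

qos
!
interface FastEthernet2/1
 switchport mode access
 qos trust dscp
!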

Example 10-5 illustrates a sample policy map and class map configuration. In this example, the switch trusts the DSCP of all ingress TCP traffic destined for the 10.1.1.0/24 subnet received on interface FastEthernet0/1. The switch assigns all other ingress traffic the interface default DSCP of 0 as the internal DSCP.

Example 10-5 Sample Configuration of Policy Map and Class Map for Trusting

(text deleted)
!
mls qos
!
class-map match-all Voice_subnet
 match access-group 100
!
!
policy-map Ingress-Policy
 class Voice_subnet
  trust dscp
!
(text deleted)
!
interface FastEthernet0/1
 switchport mode access
 service-policy input Ingress-Policy
!
(text deleted)
!
access-list 100 permit tcp any 10.1.1.0 0.0.0.255

NBAR

Network-Based Application Recognition adds intelligent network classification to switches and routers. ACL-based classification uses Layer 3 and Layer 4 properties of packets, such as IP address and TCP or UDP ports, to classify packets. NBAR can classify frames based on Layer 7 information, such as application type, URL, and other protocols that use dynamic TCP and UDP assignments. In brief, NBAR supports these types of classification features:

  • Classification of applications that dynamically assign TCP and UDP ports

  • Classification of HTTP traffic by URL, host, or Multipurpose Internet Mail Extensions (MIME) type

  • Classification of Citrix Independent Computer Architecture (ICA) traffic by application name

  • Classification of applications using subport information

Example 10-6 illustrates several protocols that are available for NBAR classification on a Catalyst 6500 running Cisco IOS 12.2(17a)SX1 with a Supervisor Engine 720. Always refer to the product documentation for the latest supported NBAR protocol list.

Example 10-6 Several Available NBAR Protocols on Catalyst 6500 Running Cisco IOS Version 12.2(17a)SX1

Switch(config-cmap)#match protocol ?
(text deleted)
 appletalk     AppleTalk
 arp        IP ARP
 bgp        Border Gateway Protocol
(text deleted)
 eigrp       Enhanced Interior Gateway Routing Protocol
 exchange     MS-RPC for Exchange
 finger      Finger
 ftp        File Transfer Protocol
 gopher      Gopher
 gre        Generic Routing Encapsulation
 http       World Wide Web traffic
 icmp       Internet Control Message
 imap       Internet Message Access Protocol
 ip        IP
 ipinip      IP in IP (encapsulation)
 ipsec       IP Security Protocol (ESP/AH)
 ipx        Novell IPX
(text deleted)
 nfs        Network File System
 nntp       Network News Transfer Protocol
(text deleted)
 realaudio     Real Audio streaming protocol
 rip        Routing Information Protocol
 rsrb       Remote Source-Route Bridging
 rsvp       Resource Reservation Protocol
 secure-ftp    FTP over TLS/SSL
 secure-http    Secured HTTP
(text deleted)
 ssh        Secured Shell
(text deleted)

For a complete and current list of protocols and applications that NBAR recognizes, consult Cisco.com.

NBAR configuration uses policy maps and class maps as with Cisco IOS–based classification. As such, use the following steps to configure NBAR-based classification:

Step 1 Specify the user-defined name of the class map.

Switch#configure terminal 
Switch(config)#class-map [match-all | match-any] class-name

Step 2 Specify a protocol supported by NBAR as a matching criterion.

Switch(config-cmap)#match protocol protocol-name

Step 3 Create a traffic policy by associating the traffic class with one or more QoS features in a policy map.

Switch(config)#policy-map policy-name

Step 4 Specify the name of a predefined class.

Switch(config-pmap)#class class-name

Step 5 Enter QoS-supported parameters in policy-map class configuration mode. These parameters include marking the DSCP value, traffic policing, and so on, and vary by Catalyst switch.

Switch(config-pmap-c)#?
QoS policy-map class configuration commands:
 bandwidth   Bandwidth
 exit      Exit from QoS class action configuration mode
 no       Negate or set default values of a command
 police     Police
 <cr>
 fair-queue   Flow-based Fair Queueing
 priority    Low Latency Queueing
 queue-limit  Queue Max Threshold for Tail Drop
 random-detect Weighted Random Early Detect (Precedence based)
 set      Set QoS values
 shape     Traffic Shaping
 trust     Set trust value for the class

Switch(config-pmap-c)#exit
Switch(config-pmap)#exit

Step 6 Attach the traffic policy to the interface for ingress or egress application.

Switch(config)#interface {vlan vlan-id | {FastEthernet | GigabitEthernet} slot/interface | port-channel number}
Switch(config-if)#service-policy {input | output} policy-name
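The following minimal sketch ties these NBAR steps together; the class-map name, policy-map name, DSCP value, and interface are hypothetical, and the exact set action and NBAR protocol support vary by platform and software version:

class-map match-any WEB-TRAFFIC
 match protocol http
!
policy-map MARK-WEB
 class WEB-TRAFFIC
  set ip dscp 18
!
interface GigabitEthernet1/1
 service-policy input MARK-WEB
!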

Classification with Policy-Based Routing

Policy-based routing (PBR) provides a flexible means of routing packets by allowing you to configure a defined policy for traffic flows, lessening reliance on routes derived from routing protocols. It gives you more control over routing by extending and complementing the existing mechanisms provided by routing protocols. In addition, PBR allows you to set the IP Precedence bits. It also allows you to specify a path for certain traffic, such as priority traffic over a high-cost link.

In brief, PBR supports the following QoS features:

  • Classifying traffic based on extended access list criteria

  • Setting IP Precedence bits, giving the network the ability to enable differentiated classes of service

  • Routing packets to specific traffic-engineered paths

Policies are based on IP addresses, port numbers, protocols, or size of packets. For a simple policy, you can use any one of these descriptors; for a complicated policy, you can use all of them.

For example, classification of traffic through PBR allows you to identify traffic for different classes of service at the edge of the network and then implement QoS defined for each CoS in the core of the network, using priority queuing (PQ), custom queuing (CQ), or weighted fair queuing (WFQ) techniques. This process obviates the need to classify traffic explicitly at each WAN interface in the backbone network.
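As a hedged sketch of PBR-based classification (the access list, route-map name, IP Precedence value, and interface are hypothetical), the following marks ingress web traffic with an IP Precedence of 4:

access-list 110 permit tcp any any eq www
!
route-map CLASSIFY-WEB permit 10
 match ip address 110
 set ip precedence 4
!
interface Vlan10
 ip policy route-map CLASSIFY-WEB
!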

PBR classification does not scale well in enterprise networks. The current recommendation is to limit the use of PBR classification to WAN routers. Furthermore, the Catalyst 6500 family of switches does not support all of the PBR features using hardware switching; consult Cisco.com for more details. In addition, the other Catalyst families of switches listed in Table 10-1 do not support PBR or do not support PBR in hardware at the time of publication of this text. For more details on PBR restrictions on the Catalyst 6500 family of switches, refer to the following white paper on Cisco.com:

"Understanding ACL Merge Algorithms and ACL Hardware Resources on Cisco Catalyst 6500 Switches"

Marking

In reference to QoS on Catalyst switches, marking refers to changing the DSCP, CoS, or IP Precedence bits on ingress frames. Marking is configurable on a per-interface basis or via a policy map. Marking alters the DSCP value of packets, which in turn affects the internal DSCP. For example, configuring a policy map to mark all frames from a video server to a DSCP value of 40 on a per-interface basis results in an internal DSCP value of 40 as well. Marking also may be the result of a policer. An example of marking using a policer is a Catalyst switch marking DSCP to a lower value for frames above a specified rate.

Figures 10-8 and 10-9 review the associated CoS and DSCP bits in frame headers for Layer 2 and Layer 3 marking, respectively. In deploying or designing new networks, use Layer 3 marking whenever possible. Note that the CoS field is applicable only to 802.1Q tagged frames.

Figure 10-8 Layer 2 CoS Field of an 802.1Q Frame

Figure 10-9 Layer 3 IP ToS Byte

To configure DSCP and CoS marking on the ingress queue of a Cisco IOS–based Catalyst switch, use the following commands, respectively:

mls qos dscp dscp-value
mls qos cos cos-value

dscp-value represents a DSCP value from 0 to 63, while cos-value represents a CoS value between 0 and 7. These commands effectively yield classification because the switch uses the new DSCP or CoS value to determine the internal DSCP value, overriding the existing DSCP or CoS value of the frame. Example 10-7 illustrates marking CoS on ingress on a per-interface basis on a Cisco IOS–based Catalyst switch.

Example 10-7 Marking CoS on Ingress Frames

(text deleted)
!
mls qos
!
(text deleted)
!
interface FastEthernet0/20
 switchport mode dynamic desirable
 mls qos cos 5
!
(text deleted)
!

To configure marking as part of a policy map for classification based on ACLs, use any of the following policy-map class action commands, depending on the application:

set ip dscp ip-dscp-value
set ip precedence ip-precedence-value
set cos cos-value

Example 10-8 illustrates a policy map with a class-map clause that marks frames with an IP DSCP value of 45.

Example 10-8 Sample Configuration of Policy Map and Class Map for Marking

(text deleted)
!
mls qos
!
class-map match-all Voice_subnet
 match access-group 100
!
!
policy-map Ingress_Policy
 class Voice_subnet
  set ip dscp 45
!
(text deleted)

interface FastEthernet0/1
 switchport mode access
 service-policy input Ingress_Policy
!
(text deleted)
!
access-list 100 permit tcp any 10.1.1.0 0.0.0.255
!

Traffic Conditioning: Policing and Shaping

Cisco routers that run Cisco IOS support two traffic-shaping methods: generic traffic shaping (GTS) and Frame Relay traffic shaping (FRTS). Cisco routers that run Cisco IOS support policing using the committed access rate (CAR) tool. Cisco Catalyst switches that run Cisco IOS support policing and shaping via slightly different methods and configurations compared to Cisco IOS on Cisco routers. The following sections discuss policing and shaping on Catalyst switches in more detail.

Shaping

Both shaping and policing mechanisms control the rate at which traffic flows through a switch. Both mechanisms use classification to differentiate traffic. Nevertheless, there is a fundamental and significant difference between shaping and policing.

Shaping meters traffic rates and delays (buffers) excessive traffic so that the traffic rates stay within a desired rate limit. As a result, shaping smoothes excessive bursts to produce a steady flow of data. Reducing bursts decreases congestion in downstream routers and switches and, consequently, reduces the number of frames dropped by downstream routers and switches. Because shaping delays traffic, it is not useful for delay-sensitive traffic flows such as voice, video, or storage, but it is useful for typical, bursty TCP flows. Figure 10-10 illustrates an example of traffic shaping applied to TCP data traffic.

Figure 10-10 Traffic-Shaping Example

Policing

In contrast to shaping, policing takes a specific action for out-of-profile traffic above a specified rate. Policing does not delay or buffer any traffic. The action for traffic that exceeds a specified rate is usually drop; however, other actions are permissible, such as trusting and marking.

Policing on Catalyst switches follows the leaky token bucket algorithm, which, unlike strict rate limiting, accommodates bursts of traffic. As a result, the leaky token bucket algorithm handles bursty TCP flows effectively. Figure 10-11 illustrates the leaky token bucket algorithm.

Figure 10-11 Leaky Token Bucket

When switches apply policing to incoming traffic, they place tokens into a token bucket for each ingress packet; the number of tokens added equals the size of the packet. At a regular interval, the switch removes a defined number of tokens, determined by the configured rate, from the bucket. If the bucket is full and cannot accommodate the tokens for an ingress packet, the switch determines that the packet is out of profile. The switch subsequently drops or marks out-of-profile packets according to the configured policing action.

It is important to note that the leaky bucket does not actually buffer packets, although the diagram in Figure 10-11 might suggest that it does. The traffic does not actually flow through the bucket; Catalyst switches simply use the bucket to determine out-of-profile packets. Furthermore, each Catalyst switch's hardware implementation of policing is different; therefore, use the leaky token bucket explanation only as a guide to understanding the difference between policing and shaping and how policing limits traffic.

A complete discussion of the leaky token bucket algorithm is outside the scope of this book. Consult the following document on Cisco.com for more information about the leaky token bucket algorithm:

"Understanding QoS Policing and Marking on the Catalyst 3550"

Policing is configured using several parameters. Policing configurations apply the following parameters (not all parameters are configurable on all Catalyst switches):

  • Rate—The effective policing rate in terms of bits per second (bps). Each Catalyst switch supports different rates and different rate increments.

  • Burst—The amount of traffic, in bytes, that switches allow in the bucket before determining that packets are out of profile. Various Catalyst switches support various burst ranges with various increments.

  • Conforming action—Depending on the Catalyst switch model, optional supported conforming actions include drop, transmit, and mark.

  • Exceed action—Depending on the Catalyst switch model, optional supported exceed actions for out-of-profile packets are drop, transmit, and mark.

  • Violate action—Applies to Catalyst switches that support two-rate policers, where there is a second bucket in the leaky token bucket algorithm. The violate action adds a third measurement for out-of-profile traffic. Applicable violate actions are drop, transmit, and mark. RFC 2698 discusses three-color marking, the basis for the addition of violate action on Cisco Catalyst switches.

There are many white papers, books, and tech notes on how to correctly configure the burst size to handle TCP traffic effectively. One leading recommended formula for configuring burst is as follows:

<burst_size> = 2 × <RTT> × rate

RTT is the round-trip time for a TCP session. RTT can be determined using methods ranging from sophisticated traffic-analysis tools to simple tools such as ping. Rate is the end-to-end throughput. For more information on configuring recommended burst sizes, refer to Cisco.com.
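For example, assuming a hypothetical RTT of 50 ms and a policed rate of 1,536,000 bps (the rate used later in Example 10-9), the formula yields 2 × 0.05 s × 1,536,000 bps = 153,600 bits, or 19,200 bytes, which is in line with the 20,000-byte burst configured in that example.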

In addition, there are three types of policing on Catalyst switches:

  • Individual policers—An individual policer is a per-interface policer where the switch applies specified policing parameters on a per-interface basis. Applying a policer that limits traffic to 75 Mbps on each interface is an example of an individual policer.

  • Aggregate policers—Aggregate policers are policers that apply policing parameters on a group of interfaces. For example, an aggregate policer that is defined to limit traffic to 75 Mbps to a group of interfaces limits the total traffic for all interfaces to 75 Mbps. As a result, the group of interfaces can achieve only 75 Mbps among all members with an aggregate policer, whereas an individual policer applies policing parameters on a per-interface basis.

  • Microflow policing—Microflow policing is per-flow policing, where a switch applies the policing parameters separately to each individual flow matching a class within a policy map.

NOTE

Several models of Catalyst switches support application of policing not only on a per-interface basis but also on a per-VLAN basis. Check the product configuration guides for supported applications of per-VLAN policing.

Individual and microflow policers on Catalyst switches are configured via policy maps. To specify an individual policer or microflow policer as a class-map action clause, use the following command:

police [flow] bits-per-second normal-burst-bytes [extended-burst-bytes] 
  [pir peak-rate-bps] [[[conform-action {drop | set-dscp-transmit [new-dscp] | 
  set-prec-transmit [new-precedence] | transmit}] exceed-action action] 
  violate-action action]

Not all families of Catalyst switches and software versions support all the options listed in the preceding police command; always check the configuration guides on Cisco.com for supported actions.

To define an aggregate policer, use the following global command:

mls qos aggregate-policer policer_name bits_per_second normal_burst_bytes 
  [maximum_burst_bytes] [pir peak_rate_bps] [[[conform-action {drop | 
  set-dscp-transmit dscp_value | set-prec-transmit ip_precedence_value | 
  transmit}] exceed-action {drop | policed-dscp | transmit}] 
  violate-action {drop | policed-dscp | transmit}]

As with the policy-map configuration, not all families of Catalyst switches support the entire options list with the mls qos aggregate-policer command. To tie an aggregate policer to a class-map action clause in a policy map, use the following command:

police aggregate aggregate_policer_name
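As a hedged sketch only (the policer name, ACL, rate, burst, and interfaces are hypothetical, and the aggregation scope and supported actions vary by platform), an aggregate policer shared by two interfaces might look like the following:

mls qos aggregate-policer AGG-75MB 75000000 200000 exceed-action drop
!
class-map match-all MATCH-DATA
 match access-group 120
!
policy-map LIMIT-AGGREGATE
 class MATCH-DATA
  police aggregate AGG-75MB
!
interface FastEthernet0/10
 service-policy input LIMIT-AGGREGATE
!
interface FastEthernet0/11
 service-policy input LIMIT-AGGREGATE
!
access-list 120 permit ip any any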

Example 10-9 illustrates a sample configuration of an individual policer on a Catalyst 3550 switch.

Example 10-9 Sample Configuration of Policing

!
mls qos
!
class-map match-all MATCH-UDP
 match access-group 101
!
!
policy-map LIMIT-UDP
 class MATCH-UDP
  police 1536000 20000 exceed-action drop
!
(text deleted)
!
interface FastEthernet0/3
 switchport mode dynamic desirable
 service-policy input LIMIT-UDP
!
(text deleted)
!

The following sections discuss congestion management and congestion avoidance. Congestion management is the key feature of QoS because it applies scheduling to egress queues.

Congestion Management

Catalyst switches use multiple egress queues for application of the congestion-management and congestion-avoidance QoS features. Both congestion management and congestion avoidance are per-queue features. For example, congestion-avoidance threshold configurations are per queue, and each queue may have its own configuration for congestion management and avoidance. In addition, each Catalyst switch has a unique hardware implementation for egress queues. For example, the Catalyst 3550 family of switches has four egress queues, while the number of egress queues on the Catalyst 6500 family of switches varies depending on the line module.

Cisco IOS uses specific nomenclature for referencing egress queues. For a queue system XpYqZt, the following applies:

  • X indicates the number of priority queues

  • Y indicates the number of queues other than the priority queues

  • Z indicates the number of configurable tail-drop or WRED thresholds per queue

For example, the Catalyst 4000 and 4500 families of switches use either a 1p3q1t or 4q1t queuing system, depending on configuration. For the 1p3q1t queuing system, the switch uses a total of four egress queues, one of which is a priority queue, all with a single congestion-avoidance threshold.

Moreover, classification and marking have little meaning without congestion management. Switches use congestion-management configurations to schedule packets appropriately from output queues once congestion occurs. Catalyst switches support a variety of scheduling and queuing algorithms. Each queuing algorithm solves a specific type of network traffic condition.

Catalyst switches transmit frames on egress with a DSCP value mapped directly from the internal DSCP value. The CoS value of egress frames also maps from the internal DSCP value; the DSCP-to-CoS mapping for egress frames is configurable. Table 10-6 illustrates the default DSCP-to-CoS mapping table.

Table 10-6 DSCP-to-CoS Mapping Table

DSCP Value   0–7   8–15   16–23   24–31   32–39   40–47   48–55   56–63
CoS Value     0      1      2       3       4       5       6       7


To configure the DSCP-to-CoS mapping table, use the following command:

mls qos map dscp-cos dscp-values to cos_value
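For instance, a minimal hedged sketch (the values are assumptions) that maps DSCP 26 to CoS 4 instead of the default CoS 3 is as follows:

mls qos map dscp-cos 26 to 4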

Congestion management comprises several queuing mechanisms, including the following:

  • FIFO queuing

  • Weighted round robin (WRR) queuing

  • Priority queuing

  • Custom queuing

The following subsections discuss these queuing mechanisms in more detail.

FIFO Queuing

The default method of queuing frames is FIFO queuing, in which the switch places all egress frames into the same queue, regardless of classification. Essentially, FIFO queuing does not use classification and all packets are treated as if they belong to the same class. The switch schedules packets from the queue for transmission in the order in which they are received. This behavior is the default behavior of a Cisco IOS–based Catalyst switch without QoS enabled. Figure 10-12 illustrates the behavior of FIFO queuing.

Figure 10-12 FIFO Queuing

Because FIFO queuing is the default configuration of Catalyst switches, FIFO queuing does not require any configuration commands.

Weighted Round Robin Queuing

Scheduling packets from egress queues using WRR is a popular and simple method of differentiating service among traffic classes. With WRR, the switch uses a configured weight value for each egress queue. This weight value determines the implied bandwidth of each queue. The higher the weight value, the higher the priority that the switch applies to the egress queue. For example, consider the case of a Catalyst 3550 switch configured for QoS and WRR. The Catalyst 3550 uses four egress queues. If queues 1 through 4 are configured with weights 50, 10, 25, and 15, respectively, queue 1 can utilize 50 percent of the bandwidth when there is congestion. Queues 2 through 4 can utilize 10, 25, and 15 percent of the bandwidth, respectively, when congestion exists. Figure 10-13 illustrates WRR behavior with eight egress queues. Figure 10-13 also illustrates tail-drop and WRED properties, which are explained in later sections.

Figure 10-13 Weighted Round Robin

Although the queues utilize a percentage of bandwidth, the switch does not actually assign specific bandwidth to each queue when using WRR. The switch uses WRR to schedule packets from egress queues only under congestion. Another noteworthy aspect of WRR is that it does not starve lower-priority queues, because the switch services all queues during a finite time period.

To configure WRR on the Catalyst 6500 family of switches running Cisco IOS, perform the following steps:

Step 1 Enable QoS globally, if not previously configured.

Switch(config)#mls qos

Step 2 Select an interface to configure.

Switch(config)#interface {ethernet | fastethernet | gigabitethernet | tengigabitethernet} slot/port 

Step 3 Assign egress CoS values to queues and queue thresholds. The switches use the CoS mapping for congestion avoidance thresholds, which are discussed in the next section.

Switch(config-if)#wrr-queue cos-map queue-id threshold-id cos-1...cos-n

You must assign the CoS-to-queue-threshold mapping for all queue types. The queues are always numbered starting with the lowest-priority queue and ending with the strict-priority queue, if one is available. A generic example follows, using a variable number of queues, termed n:

  • Queue 1 will be the low-priority WRR queue

  • Queue 2 will be the high-priority WRR queue

  • Queue n will be the strict-priority queue

CoS values are mapped to the strict-priority queue by using the priority-queue cos-map command, and the wrr-queue cos-map command configures the CoS-to-egress-queue mapping for the WRR queues. Repeat Step 3 for each queue or keep the default CoS assignments.

Step 4 Configure the WRR weights for the WRR queues.

Switch(config-if)#wrr-queue bandwidth weight-for-Q1 weight-for-Q2 ... weight-for-Qn

weight-for-Q1 relates to queue 1, which should be the low-priority WRR queue. Keep this weight at a level lower than the weight for queue 2. Each weight can take any value between 1 and 255. The switch assigns bandwidth to each queue according to the following ratios:

  • To queue 1: [weight 1 / sum(weights)]

  • To queue 2: [weight 2 / sum(weights)]

  • To queue n: [weight n / sum(weights)]

You must define the weight for all queues, and the weights do not need to be the same; a short worked example follows.
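For example, with two WRR queues weighted 50 and 75 (the values used later in Example 10-10), queue 1 receives 50 / (50 + 75) = 40 percent of the WRR scheduling bandwidth during congestion, and queue 2 receives 75 / 125 = 60 percent.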

Step 5 Define the transmit queue ratio. The weights are configurable percentages between 1 and 100.

Switch(config-if)#wrr-queue queue-limit low-priority-queue-weight [medium-priority-queue-weight(s)] high-priority-queue-weight

The transmit queue ratio determines the way that the buffers are split among the different queues. If you have multiple WRR queues plus a priority queue, the configuration requires the same weight for the highest-priority WRR queue and the strict-priority queue; these values cannot differ on the Catalyst 6500 family of switches for hardware reasons. Generally, high-priority queues do not require a large amount of memory for queuing because traffic destined for high-priority queues is delay sensitive and often low volume. As a result, large queue sizes for high- and strict-priority queues are not necessary. The recommendation is to devote buffer space to the low-priority queues, which generally contain data traffic that is not sensitive to the delay introduced by buffering. The next section discusses strict-priority queuing with WRR.

NOTE

The Catalyst 2950 family of switches applies WRR configuration on a global basis rather than on a per-interface basis, as the Catalyst 6500 and 3550 families of switches do. The preceding steps apply to the Catalyst 2950, except that the configuration is per switch.

The Catalyst 2970 and 3750 families of switches use a specialized WRR feature referred to as shaped round robin (SRR) for congestion management. Refer to the Catalyst 2970 and 3750 configuration guides for more details.

The Catalyst 4000 and 4500 families of switches running Cisco IOS do not use WRR, specifically, for congestion management. These switches use sharing and shaping instead. Sharing is different from WRR; it differentiates services by guaranteeing bandwidth per queue. These switches do support strict-priority queuing on queue 3.

Example 10-10 illustrates a sample congestion-management configuration on a Catalyst 6500 switch running Cisco IOS. In this configuration, egress CoS values 0 and 2 map to queue 1 threshold 1, CoS value 3 maps to queue 1 threshold 2, CoS value 4 maps to queue 2 threshold 1, and CoS value 6 maps to queue 2 threshold 2. The egress CoS values of 5 and 7 map to the priority queue, which is referenced as queue 1 threshold 1.

Example 10-10 Sample Configuration for Congestion Management for a Catalyst 6500 Switch Running Cisco IOS

!
interface GigabitEthernet1/1
 no ip address
 wrr-queue bandwidth 50 75
 wrr-queue queue-limit 100 50
 wrr-queue cos-map 1 1 0 2
 wrr-queue cos-map 1 2 3
 wrr-queue cos-map 2 1 4
 wrr-queue cos-map 2 2 6
 priority-queue cos-map 1 1 5 7
 switchport
!

Priority Queuing

One method of prioritizing and scheduling frames from egress queues is to use priority queuing. Earlier sections noted that enabling QoS globally on Cisco IOS–based Catalyst switches enables the use of egress queues. When applying strict priority to one of these queues, the switch schedules frames from that queue as long as there are frames in that queue, before servicing any other queue. Catalyst switches ignore WRR scheduling weights for queues configured as priority queues; most Catalyst switches support the designation of a single egress queue as a priority queue.

Priority queuing is useful for voice applications where voice traffic occupies the priority queue. However, this type of scheduling may result in starvation of the nonpriority queues. The remaining nonpriority queues are subject to the WRR configuration.

Catalyst switches, in terms of configuration, refer to priority queuing as expedite queuing or strict-priority queuing, depending on the model. To configure the strict-priority queue on Catalyst 6500 switches running Cisco IOS, use the priority-queue cos-map command and assign the appropriate CoS values to the priority queue. Because voice traffic usually carries a DSCP value of 46, which maps to a CoS value of 5, the ideal configuration for priority queuing is to map the CoS value of 5 to the strict-priority queue.

On the Catalyst 3550 family of switches, use the priority-queue out interface command to enable strict-priority queuing on queue 4. For the Catalyst 4000 and 4500 families of switches, use the priority high command in tx-queue 3 interface configuration mode to enable strict-priority queuing on queue 3. Note that not all line modules for the Catalyst 6500 support egress priority queuing. The Catalyst 3750 uses ingress priority along with SRR instead of egress priority scheduling for priority queuing–type configurations. See the configuration guides for these switches for more details regarding strict-priority queuing configurations. Example 10-10 also illustrates the priority-queuing configuration for an interface of a Catalyst 6500 running Cisco IOS using the priority-queue cos-map command.
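As a minimal hedged sketch of the Catalyst 4000/4500 case (the interface is hypothetical, and line module support varies), the queue 3 strict-priority configuration takes the following form:

interface GigabitEthernet1/1
 tx-queue 3
   priority high
!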

Custom Queuing

Another method of queuing available on Catalyst switches strictly for WAN interfaces is custom queuing. Custom queuing (CQ) reserves a percentage of available bandwidth for an interface for each selected traffic type. If a particular type of traffic is not using the reserved bandwidth, other queues and types of traffic may use the remaining bandwidth.

CQ is statically configured and does not adapt automatically to changing network conditions. In addition, CQ is not popular on high-speed WAN interfaces. Refer to the configuration guide for each Catalyst switch for supported CQ configurations and configuration details.

Other Congestion-Management Features and Components

This section highlights the most significant features of congestion management on Cisco IOS–based Catalyst switches, specifically focusing on the Catalyst 6500 family of switches. Each Catalyst switch is unique in configuration and supported congestion-management features; refer to the configuration guides and product documentation for more details.

In brief, the following additional congestion-management features are available on various Catalyst switches:

  • Internal DSCP to egress queue mapping

  • Sharing

  • Transmit queue size per Catalyst switch and interface type

  • Shaped round robin (SRR) on the Catalyst 2970 and 3750 families of switches

Congestion Avoidance

Congestion-avoidance techniques monitor network traffic loads in an effort to anticipate and avoid congestion at common network bottleneck points. Switches and routers achieve congestion avoidance through packet dropping using complex algorithms (versus the simple tail-drop algorithm). Campus networks more commonly use congestion-avoidance techniques on WAN interfaces (versus Ethernet interfaces) because of the limited bandwidth of WAN interfaces. However, congestion avoidance is very useful on Ethernet interfaces that experience considerable congestion.

Tail Drop

When an interface of a router or switch cannot transmit a packet immediately because of congestion, the router or switch queues the packet. The router or switch eventually transmits the packet from the queue. If the arrival rate of packets for transmission on an interface exceeds the router's or switch's ability to buffer the traffic, the router or switch simply drops the packets. This behavior is called "tail drop" because all packets attempting to enter an egress queue are dropped until there is space in the queue for another packet. Tail drop is the default behavior on Cisco Catalyst switch interfaces.

Tail drop treats all traffic equally regardless of internal DSCP in the case of a Catalyst switch. For environments with a large number of TCP flows or flows where selective packet drops are detrimental, tail drop is not the best approach to dropping frames. Moreover, tail drop has these shortcomings with respect to TCP flows:

  • The dropping of frames usually affects ongoing TCP sessions. Arbitrary dropping of frames across TCP sessions results in concurrent TCP sessions simultaneously backing off and restarting, yielding a "saw-tooth" effect. As a result, inefficient link utilization occurs at the congestion point (TCP global synchronization).

  • Aggressive TCP flows may seize all space in output queues over normal TCP flows as a result of tail drop.

  • Excessive queuing of packets in the output queues at the point of congestion results in delay and jitter as packets await transmission.

  • No differentiated drop mechanism exists; premium traffic is dropped in the same manner as best-effort traffic.

  • Even in the event of a single TCP stream across an interface, the presence of other non-TCP traffic may congest the interface. In this scenario, the feedback to the TCP protocol is very poor; as a result, TCP cannot adapt properly to the congested network.

Recall that TCP increases the window size slowly and linearly until it loses traffic. It then decreases the window size logarithmically. If there are many flows that start slowly, each flow will increase the window size until congestion occurs and then all will fall back at the same time. As the flows become synchronized, the link is used less efficiently.

Because routers and switches handle multiple concurrent TCP sessions, and because TCP flows are generally bursty, when egress traffic exceeds the buffer limit for egress queues, it typically exceeds that limit by a wide margin. In addition, the burstiness of TCP is of short duration and generally does not result in periods of prolonged congestion. Recall that tail-drop algorithms drop all traffic that exceeds the buffer limit by default. As a result of multiple TCP flows vastly exceeding the buffer limit, multiple TCP sessions simultaneously go into TCP slow start. Consequently, all TCP traffic slows down and then slow-starts again. This behavior creates a condition known as global synchronization, which occurs as waves of congestion crest, only to be followed by troughs during which the link is not fully utilized.

One method of handling global synchronization is to apply weighted fair queuing (WFQ). WFQ uses an elaborate scheme for dropping traffic, because it can control aggressive TCP flows via its Congestion Discard Threshold (CDT)–based dropping algorithm. However, WFQ does not scale to the backbone speeds used in multilayer switched networks; instead, Catalyst switches use weighted random early detection (WRED) for congestion avoidance. The next subsection discusses this feature.

Tail drop acts as a congestion-avoidance mechanism when applied with classification to multiple thresholds. An example of congestion avoidance is configuring a Catalyst switch to tail-drop packets with DSCP values between 0 and 5 at a 50 percent queue-full threshold, compared to tail-dropping packets with DSCP values between 6 and 10 at a 100 percent queue-full threshold. In this configuration, the switch drops packets with DSCP values between 0 and 5 at a lower threshold to avoid congestion for packets with DSCP values between 6 and 10.

Weighted Random Early Detection

WRED is a congestion-avoidance mechanism that is useful at backbone speeds. WRED attempts to avoid congestion by randomly dropping packets with a certain classification when output buffers reach a specific threshold. WRED is essentially random early detection (RED) applied on a weighted, per-class basis: the drop thresholds are weighted by packet classification, which on Catalyst switches is the internal DSCP.

Figure 10-14 illustrates the behavior of TCP with and without RED. As illustrated in the diagram, RED smoothes TCP sessions because it randomly drops packets, which ultimately reduces TCP windows. Without RED, TCP flows go through slow start simultaneously. The end result of RED is better link utilization.

Figure 10-14 Link Utilization Optimization with Congestion Avoidance

RED randomly drops packets at configured threshold values (percentage full) of output buffers. As more packets fill the output queues, the switch randomly drops frames in an attempt to avoid congestion without the "saw-tooth" TCP problem. RED works only while the output queue is not full; when the output queue is full, the switch tail-drops any additional packets that attempt to enter the output queue. Below that point, the probability of dropping a packet rises linearly as the output queue fills above the RED threshold.

RED works very well for TCP flows but not for other types of traffic, such as UDP flows and voice traffic. WRED is similar to RED except that WRED takes into account the classification of frames. For example, for a single output queue, a switch configuration may consist of a WRED threshold of 50 percent for all best-effort traffic with DSCP values up to 20, and 80 percent for all traffic with a DSCP value between 21 and 31. In this example, the switch begins to randomly drop packets with a DSCP of 0 to 20 when the output queue reaches 50 percent full. If the queue continues to fill above 80 percent, the switch then also begins to drop packets with DSCP values above 20. The end result is that the switch is less likely to drop packets with the higher priority (higher DSCP value). Figure 10-15 illustrates the WRED algorithm pictorially.

Figure 10-15 Weighted Random Early Detection

On most Catalyst switches, WRED is configurable per queue, with all the switches in Table 10-1 using four queues except the Catalyst 6500, for which the number of output queues varies per line card. Nevertheless, it is possible to use WRR and WRED together. A best-practice recommendation is to designate a strict-priority queue for high-priority traffic and use WRED for the remaining queues designated for data traffic.

For switches that support tail-drop and WRED configurations, the configurations vary depending on the number of output queues and whether the line modules support minimum configurable thresholds. Minimum thresholds specify the queue depth below which the switch does not drop traffic. To configure tail-drop thresholds on the Catalyst 6500 family of switches running Cisco IOS for 1q4t or 2q2t interfaces, use the following command:

wrr-queue threshold queue-id thr1% thr2% 

queue-id specifies the respective queue number, and thr1% and thr2% specify the output queue full percentage at which to start dropping traffic and the maximum queue full percentage at which to apply tail-drop, respectively. Always set thr2% to 100 percent for tail-drop configurations. To configure WRED on the Catalyst 6500 family of switches running Cisco IOS for 1p2q2t, 1p3q1t, 1p2q1t, and 1p1q8t, use the following commands:

wrr-queue random-detect min-threshold queue-id thr1% [thr2% [thr3% thr4% thr5% thr6% thr7% thr8%]]
wrr-queue random-detect max-threshold queue-id thr1% [thr2% [thr3% thr4% thr5% thr6% thr7% thr8%]]

The min-threshold command specifies the low-WRED output queue percentage at which the switch begins to randomly drop frames mapped to that threshold. The max-threshold command specifies the high-WRED queue percentage at which the switch drops all frames mapped to that threshold. Example 10-11 illustrates WRED on the Catalyst 6500 family of switches. In this example, the switch begins to apply WRED when queue 1 reaches 50 percent full for threshold 1 and 70 percent full for threshold 2. When the queue is 75 percent full, the switch drops all new frames mapped to threshold 1; at 100 percent full, it drops all new frames mapped to threshold 2.

Example 10-11 Sample WRED Configuration on Catalyst 6500 Switch Running Cisco IOS

!
interface GigabitEthernet1/1
 no ip address
 wrr-queue bandwidth 50 75 
 wrr-queue queue-limit 100 50 
 wrr-queue random-detect min-threshold 1 50 70 
 wrr-queue random-detect max-threshold 1 75 100 
 wrr-queue cos-map 1 1 0 2 
 wrr-queue cos-map 1 2 3 
 wrr-queue cos-map 2 1 4 
 wrr-queue cos-map 2 2 6 
 priority-queue cos-map 1 1 5 7 
 rcv-queue cos-map 1 1 0 
 switchport
!

To configure all other interface output queue types on the Catalyst 6500 family of switches and all other Catalyst switches, refer to the product configuration guides for the respective Catalyst switch on Cisco.com.

Catalyst switches support WRED on ingress receive queues as well. Consult the configuration guides on Cisco.com for additional details on configuring WRED for ingress receive queues.


Pearson may provide personal information to a third party service provider on a restricted basis to provide marketing solely on behalf of Pearson or an affiliate or customer for whom Pearson is a service provider. Marketing preferences may be changed at any time.

Correcting/Updating Personal Information


If a user's personally identifiable information changes (such as your postal address or email address), we provide a way to correct or update that user's personal data provided to us. This can be done on the Account page. If a user no longer desires our service and desires to delete his or her account, please contact us at customer-service@informit.com and we will process the deletion of a user's account.

Choice/Opt-out


Users can always make an informed choice as to whether they should proceed with certain services offered by Adobe Press. If you choose to remove yourself from our mailing list(s) simply visit the following page and uncheck any communication you no longer want to receive: www.pearsonitcertification.com/u.aspx.

Sale of Personal Information


Pearson does not rent or sell personal information in exchange for any payment of money.

While Pearson does not sell personal information, as defined in Nevada law, Nevada residents may email a request for no sale of their personal information to NevadaDesignatedRequest@pearson.com.

Supplemental Privacy Statement for California Residents


California residents should read our Supplemental privacy statement for California residents in conjunction with this Privacy Notice. The Supplemental privacy statement for California residents explains Pearson's commitment to comply with California law and applies to personal information of California residents collected in connection with this site and the Services.

Sharing and Disclosure


Pearson may disclose personal information, as follows:

  • As required by law.
  • With the consent of the individual (or their parent, if the individual is a minor)
  • In response to a subpoena, court order or legal process, to the extent permitted or required by law
  • To protect the security and safety of individuals, data, assets and systems, consistent with applicable law
  • In connection the sale, joint venture or other transfer of some or all of its company or assets, subject to the provisions of this Privacy Notice
  • To investigate or address actual or suspected fraud or other illegal activities
  • To exercise its legal rights, including enforcement of the Terms of Use for this site or another contract
  • To affiliated Pearson companies and other companies and organizations who perform work for Pearson and are obligated to protect the privacy of personal information consistent with this Privacy Notice
  • To a school, organization, company or government agency, where Pearson collects or processes the personal information in a school setting or on behalf of such organization, company or government agency.

Links


This web site contains links to other sites. Please be aware that we are not responsible for the privacy practices of such other sites. We encourage our users to be aware when they leave our site and to read the privacy statements of each and every web site that collects Personal Information. This privacy statement applies solely to information collected by this web site.

Requests and Contact


Please contact us about this Privacy Notice or if you have any requests or questions relating to the privacy of your personal information.

Changes to this Privacy Notice


We may revise this Privacy Notice through an updated posting. We will identify the effective date of the revision in the posting. Often, updates are made to provide greater clarity or to comply with changes in regulatory requirements. If the updates involve material changes to the collection, protection, use or disclosure of Personal Information, Pearson will provide notice of the change through a conspicuous notice on this site or other appropriate way. Continued use of the site after the effective date of a posted revision evidences acceptance. Please contact us if you have questions or concerns about the Privacy Notice or any objection to any revisions.

Last Update: November 17, 2020