Packet Switching Methods on Cisco Networks
As many network engineering students discover, networking protocols and concepts are best learned in a specific sequence, because each builds on the last. This is especially apparent when learning about traffic forwarding. Students first learn the basics of LANs and switched networks, and how devices communicate with each other without routers. With that background in place, the lessons move on to what routers do and how packets are routed. This article takes a small step past that point to examine how Cisco devices, both older and more modern, speed up packet forwarding through different packet-switching methods.
A Little History Lesson
A number of different methods have been developed to improve the performance of networking devices, both by increasing packet-forwarding speed and by decreasing packet delay through a device. Some higher-level methods focus on decreasing the amount of time needed for the routing process to converge; for example, by optimizing the timers used with the Open Shortest Path First (OSPF) protocol or the Enhanced Interior Gateway Routing Protocol (EIGRP).
Optimizations are also possible at lower levels, such as by optimizing how a device switches packets, or how processes are handled. This article focuses at this lower level, specifically by examining how vendors can decrease forwarding time through the development and implementation of optimized packet-switching methods.
The three main switching methods that Cisco has used over the last 20 years are process switching, fast switching, and Cisco Express Forwarding (CEF). Let’s take a brief look at these three methods.
Of the three methods, process switching is the easiest to explain. When only process switching is used, every packet is forwarded from its incoming line card or interface to the device’s processor, where the routing and switching decision is made. Based on this decision, the packet is sent to the outbound line card or interface. This is the slowest packet-switching method because the processor is directly involved with every packet entering and leaving the device, adding delay to each packet. On modern equipment, process switching is used only in special circumstances; it should not be considered the primary switching method.
Fast switching was Cisco’s next evolution in packet switching. It works by implementing a high-speed cache, which the device uses to speed up packet processing. This fast cache is populated by the device’s processor. With fast switching, the first packet for a given destination is forwarded to the processor for a switching decision (that is, it is process-switched). When the processor completes this decision, it adds a forwarding entry for the destination to the fast cache. Subsequent packets for that destination are forwarded using the information stored in the fast cache, without directly involving the processor. This approach lowers both the packet-switching delay and the device’s processor utilization.
For most devices, fast caching is enabled by default on all interfaces.
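The contrast between process switching and fast switching can be sketched in a few lines of Python. This is a minimal illustration, not Cisco internals: the routing table is simplified to an exact-match dictionary, and the function and variable names are invented for the example.

```python
def process_switch(routing_table, dest):
    """Full (slow) lookup performed by the route processor for every packet."""
    # A real router does a longest-prefix match; an exact-match dict
    # stands in for that here.
    return routing_table[dest]

def fast_switch(routing_table, fast_cache, dest):
    """Return (outbound interface, path taken), consulting the fast cache first."""
    if dest in fast_cache:
        # Cache hit: the processor is not involved at all.
        return fast_cache[dest], "fast"
    # First packet for this destination: punt to the processor...
    out_iface = process_switch(routing_table, dest)
    # ...which then populates the fast cache for later packets.
    fast_cache[dest] = out_iface
    return out_iface, "process"

routes = {"203.0.113.7": "GigabitEthernet0/1"}
cache = {}
print(fast_switch(routes, cache, "203.0.113.7"))  # first packet: process-switched
print(fast_switch(routes, cache, "203.0.113.7"))  # later packets: cache hit
```

The first lookup for a destination pays the full process-switching cost; every lookup after that is served from the cache, which is exactly the behavior described above.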
Cisco Express Forwarding (CEF)
Cisco’s next evolution of packet switching was the development of Cisco Express Forwarding. This switching method is used by default on most modern devices, with fast switching being enabled as a secondary method.
CEF operates through the creation and reference of two new components: the CEF Forwarding Information Base (FIB) and the CEF Adjacency table. The FIB is built from the current contents of a device’s IP routing table; when the routing table changes, so does the FIB. The FIB’s functionality is simple: it lists all known destination prefixes and how to switch traffic toward them. The Adjacency table lists the directly connected next-hop devices and how to reach them; adjacencies are discovered using protocols such as the Address Resolution Protocol (ARP).
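A CEF-style lookup can be sketched as a longest-prefix match in the FIB followed by an adjacency lookup for the chosen next hop. The structures below are illustrative only, assuming made-up prefixes, next hops, and rewrite information; real CEF tables are hardware data structures, not Python dictionaries.

```python
import ipaddress

# FIB: destination prefix -> next hop (built from the routing table).
fib = {
    ipaddress.ip_network("10.0.0.0/8"): "192.0.2.1",
    ipaddress.ip_network("10.1.0.0/16"): "192.0.2.2",
}

# Adjacency table: next hop -> (outbound interface, rewrite MAC), learned via ARP.
adjacency = {
    "192.0.2.1": ("GigabitEthernet0/0", "00:11:22:33:44:55"),
    "192.0.2.2": ("GigabitEthernet0/1", "66:77:88:99:aa:bb"),
}

def cef_lookup(dest_ip):
    """Longest-prefix match in the FIB, then resolve the adjacency."""
    addr = ipaddress.ip_address(dest_ip)
    matches = [prefix for prefix in fib if addr in prefix]
    if not matches:
        return None  # no route: in practice the packet would be dropped or punted
    best = max(matches, key=lambda prefix: prefix.prefixlen)  # longest prefix wins
    return adjacency[fib[best]]

print(cef_lookup("10.1.2.3"))  # matches the more specific 10.1.0.0/16
print(cef_lookup("10.9.9.9"))  # falls back to 10.0.0.0/8
```

Because both tables are prebuilt from the routing table and ARP, the per-packet work is reduced to these two lookups; no packet ever has to wait for the processor to compute a route.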
These tables are stored in the main memory of smaller devices, or in the memory of a device’s route processor on larger devices; this mode of operation is called Central CEF.
An additional advantage of CEF on supported larger Cisco devices is that the CEF tables can be copied to and maintained on specific line cards; this mode of operation is called Distributed CEF (dCEF). With dCEF, a packet-switching decision doesn’t have to wait for a Central CEF lookup; it can be made directly on the line card, increasing the switching speed of traffic moving from interface to interface on any supporting line card. This design also decreases utilization of the backplane between the line cards and the route processor, leaving additional room for other traffic.
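The idea behind dCEF can be sketched as a central FIB whose copies are pushed to each line card, so forwarding decisions are made locally. This is a rough conceptual model with invented names, not a representation of Cisco’s actual distribution mechanism.

```python
import copy

# Central FIB maintained on the route processor: prefix -> outbound interface.
central_fib = {"10.0.0.0/8": "GigabitEthernet1/0"}

# The route processor distributes a full copy of the FIB to every line card.
line_card_fibs = {slot: copy.deepcopy(central_fib) for slot in ("slot1", "slot2")}

def update_route(prefix, out_iface):
    """A routing change updates the central FIB, then every line-card copy."""
    central_fib[prefix] = out_iface
    for fib_copy in line_card_fibs.values():
        fib_copy[prefix] = out_iface

def dcef_forward(slot, prefix):
    """The line card forwards using its local copy; no trip across the backplane."""
    return line_card_fibs[slot].get(prefix)

update_route("192.168.0.0/16", "GigabitEthernet2/0")
print(dcef_forward("slot2", "192.168.0.0/16"))  # decided locally on slot2
```

The key point is that the lookup in `dcef_forward` touches only the line card’s own copy; the route processor is consulted only when the tables themselves change.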
One question I always had when I was first learning this material was, “Why should I care?” As a network engineer, most of these mechanisms would be transparent in my day-to-day activities. Most people care only whether the installed device processes packets at its top-rated speed.
However, any good network engineer will tell you that it’s always best to have at least a cursory idea of how devices handle traffic, from the lowest layer on the wire to the highest level shown to a user. Most experienced engineers need this knowledge not day to day, but when implementing a new feature or troubleshooting a hard-to-find problem. For new students, however, this information is important, as many certification exams cover this material.
I hope the information in this article will help new students who are just learning about these methods, and that it will also serve as a reference for experienced engineers who need a little tune-up on packet switching methods.