On the surface, a switch looks much like a hub. Despite their similar appearance, switches are far more efficient than hubs and are far more desirable for today’s network environments. Figure 3.4 shows an example of a 32-port Ethernet switch. If you refer to Figure 3.2, you’ll notice few differences in the appearance of the high-density hub and this switch.
As with a hub, computers connect to a switch via a length of twisted-pair cable. Multiple switches are often interconnected to create larger networks. Despite their similarity in appearance and their identical physical connections to computers, switches offer significant operational advantages over hubs.
As discussed earlier in the chapter, a hub forwards data to all ports, regardless of whether the data is intended for the system connected to the port. This arrangement is inefficient; however, it requires little intelligence on the part of the hub, which is why hubs are inexpensive.
Rather than forwarding data to all the connected ports, a switch forwards data only to the port to which the destination system is connected. It looks at the Media Access Control (MAC) addresses of the devices connected to it to determine the correct port. A MAC address is a unique address burned into every NIC by its manufacturer. By forwarding data only to the system to which the data is addressed, the switch dramatically decreases the amount of traffic on each network link. In effect, the switch channels (or switches, if you prefer) data between the ports. Figure 3.5 illustrates how a switch works.
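The forwarding behavior described above can be sketched in a few lines of Python. This is a simplified illustration, not the implementation of any real device; the `Switch` class and method names are made up for the example:

```python
# A minimal sketch of how a switch forwards frames by MAC address.
# The switch learns which MAC address lives on which port by watching
# the source address of each frame it receives.

class Switch:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port number

    def receive(self, port: int, src_mac: str, dst_mac: str) -> list[int]:
        """Return the list of ports the frame is forwarded out of."""
        # Learn: associate the sender's MAC with the arrival port.
        self.mac_table[src_mac] = port
        if dst_mac in self.mac_table:
            # Known destination: forward only to that one port.
            return [self.mac_table[dst_mac]]
        # Unknown destination: flood to every other port, as a hub would.
        return [p for p in range(self.num_ports) if p != port]

sw = Switch(num_ports=4)
print(sw.receive(0, "aa:aa", "bb:bb"))  # destination unknown, flood: [1, 2, 3]
print(sw.receive(1, "bb:bb", "aa:aa"))  # "aa:aa" was learned on port 0: [0]
print(sw.receive(0, "aa:aa", "bb:bb"))  # now forwarded only to port 1: [1]
```

Note that the very first frame to an unknown destination is still flooded; the traffic savings appear once the switch has learned where each MAC address lives.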
Figure 3.5 How a switch works.
You might recall from the discussions of Ethernet networking in Chapter 2, "Cabling Standards, Media, and Connectors," that collisions occur on the network when two devices attempt to transmit at the same time. Such collisions cause the performance of the network to degrade. By channeling data only to the connections that should receive it, switches reduce the number of collisions that occur on the network. As a result, switches provide significant performance improvements over hubs.
Switches can further improve performance over hubs by using a mechanism called full-duplex. On a standard network connection, the communication between the system and the switch or hub is said to be half-duplex. In a half-duplex connection, data can be either sent or received on the wire, but not both at the same time. Because switches manage the data flow on the connection, a switch can operate in full-duplex mode, sending and receiving data on the connection at the same time. In a full-duplex connection, the maximum data throughput is double that of a half-duplex connection: 10Mbps becomes 20Mbps, and 100Mbps becomes 200Mbps. As you can imagine, the difference in performance between a 100Mbps network connection and a 200Mbps connection is considerable.
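The doubling described above is simple arithmetic, and can be captured in a one-line helper. This is only an illustration of the aggregate ceiling; `aggregate_throughput_mbps` is an invented name, and it assumes the link can carry its full rate in each direction simultaneously:

```python
# Aggregate throughput ceiling of a link in half- vs full-duplex mode.
# Half-duplex: send OR receive, so the ceiling is the link rate.
# Full-duplex: send AND receive at the same time, doubling the ceiling.

def aggregate_throughput_mbps(link_rate_mbps: float, full_duplex: bool) -> float:
    return link_rate_mbps * 2 if full_duplex else link_rate_mbps

print(aggregate_throughput_mbps(100, full_duplex=False))  # 100
print(aggregate_throughput_mbps(100, full_duplex=True))   # 200
```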
The secret of full-duplex lies in the switch. As discussed previously in this section, switches can isolate each port and effectively create a single segment for each port on the switch. Because only two devices are on each segment (the system and the switch), and because the switch is calling the shots, there are no collisions. No collisions means no need to detect collisions—thus, a collision-detection system is not needed with switches. The switch drops the conventional carrier-sense multiple-access with collision detection (CSMA/CD) media access method and adopts a far more selfish (and therefore efficient) communication method.
To use a full-duplex connection, you basically need three things: a switch, the appropriate cable, and a NIC (and driver) that supports full-duplex communication. Given these requirements, and the fact that most modern NICs are full-duplex-ready, you might think everyone would be using full-duplex connections. However, the reality is a little different. In some cases, the NIC or its driver is simply not configured for full-duplex operation.
Switches use three methods to deal with data as it arrives:
Cut-through—In a cut-through configuration, the switch begins to forward the packet as soon as it has read the destination address, before the rest of the packet arrives. No error checking is performed on the packet, so the packet is moved through quickly. The downside of cut-through is that because the integrity of the packet is not checked, the switch can propagate errors.
Store-and-forward—In a store-and-forward configuration, the switch waits to receive the entire packet before beginning to forward it. It also performs basic error checking, discarding damaged packets rather than forwarding them.
Fragment-free—Building on the speed advantages of cut-through switching, fragment-free switching works by reading the first 64 bytes of the packet (the minimum legal size of an Ethernet frame) before forwarding it. Because collisions produce fragments smaller than 64 bytes, reading this much enables the switch to identify and discard fragments of a failed transmission without waiting for the entire packet.
As you might expect, the store-and-forward process takes longer than the cut-through method, but it is more reliable. In addition, the delay caused by store-and-forward switching increases with the packet size, whereas the delay caused by cut-through switching is always the same: only the address portion of the packet is read, and this portion is always the same size, regardless of the size of the data packet. The difference in delay between the two methods can therefore be considerable; on average, cut-through switching is 30 times faster than store-and-forward switching.
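As a rough sketch of why these delays differ, the following model estimates how long each method spends reading a frame before forwarding can begin, assuming a 100Mbps link, a 14-byte Ethernet header for cut-through, and the 64-byte minimum frame size for fragment-free. The figures illustrate the scaling only; real switch latencies depend on the hardware:

```python
# Back-of-the-envelope forwarding delay for the three switching methods.
# Assumptions (illustrative): 100 Mbps link, 14-byte header read by
# cut-through, 64-byte minimum Ethernet frame size for fragment-free.

LINK_BPS = 100_000_000  # 100 Mbps

def forward_delay_us(method: str, frame_len: int) -> float:
    """Microseconds spent reading the frame before forwarding can begin."""
    bytes_read = {
        "cut-through": 14,                    # header only, fixed size
        "fragment-free": min(frame_len, 64),  # enough to rule out fragments
        "store-and-forward": frame_len,       # whole frame, so errors are caught
    }[method]
    return bytes_read * 8 / LINK_BPS * 1_000_000

for size in (64, 1518):  # minimum and maximum Ethernet frame sizes
    for method in ("cut-through", "fragment-free", "store-and-forward"):
        print(f"{size:5}-byte frame, {method:17}: "
              f"{forward_delay_us(method, size):7.2f} us")
```

The model makes the key point visible: cut-through delay is constant, fragment-free delay is capped at 64 bytes' worth of reading, and store-and-forward delay grows in direct proportion to the frame size.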
It might seem that cut-through switching is the obvious choice, but today’s switches are fast enough to be able to use store-and-forward switching and still deliver high performance levels. On some managed switches, you can select the switching method you want to use.