Ethernet is a LAN standard that has become one of the most widely used LAN media today, for several reasons. Ethernet is one of the cheapest and most widely available LAN media, and it can carry high-speed transmissions. One of the most important reasons Ethernet is the most popular LAN medium is that there is a large pool of skilled network administrators who know how to implement, administer, and troubleshoot Ethernet networks.
When you connect devices to an IEEE 802.3 Ethernet network, you attach one end of the cable to either a hub or switch. For this exam, therefore, an understanding of hubs and switches and their functions is essential. I explain each device throughout this chapter. In the following sections, you take a look at the differences between switches and hubs, and between collision and broadcast domains; you also learn how to troubleshoot problems in Ethernet networks.
Collision and Broadcast Domains
In a network, a collision domain is defined as all the interfaces on a single segment that can send data on the same physical wire, where a collision of the data being sent can occur. When you use a hub, all the nodes connected to the hub are in the same collision domain. A hub is basically a repeater that re-sends any signal it receives out every other port. This means that, when a network uses a hub, every frame sent on the wire is seen by every node on the network.
If a node sends out a broadcast, all the nodes that can receive that broadcast on the physical wire are in the same broadcast domain. Since an Ethernet hub repeats the same signal it received out all of its ports, all the nodes that are in the same collision domain are also in the same broadcast domain.
Even if the frame is not destined for the node that receives the frame, the frame must still be checked to see what the destination address is. This in turn uses valuable processing power on the device receiving the frame.
For the exam, you must know the differences between a broadcast and a collision domain, as they occur in Ethernet networks. Remember that a broadcast copies and sends a transmission to every destination node on the network.
In networks that use hubs, every frame that traverses the broadcast domain must be processed by every node. When the nodes are processing broadcasts and frames for 100 devices, they can accomplish little else. Switches were designed to overcome the handicaps of a hub.
A switch looks similar to a hub, but each port on a switch is in its own collision domain. Only the devices on ports assigned to the same Virtual Local Area Network (VLAN) receive broadcasts from one another. An administrator can assign certain devices to a VLAN as a way of creating smaller broadcast domains. A broadcast from a device connected to a port on VLAN 10 will only be seen by devices connected to ports assigned to VLAN 10. By default, all ports on a Cisco switch are assigned to VLAN 1.
When a transmission is sent over an Ethernet network, if the switch knows the port of the destination node, the switch directs the frame to that node without the use of a broadcast to learn where the node is located. This greatly reduces the processing required by the nodes on the network, since the frames are received only by the destination device.
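The learning-and-forwarding behavior described above can be sketched in a few lines of Python. This is a simplified illustration, not Cisco's actual implementation; the four-port size and the short MAC strings are assumptions made for brevity:

```python
class Switch:
    """Minimal sketch of a switch's MAC address table (illustrative only)."""

    def __init__(self, num_ports=4):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port number

    def receive(self, port, src_mac, dst_mac):
        # Learn: associate the source MAC with the port it arrived on.
        self.mac_table[src_mac] = port
        # Forward: if the destination is known, send out that port only;
        # otherwise flood out every port except the one it arrived on.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in range(1, self.num_ports + 1) if p != port]

sw = Switch()
flooded = sw.receive(1, "AA", "BB")  # BB unknown: frame is flooded
ports = sw.receive(2, "BB", "AA")    # AA was learned on port 1
print(flooded, ports)                # [2, 3, 4] [1]
```

Once both stations have been learned, no frame between them reaches any other port, which is exactly why switched networks spare the other nodes the processing burden that hubs impose.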
In the late 1970s, Xerox created the first Ethernet standard. In 1984, a consortium of Digital, Intel, and Xerox, calling themselves DIX, created the Ethernet_II standard. Around the same time, the IEEE created its own standard, with three groups providing input. The first group was called The High Level Interface (HILI), and was responsible for developing high-level internetworking protocols. This group later became the IEEE 802.1 Committee.
The second group was called The Logical Link Control Group, which focused on end-to-end connectivity. This group later became the IEEE 802.2 Committee.
The last group, called the Data Link and Medium Access Control (DLMAC) group, was responsible for developing medium access protocols. This group later formed committees for Ethernet (802.3), Token Bus (802.4), and Token Ring (802.5).
When the 802.3 group finished their Ethernet standard, it was almost identical to the Ethernet_II standard. The major difference was their descriptions of the Media Access Control (MAC) layer and the Logical Link Control (LLC) layer's responsibilities.
Ethernet consists of three basic elements: the physical medium, a set of medium access control rules, and the Ethernet frame. The physical medium is used to carry Ethernet signals between devices. The network uses a set of medium access control rules, which allow multiple computers to share the available bandwidth. Each Ethernet frame that is sent on the physical medium consists of a set of bits understandable by the devices connected to the LAN.
Ethernet takes packets from upper-layer protocols, and places header and footer information around the data before it traverses the network. This process is called data encapsulation or framing. Ethernet frames travel at the Data Link layer of the OSI model and must be a minimum of 64 bytes and a maximum of 1518 bytes.
Figure 3.4 shows an Ethernet IEEE 802.3 frame and an Ethernet frame.
Figure 3.4 An IEEE 802.3 frame and an Ethernet frame.
Here is a brief description of each field in an IEEE 802.3 frame and an Ethernet frame:
Preamble: An alternating pattern of ones and zeros that is used by the receiver to establish bit synchronization. Bit synchronization is like helping each device speak the same dialect of a language. Both Ethernet and IEEE 802.3 frames begin with a preamble.
Start frame delimiter: Indicates where the frame starts, and is the byte before the destination address in both the Ethernet and IEEE 802.3 frame.
Destination Address and Source Address: Each is six bytes long in both Ethernet and IEEE 802.3 frames, and is stored in hardware on the Ethernet and IEEE 802.3 interface cards. The IEEE standards committee assigns the first three bytes of the address to a specific vendor. The source address is always a unicast (single-node) address, whereas the destination address may be unicast, multicast (group), or broadcast (all nodes).
Type Field: In Ethernet frames, this is the two-byte field after the source address. The type field specifies the upper-layer protocol that is to receive the data after Ethernet processing is complete.
Length Field: In IEEE 802.3 frames, the Length Field is a two-byte field following the source address. The length field indicates the number of bytes of data that follow this field and precede the frame check sequence field.
Data Field: The actual data contained in the frame, following the type or length field. After Physical-layer and Link-layer processes are complete, this data is sent to an upper-layer protocol. With Ethernet, the upper-layer protocol is identified in the type field. With IEEE 802.3, the upper-layer protocol must be defined within the data portion of the frame. If the data of the frame is not large enough to fill the frame to its minimum size of 64 bytes, padding bytes are inserted to ensure at least a 64-byte frame.
FCS (Frame Check Sequence) or CRC (Cyclic Redundancy Check) field: Appears at the end of the frame. The sender calculates a CRC value over the frame and stores it in this field; the receiver recalculates the CRC and compares the two values to detect frames that were damaged in transit. The CRC covers all fields except the preamble, the start frame delimiter, and the FCS itself.
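The field layout above can be made concrete with a short Python sketch that builds a minimal IEEE 802.3 frame, pads the data up to the 64-byte minimum, and appends a CRC. The preamble and start frame delimiter are omitted for simplicity, and CRC-32 from the standard library stands in for the hardware checksum; treat this as an illustration rather than a wire-accurate implementation:

```python
import struct
import zlib

def build_8023_frame(dst: bytes, src: bytes, payload: bytes) -> bytes:
    """Sketch of IEEE 802.3 framing: length field, padding, and FCS."""
    length = struct.pack("!H", len(payload))   # two-byte length field
    frame = dst + src + length + payload
    # Pad so that addresses + length + data + FCS reach the 64-byte minimum.
    if len(frame) + 4 < 64:
        frame += b"\x00" * (64 - 4 - len(frame))
    # The CRC covers every field except the preamble, SFD, and the FCS itself.
    fcs = struct.pack("!I", zlib.crc32(frame) & 0xFFFFFFFF)
    return frame + fcs

frame = build_8023_frame(b"\xff" * 6,                       # broadcast destination
                         b"\x02\x00\x00\x00\x00\x01",       # example source MAC
                         b"hello")
print(len(frame))  # 64 -> a 5-byte payload is padded up to the minimum frame size
```

A receiver performs the reverse check: it recomputes the CRC over everything before the FCS and discards the frame if the values disagree.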
CSMA/CD (Carrier Sense Multiple Access with Collision Detection) Protocol
Ethernet uses a communication concept called datagrams to get messages across the network. The Carrier Sense Multiple Access with Collision Detection (CSMA/CD) protocol makes sure that two datagrams aren't sent out at the same time, and if they are, it acts as a mediator to retransmit. A good analogy to CSMA/CD is that of a police radio system. If you are on the correct channel, you can hear everything, but if you try to talk when another person is speaking, one or both of you won't be heard clearly. You must wait for a break in communication to speak and be heard. The CSMA/CD communication control method works in a similar manner; if a channel is busy on the network, other stations cannot transmit. CSMA/CD, therefore, can slow the communication process in a network environment.
A datagram is a packet or a frame sent on the physical wire from one host to another.
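The carrier-sense, collision-detection, and retransmission behavior can be sketched as follows. This is an illustrative model, not a timing-accurate implementation; the `channel_busy`, `transmit`, and `wait` callbacks are assumptions standing in for real hardware:

```python
import random

def csma_cd_send(channel_busy, transmit, wait=lambda slots: None, max_attempts=16):
    """Illustrative sketch of CSMA/CD behavior.

    channel_busy(): returns True while another station is transmitting.
    transmit():     returns True on success, False if a collision was detected.
    wait(slots):    delays for the given number of slot times (no-op by default).
    """
    for attempt in range(1, max_attempts + 1):
        while channel_busy():                    # carrier sense: wait for quiet
            pass
        if transmit():                           # collision detection
            return True
        # Collision: back off a random number of slot times before retrying
        # (truncated binary exponential backoff, exponent capped at 10).
        wait(random.randrange(2 ** min(attempt, 10)))
    return False                                 # give up after 16 attempts

# A station that collides twice, then succeeds on the third attempt:
attempts = iter([False, False, True])
ok = csma_cd_send(lambda: False, lambda: next(attempts))
print(ok)  # True
```

The random backoff is what breaks the tie between two stations that collided: after each collision the range of possible delays doubles, so repeated collisions become increasingly unlikely.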
The IEEE committee finalized a specification for running Ethernet-type signaling over unshielded twisted-pair (UTP) wiring. The IEEE calls the 10Mbps UTP standard 10Base-T, which indicates that networks using this standard use a signaling speed of ten megabits per second, a baseband signaling scheme, and twisted-pair wiring. 10Base-T isn't always fast enough; transmission speeds of 100Base-T and Gigabit Ethernet are needed for today's typical networks to remain healthy without significant latency. Although many smaller companies still use 10Base-T technology because it meets their needs, many notice that their newer applications are more demanding of resources. The next few sections take a look at both 10Base-T and 100Base-T Ethernet technologies and how Ethernet adapted to increase LAN speeds.
Baseband describes a technology, such as Ethernet, in which a single carrier frequency uses the entire bandwidth of the medium.
Fast Ethernet (or 100Base-T) uses the CSMA/CD protocol and has 10 times the performance of 10Base-T. Because 100Base-T uses the same protocol as 10Base-T (CSMA/CD), you can integrate 100Base-T into existing 10Base-T networks. 100Base-T technology uses the same network wiring and equipment as does 10Base-T, provided the wiring and equipment support both 10Base-T and 100Base-T. Some cabling standards, such as Category 1, 2, or 3, cannot support 100Base-T. However, the biggest issue that a network administrator will face in upgrading to Fast Ethernet 100Base-T technology is at the workstation. To allow the workstation to transmit and receive at either 10Mbps or 100Mbps, the PCs need a NIC (network interface card) that is 10/100 capable (as you learn in the next section of this chapter, many networks today use 10/100/1000 NICs). Older Ethernet NICs transmit at only 10Mbps.
In addition to replacing NICs in every workstation, the actual implementation of 100Base-T in the network is another challenge. 10Base-T allows a larger collision domain diameter than 100Base-T and uses a different signaling system. The network diameter is the cabling distance between the two farthest points in the network. The diameter between two devices is the cabling distance between those two devices. The PC in your office may be only 250 feet from the wiring closet. However, if you were to wire from the switch in the wiring closet, up the wall, through the ceiling, back down through the wall in your office, and add another 10-foot cable from the wall to the PC desk, you may have overextended the available distance diameter for the cabling you are using.
The collision domain diameter of 100Base-T is 205 meters, which is approximately ten times smaller than that of 10Base-T. What does this mean? Because 100Base-T uses the same collision-detection mechanism as 10Base-T, the network diameter has to be reduced for 100Base-T. Think of it in terms of time slots. Each station gets its turn or time slot to transmit all of its data before another station can transmit its data. For 100Base-T networks to transmit in the same time slots as 10Base-T, the distance must be reduced (because 100Base-T travels at a faster rate; 100Base-T moves 10 times as fast, so it can only go one-tenth as far).
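The time-slot reasoning above can be checked with simple arithmetic: the minimum 64-byte (512-bit) frame defines the collision window, and at ten times the bit rate that window lasts one-tenth as long, which is why the allowable diameter shrinks by roughly the same factor:

```python
def slot_time_us(bit_rate_bps, slot_bits=512):
    """Duration of the 512-bit collision window, in microseconds."""
    return slot_bits / bit_rate_bps * 1e6

t10 = slot_time_us(10_000_000)    # 10Base-T:  51.2 microseconds
t100 = slot_time_us(100_000_000)  # 100Base-T:  5.12 microseconds
print(t10, t100, t10 / t100)      # the window is 10x shorter at 100Mbps
```

Because a collision must be detected before the sender finishes the minimum frame, a signal can only travel one-tenth as far and back within the shorter 100Base-T window.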
A 100Base-T network also supports an optional feature called autonegotiation, which allows a hub or switch and a network device to communicate their compatibilities and to agree upon an optimal communication speed and duplex. Autonegotiation can detect speed matching for 10 and 100Mbps, full-duplex, and automatic signaling configurations for 100Base-T4 and 100Base-TX stations. Autonegotiation can be enabled or disabled on hubs or switch ports.
Devices usually have autonegotiation on by default to automatically negotiate a mutual speed for both devices for communication. Such autonegotiation is a nice feature now that most networks use multiple speeds and duplexes.
A switch can provide dedicated point-to-point bandwidth availability from the switch port to the attached device. This means that a 10Mbps network interface card can use the full 10Mbps, rather than sharing that bandwidth with all other stations. A small 12-port Ethernet switch provides a theoretical 120Mbps of aggregate bandwidth, compared to the 10Mbps shared by every station on an Ethernet hub.
Full-duplex Ethernet is another advancing technology for Ethernet. Full duplex allows data to be sent and received simultaneously over a link. You may find full-duplex capabilities in autonegotiating NICs and switch ports. In a half-duplex link, data can either be sent or received, but not both at the same time. In theory, with full duplex, you can have twice the bandwidth of normal (half-duplex) Ethernet. The full-duplex mode requires that each end of the link connect to only a single device, such as a workstation, server, or switch port. The devices also have to be running at the same speed, such as 10Mbps, 100Mbps, or 1000Mbps.
Gigabit Ethernet is another addition to the IEEE 802.3 Ethernet standards. Gigabit Ethernet is 10 times faster than 100Base-T, with speeds up to 1000Mbps or 1Gbps (one gigabit per second). This has been a welcome addition to many database-heavy and processing-intensive networks; pushing large files across a network using standard Ethernet was nearly impossible, due to slow transfer speeds. Gigabit Ethernet allows for more resources to be shared throughout a network. It can run in half-duplex or full-duplex mode. Most products that use gigabit technology use fiber-optic cable, although Gigabit Ethernet can also run over Category 5 or 6 UTP at distances of up to approximately 100 meters. Using fiber optics can greatly extend this distance. Implementing Gigabit Ethernet in your network increases its bandwidth and capacity, improves Layer 2 performance, and can eliminate Layer 2 bottlenecks.
Gigabit Ethernet looks identical to Ethernet from the Data Link layer upward. The physical layer is defined in the IEEE 802.3ab 1000BASE-T standard, which is the standard for Gigabit Ethernet over Category 5 or 6 cabling. Cisco recommends that it be installed according to the specifications of ANSI/TIA/EIA-568A. 1000BASE-T works by using all four of the Category 5 or 6 pairs to achieve 1000Mbps operation. This means that each pair carries 250Mbps of data and is capable of sending and receiving data simultaneously. 100BASE-TX (Fast Ethernet) uses two pairs, one to transmit and one to receive.
In early March 2002, Cisco agreed to support the IEEE 802.3ae 10 Gigabit Ethernet standard. This new standard preserves the 802.3 Ethernet frame format and continues to use MAC (Media Access Control) addresses embedded on the NIC. The new standard allows for distances of up to 40km using single-mode fiber cabling.
Cisco joined with six other companies to establish the 10 Gigabit Ethernet Alliance. The 10 Gigabit Ethernet standards effort began in March 1999 with a group called the Higher Speed Study Group (HSSG). This study group is now called the IEEE 802.3ae 10 Gigabit Ethernet Task Force.
Gigabit Media Types
Switch vendors offer different media types for switch interfaces needing to use Gigabit Ethernet. The three types of media for Gigabit Ethernet are longwave, shortwave, and copper medium:
Longwave (LW) laser can use single-mode and multimode fiber. This specification is referred to as 1000BaseLX. The 1000BaseLX Gigabit Interface Converter (GBIC) interfaces can pass data up to 1.8 miles (three kilometers).
Shortwave (SW) laser uses multimode fiber. This specification is generally referred to as 1000BaseSX. With the GBIC interfaces, 1000BaseSX gives the option for multimode fiber-optic cable connections with a 1,800 foot distance limitation.
A provision within the IEEE 802.3z specification defines Gigabit Ethernet over coaxial copper (1000BaseCX) using shielded 150-ohm copper. This can be used only for short distances because this cabling method has a distance limitation under 25 meters.
Unicast, Broadcast, and Multicast
Sometimes different types of broadcasts can bring your network to its knees because of the sheer size or number of the datagrams. As a network administrator, you must understand these different types of broadcasts so that you can identify what they are and where they are coming from. Sometimes using a device called a sniffer on the network while the network is slow can be very helpful. A sniffer is a device that connects to the network to monitor the traffic on the network, including the broadcasts and who is sending them. This monitoring can be helpful in diagnosing whether there is a problem in the network. The three types of LAN data transmissions are unicast, multicast, and broadcast.
A unicast transmission is a single packet that is sent from the source node to one destination node on a network. The source node addresses the packet with the destination node's address, and the packet is then carried across the network to that node.
A multicast transmission is a single data packet that is copied and sent to a specific set of nodes on the network. The source node uses a multicast address to send a single packet addressed to several nodes. The packet is then sent on to the network, where it's copied and sent to each destination node that is part of the multicast group.
A Broadcast transmission is a single data packet that is copied and sent to all nodes on the network. In this transmission, the source node sends a packet addressed to all destination nodes on the network. The packet is then sent on the network where it's copied and sent to every destination node on that network.
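The three transmission types map directly onto the destination MAC address: an all-ones address is a broadcast, an address with the low-order bit of the first byte set (the individual/group bit) is a multicast, and everything else is a unicast. A short sketch of that classification:

```python
def mac_transmission_type(dst_mac: bytes) -> str:
    """Classify a destination MAC address as broadcast, multicast, or unicast."""
    if dst_mac == b"\xff" * 6:
        return "broadcast"   # all ones: delivered to every node on the segment
    if dst_mac[0] & 0x01:
        return "multicast"   # group bit set: delivered to a set of nodes
    return "unicast"         # a single destination node

print(mac_transmission_type(b"\xff\xff\xff\xff\xff\xff"))  # broadcast
print(mac_transmission_type(b"\x01\x00\x5e\x00\x00\x01"))  # multicast
print(mac_transmission_type(b"\x02\x1a\x2b\x3c\x4d\x5e"))  # unicast
```

This is the check every NIC performs on each frame it sees, which is why broadcast-heavy segments burden every attached node.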
Broadcast transmissions can be very taxing on the network. You may want to segment your network into smaller sections to reduce the broadcast domain.