Case Study and Simulation Exercise
This case study is a continuation of the DJMP Industries case study we introduced in Chapter 2, "Applying Design Principles in Network Deployment."
Key Point: Case Study General Instructions
Use the scenarios, information, and parameters provided at each task of the ongoing case study. If you encounter ambiguities, make reasonable assumptions and proceed. For all tasks, use the initial customer scenario and build on the solutions provided thus far.
You can use any and all documentation, books, white papers, and so on.
In each task, you act as a network design consultant. Make creative proposals to accomplish the customer's business needs. Justify your ideas when they differ from the provided solutions.
Use any design strategies and internetworking technologies you feel are appropriate.
The final goal for each case study is a paper solution; you are not required to provide the specific product names.
Appendix G, "Answers to Review Questions, Case Studies, and Simulation Exercises," provides a solution for each task based on assumptions made. There is no claim that the provided solution is the best or only solution. Your solution might be more appropriate for the assumptions you made. The provided solution helps you understand the author's reasoning and offers a way for you to compare and contrast your solution.
Case Study: Enterprise Campus Design
Complete these steps:
Step 1 You might want to review the DJMP Industries Case Study Scenario in Chapter 2.
Step 2 Propose the optimal campus design that addresses the scenario requirements (switched solution, redundancy, servers in a separate segment, and so on).
Simulation 1: Shared Versus Switched LAN
This exercise is a paper-only version of the simulation that the simulation tool actually performed, including the results the tool provided. Review the scenario and the simulation results and answer the questions.
The customer (DJMP Industries) plans to restructure its flat campus network, which consists of workstations and servers that are located in the central building and building A. The company is considering Ethernet switching technology as a replacement for the 10BaseT Ethernet hubs. You have been asked to determine what effect the introduction of the switches might have on the load of the links and to estimate the network's responsiveness and utilization with respect to the existing applications.
To provide some proof of future network efficiency, you will model FTP and HTTP performance on the network using shared and then switched Ethernet platforms.
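Before looking at the simulation results, the difference the simulations are designed to expose can be sketched numerically: on a hub, every frame occupies the single shared collision domain, whereas a switch gives each port its own dedicated bandwidth. A rough model (the flow rates below are illustrative assumptions, not values from the DJMP scenario):

```python
# Back-of-the-envelope comparison of shared vs. switched Ethernet.

LINK_MBPS = 10.0          # 10BaseT link speed

def shared_utilization(flows_mbps):
    """On a hub, every frame traverses the single collision domain,
    so all flows add up on one shared 10-Mbps medium."""
    return sum(flows_mbps) / LINK_MBPS

def switched_utilization(flows_mbps):
    """On a switch, each port is its own collision domain; the busiest
    port, not the sum of all flows, sets the worst-case utilization."""
    return max(flows_mbps) / LINK_MBPS

flows = [0.5, 1.2, 0.8, 2.0]   # four hypothetical client flows, in Mbps
print(f"shared:   {shared_utilization(flows):.0%}")    # 45%
print(f"switched: {switched_utilization(flows):.0%}")  # 20%
```

The same offered load that pushes a shared segment toward saturation leaves every switched port lightly used, which is the effect the following scenarios measure.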
Client Accessing Server in Unloaded Shared Ethernet
The customer has provided the information about its existing network and the number of users. As illustrated in Figure 4-21, you began the initial network behavior evaluation by simulating the load that a single client accessing the web server places on the LAN links.
Figure 4-21 Single Client Accessing Web Server on Unloaded Shared Ethernet
You performed the simulation (using 10-minute intervals), observed the effect of traffic growth, and compared the results among different scenarios.
The relevant statistics of interest for this case are the link (Ethernet) utilization and the HTTP response times.
The graph in Figure 4-22 shows the simulated network load that results from the HTTP session between the client and the server. The low Ethernet utilization number indicates that the HTTP traffic exchanged between the client and the server does not represent a significant load on the network.
Figure 4-22 Ethernet Utilization on Unloaded Shared Ethernet
The graphs in Figure 4-23 show the simulation results for the HTTP response times. On average, the HTTP page response times are in the range of 0.01 to 0.015 seconds, whereas the HTTP object response times vary from approximately 0.004 to 0.01 seconds (every HTTP page consists of several objects).
Figure 4-23 HTTP Response Times on Unloaded Shared Ethernet
The graphs in Figure 4-24 show the simulation results of the probability that the HTTP response time is equal to a particular value.
Figure 4-24 Probability of HTTP Response Times on Unloaded Shared Ethernet
What can you observe from the graphs in Figures 4-23 and 4-24?
Client Accessing Server in Loaded Shared Ethernet
Your task now is to create a scenario in which background traffic is simulated to provide a more realistic picture of the ongoing traffic in the network. The client continues to access the web server while all the other clients concurrently initiate FTP sessions to an FTP server; as illustrated in Figure 4-25, a separate FTP server is introduced to eliminate the effect of server utilization. Therefore, the HTTP session is tested in a heavily loaded, shared Ethernet network.
Figure 4-25 Single Client Accessing the Web Server on Loaded Shared Ethernet
You performed the simulation and compared the results with those from the previous simulation. The graph in Figure 4-26 describes the increased network utilization as a result of the concurrent FTP and HTTP conversations.
Figure 4-26 Ethernet Utilization on a Loaded Shared Network
The next step is to observe the HTTP response times again. When examining the graphs in Figure 4-27, you notice that, in general, the results match those that were obtained in the unloaded network. There are some deviations, presumably because of the retransmissions that lower the probability of an immediate response. The delayed responses seem evenly distributed throughout the observed interval.
Figure 4-27 HTTP Response Times on Loaded Shared Ethernet
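One plausible mechanism behind the occasional delayed responses is CSMA/CD retransmission: a station that collides backs off for a random number of slot times, and the backoff range doubles with each successive collision. A sketch of truncated binary exponential backoff (the collision counts are hypothetical, not taken from the simulation):

```python
import random

SLOT_TIME_S = 51.2e-6   # 512 bit times on 10-Mbps Ethernet

def backoff_delay(attempts, rng):
    """Total extra delay accumulated after `attempts` consecutive
    collisions, using truncated binary exponential backoff (the
    random range doubles per attempt and is capped at 2**10 slots)."""
    delay = 0.0
    for attempt in range(1, attempts + 1):
        k = min(attempt, 10)
        delay += rng.randrange(2 ** k) * SLOT_TIME_S
    return delay

rng = random.Random(1)
for attempts in (1, 3, 5):
    print(f"{attempts} collisions -> ~{backoff_delay(attempts, rng) * 1e3:.2f} ms extra")
```

Because the delay grows with each retry, most responses stay fast while a minority are visibly late, which matches the evenly scattered deviations in the graphs.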
- What can you determine from the results? What is the reason for the delayed HTTP responses?
Introducing Switched Ethernet
In the third simulation scenario, the shared Ethernet is replaced with switched Ethernet, which Figure 4-28 shows being implemented with a single LAN switch. The traffic pattern remains the same as in the previous scenario: the client accesses the web server while all other clients access an FTP server.
Figure 4-28 Single Client Accessing the Web Server on Switched Loaded Ethernet
Figure 4-29 shows the results of this simulation. Careful examination of the HTTP response times shows that the background FTP traffic does not significantly affect the web communication. Everything is back to normal: the HTTP response times are consistently low, and there is no sign of individual deviations that could compromise the overall statistics.
Figure 4-29 HTTP Response Times on Loaded Switched Ethernet
The graph in Figure 4-30 illustrates the probability of receiving a prompt HTTP response. The probability is almost as high as when a stand-alone HTTP session was simulated (with no background traffic). This leads you to the conclusion that switching technology might be the obvious solution.
Figure 4-30 HTTP Response Probabilities on Loaded Switched Ethernet
- You concluded that the introduction of the Layer 2 switch represents a significant improvement in this case. How did you determine this from the previous graphs?
Simulation 2: Layer 2 Versus Layer 3 Switching
This exercise is a paper-only version of the simulation that the simulation tool actually performed, including the results the tool provided. Review the scenario and the simulation results and answer the questions.
This simulation inspects the impact of Layer 2 versus Layer 3 switching on the load in various parts of the structured campus network.
After successfully deploying the switching technology, the company is considering further improvements to its campus network design. It has already finished some baseline wiring work in the central building and in Building A and is facing some Layer 2 and Layer 3 design issues.
You decided to model the company's network to match the existing situation using the following architecture:
- Each building contains distribution-layer switches, to which the access-layer (wiring closet or data center concentrator) switches are connected.
- The distribution-layer devices are connected via two central core switches (the campus backbone).
- The whole campus is fully redundant.
To provide comparable results, you need a reference traffic flow. Therefore, you decided to focus solely on the communication between the two workstations, WS_A and WS_B, that are located on different floors of Building A, and the server in the central building.
In the simulation, Workstations A and B communicate with the server using various loads, as illustrated by the graph in Figure 4-31.
Figure 4-31 Simulation Load
Layer 2 Only Design
As shown in Figure 4-32, you began the simulation by turning on the Layer 2 functionality on all switches in the campus network. Soon you realized that, even in this highly redundant Layer 2 network, the number of possible paths is reduced to only one, as determined by STP. STP computes a loop-free topology; any redundant links belonging to the same LAN or VLAN are placed in the blocking state and cannot be used.
Figure 4-32 Layer 2 Only Design
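The pruning that STP performs can be sketched: starting from a root bridge, keep one path to each switch and block every remaining redundant link. A simplified model (the topology below is an assumed abstraction loosely based on the case-study device names, with all link costs taken as equal):

```python
from collections import deque

# Assumed redundant Layer 2 topology: distribution (DS, DC) and
# core (CS) switches, modeled on the case-study naming.
links = {("DS_A", "CS_A"), ("DS_A", "CS_B"), ("DS_B", "CS_A"),
         ("DS_B", "CS_B"), ("DC_A", "CS_A"), ("DC_A", "CS_B"),
         ("DC_B", "CS_A"), ("DC_B", "CS_B"), ("CS_A", "CS_B")}

def spanning_tree(root, links):
    """BFS from the root keeps one forwarding link per switch,
    mimicking STP's single-path result when all costs are equal."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    tree, seen, queue = set(), {root}, deque([root])
    while queue:
        node = queue.popleft()
        for nbr in sorted(adj[node]):
            if nbr not in seen:
                seen.add(nbr)
                tree.add(frozenset((node, nbr)))
                queue.append(nbr)
    return tree

forwarding = spanning_tree("CS_A", links)
blocked = {frozenset(link) for link in links} - forwarding
print(f"{len(forwarding)} forwarding links, {len(blocked)} blocked")
# -> 5 forwarding links, 4 blocked
```

Nearly half the links in this redundant topology end up blocked, which is exactly why all traffic collapses onto a single path in the simulation.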
Figure 4-33 depicts the result of simulating 10 minutes of traffic originated by both workstations toward the server, and vice versa. The average-loaded links (30 percent) appear as solid lines, and the heavily loaded links (60 percent) appear as dotted lines. The resulting arrows indicate that the load is not balanced; specifically, all traffic moves over a single path: DS_B-CS_A-DC_B.
Figure 4-33 Layer 2 Only Loaded Network
Use of redundant links terminating at separate devices helps increase the network's reliability. This is especially true for the observed case, in which you expect that a link or node failure would neither impact the network for an extended period nor result in a load imbalance.
To prove this, you studied the effect of link and node failure on the network performance by tearing down the DS_B-CS_A link and afterwards disabling the DC_B node. The resulting graph, which is illustrated in Figure 4-34, indicates that the traffic is simply redirected over the alternative path, DS_B-CS_B-DC_A.
Figure 4-34 Link Failure on the Layer 2 Only Loaded Network
Does the traffic immediately start using the original path once the link or node has fully recovered?
Layer 3 Switching in Distribution
Next, you decided to replace the distribution-layer Layer 2 switches with Layer 3 switches, thereby eliminating the STP path selection restrictions. This was expected to improve the efficiency of the distribution-to-core link usage.
Figure 4-35 presents the results of the simulation. The traffic is perfectly balanced from the ingress Layer 3 switch all the way to the destination. The sharing is proportional across the source-destination pairs of distribution switches, so all the distribution switches are equally loaded (see the arrows representing the load: dotted for average load and solid for heavy load). The only remaining suboptimal paths are in the access layer.
Figure 4-35 Balanced Traffic with Layer 3 Switching in the Distribution Layer
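The balanced distribution-to-core forwarding behaves like equal-cost multipath (ECMP): the Layer 3 switch hashes each flow onto one of its equal-cost next hops, so the aggregate load splits across both core switches while any single flow stays on one path. A sketch (the hash function and addresses are illustrative assumptions, not the switch's actual algorithm):

```python
import hashlib

NEXT_HOPS = ["CS_A", "CS_B"]   # equal-cost core next hops from a distribution switch

def ecmp_next_hop(src, dst, next_hops):
    """Pick a next hop per flow by hashing the (src, dst) pair;
    the same flow always maps to the same path, avoiding reordering."""
    digest = hashlib.md5(f"{src}->{dst}".encode()).digest()
    return next_hops[digest[0] % len(next_hops)]

# Twenty hypothetical client flows toward the server.
flows = [(f"10.1.1.{i}", "10.9.9.10") for i in range(1, 21)]
counts = {hop: 0 for hop in NEXT_HOPS}
for src, dst in flows:
    counts[ecmp_next_hop(src, dst, NEXT_HOPS)] += 1
print(counts)
```

With many flows, the per-flow hash spreads the load over both next hops, which is the balanced pattern the simulation shows.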
Examining the results in Figure 4-36, you might notice that no load sharing occurs in Building A's access layer. Is this a result of the workstations' default routing using distribution switch DS_A as the primary exit point, or a result of the attached Layer 2 switch placing the secondary port in the blocking state?
The graph in Figure 4-36 shows the path (using thick lines) taken by the packet that is originated by workstation WS_A and destined for the server in the central building. It is obvious that the network resources are used more fairly.
Figure 4-36 Traffic Flow from WS_A to the Server
Figure 4-37 shows the path (using thick lines) the packets take in the opposite direction, from the server toward the workstation WS_A. The server uses default routing to send the packets out of the local LAN and therefore does not utilize the redundant path at all.
Figure 4-37 Traffic Flow from the Server to WS_A
The network is now tested against severe failure events, such as link loss, by simulating a failure of the CS_A-DC_A link. Figure 4-38 shows the result; the traffic from WS_A to the server is represented by a thick line. As expected, the network does not change its behavior under link failure. The load balancing from the ingress Layer 3 switch to the destination is still perfect, but on a reduced topology. The load distribution ratio on DS_A-CS_A versus DS_A-CS_B is 1:2 because the load is shared between distribution-layer next hops.
Figure 4-38 Link Failure Scenario Showing WS_A to Server Traffic
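One way to account for the 1:2 ratio is equal splitting of the total load over the equal-cost paths that survive the failure. A small arithmetic sketch (the surviving path list is an assumption inferred from the figures, not output from the tool):

```python
from collections import Counter

# Assumed equal-cost paths from DS_A toward the server-side
# distribution switches after the CS_A-DC_A link fails.
paths = [
    ("DS_A", "CS_A", "DC_B"),
    ("DS_A", "CS_B", "DC_B"),
    ("DS_A", "CS_B", "DC_A"),
]

def link_loads(paths):
    """Split the total load equally across the surviving paths and
    sum the share carried by each individual link."""
    share = 1.0 / len(paths)
    loads = Counter()
    for path in paths:
        for link in zip(path, path[1:]):
            loads[link] += share
    return loads

loads = link_loads(paths)
ratio = loads[("DS_A", "CS_B")] / loads[("DS_A", "CS_A")]
print(f"DS_A-CS_A : DS_A-CS_B = 1 : {ratio:.0f}")   # -> 1 : 2
```

One path uses DS_A-CS_A while two use DS_A-CS_B, so equal per-path sharing yields exactly the 1:2 split the simulation reports.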
Figure 4-39 illustrates the return path from the server to WS_A (shown as thick lines).
Figure 4-39 Link Failure Scenario Showing Server to WS_A Traffic
In Figure 4-39, why is the return path completely bypassing the CS_A switch?
Layer 3 Switching in Core and Distribution
At this point, you change the core so that the core and distribution layer switches are all Layer 3 switches. As illustrated in Figure 4-40, the simulated load is perfectly shared from the distribution layer across the core on a hop-by-hop basis.
Figure 4-40 Layer 3 Switching Results in a Balanced Load
Load Sharing Under Failure
Next, you simulated a failure of the CS_B-DC_B link. Figure 4-41 illustrates the resulting path (shown as thick lines) taken by the WS_A traffic to the server. The load sharing is comparable to the previous case, which had Layer 3 switches in the distribution layer and Layer 2 switching in the core.
The actual impact of Layer 3 switches in the core can only be seen if the convergence after the failure is taken into account.
Figure 4-41 Link Failure Is Accommodated by WS_A to Server Traffic with the Layer 3 Core
What is the load distribution ratio on the DS_A-CS_A link versus the DS_A-CS_B link in Figure 4-41? Explain.
Layer 3 Access Switch
No load sharing occurs in the access-layer LAN or VLAN if the access-layer switch is a Layer 2 switch and all the workstations use the same default gateway (distribution-layer switch). To achieve load sharing in the access layer, the workstations must be configured to use different next hops (DS_A and DS_B, in this case) for their default routes.
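The per-workstation configuration this paragraph describes can be sketched as a simple round-robin assignment of default gateways (the addresses are hypothetical; they stand for the DS_A and DS_B interface addresses on the access VLAN):

```python
# Hypothetical gateway addresses for DS_A and DS_B on the access VLAN.
GATEWAYS = ["10.1.1.1", "10.1.1.2"]

def assign_gateways(hosts, gateways):
    """Alternate default gateways host by host, so outbound traffic
    from the access VLAN splits across both distribution switches."""
    return {host: gateways[i % len(gateways)] for i, host in enumerate(hosts)}

hosts = [f"WS_{n}" for n in range(1, 7)]
for host, gw in assign_gateways(hosts, GATEWAYS).items():
    print(f"{host}: default gateway {gw}")
```

This splits the load statically and only in the outbound direction; upgrading the access switch to Layer 3, as in the next scenario, achieves the sharing dynamically.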
In this scenario, the AS_F1 access-layer switch is upgraded to a Layer 3 switch to achieve more optimal load sharing in the access layer.
Figure 4-42 illustrates the result of the simulation (shown as thick lines): load sharing from AS_F1 toward DS_A and DS_B is perfect.
Figure 4-42 Load Sharing in the Access Layer with a Layer 3 Switch
The workstation WS_B is not running any routing protocol; rather, it depends on default routing. What is a proper next-hop address?
IP Routing Process on the Server
In the last scenario, OSPF is configured on the server. The server starts participating in the campus routing and can rely on OSPF to load-share its traffic toward the workstations.
Figure 4-43 shows the result of the server to WS_A path simulation (shown as thick lines): the load distribution is achieved from the access layer to the destination.
Figure 4-43 Load Sharing with the Server Running OSPF
Running a routing protocol is one way to force the server to forward packets to both distribution-layer switches. Can you think of any other option?