Computer System Architecture
At the core of every computer system is the central processing unit (CPU) and the hardware that makes it run. The CPU is just one of the items that you can find on the motherboard. The motherboard serves as the base for most crucial system components. These physical components interact with the OS and applications to do the things we need done. Let’s start at the heart of the system and work our way out.
Central Processing Unit
The CPU is the heart of the computer system. The CPU consists of the following:
- An arithmetic logic unit (ALU) that performs arithmetic and logical operations
- A control unit that extracts instructions from memory and decodes and executes the requested instructions
- Memory, used to hold instructions and data to be processed
The CPU is capable of executing a series of basic operations: fetch, decode, execute, and write. Pipelining overlaps these steps so that while one instruction is being executed, the next is being decoded and a third is being fetched, which increases throughput. The CPU can function in one of four states:
- Ready state—Program is ready to resume processing
- Supervisor state—Program can access entire system
- Problem state—Only nonprivileged instructions executed
- Wait state—Program waiting for an event to complete
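To make the fetch-decode-execute-write sequence concrete, here is a toy machine in Python. The four-instruction set (LOAD, ADD, STORE, HALT) is invented purely for illustration and does not correspond to any real CPU:

```python
# Toy CPU loop illustrating the fetch, decode, execute, and write steps.
# The instruction set is hypothetical, chosen only to show the cycle.

def run(program, memory):
    acc = 0   # accumulator register
    pc = 0    # program counter
    while True:
        op, arg = program[pc]          # fetch the next instruction
        pc += 1
        if op == "LOAD":               # decode and execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":            # write the result back to memory
            memory[arg] = acc
        elif op == "HALT":
            return memory

mem = {0: 2, 1: 3, 2: 0}
prog = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)]
print(run(prog, mem)[2])   # 2 + 3 = 5
```

A pipelined CPU would begin fetching `("ADD", 1)` while `("LOAD", 0)` was still executing; this sketch performs the steps strictly one instruction at a time.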
Because CPUs have very specific designs, the operating system must be developed to work with the CPU. CPUs also have different types of registers to hold data and instructions. The base register contains the beginning address assigned to a process, whereas the limit register marks the end of the memory segment. Together, these registers bound the memory a process can reference and support the recall and execution of programs. CPUs have made great strides, as Table 5.1 documents. As the size of transistors has decreased, the number of transistors that can be placed on a CPU has increased. By increasing the total number of transistors and ramping up clock speeds, the power of CPUs has increased exponentially. As an example, a 3.06GHz Intel Core i7 can perform roughly 18 billion instructions per second (18,000 MIPS).
Table 5.1. CPU Advancements (surviving entries include the Intel Core 2 and Intel Core i7)
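The base and limit registers described above amount to a simple relocation-plus-bounds check, which can be sketched as follows (the register values are illustrative):

```python
# Sketch of base/limit register protection: every logical address a
# process generates is relocated by the base register, and the limit
# register rejects any access past the end of the segment.

def translate(logical_addr, base, limit):
    if logical_addr >= limit:
        raise MemoryError("segmentation violation")
    return base + logical_addr

print(translate(100, base=4096, limit=1024))   # 4196
```

An access at logical address 2000 with the same limit of 1024 would raise the violation instead of touching another process's memory.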
Two basic designs of CPUs are manufactured for modern computer systems:
- Reduced Instruction Set Computing (RISC)—Uses simple instructions that require a reduced number of clock cycles.
- Complex Instruction Set Computing (CISC)—Performs multiple operations for a single instruction.
The CPU requires two inputs to accomplish its duties: instructions and data. The data is passed to the CPU for manipulation where it is typically worked on in either the problem or the supervisor state. In the problem state, the CPU works on the data with nonprivileged instructions. In the supervisor state, the CPU executes privileged instructions.
The CPU can be classified in one of several categories depending on its functionality. When the computer’s CPU, motherboard, and operating system all support the functionality, the computer system is also categorized according to the following:
- Multiprogramming—Can interleave two or more programs for execution at any one time.
- Multitasking—Can perform two or more tasks or subtasks at a time.
- Multiprocessor—Supports two or more CPUs. Windows 98 does not support multiple processors, whereas Windows Server 2008 does.
A multiprocessor system can work in symmetric or asymmetric mode. Symmetric mode shares resources equally among all programs, whereas asymmetric mode can set a priority so that one application gains control of a particular processor. The data that CPUs work with is usually part of an application or program. These programs are tracked by a process ID (PID). Anyone who has ever looked at Task Manager in Windows or executed a ps command on a Linux machine has probably seen a PID number. Fortunately, most programs do much more than the first C code you wrote that probably just said, “Hello World.” Each independent stream of execution within a program is known as a thread.
A program that has the capability to carry out more than one thread at a time is known as multi-threaded. You can see an example of this in Figure 5.1.
Figure 5.1. Processes and threads.
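As a quick illustration of the process/thread relationship, the following sketch spawns two threads and shows that both report the same PID, because threads live inside a single process:

```python
# A process has one PID; its threads all share it. Two worker threads
# record the PID they observe, and the set of observed PIDs has size 1.
import os
import threading

results = []

def worker(name):
    results.append((name, os.getpid()))   # every thread sees the same PID

threads = [threading.Thread(target=worker, args=(n,)) for n in ("t1", "t2")]
for t in threads:
    t.start()
for t in threads:
    t.join()

pids = {pid for _, pid in results}
print(len(pids))   # 1 -- both threads belong to one process
```

Running `ps -eLf` on Linux shows the same structure from the outside: one PID with multiple lightweight processes (threads) beneath it.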
Operating systems use process isolation to keep processes separate from one another. These techniques are needed to ensure that applications cannot interfere with each other and that each receives adequate processor time to operate properly. The four process isolation techniques used are
- Encapsulation of objects—Other processes do not interact with the application.
- Virtual mapping—Each application is given its own virtual address space, so it behaves as if it were the only application running.
- Time multiplexing—Allows applications or processes to share a resource by taking turns using it.
- Naming distinctions—Processes are assigned their own unique name.
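A hedged sketch of process isolation in practice: a child process receives its own copy of the parent's data, so writes in the child never leak back into the parent's address space:

```python
# Process isolation demo: the child process modifies its own private
# copy of `data`; the parent's copy is untouched after the child exits.
import multiprocessing

data = {"value": 1}

def child():
    data["value"] = 999   # changes only the child's copy

if __name__ == "__main__":
    p = multiprocessing.Process(target=child)
    p.start()
    p.join()
    print(data["value"])   # still 1 in the parent
```

Contrast this with the thread example earlier: threads share one address space, so the same write from a thread would be visible to the whole process.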
An interrupt is another key piece of a computer system. An interrupt is an electrical connection between a device and the CPU. The device can put an electrical signal on this line to get the attention of the CPU. The following are common interrupt methods:
- Programmed I/O—Used to transfer data between a CPU and peripheral device.
- Interrupt-driven I/O—A more efficient input/output method, but one that requires more complex hardware.
- I/O using DMA—I/O based on direct memory access can bypass the processor and write the information directly into main memory.
- Memory mapped I/O—Requires the CPU to reserve space for I/O functions and make use of the address for both memory and I/O devices.
- Port mapped I/O—Uses a special class of instruction that can read and write a single byte to an I/O device.
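Memory-mapped access can be loosely illustrated from user space with Python's mmap module, with a file standing in for a device's register space (this is an analogy only, not true memory-mapped I/O):

```python
# Analogy to memory-mapped I/O: once a region is mapped, ordinary
# memory reads and writes move data to and from the backing "device"
# (here, a temporary file) without explicit read()/write() calls.
import mmap
import tempfile

def mapped_write():
    with tempfile.TemporaryFile() as f:
        f.write(b"\x00" * 16)   # reserve a 16-byte "device" region
        f.flush()
        with mmap.mmap(f.fileno(), 16) as region:
            region[0:4] = b"DATA"        # a plain memory write...
            return bytes(region[0:4])    # ...read back as plain memory

print(mapped_write())   # b'DATA'
```

Real memory-mapped I/O works the same way in spirit: the CPU reserves part of its address space so that loads and stores at those addresses reach the device instead of RAM.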
There is a natural hierarchy to memory and, as such, there must be a way to manage memory and ensure that it does not become corrupted. That is the job of the memory manager. Memory management systems on multitasking operating systems are responsible for
- Relocation—Maintains the ability to swap memory contents from memory to secondary storage as needed.
- Protection—Provides control to memory segments and restricts what process can write to memory.
- Sharing—Allows sharing of information based on a user’s level of access; that is, Mike can read the object, whereas Shawn can read and write to the object.
- Logical organization—Provides for the sharing and support for dynamic link libraries.
- Physical organization—Provides for the physical organization of memory.
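The sharing responsibility above is essentially an access-control check. A minimal sketch using the Mike/Shawn example from the list (the permission table itself is illustrative):

```python
# Sharing based on a user's level of access: each user holds a set of
# permitted actions on the shared object. Names and levels match the
# example in the text and are otherwise invented.

PERMS = {"Mike": {"read"}, "Shawn": {"read", "write"}}

def can(user, action):
    return action in PERMS.get(user, set())

print(can("Mike", "write"))   # False -- Mike may only read the object
print(can("Shawn", "write"))  # True  -- Shawn may read and write
```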
Let’s now look at storage media.
A computer is not just a CPU; memory is also an important component. The CPU uses memory to store instructions and data, which makes memory an important type of storage media. Apart from DMA transfers, the CPU is the only device that directly accesses memory; systems are designed that way because the CPU has a high level of system trust. The CPU can use different addressing schemes to communicate with memory, including absolute and relative addressing. Memory can be addressed either physically or logically. Physical addressing refers to the hard-coded address assigned to the memory, whereas applications and programmers writing code use logical addresses. Relative addresses use a known address with an offset applied. Not only can memory be addressed in different ways, but there are also different types of memory: memory can be either nonvolatile or volatile. The sections that follow provide examples of both.
Random access memory (RAM) is volatile memory. If power is lost, the data is destroyed. Types of RAM include static RAM, which uses circuit latches to represent binary data, and dynamic RAM, which must be refreshed every few milliseconds.
Static random access memory (SRAM) doesn’t require a refresh signal as DRAM does. The chips are more complex and are thus more expensive. However, they are faster. DRAM access times come in at 60 nanoseconds (ns) or more; SRAM has access times as fast as 10ns. SRAM is often used for cache memory.
RAM can be configured as Dynamic Random Access Memory (DRAM). Dynamic RAM chips are cheap to manufacture. Dynamic refers to the memory chips’ need for a constant update signal (also called a refresh signal) to keep the information that is written there. Currently, there are four popular implementations of DRAM:
- Synchronous DRAM (SDRAM)—Shares a common clock signal with the transmitter of the data. The computer’s system bus clock provides the common signal that all SDRAM components use for each step to be performed.
- Double Data Rate (DDR) SDRAM—Transfers data on both the rising and falling edges of the clock signal, achieving twice the transfer rate of ordinary SDRAM.
- DDR2—Splits each clock pulse in two, doubling the number of operations it can perform.
- Rambus Direct RAM (RDRAM)—A proprietary synchronous DRAM technology. RDRAM can be found in fewer new systems today than just a few years ago. Rambus is found mainly in gaming consoles and home theater components.
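The SDRAM-versus-DDR difference shows up in a back-of-envelope bandwidth calculation. This sketch assumes a 64-bit (8-byte) memory module; the clock figures are illustrative:

```python
# Peak-bandwidth sketch: SDRAM performs one transfer per clock cycle,
# DDR performs two (one on each clock edge). A 64-bit module moves
# 8 bytes per transfer, so MB/s = MHz * transfers/clock * 8.

def bandwidth_mb_s(clock_mhz, transfers_per_clock, bus_bytes=8):
    return clock_mhz * transfers_per_clock * bus_bytes

print(bandwidth_mb_s(100, 1))   # SDRAM at 100MHz:  800 MB/s
print(bandwidth_mb_s(100, 2))   # DDR at 100MHz:   1600 MB/s
```

The doubling comes entirely from the second transfer per clock; the clock rate itself is unchanged.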
Read-only memory (ROM) is nonvolatile memory that retains information even if power is removed. ROM is typically used to load and store firmware. Firmware is embedded software much like BIOS.
Some common types of ROM include
- Erasable Programmable Read-Only Memory (EPROM)
- Electrically Erasable Programmable Read-Only Memory (EEPROM)
- Flash Memory
- Programmable Logic Devices (PLD)
Although memory plays an important part in the world of storage, other long-term types of storage are also needed. One of these is sequential storage. Anyone who has owned an IBM PC with a tape drive knows what sequential storage is: tape drives are a type of sequential storage that must be read sequentially from beginning to end.

Another well-known type of secondary storage is direct-access storage. Direct-access storage devices do not have to be read sequentially; the system can identify the location of the information and go directly to it to read the data. A hard drive is an example of a direct-access storage device. A hard drive has a series of platters, read/write heads, motors, and drive electronics contained within a case designed to prevent contamination. Hard drives are used to hold data and software, whether that software is the operating system or an application you have installed on the computer system.

Floppies, or diskettes, are also considered secondary storage. The data on a diskette is organized in tracks and sectors. Tracks are narrow concentric circles on the disk; sectors are pie-shaped slices of the disk. The disk itself is made of a thin plastic material coated with iron oxide, much like the material found in a backup tape or cassette tape. As the disk spins, the drive heads move in and out to locate the correct track and sector and then read or write the requested data.
Compact disks (CDs) are a type of optical media. They use a laser/opto-electronic sensor combination to read or write data. A CD can be read-only, write-once, or rewriteable, and can hold up to around 800MB on a single disk. A CD is manufactured by applying a thin layer of aluminum to what is primarily hard, clear plastic. During manufacturing, or whenever a CD-R is burned, small bumps or pits are placed in the surface of the disk; these are read back as binary ones and zeros. Unlike the tracks and sectors of a floppy, a CD comprises one long spiral track that begins at the inside of the disk and continues toward the outer edge.
Digital video disks (DVDs) are very similar to CDs in that both are optical media; DVDs simply hold more data. The next generation of optical storage is the Blu-ray disk. These optical disks can hold 50GB or more of data.
I/O Bus Standards
The data that the CPU is working with must have a way to move from the storage media to the CPU. This is accomplished by means of a bus. The bus is nothing more than lines of conductors that transmit data between the CPU, storage media, and other hardware devices. From the point of view of the CPU, the various adaptors plugged into the computer are external devices. These connectors and the bus architecture used to move data to the devices has changed over time. Some common bus architectures are listed here:
- ISA—The Industry Standard Architecture (ISA) bus started as an 8-bit bus designed for IBM PCs. It is now obsolete.
- PCI—The Peripheral Component Interconnect (PCI) bus was developed by Intel and served as a replacement for ISA and other bus standards. PCI Express (PCIe) is now the current standard.
- SCSI—The Small Computer System Interface (SCSI) bus allows a variety of devices to be daisy-chained off a single controller. Many servers use the SCSI bus for their preferred hard drive solution.
Two serial bus standards, USB and FireWire, have also gained wide market share. USB overcame the limitations of traditional serial interfaces. USB 2.0 devices can communicate at speeds up to 480Mbps, whereas USB 3.0 devices have a proposed rate of 4.8Gbps. Up to 127 devices can be chained together off a single host controller. USB is used for flash memory, cameras, printers, external hard drives, and even iPods. Two of the fundamental advantages of USB are its broad product support and the fact that many devices are immediately recognized when connected. The competing standard for USB is FireWire, or IEEE 1394. This design can be found on many Apple computers and is also common on digital audio and video equipment.
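Using the signaling rates quoted above (real-world throughput is lower), a rough transfer-time comparison for a 700MB file:

```python
# Rough transfer-time sketch at the quoted USB signaling rates.
# Size in megabytes times 8 gives megabits; divide by the rate in Mbps.

def seconds_to_copy(size_mb, rate_mbps):
    return size_mb * 8 / rate_mbps

print(round(seconds_to_copy(700, 480), 1))   # USB 2.0 (480Mbps):  11.7 s
print(round(seconds_to_copy(700, 4800), 1))  # USB 3.0 (4.8Gbps):   1.2 s
```

Protocol overhead, device speed, and encoding all reduce the actual rate, so treat these as best-case figures.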
Hardware Cryptographic Components
Hardware offers the ability to build in encryption. A relatively new hardware security device for computers is the Trusted Platform Module (TPM) chip. The TPM moves cryptographic processing down to the hardware level and provides a greater level of security than software encryption alone. A TPM chip can be installed on the motherboard of a client computer and is used for hardware authentication: the TPM authenticates the computer in question rather than the user. The TPM uses the boot sequence to determine the trusted status of a platform. TPM is now covered by ISO/IEC 11889-1:2009.
The TPM provides the basis for encryption by calculating a hashed value from items such as the system's firmware, configuration details, and core components of the operating system's kernel. At the time of installation, this hash value is securely stored within the TPM chip, which enables attestation. Attestation is the process of confirming, or proving genuine, the reported state of a system. The TPM is a tamper-resistant cryptographic module that can securely report the system configuration to a policy enforcer to provide attestation.
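The measure-and-compare idea behind attestation can be sketched with an ordinary hash function. A real TPM performs this in tamper-resistant hardware using its platform configuration registers; the component names and values below are invented for illustration:

```python
# Attestation sketch: hash measurements of firmware, kernel, and
# configuration into one value, store the "golden" value, and later
# compare. Any change to a measured component changes the hash.
import hashlib

def measure(components):
    h = hashlib.sha256()
    for name, blob in sorted(components.items()):  # stable ordering
        h.update(name.encode() + blob)
    return h.hexdigest()

golden = measure({"firmware": b"v1.02", "kernel": b"5.4"})
tampered = measure({"firmware": b"v1.02", "kernel": b"evil"})
print(golden == tampered)   # False -- the configuration change is detected
```

A policy enforcer comparing a reported hash against the golden value can refuse service to a platform whose boot measurements have changed.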
Virtual Memory and Virtual Machines
Modern computer systems have developed other ways in which to store and access information. One of these is virtual memory. Virtual memory is the combination of the computer’s primary memory (RAM) and secondary storage (the hard drive). By combining these two technologies, the OS can make the CPU believe that it has much more memory than it actually does. Examples of virtual memory include
- Page file
- Swap space
- Swap partition
These virtual memory types are user-defined in terms of size, location, and so on. When RAM is depleted, the OS begins saving data onto the computer's hard drive. Paging moves individual pieces (pages) of a program out of memory to the page file, whereas swapping uses a swap file to move an entire program out of memory. Either way, data can be moved back and forth between the hard drive and RAM as needed. A specific partition can even be configured to hold such data, in which case it is known as a swap partition. Individuals who have used a computer's hibernation function, or who have opened more programs than they had memory to support, are probably familiar with the operation of virtual memory.
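Demand paging can be sketched as a page-table lookup with a fault handler that pulls missing pages back from a swap area (page size, contents, and layout are illustrative):

```python
# Demand-paging sketch: a virtual address splits into a page number and
# an offset; a page that is not resident in RAM triggers a "page fault"
# and is brought back in from the swap area.

PAGE_SIZE = 4096
ram = {0: bytearray(PAGE_SIZE)}          # resident pages
swap = {1: bytearray(b"x" * PAGE_SIZE)}  # pages paged out to "disk"

def read_byte(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page not in ram:                  # page fault: swap the page in
        ram[page] = swap.pop(page)
    return ram[page][offset]

print(read_byte(4096))   # 120, i.e. ord('x'), served after a page fault
```

The first access to address 4096 faults and loads page 1 from swap; subsequent accesses to that page are served directly from RAM.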
Closely related to virtual memory are virtual machines, with VMware, VirtualBox, and VirtualPC among the leading products. A virtual machine enables the user to run a second OS within a virtual host. For example, a virtual machine will let you run another Windows OS, Linux x86, or any other OS that runs on an x86 processor and supports standard BIOS booting. Virtual systems make use of a hypervisor to present virtualized hardware resources to the guest operating system. A Type 1 hypervisor runs directly on the hardware, whereas a Type 2 hypervisor runs on top of a host operating system. Virtual machines are a huge trend and can be used for development, system administration, production, and to reduce the number of physical devices needed. Hypervisors are also being used to implement virtual switches, routers, and firewalls.
The following is a list of some of the most commonly used computer and device configurations:
- Print server—Print servers are usually located close to printers and allow many users to access the printer and share its resources.
- File server—File servers allow users to have a centralized site to store files. This provides an easy way to perform backups because it can be done on one server and not all the client computers. It also allows for group collaboration and multiuser access.
- Program server—Program servers are also known as application servers. This service allows users to run applications not installed on the end users’ system. It is a very popular concept in thin client environments. Thin clients depend on a central server for processing power. Licensing is another important consideration.
- Web server—Web servers provide web services to internal and external users via web pages. A sample web address or URL (uniform resource locator) is http://www.thesolutionfirm.com.
- Database server—Database servers store and access data. This includes information such as product inventory, price lists, customer lists, and employee data. Because databases hold sensitive information, they require well-designed security controls.
- Laptops and tablets—Mobile devices that are easily lost or stolen. Mobile devices have become much more powerful and must be properly secured.
- Smart phones—Gone are the cell phones of the past that simply placed calls and sent SMS texts. Today’s smart phones are more like many computers and have a large amount of processing capability; they can take photos and have onboard storage, Internet connectivity, and the ability to run applications. These devices are of particular concern as more companies start to support bring your own device (BYOD). Such devices can easily fall outside of company policy and controls.