- Conceptual Overview
- Read-Only Memory (ROM)
- Random Access Memory (RAM)
- Cycles and Frequencies
- Summary—Basic Memory
- Cache Memory
- Memory Pages
- Rambus Memory (RDRAM)
- Double Data Rate SDRAM (DDR SDRAM)
- Video RAM (VRAM)
- Supplemental Information
- Packaging Modules
- Memory Diagnostics—Parity
- Exam Prep Questions
- Need to Know More?
Random Access Memory (RAM)
The memory experts over at Crucial Technology, a division of Micron Technology, Inc. (http://www.crucial.com) have created a great illustration of memory. We're going to modify their original inspiration, and expand it to include some of the related concepts discussed throughout this book. Imagine a motherboard as being like a printing business. Originally, there was only "the guy in charge" and a few employees. They all worked in a small building, and things were pretty disorganized. The CPU (the boss) is in charge of getting things done. The other components on the board all have been developed to lend a helping hand.
When the CPU finishes a processing job, it uses the address bus to set up memory locations for the results of its processing. It then sends the data to the memory controller, where each bit in every byte is stored in a memory cell. At some point, if the CPU needs the results again, it orders the memory controller to find the stored bits and send them back.
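The store-and-recall flow described above can be sketched in a few lines of code. This is a toy model, not real hardware behavior: the names (`MemoryController`, `store`, `recall`) and the flat list of cells are invented for illustration.

```python
# Toy model: the CPU picks a starting address, hands a byte's worth of bits
# to the memory controller, and later asks for those bits back.

class MemoryController:
    def __init__(self, size):
        self.cells = [0] * size  # each cell holds one bit

    def store(self, address, bits):
        # place each bit in its own cell, starting at the given address
        for offset, bit in enumerate(bits):
            self.cells[address + offset] = bit

    def recall(self, address, count):
        # find the stored bits and send them back
        return self.cells[address:address + count]

mc = MemoryController(size=64)
mc.store(8, [1, 0, 1, 1, 0, 1, 0, 1])  # CPU writes a byte at address 8
print(mc.recall(8, 8))                 # -> [1, 0, 1, 1, 0, 1, 0, 1]
```

The key idea the model captures is that every bit gets its own address, and the controller (not the CPU) keeps track of where each bit lives.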
Dynamic RAM (DRAM)
In the old days, when the boss took in a print job, he'd have to go running back to the pressman to have it printed. The pressman is the memory controller, and the printing press is a memory chip. (The print job is a set of bits the CPU needs to move out of its registers.) The pressman would examine each document he got from the boss, character by character, and grab matching lead blocks, individually carved with each letter. He would then place each block of lead into a form, one by one. In other words, each bit gets its own address in a matrix.
After the form was typeset (filled with letters), the pressman slopped on ink and put a piece of paper under the press. He would crank down a handle and print a copy of the document. Then he had to re-ink the grid to get it ready to print another copy. This is much like the process where a memory controller takes bits from the CPU, examines them, then assigns each one a memory address. The "printing" step is the moment the storage takes place in the memory cells. Keep an eye on that moment, because the re-inking step relates to a memory refresh.
A controller is a small device, usually a single chip, that controls data flow for a particular piece of hardware. A memory chip is also a device, and the memory controller executes various instructions as to how to use the chip. A disk drive controller contains instructions to operate the drive mechanics. Most PC motherboards use simple controllers for the basic I/O ports, as well as having two controllers for IDE drives.
Nowadays you can buy a toy printing kit, with many letters engraved on pieces of rubber. You slide each piece of rubber into a rail, one by one. After you've inserted a complete line of letters, you apply some ink and stamp the line onto a piece of paper. When you're finished, you remove each letter, one by one, and start all over again. Suppose you could insert an entire line of rubber letters all at once? Wouldn't that be a whole lot faster? That was the idea behind FPM and EDO memory, which we'll look at later in this chapter.
Here's a bit of trivia: The space above and below a line of printing is called the leading (pronounced "led-ding"). This space was the extra room on a lead block surrounding each carved letter on those original printing presses.
Memory Refresh and Wait States
DRAM cells are made up of many capacitors that can either hold a charge (1) or not hold a charge (0). One of the problems with capacitors is that they leak (their charge fades). This is somewhat similar to ink coming off each letter block during a print job. A memory refresh occurs when the memory controller checks with the CPU for the correct data bit and then recharges a specific capacitor. While a memory refresh is taking place, the memory controller is busy and can't work with other data. (Remember that "moment," earlier?)
When two devices attempt to exchange information, but one of them is busy doing something else, we speak of a wait state. The CPU is often the fastest device in a system, and so it often has to wait for other devices. The more wait states, the less efficiency and the slower the performance of the overall system.
One of the big problems with DRAM, to follow the story, was that at any given time, the boss wouldn't know what the pressman was doing. Neither did the pressman have any idea of what the boss was doing. If the boss ran in with a new document while the pressman was re-inking the press, he'd have to wait until the guy was done before they could talk. This is like the CPU waiting for the memory controller to complete a memory refresh.
If there were some way to avoid the capacitor leakage, the CPU and memory controller wouldn't have to constantly waste time recharging memory cells. Fewer wait states would mean faster throughput. Without the recharging cycle, the controller could also avoid interrupting the CPU for copies of data bit information.
Refreshing a Bit Charge
Technically speaking, a bit is a pulse of electrical current. When the CPU moves a bit out to memory, it sends a pulse over a signal trace (like a very tiny wire). The pulse moves through the memory controller, which directs the charge to a small capacitor. The charge trips a switch in the controller, indicating that the capacitor is in use. The controller then "remembers" which capacitor stored that pulse.
The memory controller recharges the capacitors on a cyclical basis, whether or not they really need it. The timing for the recharge is designed to be well before significant leakage would take place. Note that Static RAM (SRAM) works with transistors, rather than capacitors. Transistors are switches: either on or off. Unlike capacitors, transistors don't leak, but remain switched on or off, as long as a small amount of current remains present.
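The refresh cycle just described can be modeled with a simple decay loop. The specific numbers (decay rate, threshold, refresh interval) are made up for illustration; the point is that the controller recharges on a fixed schedule, well before a stored 1 could fade into reading as a 0.

```python
# Sketch of why DRAM needs periodic refresh: a cell's charge decays each
# tick, and the controller tops it up cyclically before it crosses the
# threshold where a stored 1 would read back as a 0.

THRESHOLD = 0.5      # below this, a stored 1 reads back as 0
DECAY = 0.1          # charge lost per tick (illustrative)
REFRESH_EVERY = 4    # controller refresh interval, in ticks (illustrative)

def read_bit(charge):
    return 1 if charge >= THRESHOLD else 0

charge = 1.0  # a freshly written 1
for tick in range(1, 13):
    charge -= DECAY                  # the capacitor leaks a little
    if tick % REFRESH_EVERY == 0:
        charge = 1.0                 # cyclical refresh, need it or not
    assert read_bit(charge) == 1     # the stored 1 never degrades to 0
```

If you comment out the refresh step, the assertion fails after a few ticks: that failure is the data loss the refresh cycle exists to prevent.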
Transistors provide a performance increase over capacitors when they're used in memory chips. Because the transistors in SRAM don't require constant refreshes to prevent leakage, data changes only when the CPU sends out an instruction pulse. This makes SRAM a lot faster than DRAM and SDRAM.
Static RAM (SRAM)
Be careful that you don't confuse Static RAM (SRAM) with Synchronous DRAM (SDRAM). SRAM is referred to as being static, because when its transistors are set, they remain that way until actively changed. Static comes from the Latin staticus, meaning unchanging. It relates to a Greek word meaning to make a stand. Dynamic comes from the Greek dynamikós, meaning force or power. In a manner of speaking, dynamic RAM requires memory refresh logic to "force" the capacitors to remember their stored data.
SRAM is static memory. SDRAM is synchronous dynamic memory. Both chips require electrical current to retain information, but DRAM and SDRAM also require memory refreshes to prevent the capacitors from leaking their charge. SRAM uses power to switch a transistor on or off, and doesn't require additional current to refresh the switch's state.
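The contrast between the two cell types can be shown side by side. Both classes below are invented for illustration: the DRAM cell's charge fades every tick unless refreshed, while the SRAM cell behaves like a latch that holds its state for as long as power is applied.

```python
# Illustrative contrast: a leaky DRAM cell versus a latching SRAM cell.

class DramCell:
    def __init__(self):
        self.charge = 0.0
    def write(self, bit):
        self.charge = 1.0 if bit else 0.0
    def tick(self):
        self.charge = max(0.0, self.charge - 0.25)  # leakage each tick
    def read(self):
        return 1 if self.charge >= 0.5 else 0

class SramCell:
    def __init__(self):
        self.state = 0
    def write(self, bit):
        self.state = bit   # flip the transistor latch
    def tick(self):
        pass               # no leakage while power is present
    def read(self):
        return self.state

d, s = DramCell(), SramCell()
d.write(1)
s.write(1)
for _ in range(3):         # three ticks go by with no refresh
    d.tick()
    s.tick()
print(d.read(), s.read())  # -> 0 1: the DRAM bit faded, the SRAM bit held
```

With no refresh logic in the picture, only the SRAM cell still reads back a 1, which is exactly why SRAM can skip the refresh cycle.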
Transistors can be built onto a chip either close together or far apart. In the same way we refer to trees growing closely together or farther apart as the density of the forest, so, too, do we refer to SRAM densities. Depending upon how many transistors are used in a given area, SRAM is categorized as either fast SRAM (high-density) or slower, low-density SRAM.
Fast SRAM is more expensive to manufacture, and uses significantly more power than low-density chips (watts versus microwatts, respectively). Because transistors are also usually placed farther apart than capacitors, SRAM uses more chips than DRAM to produce the same amount of memory. Higher manufacturing costs and less memory per chip mean that fast SRAM is typically used in Level 1 and Level 2 caches, where speed is critical. Low-density SRAM chips are more often used on less important devices, or for battery-powered backup memory such as CMOS.
Memory caches (L1 and L2) are usually SRAM chips, which are extremely fast (as fast as 7 to 9ns, and 2 to 5ns for ultra-fast SRAM). Level 2 cache is usually installed in sizes of 256KB or 512KB.
SRAM is also used for CMOS configuration setups and requires a small amount of electricity. This current is provided by a backup battery on the system board. SRAM comes on credit-card-sized memory cards, available in 128KB, 256KB, 512KB, 1MB, 2MB, and 4MB sizes. Typical CMOS battery life is 10 or more years.
Getting back to the story, DRAM has another problem. Each time the pressman finished a job and was ready to take it back to the boss, he'd come running into the front office and interrupt whatever was going on. If the boss was busy with a customer, then the pressman would stand there and shout, "Hey boss! Hey boss! Hey boss!" until eventually he was heard (or punched in the face: an IRQ conflict). Once in a while, just by luck, the pressman would run into the office when there weren't any customers and the boss was free to talk.
The CPU only sends a request to the memory controller on a clock tick. The clock is always ticking, and the CPU tries to do something with every clock tick. Meanwhile, the controller has run off to track down the necessary bits to send back to the CPU, taking time to do so. Think of the clock pulses as a pendulum, always swinging back and forth (positive and negative polarity). The CPU can't connect with the controller again until the clock's pendulum swings back its way, opening up another "tick." The CPU in Figure 3.1 can attach and send off a request, or take back a bit, only when the clock ticks (when the pendulum is on its side). Meanwhile, with asynchronous memory, the controller isn't paying any attention to the clock at all.
Figure 3.1 Moving data on the clock tick.
In a DRAM setup (unsynchronized memory), only the CPU transmits and receives according to a clock tick. The memory controller has no idea a clock is ticking, and tries to send data back to the CPU, unaware of the swinging pendulum.
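The mismatch between the two sides can be sketched as code. The clock model here is invented for illustration: transfers only land on a tick, the synchronous sender waits for the next tick (a wait state), and the asynchronous sender fires at arbitrary moments and only gets lucky sometimes.

```python
# Sketch: transfers succeed only on a clock tick.

CLOCK_PERIOD = 4  # a "tick" occurs at moments 0, 4, 8, 12, ...

def is_tick(moment):
    return moment % CLOCK_PERIOD == 0

def synchronous_send(bits, start=0):
    """Wait for the next tick before each transfer; nothing is lost."""
    delivered, moment = [], start
    for bit in bits:
        while not is_tick(moment):   # wait state: idle until the tick arrives
            moment += 1
        delivered.append(bit)        # the bit lands on the tick
        moment += 1
    return delivered

def asynchronous_send(bits, moments):
    """Fire at arbitrary moments, ignoring the clock entirely."""
    return [bit for bit, m in zip(bits, moments) if is_tick(m)]

print(synchronous_send([1, 0, 1]))              # -> [1, 0, 1] all bits land
print(asynchronous_send([1, 0, 1], [1, 4, 6]))  # -> [0] only the lucky bit lands
```

The synchronous sender trades a little waiting for guaranteed delivery; the asynchronous sender avoids waiting but only succeeds when its timing happens to line up with a tick, just like the pressman catching the boss between customers.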
Synchronous DRAM (SDRAM)
Interruptions are known as Interrupt Requests (IRQs) and, to mix metaphors, they are like a two-year-old demanding attention. One way to handle them is to repeat "not now...not now...not now" until it's a good time to listen. Another way to handle an interruption is to say, "Come back in a minute and I'll be ready to respond then." The problem is explaining to the two-year-old what you mean by "a minute." We'll discuss IRQs in Chapter 4, "Processor Mechanics, IRQs, and DMA."
One day the boss had a great idea. There was a big clock in the front office (the motherboard oscillator) and he proposed that both he and the pressman wear a watch. That way, both of them could tell time. It was a novel idea: The boss would then be able to call out to the pressman that he had a job to run, and the pressman could holler back, "I'll be ready in a minute."
This could also work the other way around. When the pressman finished a job, he could call out to the boss that he was ready to deliver the goods. The boss could then shout back that he needed another minute because he was busy with a customer. Either one of them could watch the clock for a minute to go by, doing something else until the other one was ready to talk.
Another way to think of clock ticks is to imagine a ski lift. Regardless of whether anyone takes a seat, an endless chain goes up and down the slope. Each seat is a clock tick, and either the CPU or the memory controller can put a data bit on a seat. Synchronization is sort of like waiting until a seat comes by before putting a data bit on it. Asynchronous is something like trying to shove a data bit toward the ski lift without paying any attention at all to whether or not there's a seat nearby. Usually, the bit goes nowhere and the device has to try again, hoping a seat just happens to show up.
An interesting feature of SRAM is that it allows for timing and synchronization with the CPU. The same idea was retrofitted to DRAM chips, and synchronized memory was born. DRAM is called asynchronous because it reacts immediately, on its own, to any given instruction. SDRAM is synchronous because it waits for a clock tick before responding to instructions.
SDRAM provides a way for the memory controller and CPU to both understand clock ticks and to adjust their actions according to the same clock.