DRAMs are specified as d M x w bits: memory depth (d million locations) by width (w bits per location).
Commonly the width is between 1 and 8 bits, and the depth is in the millions of locations. How memory is organized matters because it determines how we can access it. If we organize it into banks that can be accessed individually, we can access all of the banks simultaneously, which reduces the latency of a memory read.
Memory organized one word wide forces us to pay the full access latency for every word we fetch. This gives us the following formula:
Miss penalty = cycles to send the address + words per block * (DRAM access cycles per word + cycles to send one word of data)
So if we had
- 1 clock cycle to send the address
- 15 clock cycles for each DRAM access initiated
- 1 clock cycle to send a word of data
The miss penalty would be: 1 + 4 * (15 + 1) = 65 clock cycles
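As a quick check of the arithmetic, here is a minimal sketch that plugs the cycle counts above into the formula (the function and parameter names are ours, chosen for illustration):

```python
def miss_penalty_one_word_wide(addr_cycles, dram_cycles, xfer_cycles, words_per_block):
    # One-word-wide memory: every word in the block pays the full
    # DRAM access time plus the time to send it back.
    return addr_cycles + words_per_block * (dram_cycles + xfer_cycles)

# 1 cycle to send the address, 15 per DRAM access, 1 to send a word, 4-word block
print(miss_penalty_one_word_wide(1, 15, 1, 4))  # -> 65 clock cycles
```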
Suppose we want to build 64 MB of memory out of 4Mx1 DRAMs and we want 4-word blocks. How would we organize the memory?
ANS: We create 4 banks of 32 DRAMs each. Each bank is 32 one-bit DRAMs wide, so it delivers one 32-bit word per access, and the 4 banks together can supply a 4-word block. Each DRAM holds 4,194,304 bits, so each bank holds 134,217,728 bits = 16 MB. With 4 banks, that makes 64 MB of DRAM.
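A rough sketch of that sizing arithmetic, with illustrative variable names (not a standard notation):

```python
DRAM_BITS      = 4 * 2**20   # a 4M x 1 DRAM holds 4,194,304 bits
DRAMS_PER_BANK = 32          # 32 one-bit DRAMs give a 32-bit word per bank
BANKS          = 4           # one bank per word of the 4-word block

bits_per_bank = DRAM_BITS * DRAMS_PER_BANK    # 134,217,728 bits
mb_per_bank   = bits_per_bank // 8 // 2**20   # 16 MB
total_mb      = mb_per_bank * BANKS           # 64 MB
print(bits_per_bank, mb_per_bank, total_mb)   # 134217728 16 64
```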
This method of organization is called an interleaved memory organization. It lets us start the reads for all the words of a block at once (provided the block size corresponds to the number of banks). Thus the formula above changes to:
Miss penalty = cycles to send the address + DRAM access cycles (overlapped across the banks) + words per block * cycles to send one word of data
For the example above this is 1 + 15 + 4 * 1 = 20 clock cycles instead of 65.
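A sketch of the interleaved calculation under the same assumed cycle counts: all four banks start their 15-cycle access in parallel, and the four words then go back over the bus one per cycle (again, the names here are just illustrative):

```python
def miss_penalty_interleaved(addr_cycles, dram_cycles, xfer_cycles, words_per_block):
    # All banks start their DRAM access at the same time, so the 15-cycle
    # access is paid once; the words are then transferred one at a time.
    return addr_cycles + dram_cycles + words_per_block * xfer_cycles

print(miss_penalty_interleaved(1, 15, 1, 4))  # -> 20 clock cycles, vs. 65 one word wide
```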
We can improve on this further with page-mode memory, as used in EDO DRAM. Internally, the memory array is laid out as a table of rows and columns. Page mode allows multiple accesses to the row that is currently open without having to re-select it over and over again, which cuts the latency down quite a bit.
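A toy illustration of why page mode helps; the row-activate and column-access cycle counts below are assumed for the example, not taken from any particular DRAM:

```python
ROW_ACTIVATE_CYCLES  = 10   # assumed cost to open (select) a row
COLUMN_ACCESS_CYCLES = 5    # assumed cost to read a column within an open row

def access_cost(row, open_row):
    """Return (cycles, new_open_row) for reading one word from `row`."""
    if row == open_row:
        # Row hit: the page is already open, pay only the column access.
        return COLUMN_ACCESS_CYCLES, row
    # Row miss: pay to open the row, then the column access.
    return ROW_ACTIVATE_CYCLES + COLUMN_ACCESS_CYCLES, row

open_row = None
total = 0
for row in [7, 7, 7, 7, 3]:   # four accesses within row 7, then a different row
    cycles, open_row = access_cost(row, open_row)
    total += cycles
print(total)   # 15 + 5 + 5 + 5 + 15 = 45 cycles, vs. 5 * 15 = 75 without page mode
```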
Back to ComputerTerms CaChe