CEG3420 Computer Design: Locality and Memory Technology


Ceg3420 L15.1 DAP Fa97,  U.CB CEG3420 Computer Design Locality and Memory Technology

Ceg3420 L15.2 DAP Fa97,  U.CB Recap °MIPS I instruction set architecture made the pipeline visible (delayed branch, delayed load) °More performance from deeper pipelines and parallelism °Increasing the length of the pipe increases the impact of hazards; pipelining helps instruction bandwidth, not latency °SW pipelining (symbolic loop unrolling) gets the most from the pipeline with little code expansion and little overhead °Dynamic branch prediction + early branch address computation enable speculative execution °Superscalar and VLIW: CPI < 1; dynamic issue vs. static issue; the more instructions that issue at the same time, the larger the penalty of hazards; Intel EPIC in IA-64 is a hybrid: compact LIW + data hazard check

Ceg3420 L15.3 DAP Fa97,  U.CB The Big Picture: Where are We Now? °The Five Classic Components of a Computer: Processor (Control + Datapath), Memory, Input, Output °Today's Topics: Recap last lecture; Locality and Memory Hierarchy; Administrivia; SRAM Memory Technology; DRAM Memory Technology; Memory Organization

Ceg3420 L15.4 DAP Fa97,  U.CB Technology Trends (from 1st lecture)

DRAM generations:
Year   Size     Cycle Time
1980   64 Kb    250 ns
1983   256 Kb   220 ns
1986   1 Mb     190 ns
1989   4 Mb     165 ns
1992   16 Mb    145 ns
1995   64 Mb    120 ns

          Capacity          Speed (latency)
Logic:    2x in 3 years     2x in 3 years
DRAM:     4x in 3 years     2x in 10 years
Disk:     4x in 3 years     2x in 10 years

Over those generations, DRAM capacity grew 1000:1 while cycle time improved only about 2:1.

Ceg3420 L15.5 DAP Fa97,  U.CB Who Cares About the Memory Hierarchy? [Figure: processor vs. DRAM performance over time] °Processor performance ("Moore's Law") grows ~60%/yr (2X/1.5 yr) while DRAM performance grows ~9%/yr (2X/10 yrs) °Processor-DRAM memory gap (latency) grows ~50% / year

Ceg3420 L15.6 DAP Fa97,  U.CB Today's Situation: Microprocessor °Rely on caches to bridge gap °Microprocessor-DRAM performance gap = time of a full cache miss in instructions executed 1st Alpha (7000): 340 ns/5.0 ns = 68 clks x 2 or 136 instructions 2nd Alpha (8400): 266 ns/3.3 ns = 80 clks x 4 or 320 instructions 3rd Alpha (t.b.d.): 180 ns/1.7 ns = 108 clks x 6 or 648 instructions °1/2X latency x 3X clock rate x 3X Instr/clock ≈ 5X
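The arithmetic behind the miss-cost figures above is simple enough to check directly. Below is a minimal C sketch (not part of the original slides) that recomputes the miss cost in clocks and in issue slots from the latency, cycle time, and issue width quoted on the slide; rounding may differ slightly from the slide's figures.

    #include <stdio.h>

    /* Illustrative sketch: recompute the slide's "miss cost in instructions"
     * figures from miss latency, cycle time, and issue width.               */
    int main(void) {
        struct { const char *cpu; double miss_ns, cycle_ns; int issue; } gen[] = {
            { "1st Alpha (7000)",   340.0, 5.0, 2 },
            { "2nd Alpha (8400)",   266.0, 3.3, 4 },
            { "3rd Alpha (t.b.d.)", 180.0, 1.7, 6 },
        };
        for (int i = 0; i < 3; i++) {
            double clks = gen[i].miss_ns / gen[i].cycle_ns;  /* miss latency in clocks */
            printf("%s: %3.0f clks x %d issue = %4.0f instructions per miss\n",
                   gen[i].cpu, clks, gen[i].issue, clks * gen[i].issue);
        }
        return 0;
    }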

Ceg3420 L15.7 DAP Fa97,  U.CB Impact on Performance °Suppose a processor executes at Clock Rate = 200 MHz (5 ns per cycle), ideal CPI = 1.1, with 50% arith/logic, 30% ld/st, 20% control instructions °Suppose that 10% of memory operations get a 50 cycle miss penalty °CPI = ideal CPI + average stalls per instruction = 1.1 (cyc) + (0.30 (data mops/ins) x 0.10 (miss/data mop) x 50 (cycle/miss)) = 1.1 cycle + 1.5 cycle = 2.6 °58% of the time the processor is stalled waiting for memory! °A 1% instruction miss rate would add an additional 0.5 cycles to the CPI!
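A minimal C sketch (not from the slides) of the same CPI arithmetic, useful for plugging in other miss rates or penalties:

    #include <stdio.h>

    /* Minimal sketch of the slide's CPI calculation, using the slide's numbers:
     * ideal CPI 1.1, 30% of instructions are loads/stores, 10% of those miss,
     * and each miss costs 50 cycles.                                          */
    int main(void) {
        double ideal_cpi    = 1.1;
        double ld_st_frac   = 0.30;   /* data memory ops per instruction */
        double miss_rate    = 0.10;   /* misses per data memory op       */
        double miss_penalty = 50.0;   /* cycles per miss                 */

        double stall_cpi = ld_st_frac * miss_rate * miss_penalty;   /* 1.5 */
        double cpi       = ideal_cpi + stall_cpi;                   /* 2.6 */

        printf("CPI = %.1f + %.1f = %.1f\n", ideal_cpi, stall_cpi, cpi);
        printf("Fraction of time stalled on memory: %.0f%%\n",
               100.0 * stall_cpi / cpi);                 /* about 58 percent */
        printf("Extra CPI from a 1%% instruction miss rate: %.1f\n",
               0.01 * miss_penalty);                     /* 0.5 */
        return 0;
    }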

Ceg3420 L15.8 DAP Fa97,  U.CB The Goal: illusion of large, fast, cheap memory °Fact: Large memories are slow, fast memories are small °How do we create a memory that is large, cheap and fast (most of the time)? Hierarchy Parallelism

Ceg3420 L15.9 DAP Fa97,  U.CB An Expanded View of the Memory System [Figure: processor (control + datapath) backed by successive levels of memory] °Moving away from the processor, each level of memory is slower, bigger, and cheaper per bit: Speed goes from fastest to slowest, Size from smallest to biggest, Cost from highest to lowest

Ceg3420 L15.10 DAP Fa97,  U.CB Why hierarchy works °The Principle of Locality: Programs access a relatively small portion of the address space at any instant of time. [Figure: probability of reference vs. address, over the address space 0 to 2^n - 1]

Ceg3420 L15.11 DAP Fa97,  U.CB Memory Hierarchy: How Does it Work? °Temporal Locality (Locality in Time): => Keep most recently accessed data items closer to the processor °Spatial Locality (Locality in Space): => Move blocks consisting of contiguous words to the upper levels [Figure: blocks Blk X and Blk Y moving between upper level memory (to/from the processor) and lower level memory]

Ceg3420 L15.12 DAP Fa97,  U.CB Memory Hierarchy: Terminology °Hit: data appears in some block in the upper level (example: Block X) Hit Rate: the fraction of memory accesses found in the upper level Hit Time: time to access the upper level, which consists of RAM access time + time to determine hit/miss °Miss: data needs to be retrieved from a block in the lower level (Block Y) Miss Rate = 1 - (Hit Rate) Miss Penalty: time to replace a block in the upper level + time to deliver the block to the processor °Hit Time << Miss Penalty
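These terms combine into the standard average memory access time (AMAT) formula, AMAT = Hit Time + Miss Rate x Miss Penalty. The short C sketch below illustrates it; the example numbers are made up for illustration and are not from the slides.

    #include <stdio.h>

    /* Standard average memory access time formula, built from the terms the
     * slide defines. The example numbers below are illustrative only.       */
    static double amat(double hit_time, double miss_rate, double miss_penalty) {
        return hit_time + miss_rate * miss_penalty;
    }

    int main(void) {
        /* e.g. 1-cycle hit, 5% miss rate, 50-cycle miss penalty -> 3.5 cycles */
        printf("AMAT = %.1f cycles\n", amat(1.0, 0.05, 50.0));
        return 0;
    }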

Ceg3420 L15.13 DAP Fa97,  U.CB Memory Hierarchy of a Modern Computer System °By taking advantage of the principle of locality: Present the user with as much memory as is available in the cheapest technology. Provide access at the speed offered by the fastest technology. [Figure: hierarchy of processor registers, on-chip cache, second level cache (SRAM), main memory (DRAM), secondary storage (disk), and tertiary storage; speeds range from ~1 ns at the registers through 10s and 100s of ns, to 10,000,000s of ns (10s of ms) for disk and 10,000,000,000s of ns (10s of sec) for tertiary storage; sizes range from 100s of bytes through Ks and Ms, to Gs on disk and Ts in tertiary storage]

Ceg3420 L15.14 DAP Fa97,  U.CB How is the hierarchy managed? °Registers <-> Memory: by the compiler (programmer?) °Cache <-> Memory: by the hardware °Memory <-> Disks: by the hardware and operating system (virtual memory), and by the programmer (files)

Ceg3420 L15.15 DAP Fa97,  U.CB Memory Hierarchy Technology °Random Access: “Random” is good: access time is the same for all locations DRAM: Dynamic Random Access Memory -High density, low power, cheap, slow -Dynamic: needs to be “refreshed” regularly SRAM: Static Random Access Memory -Low density, high power, expensive, fast -Static: content will last “forever” (until power is lost) °“Not-so-random” Access Technology: Access time varies from location to location and from time to time Examples: Disk, CDROM °Sequential Access Technology: access time linear in location (e.g., tape) °The next two lectures will concentrate on random access technology: Main Memory (DRAMs) + Caches (SRAMs)

Ceg3420 L15.16 DAP Fa97,  U.CB Main Memory Background °Performance of Main Memory: Latency: Cache Miss Penalty -Access Time: time between request and word arrival -Cycle Time: time between requests Bandwidth: I/O & Large Block Miss Penalty (L2) °Main Memory is DRAM: Dynamic Random Access Memory Dynamic since it needs to be refreshed periodically (8 ms) Addresses divided into 2 halves (Memory as a 2D matrix): -RAS or Row Access Strobe -CAS or Column Access Strobe °Cache uses SRAM: Static Random Access Memory No refresh (6 transistors/bit vs. 1 transistor); Size: DRAM/SRAM ≈ 4-8; Cost/Cycle time: SRAM/DRAM ≈ 8-16

Ceg3420 L15.17 DAP Fa97,  U.CB Random Access Memory (RAM) Technology °Why do computer designers need to know about RAM technology? Processor performance is usually limited by memory bandwidth As IC densities increase, lots of memory will fit on the processor chip -Tailor on-chip memory to specific needs -Instruction cache -Data cache -Write buffer °What makes RAM different from a bunch of flip-flops? Density: RAM is much denser

Ceg3420 L15.18 DAP Fa97,  U.CB Administrative Issues °Office Hours: Gebis: Tuesday, 3:30-4:30 Kirby: ? Kozyrakis: Monday 1pm-2pm, Th 11am-noon, 415 Soda Hall Patterson: Wednesday 12-1 and Wednesday 3:30-4: Soda Hall °Reflector site for handouts and lecture notes (backup): °Computers in the news Intel buys DEC fab line for $700M + rights to DEC patents; Intel pays some royalty per chip; DEC has rights to continue fab Alpha in future on Intel-owned line Intel offers jobs to 2000 fab/process people; DEC keeps MPU designers DEC will build servers based on IA-64 (+Alpha); customers choose Intel gets rights to DEC UNIX

Ceg3420 L15.19 DAP Fa97,  U.CB Static RAM Cell [Figure: 6-Transistor SRAM cell, with a word (row select) line and complementary bit / bit-bar lines] °Write: 1. Drive bit lines (bit = 1, bit-bar = 0) 2. Select row °Read: 1. Precharge bit and bit-bar to Vdd 2. Select row 3. Cell pulls one line low 4. Sense amp on column detects difference between bit and bit-bar (in some designs the precharge is replaced with a pullup to save area)

Ceg3420 L15.20 DAP Fa97,  U.CB Typical SRAM Organization: 16-word x 4-bit [Figure: a 16 x 4 array of SRAM cells; an address decoder driven by A0-A3 selects one of the word lines Word 0 ... Word 15; each bit column has a write driver & precharger (inputs Din 0-3, WrEn, Precharge) and a sense amp producing Dout 0-3] °Q: Which is longer: word line or bit line?

Ceg3420 L15.21 DAP Fa97,  U.CB Logic Diagram of a Typical SRAM [Figure: 2^N words x M-bit SRAM with N address lines (A), M data lines (D), WE_L, and OE_L] °Write Enable is usually active low (WE_L) °Din and Dout are combined to save pins: A new control signal, output enable (OE_L), is needed When WE_L is asserted (Low) and OE_L is deasserted (High) -D serves as the data input pin When WE_L is deasserted (High) and OE_L is asserted (Low) -D is the data output pin If both WE_L and OE_L are asserted: -Result is unknown. Don’t do that!!! °Although you could change the VHDL to do whatever you desire, you must do the best with what you’ve got (vs. what you need)
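A small behavioural sketch in C (hypothetical, not a real device model) of the WE_L / OE_L rules for the shared D pin described above:

    #include <stdio.h>

    /* Behavioural sketch of the slide's WE_L / OE_L rules for the shared D pin.
     * Signals are active low: 0 = asserted, 1 = deasserted.                    */
    typedef enum { D_IS_INPUT, D_IS_OUTPUT, D_UNDEFINED, D_HIGH_Z } d_pin_role;

    static d_pin_role sram_d_role(int we_l, int oe_l) {
        if (!we_l && oe_l)  return D_IS_INPUT;   /* write: D drives the array        */
        if (we_l && !oe_l)  return D_IS_OUTPUT;  /* read: array drives D             */
        if (!we_l && !oe_l) return D_UNDEFINED;  /* both asserted: "don't do that"   */
        return D_HIGH_Z;                         /* neither asserted: D floats       */
    }

    int main(void) {
        printf("WE_L=0, OE_L=1 -> %d (input)\n",  sram_d_role(0, 1));
        printf("WE_L=1, OE_L=0 -> %d (output)\n", sram_d_role(1, 0));
        return 0;
    }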

Ceg3420 L15.22 DAP Fa97,  U.CB Typical SRAM Timing [Timing diagrams for the 2^N words x M-bit SRAM: for a write, the write address and Data In must be stable around WE_L for the write setup and hold times; for a read, with OE_L asserted, Data Out becomes valid one read access time after the read address is applied, and D is High Z otherwise]

Ceg3420 L15.23 DAP Fa97,  U.CB Problems with SRAM °Six transistors use up a lot of area °Consider a “zero” stored in the cell: Transistor N1 will try to pull “bit” to 0 Transistor P2 will try to pull “bit bar” to 1 °But the bit lines are precharged to high: Are P1 and P2 necessary? [Figure: the 6-T cell with Select = 1, bit = 1, bit-bar = 0, showing which of N1, N2, P1, P2 are on or off]

Ceg3420 L15.24 DAP Fa97,  U.CB 1-Transistor Memory Cell (DRAM) [Figure: a single transistor with a row select line and a bit line, storing charge on a capacitor] °Write: 1. Drive bit line 2. Select row °Read: 1. Precharge bit line to Vdd 2. Select row 3. Cell and bit line share charge -Very small voltage changes on the bit line 4. Sense (fancy sense amp) -Can detect changes of ~1 million electrons 5. Write: restore the value °Refresh: 1. Just do a dummy read to every cell.

Ceg3420 L15.25 DAP Fa97,  U.CB Classical DRAM Organization (square) [Figure: square RAM cell array; the row address drives a row decoder that selects one word (row) line, and the column address drives a column selector & I/O circuits on the bit (data) lines] °Row and Column Address together select 1 bit at a time °Each intersection represents a 1-T DRAM cell

Ceg3420 L15.26 DAP Fa97,  U.CB DRAM logical organization (4 Mbit) °Square root of bits per RAS/CAS [Figure: a 2,048 x 2,048 memory array of storage cells; 11 address lines A0…A10 are decoded into a word line by the row decoder and into a column by the column decoder, through sense amps & I/O, giving data in (D) and data out (Q)]

Ceg3420 L15.27 DAP Fa97,  U.CB DRAM physical organization (4 Mbit) [Figure: the array is split into blocks (Block 0 … Block 3), each with its own 9:512 block row decoder; the column address selects within a block, and shared I/O circuitry drives 8 I/Os for data in (D) and data out (Q)]

Ceg3420 L15.28 DAP Fa97,  U.CB Memory Systems [Figure: a memory system built from 2^n x 1 DRAM chips behind a DRAM controller (n address bits in, n/2 multiplexed address bits out to the DRAMs), a memory timing controller, and bus drivers delivering a w-bit wide word] °Tc = Tcycle + Tcontroller + Tdriver

Ceg3420 L15.29 DAP Fa97,  U.CB Logic Diagram of a Typical DRAM [Figure: 256K x 8 DRAM with 9 multiplexed address pins (A), 8 data pins (D), and control inputs RAS_L, CAS_L, WE_L, OE_L] °Control Signals (RAS_L, CAS_L, WE_L, OE_L) are all active low °Din and Dout are combined (D): When WE_L is asserted (Low) and OE_L is deasserted (High) -D serves as the data input pin When WE_L is deasserted (High) and OE_L is asserted (Low) -D is the data output pin °Row and column addresses share the same pins (A): RAS_L goes low: pins A are latched in as the row address CAS_L goes low: pins A are latched in as the column address RAS/CAS are edge-sensitive
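A short C sketch (illustrative only) of how a full address is split into the row and column halves presented on the shared pins; the 18-bit address and 9 multiplexed pins follow the slide's 256K x 8 part.

    #include <stdio.h>

    /* Sketch of multiplexed DRAM addressing for the slide's 256K x 8 part:
     * 256K words need 18 address bits, presented 9 at a time on the same
     * pins; the upper half is latched when RAS_L falls, the lower half
     * when CAS_L falls.                                                   */
    #define ADDR_BITS 18
    #define PIN_BITS  (ADDR_BITS / 2)        /* 9 multiplexed address pins */

    int main(void) {
        unsigned addr = 0x2A5F3;                               /* an 18-bit address */
        unsigned row  = (addr >> PIN_BITS) & ((1u << PIN_BITS) - 1);
        unsigned col  =  addr              & ((1u << PIN_BITS) - 1);

        printf("addr 0x%05X -> row 0x%03X (latched on RAS_L), col 0x%03X (on CAS_L)\n",
               addr, row, col);
        return 0;
    }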

Ceg3420 L15.30 DAP Fa97,  U.CB Key DRAM Timing Parameters °t_RAC: minimum time from RAS line falling to valid data output. Quoted as the speed of a DRAM; a fast 4 Mbit DRAM has t_RAC = 60 ns °t_RC: minimum time from the start of one row access to the start of the next. t_RC = 110 ns for a 4 Mbit DRAM with a t_RAC of 60 ns °t_CAC: minimum time from CAS line falling to valid data output. 15 ns for a 4 Mbit DRAM with a t_RAC of 60 ns °t_PC: minimum time from the start of one column access to the start of the next. 35 ns for a 4 Mbit DRAM with a t_RAC of 60 ns

Ceg3420 L15.31 DAP Fa97,  U.CB DRAM Performance °A 60 ns (t_RAC) DRAM can perform a row access only every 110 ns (t_RC), and can perform a column access (t_CAC) in 15 ns, but the time between column accesses is at least 35 ns (t_PC). -In practice, external address delays and turning around buses make it 40 to 50 ns °These times do not include the time to drive the addresses off the microprocessor, nor the memory controller overhead. Driving the parallel DRAMs, the external memory controller, turning the bus around, the SIMM module, pins… 180 ns to 250 ns latency from processor to memory is good for a “60 ns” (t_RAC) DRAM
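A small C sketch (not from the slides) that turns the four timing parameters into sustained access rates, making the row-rate vs. column-rate distinction concrete:

    #include <stdio.h>

    /* Sketch using the slide's timing numbers for a "60 ns" 4 Mbit DRAM:
     * tRAC = 60 ns, tRC = 110 ns, tCAC = 15 ns, tPC = 35 ns.              */
    int main(void) {
        double t_rac = 60e-9, t_rc = 110e-9, t_cac = 15e-9, t_pc = 35e-9;

        printf("Row accesses:    latency %.0f ns, at most one every %.0f ns (%.1f M/s)\n",
               t_rac * 1e9, t_rc * 1e9, 1.0 / t_rc / 1e6);
        printf("Column accesses: latency %.0f ns, at most one every %.0f ns (%.1f M/s)\n",
               t_cac * 1e9, t_pc * 1e9, 1.0 / t_pc / 1e6);
        return 0;
    }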

Ceg3420 L15.32 DAP Fa97,  U.CB DRAM Write Timing [Timing diagrams for the 256K x 8 DRAM: the row address is presented on A and latched by RAS_L, then the column address is latched by CAS_L; Data In is presented on D, and the whole operation takes one DRAM write cycle time] °Every DRAM access begins at the assertion of RAS_L °2 ways to write relative to CAS: early (WE_L asserted before CAS_L) or late (WE_L asserted after CAS_L)

Ceg3420 L15.33 DAP Fa97,  U.CB DRAM Read Timing [Timing diagrams for the 256K x 8 DRAM: the row address is latched by RAS_L and the column address by CAS_L; D stays at High Z until Data Out becomes valid after the read access time and output enable delay, and the whole operation takes one DRAM read cycle time] °Every DRAM access begins at the assertion of RAS_L °2 ways to read relative to CAS: early (OE_L asserted before CAS_L) or late (OE_L asserted after CAS_L)

Ceg3420 L15.34 DAP Fa97,  U.CB Main Memory Performance °Simple: CPU, Cache, Bus, Memory same width (32 bits) °Wide: CPU/Mux 1 word; Mux/Cache, Bus, Memory N words (Alpha: 64 bits & 256 bits) °Interleaved: CPU, Cache, Bus 1 word: Memory N Modules (4 Modules); example is word interleaved

Ceg3420 L15.35 DAP Fa97,  U.CB Cycle Time versus Access Time °DRAM (Read/Write) Cycle Time >> DRAM (Read/Write) Access Time, typically ≈ 2:1; why? °DRAM (Read/Write) Cycle Time: How frequently can you initiate an access? Analogy: A little kid can only ask his father for money on Saturday °DRAM (Read/Write) Access Time: How quickly will you get what you want once you initiate an access? Analogy: As soon as he asks, his father will give him the money °DRAM Bandwidth Limitation analogy: What happens if he runs out of money on Wednesday?

Ceg3420 L15.36 DAP Fa97,  U.CB Increasing Bandwidth - Interleaving °Access pattern without interleaving: the CPU must wait for D1 to become available before starting the access for D2 °Access pattern with 4-way interleaving: the CPU starts accesses to Bank 0, Bank 1, Bank 2, and Bank 3 in successive cycles; by the time Bank 3 has been started, Bank 0 can be accessed again

Ceg3420 L15.37 DAP Fa97,  U.CB Main Memory Performance °Timing model: 1 cycle to send the address, 6 cycles of access time, 1 cycle to send the data; the cache block is 4 words °Simple M.P. = 4 x (1 + 6 + 1) = 32 °Wide M.P. = 1 + 6 + 1 = 8 °Interleaved M.P. = 1 + 6 + 4 x 1 = 11
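The three miss penalties follow directly from the timing model; the C sketch below (illustrative, not from the slides) reproduces the arithmetic so other block sizes or access times can be tried:

    #include <stdio.h>

    /* Sketch of the slide's miss-penalty arithmetic: 1 cycle to send the
     * address, 6 cycles of DRAM access, 1 cycle to return data, 4-word block. */
    int main(void) {
        int send = 1, access = 6, xfer = 1, words = 4, banks = 4;

        int simple      = words * (send + access + xfer);   /* 4 x 8 = 32       */
        int wide        = send + access + xfer;             /* one wide xfer = 8 */
        int interleaved = send + access + words * xfer;     /* 1 + 6 + 4 = 11   */

        printf("Simple:      %d cycles\n", simple);
        printf("Wide:        %d cycles\n", wide);
        printf("Interleaved (%d banks): %d cycles\n", banks, interleaved);
        return 0;
    }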

Ceg3420 L15.38 DAP Fa97,  U.CB Independent Memory Banks °How many banks? number of banks ≥ number of clocks to access a word in a bank For sequential accesses; otherwise the CPU will return to the original bank before it has the next word ready °Increasing DRAM capacity => fewer chips => harder to have banks Growth in bits/chip of DRAM: 50%-60%/yr Nathan Myhrvold (Microsoft): mature software growth (33%/yr for NT) ≈ growth in MB/$ of DRAM (25%-30%/yr)

Ceg3420 L15.39 DAP Fa97,  U.CB Fewer DRAMs/System over Time [Chart, from Pete MacWilliams, Intel: minimum PC memory size vs. DRAM generation (‘86 1 Mb, ‘89 4 Mb, ‘92 16 Mb, ‘96 64 Mb, ‘99 256 Mb, ‘02 1 Gb), with minimum memory per system growing from 4 MB to 256 MB] °Memory per system grows 25%-30% / year while memory per DRAM grows 60% / year, so each system needs fewer DRAM chips over time

Ceg3420 L15.40 DAP Fa97,  U.CB Page Mode DRAM: Motivation °Regular DRAM Organization: N rows x N columns x M bits; read & write M bits at a time; each M-bit access requires a full RAS / CAS cycle °Fast Page Mode DRAM: adds an N x M “register” to save a row [Timing diagram: in the regular case, the 1st and 2nd M-bit accesses each present a row address on RAS_L and a column address on CAS_L]

Ceg3420 L15.41 DAP Fa97,  U.CB Fast Page Mode Operation °Fast Page Mode DRAM: an N x M “SRAM” to save a row °After a row is read into the register: only CAS is needed to access other M-bit blocks on that row; RAS_L remains asserted while CAS_L is toggled [Timing diagram: one row address on RAS_L, then successive column addresses on CAS_L for the 1st, 2nd, 3rd, and 4th M-bit accesses]
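A rough C sketch (illustrative only, reusing the 4 Mbit DRAM timings from the earlier slide: t_RC = 110 ns, t_PC = 35 ns) of why fast page mode helps when fetching a multi-word block from one row:

    #include <stdio.h>

    /* Sketch comparing ordinary vs. fast-page-mode access for a 4-word block,
     * assuming every word lies in the same row: tRC = 110 ns per full RAS/CAS
     * cycle, tPC = 35 ns per column access within an open row.               */
    int main(void) {
        int    words   = 4;
        double t_rc_ns = 110.0;   /* full row + column cycle   */
        double t_pc_ns = 35.0;    /* column-only (page mode)   */

        double regular  = words * t_rc_ns;                    /* 440 ns */
        double pagemode = t_rc_ns + (words - 1) * t_pc_ns;    /* 215 ns */

        printf("Regular:        %.0f ns for %d words\n", regular, words);
        printf("Fast page mode: %.0f ns for %d words\n", pagemode, words);
        return 0;
    }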

Ceg3420 L15.42 DAP Fa97,  U.CB DRAM v. Desktop Microprocessors Cultures

                     DRAM                           Microprocessor
Standards            pinout, package,               binary compatibility,
                     refresh rate, capacity, ...    IEEE 754, I/O bus
Sources              Multiple                       Single
Figures of Merit     1) capacity, 1a) $/bit         1) SPEC speed
                     2) BW, 3) latency              2) cost
Improve Rate/year    1) 60%, 1a) 25%,               1) 60%,
                     2) 20%, 3) 7%                  2) little change

Ceg3420 L15.43 DAP Fa97,  U.CB DRAM Design Goals °Reduce cell size 2.5x, increase die size 1.5x °Sell 10% of a single DRAM generation: 6.25 billion DRAMs sold in 1996 °3 phases: engineering samples, first customer ship (FCS), mass production Fastest to FCS and mass production wins share °Die size, testing time, and yield => profit Yield >> 60% (redundant rows/columns to repair flaws)

Ceg3420 L15.44 DAP Fa97,  U.CB DRAM History °DRAMs: capacity +60%/yr, cost –30%/yr 2.5X cells/area, 1.5X die size in ≈3 years °A ‘97 DRAM fab line costs $1B to $2B DRAM only: density and leakage v. speed °Rely on increasing no. of computers & memory per computer (60% market) SIMM or DIMM is the replaceable unit => computers can use any generation DRAM °Commodity, second-source industry => high volume, low profit, conservative Little organization innovation in 20 years: page mode, EDO, Synch DRAM °Order of importance: 1) Cost/bit, 1a) Capacity RAMBUS: 10X BW, +30% cost => little impact

Ceg3420 L15.45 DAP Fa97,  U.CB Today’s Situation: DRAM °Commodity, second-source industry => high volume, low profit, conservative Little organization innovation (vs. processors) in 20 years: page mode, EDO, Synch DRAM °DRAM industry at a crossroads: Fewer DRAMs per computer over time -Growth in bits/chip of DRAM: 50%-60%/yr -Nathan Myhrvold (Microsoft): mature software growth (33%/yr for NT) ≈ growth in MB/$ of DRAM (25%-30%/yr) Starting to question buying larger DRAMs?

Ceg3420 L15.46 DAP Fa97,  U.CB Today’s Situation: DRAM [Chart comparing industry revenues, $16B vs. $7B] °Intel: 30%/year growth since 1987; 1/3 of income is profit

Ceg3420 L15.47 DAP Fa97,  U.CB Summary: °Two Different Types of Locality: Temporal Locality (Locality in Time): If an item is referenced, it will tend to be referenced again soon. Spatial Locality (Locality in Space): If an item is referenced, items whose addresses are close by tend to be referenced soon. °By taking advantage of the principle of locality: Present the user with as much memory as is available in the cheapest technology. Provide access at the speed offered by the fastest technology. °DRAM is slow but cheap and dense: Good choice for presenting the user with a BIG memory system °SRAM is fast but expensive and not very dense: Good choice for providing the user FAST access time.

Ceg3420 L15.48 DAP Fa97,  U.CB Summary: Processor-Memory Performance Gap “Tax”

Processor          % Area (≈cost)   % Transistors (≈power)
Alpha 21164        37%              77%
StrongArm SA110    61%              94%
Pentium Pro        64%              88%
  (2 dies per package: Proc/I$/D$ + L2$)

°Caches have no inherent value; they only try to close the performance gap