CS 152 Computer Architecture and Engineering
Lecture 6 - Memory
Krste Asanovic
Electrical Engineering and Computer Sciences
University of California at Berkeley

Last time in Lecture 5

Control hazards (branches, interrupts) are most difficult to handle as they change which instruction should be executed next
Speculation commonly used to reduce effect of control hazards (predict sequential fetch, predict no exceptions)
Branch delay slots make control hazard visible to software
Precise exceptions: stop cleanly on one instruction, all previous instructions completed, no following instructions have changed architectural state
To implement precise exceptions in pipeline, shift faulting instructions down pipeline to "commit" point, where exceptions are handled in program order

CPU-Memory Bottleneck

[Figure: CPU connected to Memory]

Performance of high-speed computers is usually limited by memory bandwidth & latency
Latency (time for a single access): memory access time >> processor cycle time
Bandwidth (number of accesses per unit time): if a fraction m of instructions access memory, there are 1+m memory references per instruction; sustaining CPI = 1 requires 1+m memory references per cycle
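
To make the bandwidth requirement concrete, here is a minimal sketch of the arithmetic; the clock rate, fraction m, and reference width are assumed numbers for illustration, not from the slide.

```python
# Memory bandwidth needed to sustain CPI = 1, assuming every instruction
# requires one fetch plus (with probability m) one data access.
clock_hz = 2e9          # assumed 2 GHz processor
m = 0.3                 # assumed fraction of instructions that access data memory
bytes_per_ref = 8       # assumed 64-bit references

refs_per_cycle = 1 + m                      # instruction fetch + data accesses
refs_per_sec = refs_per_cycle * clock_hz
bandwidth_gb_s = refs_per_sec * bytes_per_ref / 1e9

print(f"{refs_per_cycle:.1f} refs/cycle -> {bandwidth_gb_s:.1f} GB/s needed")
# 1.3 refs/cycle -> 20.8 GB/s needed
```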

Core Memory

Core memory was the first large-scale reliable main memory
–invented by Forrester in the late 1940s at MIT for the Whirlwind project
Bits stored as magnetization polarity on small ferrite cores threaded onto a two-dimensional grid of wires
Coincident current pulses on the X and Y wires would write a cell and also sense its original state (destructive reads)
Robust, non-volatile storage
Used on Space Shuttle computers until recently
Cores threaded onto wires by hand (25 billion a year at peak production)
Core access time ~1 µs

[Figure: DEC PDP-8/E core memory board, 4K words x 12 bits (1968)]

Semiconductor Memory, DRAM

Semiconductor memory began to be competitive in the early 1970s
–Intel formed to exploit market for semiconductor memory
First commercial DRAM was Intel 1103
–1Kbit of storage on a single chip
–charge on a capacitor used to hold value
Semiconductor memory quickly replaced core in the '70s

One-Transistor Dynamic RAM

[Figure: 1-T DRAM cell schematic: word line, bit line, access transistor, and storage capacitor (FET gate, trench, or stacked) referenced to V_REF. Cross-section shows TiN top electrode (V_REF), Ta2O5 dielectric, W bottom electrode, and poly word line over the access transistor]

DRAM Architecture

[Figure: memory cells (one bit each) arranged in a 2^N-row by 2^M-column array. An N+M-bit address is split: N bits go to the Row Address Decoder driving the word lines (Row 1 ... Row 2^N), M bits go to the Column Decoder & Sense Amplifiers on the bit lines (Col. 1 ... Col. 2^M), which select the data D]

Bits stored in two-dimensional arrays on chip
Modern chips have around 4 logical banks on each chip
–each logical bank physically implemented as many smaller arrays

DRAM Packaging

DIMM (Dual Inline Memory Module) contains multiple chips with clock/control/address signals connected in parallel (sometimes need buffers to drive signals to all chips)
Data pins work together to return a wide word (e.g., 64-bit data bus using 16 x4-bit parts)

[Figure: DRAM chip interface: multiplexed row/column address lines (~12), clock and control signals (~7), and a data bus (4b, 8b, 16b, or 32b)]

DRAM Operation

Three steps in read/write access to a given bank:

Row access (RAS)
–decode row address, enable addressed row (often multiple Kb in row)
–bitlines share charge with storage cell
–small change in voltage detected by sense amplifiers which latch whole row of bits
–sense amplifiers drive bitlines full rail to recharge storage cells

Column access (CAS)
–decode column address to select small number of sense amplifier latches (4, 8, 16, or 32 bits depending on DRAM package)
–on read, send latched bits out to chip pins
–on write, change sense amplifier latches which then charge storage cells to required value
–can perform multiple column accesses on same row without another row access (burst mode)

Precharge
–charges bit lines to known value, required before next row access

Each step has a latency of around 20ns in modern DRAMs
Various DRAM standards (DDR, RDRAM) have different ways of encoding the signals for transmission to the DRAM, but all share the same core architecture
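
A rough back-of-the-envelope sketch of what these steps imply for latency, using the slide's "around 20ns per step"; treating each step as exactly 20 ns is an assumption made only for the arithmetic.

```python
# Rough DRAM timing sketch using the ~20 ns per step quoted on the slide.
# A "row miss" needs precharge + row access + column access; a "row hit"
# (another column access in the already-open row) needs only a column access.
T_RAS = 20e-9   # row access, assumed exactly 20 ns
T_CAS = 20e-9   # column access
T_PRE = 20e-9   # precharge

row_miss_ns = (T_PRE + T_RAS + T_CAS) * 1e9
row_hit_ns = T_CAS * 1e9

print(f"row miss: {row_miss_ns:.0f} ns, row hit (burst): {row_hit_ns:.0f} ns")
# row miss: 60 ns, row hit (burst): 20 ns
```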

Double-Data Rate (DDR2) DRAM

[Figure: DDR2 timing diagram: Row, Column, Precharge, Row' commands, with data transferred on both edges of a 200MHz clock, giving a 400Mb/s data rate. Source: Micron, 256Mb DDR2 SDRAM datasheet]
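
The data rate on the slide follows directly from transferring on both clock edges; the sketch below reproduces that arithmetic. The 64-bit DIMM extension is an assumption, not something stated on the slide.

```python
# Why "double data rate": data is transferred on both clock edges, so a
# 200 MHz clock moves 400 Mb/s on each data pin (matching the slide's figure).
clock_mhz = 200
transfers_per_cycle = 2            # rising and falling edge
mb_per_s_per_pin = clock_mhz * transfers_per_cycle
print(mb_per_s_per_pin)            # 400

# Assumed extension (not on the slide): a 64-bit DIMM built from such parts
# would move 400 Mb/s * 64 pins = 25.6 Gb/s = 3.2 GB/s peak.
dimm_gb_per_s = mb_per_s_per_pin * 64 / 8 / 1000
print(dimm_gb_per_s)               # 3.2
```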

Processor-DRAM Gap (latency)

[Figure: performance vs. time since 1982: processor performance ("Moore's Law") improving ~60%/year while DRAM improves ~7%/year, so the processor-memory performance gap grows ~50%/year]

A four-issue 2GHz superscalar accessing 100ns DRAM could execute 800 instructions during the time for one memory access!
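
The 800-instruction figure is just issue width times the number of processor cycles spent waiting for one DRAM access; a quick check of the slide's numbers:

```python
# Arithmetic behind the slide's "800 instructions per memory access" claim.
clock_hz = 2e9            # 2 GHz
issue_width = 4           # four-issue superscalar
dram_latency_s = 100e-9   # 100 ns DRAM access

cycles_per_access = dram_latency_s * clock_hz          # 200 cycles
instructions_lost = cycles_per_access * issue_width    # 800 instructions
print(int(cycles_per_access), int(instructions_lost))  # 200 800
```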

Little's Law

Throughput (T) = Number in Flight (N) / Latency (L)

[Figure: CPU issuing requests to Memory through a table of accesses in flight]

Example: assume an infinite-bandwidth memory, 100 cycles / memory reference, and 1.2 memory references / instruction. To sustain CPI = 1, table size = 1.2 * 100 = 120 entries, i.e. 120 independent memory operations in flight!
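
The same example as a few lines of Python; the CPI = 1 target is the implied assumption behind the slide's arithmetic.

```python
# Little's Law: throughput = in-flight / latency, so in-flight = throughput * latency.
latency_cycles = 100          # cycles per memory reference
refs_per_instruction = 1.2
target_ipc = 1.0              # instructions per cycle (CPI = 1)

throughput = refs_per_instruction * target_ipc    # memory refs per cycle
in_flight = throughput * latency_cycles
print(in_flight)              # 120.0 independent memory operations in flight
```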

Typical Memory Reference Patterns

[Figure: address vs. time for a running program: instruction fetches show n loop iterations plus a subroutine call and return; stack accesses show argument accesses; data accesses show vector accesses and scalar accesses]

Common Predictable Patterns

Two predictable properties of memory references:
–Temporal Locality: If a location is referenced, it is likely to be referenced again in the near future.
–Spatial Locality: If a location is referenced, it is likely that locations near it will be referenced in the near future.

Memory Reference Patterns

[Figure: memory address vs. time, one dot per access, showing bands of spatial locality and streaks of temporal locality. From Donald J. Hatfield, Jeanette Gerald: Program Restructuring for Virtual Memory. IBM Systems Journal 10(3), 1971]

Multilevel Memory

Strategy: Reduce average latency using small, fast memories called caches.

Caches are a mechanism to reduce memory latency based on the empirical observation that the patterns of memory references made by a processor are often highly predictable:

PC    …
 96   loop: ADD  r2, r1, r1
100         SUBI r3, r3, #1
104         BNEZ r3, loop
108   …
112

Memory Hierarchy

[Figure: CPU connected to a small, fast memory (RF, SRAM) that holds frequently used data, backed by a big, slow memory (DRAM)]

capacity: Register << SRAM << DRAM    why?
latency: Register << SRAM << DRAM    why?
bandwidth: on-chip >> off-chip    why?

On a data access:
hit (data ∈ fast memory) ⇒ low latency access
miss (data ∉ fast memory) ⇒ long latency access (DRAM)

Management of Memory Hierarchy

Small/fast storage, e.g., registers
–Address usually specified in instruction
–Generally implemented directly as a register file
»but hardware might do things behind software's back, e.g., stack management, register renaming

Large/slower storage, e.g., memory
–Address usually computed from values in register
–Generally implemented as a cache hierarchy
»hardware decides what is kept in fast memory
»but software may provide "hints", e.g., don't cache or prefetch

CS152 Administrivia

Krste: no office hours Monday 2/18 (Presidents' Day holiday)
–email for an alternate time

Henry office hours, 511 Soda
–9:30-10:30AM Mondays
–2:00-3:00PM Fridays

In-class quiz dates
–Q1: Tuesday February 19 (ISAs, microcode, simple pipelining)
»Material covered: Lectures 1-5, PS1, Lab 1

Emailing PS/Lab questions: email Henry & CC Krste, send only one email

Watch website for updates/news

Not sure how long the book will take to appear in the store; you can buy online

Caches

Caches exploit both types of predictability:
–Exploit temporal locality by remembering the contents of recently accessed locations.
–Exploit spatial locality by fetching blocks of data around recently accessed locations.

Inside a Cache

[Figure: Processor connected by address and data lines to the CACHE, which in turn connects to Main Memory. Each cache line holds an address tag and a data block of several data bytes, each line a copy of a main memory location (e.g., location 100)]

Cache Algorithm (Read)

Look at Processor Address, search cache tags to find match. Then either:

Found in cache, a.k.a. HIT
–Return copy of data from cache

Not in cache, a.k.a. MISS
–Read block of data from Main Memory
–Wait …
–Return data to processor and update cache

Q: Which line do we replace?
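
As a software analogy (a minimal sketch, not the hardware itself), the read algorithm looks roughly like this; the dict-based cache and memory, and the 64-byte block size, are illustrative assumptions.

```python
# Sketch of the cache read algorithm above, with the cache modeled as a dict
# from aligned block address to the block's bytes (illustrative names only).
def cache_read(cache, memory, address, block_size=64):
    block_addr = address - (address % block_size)   # align to block boundary
    if block_addr in cache:                         # tag match: HIT
        block = cache[block_addr]                   # return copy of data from cache
    else:                                           # MISS
        block = memory[block_addr]                  # read block from main memory (wait...)
        cache[block_addr] = block                   # update cache (may evict a line)
    return block[address % block_size]              # deliver requested byte to processor
```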

Placement Policy

[Figure: memory block numbers and an 8-block cache, showing where memory block 12 can be placed]

Block 12 can be placed:
–Fully associative: anywhere
–(2-way) set associative: anywhere in set 0 (12 mod 4)
–Direct mapped: only into block 4 (12 mod 8)
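
A small sketch reproducing the slide's example for an 8-block cache (so 4 sets when 2-way); mapping set 0 onto cache blocks 0 and 1 is a convention assumed here for illustration.

```python
# Where can memory block 12 go in an 8-block cache under each placement policy?
block = 12
cache_blocks = 8
ways = 2
sets = cache_blocks // ways       # 4 sets when 2-way set-associative

fully_assoc = list(range(cache_blocks))                                   # anywhere
set_assoc = [(block % sets) * ways + w for w in range(ways)]              # set 0
direct = [block % cache_blocks]                                           # block 4 only

print(fully_assoc)   # [0, 1, 2, 3, 4, 5, 6, 7]
print(set_assoc)     # [0, 1]  (the two blocks of set 0)
print(direct)        # [4]
```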

Direct-Mapped Cache

[Figure: the address is split into Tag (t bits), Index (k bits), and Block Offset (b bits). The index selects one of 2^k cache lines, each holding a valid bit V, a tag, and a data block; the stored tag is compared (=) with the address tag to generate HIT, and the block offset selects the data word or byte]
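
A sketch of how an address is carved into the tag, index, and offset fields shown in the figure; the particular k and b values (64 lines, 32-byte blocks) are assumed for illustration.

```python
# Tag/index/offset split for a direct-mapped cache with 2^k = 64 lines
# and 2^b = 32-byte blocks (assumed example parameters).
k, b = 6, 5

def split_address(addr):
    offset = addr & ((1 << b) - 1)          # low b bits: byte within block
    index = (addr >> b) & ((1 << k) - 1)    # next k bits: which cache line
    tag = addr >> (b + k)                   # remaining t bits: tag to compare
    return tag, index, offset

print(split_address(0x12345678))
# (149130, 51, 24) i.e. tag=0x2468A, index=0x33, offset=0x18
```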

Direct Map Address Selection

higher-order vs. lower-order address bits

[Figure: the same direct-mapped organization (valid bit V, tag, data block per line, 2^k lines, tag compare (=) for HIT, block offset selecting the data word or byte), but with the index taken from a different portion of the address bits than the tag]

2-Way Set-Associative Cache

[Figure: the address is split into Tag (t bits), Index (k bits), and Block Offset (b bits). The index selects one set containing two ways; each way holds a valid bit V, a tag, and a data block. Both stored tags are compared (=) with the address tag in parallel, a match in either way signals HIT, and the matching way's block supplies the data word or byte]
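
In software terms, the lookup works roughly like this (an illustrative sketch; real hardware compares the ways in parallel, and the data layout here is invented for the example).

```python
# Minimal sketch of a 2-way set-associative lookup: each set is a list of
# two (valid, tag, block) ways (illustrative data structure only).
def lookup(sets, index, tag):
    for valid, stored_tag, block in sets[index]:
        if valid and stored_tag == tag:    # hardware compares both ways in parallel
            return True, block             # HIT: the matching way supplies the block
    return False, None                     # MISS

sets = [[(True, 0x2468A, b"line0"), (False, 0, b"")] for _ in range(64)]
print(lookup(sets, 51, 0x2468A))           # (True, b'line0')
```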

Fully Associative Cache

[Figure: the address is split into Tag (t bits) and Block Offset (b bits) only; there is no index. Every line's stored tag is compared (=) with the address tag in parallel, any match signals HIT, and the block offset selects the data word or byte from the matching line]

Replacement Policy

In an associative cache, which block from a set should be evicted when the set becomes full?

Random

Least Recently Used (LRU)
–LRU cache state must be updated on every access
–true implementation only feasible for small sets (2-way)
–pseudo-LRU binary tree often used for 4-8 way

First In, First Out (FIFO) a.k.a. Round-Robin
–used in highly associative caches

Not Least Recently Used (NLRU)
–FIFO with exception for most recently used block or blocks

This is a second-order effect. Why?
–Replacement only happens on misses
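
For the pseudo-LRU binary tree mentioned under LRU, here is a sketch for a single 4-way set; the bit conventions chosen here are one common option, not necessarily those of any particular cache.

```python
# Tree pseudo-LRU for one 4-way set: three bits, where bits[0] picks which
# half to evict from and bits[1]/bits[2] pick the way within each half.
class TreePLRU4:
    def __init__(self):
        self.bits = [0, 0, 0]   # each bit points toward the side to evict next

    def touch(self, way):       # update state on every access (hit or fill)
        if way < 2:
            self.bits[0] = 1            # ways 0/1 just used: evict from right half next
            self.bits[1] = 1 - way      # point at the other way in the left half
        else:
            self.bits[0] = 0            # ways 2/3 just used: evict from left half next
            self.bits[2] = 3 - way      # point at the other way in the right half

    def victim(self):           # follow the pointers to an approximately-LRU way
        if self.bits[0] == 0:
            return 0 if self.bits[1] == 0 else 1
        return 2 if self.bits[2] == 0 else 3

plru = TreePLRU4()
for w in (0, 1, 2, 3):
    plru.touch(w)
print(plru.victim())            # 0: the least recently touched way
```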

Acknowledgements

These slides contain material developed and copyright by:
–Arvind (MIT)
–Krste Asanovic (MIT/UCB)
–Joel Emer (Intel/MIT)
–James Hoe (CMU)
–John Kubiatowicz (UCB)
–David Patterson (UCB)

MIT material derived from course 6.823
UCB material derived from course CS252