
CSE 490/590, Spring 2011 CSE 490/590 Computer Architecture: Cache I
Steve Ko, Computer Science and Engineering, University at Buffalo

CSE 490/590, Spring 2011 Last Time…
Pipelining hazards
–Structural hazards
–Data hazards
–Control hazards
Data hazards
–Stall
–Bypass
Control hazards
–Jump
–Conditional branch

CSE 490/590, Spring 2011 Branch Delay Slots (expose control hazard to software)
Change the ISA semantics so that the instruction that follows a jump or branch is always executed
–gives the compiler the flexibility to put in a useful instruction where normally a pipeline bubble would have resulted
Example sequence:
  I1  096  ADD
  I2  100  BEQZ r, +200
  I3  104  ADD        (delay slot instruction, executed regardless of branch outcome)
  I4  304  ADD
Other techniques include more advanced branch prediction, which can dramatically reduce the branch penalty... to come later

CSE 490/590, Spring 2011 Branch Pipeline Diagrams (branch delay slot)
                      t0   t1   t2   t3   t4   t5   t6   t7
(I1) 096: ADD         IF1  ID1  EX1  MA1  WB1
(I2) 100: BEQZ +200        IF2  ID2  EX2  MA2  WB2
(I3) 104: ADD                   IF3  ID3  EX3  MA3  WB3
(I4) 304: ADD                        IF4  ID4  EX4  MA4  WB4

Resource Usage
                      t0   t1   t2   t3   t4   t5   t6   t7
   IF                 I1   I2   I3   I4
   ID                      I1   I2   I3   I4
   EX                           I1   I2   I3   I4
   MA                                I1   I2   I3   I4
   WB                                     I1   I2   I3   I4

CSE 490/590, Spring 2011 Why an Instruction may not be dispatched every cycle (CPI > 1)
Full bypassing may be too expensive to implement
–typically all frequently used paths are provided
–some infrequently used bypass paths may increase cycle time and counteract the benefit of reducing CPI
Loads have two-cycle latency
–Instruction after load cannot use load result
–MIPS-I ISA defined load delay slots, a software-visible pipeline hazard (compiler schedules independent instruction or inserts NOP to avoid hazard). Removed in MIPS-II (pipeline interlocks added in hardware)
»MIPS: "Microprocessor without Interlocked Pipeline Stages"
Conditional branches may cause bubbles
–kill following instruction(s) if no delay slots

CSE 490/590, Spring 2011 Early Read-Only Memory Technologies
Punched cards, from the early 1700s through the Jacquard Loom, Babbage, and then IBM
Punched paper tape, instruction stream in Harvard Mk 1
IBM Card Capacitor ROS
IBM Balanced Capacitor ROS
Diode Matrix, EDSAC-2 µcode store

CSE 490/590, Spring 2011 Early Read/Write Main Memory Technologies
Williams Tube, Manchester Mark 1, 1947
Babbage, 1800s: digits stored on mechanical wheels
Mercury Delay Line, Univac 1, 1951
Also, regenerative capacitor memory on the Atanasoff-Berry computer, and rotating magnetic drum memory on the IBM 650

CSE 490/590, Spring 2011 Semiconductor Memory
Semiconductor memory began to be competitive in the early 1970s
–Intel formed to exploit the market for semiconductor memory
–Early semiconductor memory was Static RAM (SRAM); SRAM cell internals similar to a latch (cross-coupled inverters)
First commercial Dynamic RAM (DRAM) was the Intel 1103
–1 Kbit of storage on a single chip
–charge on a capacitor used to hold value
Semiconductor memory quickly replaced core in the '70s

CSE 490/590, Spring 2011 Modern DRAM Structure
[figure: Samsung, sub-70nm DRAM, 2004]

CSE 490/590, Spring 2011 DRAM Architecture
[figure: a 2^N-row by 2^M-column array of one-bit memory cells; the N-bit row address drives the Row Address Decoder and word lines, the M-bit column address drives the Column Decoder & Sense Amplifiers on the bit lines, producing the data bits D; N+M address bits in total]
Bits stored in 2-dimensional arrays on chip
Modern chips have around 4 logical banks on each chip
–each logical bank physically implemented as many smaller arrays

CSE 490/590, Spring 2011 DRAM Operation
Three steps in read/write access to a given bank
Row access (RAS)
–decode row address, enable addressed row (often multiple Kb in row)
–bitlines share charge with storage cell
–small change in voltage detected by sense amplifiers which latch whole row of bits
–sense amplifiers drive bitlines full rail to recharge storage cells
Column access (CAS)
–decode column address to select small number of sense amplifier latches (4, 8, 16, or 32 bits depending on DRAM package)
–on read, send latched bits out to chip pins
–on write, change sense amplifier latches which then charge storage cells to required value
–can perform multiple column accesses on same row without another row access (burst mode)
Precharge
–charges bit lines to known value, required before next row access
Each step has a latency of around 15-20 ns in modern DRAMs
Various DRAM standards (DDR, RDRAM) have different ways of encoding the signals for transmission to the DRAM, but all share the same core architecture
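Not on the slides, but as a concrete sketch of these steps: the C fragment below splits a physical address into assumed bank/row/column fields and charges CAS-only latency when the addressed row is already latched in the bank's sense amplifiers, and RAS (plus precharge if another row is open) otherwise. The field widths and the 15 ns per-step latency are illustrative assumptions, not parameters given in the lecture.

```c
/* Illustrative sketch (not from the slides): how a memory controller might
 * split a physical address into DRAM bank/row/column fields and estimate
 * access latency from the RAS/CAS/precharge steps above.  Field widths and
 * the per-step latency are assumptions for the example only. */
#include <stdio.h>
#include <stdint.h>

#define COL_BITS   10              /* assumed: 1024 columns per row  */
#define ROW_BITS   14              /* assumed: 16384 rows per bank   */
#define BANK_BITS   2              /* assumed: 4 logical banks       */
#define STEP_NS    15              /* per-step latency (slide: ~15-20 ns) */

typedef struct { uint32_t bank, row, col; } dram_addr_t;

static dram_addr_t split_addr(uint64_t paddr) {
    dram_addr_t a;
    a.col  = paddr & ((1u << COL_BITS) - 1);
    a.row  = (paddr >> COL_BITS) & ((1u << ROW_BITS) - 1);
    a.bank = (paddr >> (COL_BITS + ROW_BITS)) & ((1u << BANK_BITS) - 1);
    return a;
}

/* open_row[b] holds the row currently latched in bank b's sense amplifiers. */
static int64_t open_row[1 << BANK_BITS] = { -1, -1, -1, -1 };

static int access_ns(uint64_t paddr) {
    dram_addr_t a = split_addr(paddr);
    if (open_row[a.bank] == (int64_t)a.row)
        return STEP_NS;                              /* row already open: CAS only (burst-friendly) */
    int ns = (open_row[a.bank] >= 0) ? 3 * STEP_NS   /* precharge + RAS + CAS */
                                     : 2 * STEP_NS;  /* RAS + CAS             */
    open_row[a.bank] = a.row;
    return ns;
}

int main(void) {
    printf("%d ns\n", access_ns(0x12345678));  /* first touch: row access needed */
    printf("%d ns\n", access_ns(0x12345680));  /* same row: column access only   */
    return 0;
}
```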

CSE 490/590, Spring 2011 DRAM Packaging
DIMM (Dual Inline Memory Module) contains multiple chips with clock/control/address signals connected in parallel (sometimes need buffers to drive signals to all chips)
Data pins work together to return wide word (e.g., 64-bit data bus using 16 x4-bit parts)
[figure: a single DRAM chip with ~12 multiplexed row/column address lines, ~7 clock and control signals, and a 4b/8b/16b/32b data bus]
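A quick arithmetic check of the example above (only the slide's numbers are used):

```c
/* How many x4 DRAM chips are needed to fill a 64-bit DIMM data bus. */
#include <stdio.h>

int main(void) {
    int bus_bits  = 64;   /* DIMM data bus width             */
    int chip_bits = 4;    /* each DRAM part supplies 4 bits  */
    printf("%d chips work together\n", bus_bits / chip_bits);  /* prints 16 */
    return 0;
}
```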

CSE 490/590, Spring 2011 CPU-Memory Bottleneck
Performance of high-speed computers is usually limited by memory bandwidth & latency
Latency (time for a single access)
–Memory access time >> Processor cycle time: problematic
Bandwidth (number of accesses per unit time)
–Increase the bus size, etc.: usually OK

CSE 490/590, Spring 2011 Processor-DRAM Gap (latency)
[figure: performance (1/latency) vs. time since ~1982; µProc improves ~60%/year, DRAM ~7%/year, so the Processor-Memory Performance Gap grows ~50%/year]
A four-issue 2 GHz superscalar accessing 100 ns DRAM could execute 800 instructions during the time for one memory access!
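The 800-instruction figure follows directly from the numbers on the slide; a tiny check of the arithmetic:

```c
/* Quick check of the claim above (numbers taken from the slide). */
#include <stdio.h>

int main(void) {
    double clock_ghz   = 2.0;    /* 2 GHz => 2 cycles per ns        */
    double issue_width = 4.0;    /* four-issue superscalar          */
    double dram_ns     = 100.0;  /* one DRAM access takes ~100 ns   */

    double instrs = clock_ghz * issue_width * dram_ns;  /* instructions per DRAM access */
    printf("%.0f instructions during one memory access\n", instrs);  /* prints 800 */
    return 0;
}
```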

CSE 490/590, Spring 2011 Physical Size Affects Latency
[figure: a CPU next to a small memory vs. a CPU next to a big memory]
In the bigger memory, signals have further to travel and must fan out to more locations

CSE 490/590, Spring 2011 CSE 490/590 Administrivia
Very important to attend
–Recitations next week & the week after
Guest lectures
–There will be a couple of guest lectures in late Feb/early Mar
Quiz 1
–Rescheduled: Fri, 2/11
–Closed book, in-class
–Includes lectures until last Monday (1/31)
–Review: next Wed (2/9)

CSE 490/590, Spring 2011 Memory Hierarchy
[figure: CPU connected to a small, fast memory (RF, SRAM) that holds frequently used data, backed by a big, slow memory (DRAM)]
capacity: Register << SRAM << DRAM      why?
latency: Register << SRAM << DRAM       why?
bandwidth: on-chip >> off-chip          why?
On a data access:
–if data is in fast memory, low latency access (SRAM)
–if data is not in fast memory, long latency access (DRAM)
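A standard companion formula (not printed on this slide) that quantifies the benefit of hitting in the fast memory is the average memory access time:

```latex
\mathrm{AMAT} = t_{\mathrm{hit}} + (\text{miss rate}) \times (\text{miss penalty})
```

A worked instance with concrete latencies appears after the iMac G5 slide below.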

CSE 490/590, Spring 2011 Relative Memory Cell Sizes
[figure: DRAM cell on a memory chip vs. on-chip SRAM cell in a logic chip; from Foss, "Implementing Application-Specific Memory", ISSCC 1996]

CSE 490/590, Spring 2011 Levels of the Memory Hierarchy
Level          Capacity      Access Time              Staging Xfer Unit            Managed by
Registers      100s Bytes    <10s ns                  Instr. operands, 1-8 bytes   prog./compiler
Cache          K Bytes       tens of ns               Blocks                       cache cntl
Main Memory    M Bytes       200 ns - 500 ns          Pages, 512-4K bytes          OS
Disk           G Bytes       10 ms (10,000,000 ns)    Files, Mbytes                user/operator
Tape           infinite      sec-min
Upper levels are faster; lower levels are larger (and cheaper per bit)

CSE 490/590, Spring 2011 Memory Hierarchy: Apple iMac G5 (1.6 GHz)
                   Reg      L1 Inst   L1 Data   L2        DRAM     Disk
Size               1K       64K       32K       512K      256M     80G
Latency (cycles)   1        3         3         11        88       10^7
Latency (time)     0.6 ns   1.9 ns    1.9 ns    6.9 ns    55 ns    12 ms
Registers are managed by the compiler; the caches are managed by hardware; DRAM and disk are managed by the OS, hardware, and application
Let programs address a memory space that scales to the disk size, at a speed that is usually as fast as register access
Goal: illusion of large, fast, cheap memory
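As a rough worked example of the AMAT formula above: the latencies come from the table, but the hit rates below are assumed purely for illustration, not measurements from the slides.

```c
/* AMAT sketch using the iMac G5 latencies above.  Hit rates are made-up
 * illustrative values. */
#include <stdio.h>

int main(void) {
    double l1_hit_ns   = 1.9;        /* from the table           */
    double l2_hit_ns   = 6.9;        /* from the table           */
    double dram_ns     = 55.0;       /* from the table           */
    double l1_hit_rate = 0.95;       /* assumed                  */
    double l2_hit_rate = 0.80;       /* assumed (of L1 misses)   */

    double amat = l1_hit_ns
                + (1.0 - l1_hit_rate) * (l2_hit_ns
                + (1.0 - l2_hit_rate) * dram_ns);
    printf("AMAT = %.2f ns\n", amat);   /* ~2.8 ns with these numbers */
    return 0;
}
```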

CSE 490/590, Spring 2011 Management of Memory Hierarchy
Small/fast storage, e.g., registers
–Address usually specified in instruction
–Generally implemented directly as a register file
»but hardware might do things behind software's back, e.g., stack management, register renaming
Larger/slower storage, e.g., main memory
–Address usually computed from values in register
–Generally implemented as a hardware-managed cache hierarchy
»hardware decides what is kept in fast memory
»but software may provide "hints", e.g., don't cache or prefetch (see the sketch below)
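One concrete form of such a software "hint" on current GCC/Clang compilers is the __builtin_prefetch intrinsic. The sketch below is my own illustration (not from the slides); the distance of 16 elements ahead is an arbitrary choice.

```c
/* Example of a software "hint" to the hardware-managed cache hierarchy:
 * GCC/Clang's __builtin_prefetch asks the hardware to start pulling a line
 * into the cache ahead of use.  Illustration only; the prefetch distance
 * of 16 elements is arbitrary. */
long sum_with_prefetch(const long *a, long n) {
    long sum = 0;
    for (long i = 0; i < n; i++) {
        if (i + 16 < n)
            __builtin_prefetch(&a[i + 16], /*rw=*/0, /*locality=*/1);
        sum += a[i];
    }
    return sum;
}
```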

CSE 490/590, Spring 2011 Real Memory Reference Patterns
[figure: memory address (one dot per access) vs. time; from Donald J. Hatfield, Jeanette Gerald: Program Restructuring for Virtual Memory, IBM Systems Journal 10(3), 1971]

CSE 490/590, Spring 2011 Typical Memory Reference Patterns
[figure: address vs. time for instruction fetches, stack accesses, and data accesses; shows n loop iterations, subroutine call and return, argument access, vector access, and scalar accesses]

CSE 490/590, Spring 2011 Common Predictable Patterns
Two predictable properties of memory references:
–Temporal Locality: If a location is referenced it is likely to be referenced again in the near future.
–Spatial Locality: If a location is referenced it is likely that locations near it will be referenced in the near future.
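A small C illustration (mine, not from the slides) of where the two properties show up in ordinary code:

```c
/* Illustration of the two locality properties above. */
#include <stddef.h>

#define N 1024

long sum_rows(const int a[N][N]) {
    long sum = 0;                       /* 'sum' is reused every iteration: temporal locality */
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            sum += a[i][j];             /* a[i][0], a[i][1], ... are adjacent in memory:
                                           spatial locality; a column-major walk (a[j][i])
                                           would stride N*sizeof(int) bytes between accesses
                                           and exploit far less of it */
    return sum;
}
```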

CSE 490/590, Spring 2011 Memory Reference Patterns
[figure: the same Hatfield & Gerald plot of memory address vs. time (one dot per access), annotated with regions exhibiting spatial locality and temporal locality; IBM Systems Journal 10(3), 1971]

CSE 490/590, Spring 2011 Caches
Caches exploit both types of predictability:
–Exploit temporal locality by remembering the contents of recently accessed locations.
–Exploit spatial locality by fetching blocks of data around recently accessed locations.

CSE 490/590, Spring 2011 Inside a Cache
[figure: the Processor exchanges addresses and data with the CACHE, which in turn exchanges addresses and data with Main Memory; each cache line holds an address tag plus a data block of several data bytes, e.g., a copy of main memory location 100 and a copy of another main memory location]

CSE 490/590, Spring 2011 Cache Algorithm (Read)
Look at Processor Address, search cache tags to find match. Then either:
Found in cache, a.k.a. HIT
–Return copy of data from cache
Not in cache, a.k.a. MISS
–Read block of data from Main Memory
–Wait …
–Return data to processor and update cache
Q: Which line do we replace?

CSE 490/590, Spring 2011 Placement Policy
Where can memory block 12 be placed in an 8-block cache?
–Fully associative: anywhere
–(2-way) Set associative: anywhere in set 0 (12 mod 4)
–Direct mapped: only into block 4 (12 mod 8)
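The slide's example written out as arithmetic (nothing here beyond what the slide states):

```c
/* Where can memory block 12 go in an 8-block cache? */
#include <stdio.h>

int main(void) {
    int block = 12, cache_blocks = 8;

    int dm_block = block % cache_blocks;   /* direct mapped: one fixed block        */
    int sa_sets  = cache_blocks / 2;       /* 2-way set-associative: 4 sets         */
    int sa_set   = block % sa_sets;        /* block can go in either way of the set */

    printf("direct mapped: block %d\n", dm_block);        /* 12 mod 8 = 4 */
    printf("2-way set-assoc: set %d\n", sa_set);          /* 12 mod 4 = 0 */
    printf("fully associative: any of the %d blocks\n", cache_blocks);
    return 0;
}
```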

CSE 490/590, Spring 2011 Direct-Mapped Cache
[figure: the address is split into a t-bit Tag, a k-bit Index, and a b-bit Block Offset; the Index selects one of 2^k cache lines, each holding a valid bit V, a Tag, and a Data Block; HIT is asserted when V is set and the stored tag equals the address tag, and the block offset selects the Data Word or Byte]
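A minimal runnable sketch of the lookup in the diagram, with assumed sizes (32-byte blocks, 64 lines, 32-bit addresses); the tag/index/offset split and the valid-bit-plus-tag-match HIT condition follow the figure, while the miss/refill path is left to the caller.

```c
/* Minimal direct-mapped cache lookup matching the diagram: the address is
 * split into tag (t bits), index (k bits), and block offset (b bits); HIT
 * requires the valid bit set and a tag match.  Sizes below are assumptions
 * chosen only for illustration. */
#include <stdbool.h>
#include <stdint.h>

#define B_BITS 5                     /* 32-byte blocks */
#define K_BITS 6                     /* 2^k = 64 lines */
#define BLOCK_BYTES (1u << B_BITS)
#define NUM_LINES   (1u << K_BITS)

typedef struct {
    bool     valid;
    uint32_t tag;
    uint8_t  data[BLOCK_BYTES];
} cache_line_t;

static cache_line_t cache[NUM_LINES];

/* Returns true on HIT and copies the addressed byte into *out. */
bool cache_lookup(uint32_t addr, uint8_t *out) {
    uint32_t offset = addr & (BLOCK_BYTES - 1);
    uint32_t index  = (addr >> B_BITS) & (NUM_LINES - 1);
    uint32_t tag    = addr >> (B_BITS + K_BITS);

    cache_line_t *line = &cache[index];
    if (line->valid && line->tag == tag) {   /* V AND (stored tag == address tag) => HIT */
        *out = line->data[offset];
        return true;
    }
    return false;                            /* MISS: caller refills from main memory */
}
```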

CSE 490/590, Spring 2011 Direct Map Address Selection: higher-order vs. lower-order address bits
[figure: the same direct-mapped organization redrawn to contrast taking the cache index from higher-order versus lower-order address bits; the t-bit tag, k-bit index, and b-bit block offset still select one of 2^k lines, with valid bit and tag match producing HIT for the Data Word or Byte]

CSE 490/590, Spring 2011 2-Way Set-Associative Cache
[figure: the k-bit Index selects a set containing two ways, each with its own valid bit V, Tag, and Data Block; the t-bit address tag is compared against both stored tags in parallel, a match in either way asserts HIT, and the b-bit Block Offset selects the Data Word or Byte from the matching way]
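The same sketch extended to two ways per set, with the same assumed sizes; the hardware compares both tags in parallel, which the loop below only approximates.

```c
/* 2-way set-associative lookup matching the diagram: both ways of the indexed
 * set are checked; either valid way with a matching tag produces a HIT.
 * Sizes are again assumptions chosen only for illustration. */
#include <stdbool.h>
#include <stdint.h>

#define B_BITS 5
#define K_BITS 6                      /* 2^k = 64 sets */
#define WAYS   2
#define BLOCK_BYTES (1u << B_BITS)
#define NUM_SETS    (1u << K_BITS)

typedef struct {
    bool     valid;
    uint32_t tag;
    uint8_t  data[BLOCK_BYTES];
} way_t;

static way_t cache[NUM_SETS][WAYS];

bool cache_lookup_2way(uint32_t addr, uint8_t *out) {
    uint32_t offset = addr & (BLOCK_BYTES - 1);
    uint32_t index  = (addr >> B_BITS) & (NUM_SETS - 1);
    uint32_t tag    = addr >> (B_BITS + K_BITS);

    for (int w = 0; w < WAYS; w++) {
        way_t *line = &cache[index][w];
        if (line->valid && line->tag == tag) {
            *out = line->data[offset];
            return true;              /* HIT in way w */
        }
    }
    return false;                     /* MISS: the replacement policy picks a victim way */
}
```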

CSE 490/590, Spring 2011 Fully Associative Cache
[figure: there is no index field; the address is just a t-bit Tag and a b-bit Block Offset, and the address tag is compared in parallel against the stored tag of every line (each with its valid bit V and Data Block); any match asserts HIT and the block offset selects the Data Word or Byte]

CSE 490/590, Spring 2011 Replacement Policy
In an associative cache, which block from a set should be evicted when the set becomes full?
Random
Least Recently Used (LRU)
–LRU cache state must be updated on every access
–true implementation only feasible for small sets (2-way)
–pseudo-LRU binary tree often used for 4-8 way (see the sketch below)
First In, First Out (FIFO), a.k.a. Round-Robin
–used in highly associative caches
Not Least Recently Used (NLRU)
–FIFO with exception for most recently used block or blocks
This is a second-order effect. Why? Replacement only happens on misses.
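A sketch of the pseudo-LRU binary tree mentioned above, for one 4-way set. This is my illustration of the standard scheme, not course-provided code: three bits record which half, and which way within each half, was touched least recently.

```c
/* Tree pseudo-LRU for one 4-way set.  Three bits: b0 picks the half
 * (ways 0-1 vs 2-3), b1 and b2 pick within each half.  On an access, the
 * bits along the path are flipped to point AWAY from the accessed way. */
#include <stdio.h>
#include <stdbool.h>

typedef struct { bool b0, b1, b2; } plru_t;

void plru_touch(plru_t *t, int way) {
    t->b0 = (way < 2);                 /* accessed left half  => next victim on the right */
    if (way < 2) t->b1 = (way == 0);   /* accessed way 0      => prefer way 1 as victim   */
    else         t->b2 = (way == 2);   /* accessed way 2      => prefer way 3 as victim   */
}

/* Follow the tree bits to the (approximately) least recently used way. */
int plru_victim(const plru_t *t) {
    if (t->b0)  return t->b2 ? 3 : 2;  /* right half is older */
    else        return t->b1 ? 1 : 0;  /* left half is older  */
}

int main(void) {
    plru_t t = { false, false, false };
    plru_touch(&t, 0); plru_touch(&t, 2); plru_touch(&t, 1);
    printf("victim = way %d\n", plru_victim(&t));   /* way 3: never touched here */
    return 0;
}
```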

CSE 490/590, Spring 2011 Acknowledgements
These slides contain material developed and copyrighted by
–Krste Asanovic (MIT/UCB)
–David Patterson (UCB)
And also by:
–Arvind (MIT)
–Joel Emer (Intel/MIT)
–James Hoe (CMU)
–John Kubiatowicz (UCB)
MIT material derived from an MIT course; UCB material derived from course CS252