Chapter Twelve Memory Organization

Memory Hierarchy [Figure: the CPU accesses the cache and main memory; an I/O processor connects main memory to auxiliary memory, i.e. magnetic disks and magnetic tapes.]

Auxiliary vs. Main Memory
Auxiliary memory:
• Provides backup storage
• Low cost, large capacity
• Holds data files, system programs, etc. not currently needed
• No direct access by the CPU
Main memory:
• Holds programs and data currently needed
• Smaller capacity, higher cost
• Direct access by the CPU
• Faster access, but still not fast enough…

Cache Memory
• Very high-speed memory
• Smaller capacity, higher cost
• Access time close to the processor logic clock-cycle time
• Stores program segments currently being executed and frequently accessed data
• Increases overall computer performance

Memory Hierarchy
• Goal: get the performance of fast, expensive memory for the price of slow, cheap memory!
• General-purpose registers (2–5 ns)
• Cache
  – Level 1 cache (2–10 ns)
  – Level 2 cache (5–20 ns)
• Main memory (40–80 ns)
• Disk (~10 ms seek, 5–100 Mb/s throughput)

Foundations
• Convention: registers are the lowest level, disk is the highest
• Inclusion: data found at one level of the hierarchy is also found at all higher levels
  – A miss at level i implies the data is not at any lower level
• Coherence: copies of data at multiple levels of the hierarchy must (eventually) be consistent
  – Write-through
  – Write-back
• Locality
  – Temporal: recently accessed items are likely to be accessed again
  – Spatial: items that are close in address are likely to be accessed together
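The two kinds of locality can be seen in an ordinary loop. A minimal sketch (the matrix and its sizes are made-up example values; Python lists only approximate the contiguous arrays of a systems language, but the access pattern is the same idea): row-major traversal touches consecutive addresses, and the accumulator is reused on every iteration.

```python
# Row-major traversal: consecutive elements of each row are adjacent
# in memory (spatial locality). The accumulator `total` is touched on
# every iteration (temporal locality).

ROWS, COLS = 4, 8
matrix = [[r * COLS + c for c in range(COLS)] for r in range(ROWS)]

def row_major_sum(m):
    total = 0                  # reused each iteration: temporal locality
    for row in m:              # consecutive elements of `row` are
        for x in row:          # neighbors in memory: spatial locality
            total += x
    return total

print(row_major_sum(matrix))   # sum of 0..31 = 496
```

Traversing the same matrix column-by-column would compute the same sum but jump COLS elements between accesses, losing spatial locality.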

Basic Operation of Cache
• When the CPU needs to access memory, it first checks the cache
• If the word is found, it is a hit: the data is read from the cache
• If it is not found, it is a miss: main memory is accessed and the block containing the word is transferred to the cache

Metrics
• Hit ratio: h_i is the probability that a datum is present at level i
• Access frequency: f_i = (1 − h_1)(1 − h_2) … (1 − h_{i−1}) h_i
• Effective access time: T_eff = Σ_i f_i · t_i
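These two formulas are straightforward to evaluate. A numeric sketch (the hit ratios and access times below are made-up example values, not from the slides):

```python
def access_frequencies(hit_ratios):
    """f_i = (1 - h_1)...(1 - h_{i-1}) * h_i for each level i."""
    freqs, miss_so_far = [], 1.0
    for h in hit_ratios:
        freqs.append(miss_so_far * h)
        miss_so_far *= (1.0 - h)
    return freqs

def effective_access_time(hit_ratios, times):
    """T_eff = sum over i of f_i * t_i."""
    return sum(f * t for f, t in zip(access_frequencies(hit_ratios), times))

# Example: 95% of accesses hit L1 (2 ns), 90% of the rest hit L2 (10 ns),
# and everything else goes to main memory (60 ns, so h = 1.0 there).
print(effective_access_time([0.95, 0.90, 1.0], [2, 10, 60]))  # 2.65 ns
```

Because the last level catches every remaining reference (h = 1.0), the access frequencies always sum to 1.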

Goals
• Overall performance close to that of level 1 (the fastest level)
• Overall cost per bit close to that of level n (the cheapest level)
• Equivalently: the effective hierarchical access time per memory reference should be small
• The foundation of these goals is locality

Mapping Procedures
• Associative mapping – fastest and most flexible
• Direct mapping – most rigid
• Set-associative mapping – a compromise between the two
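Under direct mapping, each block can go to exactly one cache line, determined by splitting the address into tag, line, and offset fields. A sketch of that split (the 16-line cache and 4-word blocks are illustrative assumptions):

```python
BLOCK_WORDS = 4    # words per block  -> 2 offset bits
NUM_LINES = 16     # cache lines      -> 4 line-index bits

def split_address(addr):
    offset = addr % BLOCK_WORDS
    block = addr // BLOCK_WORDS
    line = block % NUM_LINES    # direct mapping: block -> exactly one line
    tag = block // NUM_LINES    # tag distinguishes blocks sharing that line
    return tag, line, offset

print(split_address(100))   # address 100 -> block 25 -> (tag 1, line 9, offset 0)
```

A set-associative cache uses the same split but maps the line index to a *set* of lines, so the block may reside in any way of that set; fully associative mapping drops the line field entirely and compares tags against every line.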

Replacement Policies Choose the victim:
• Least Recently Used (LRU)
• Least Frequently Used (LFU)
• First In, First Out (FIFO)
• Random
• Optimal
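LRU, the most common of these in practice, can be sketched with an ordered dictionary whose insertion order tracks recency. The 2-line capacity and the access sequence are illustrative assumptions:

```python
from collections import OrderedDict

CAPACITY = 2
cache = OrderedDict()   # block -> data; order records recency of use

def access(block):
    if block in cache:
        cache.move_to_end(block)      # hit: mark as most recently used
        return "hit"
    if len(cache) >= CAPACITY:
        cache.popitem(last=False)     # evict the least recently used block
    cache[block] = f"data-{block}"
    return "miss"

for b in [1, 2, 1, 3, 2]:
    print(b, access(b))   # miss, miss, hit, miss (evicts 2), miss (evicts 1)
```

FIFO differs only in that a hit does not refresh the block's position, and Optimal (evict the block whose next use is farthest in the future) is unrealizable in hardware but serves as the benchmark the others are measured against.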

Analysis of Write-Back
• Simple write-back – every replaced page is written back to disk, whether or not it was modified
• Average reference time: T_m + 2(1 − h)T_p
  where T_m is the memory access time for one word and T_p is the page transfer time
• The factor of 2: one transfer to load the new page and one to write back the replaced page

Flagged Write-Back
• Use a dirty bit, set when a page is modified
• Write back a replaced page only if it is dirty
• Average reference time: T_m + (1 − h)(T_p + W_p · T_p)
  where W_p is the probability that a page is modified

Write-Through
• Main memory and disk are updated at the same time, so main memory always holds the same data as disk
• A page is loaded on both write misses and read misses
• Average reference time: T_m + (1 − h)T_p + W_t(T_d − T_m)
  where W_t is the fraction of references that are writes and T_d is the disk access time
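The three average-reference-time formulas are easy to compare side by side. A numeric sketch, where T_m, T_p, T_d, h, W_p, and W_t are all made-up example values, not from the slides:

```python
# Average reference time for the three write policies (times in ms).
Tm = 0.0001   # memory access time for one word
Tp = 10.0     # page transfer time
Td = 8.0      # disk access time
h = 0.99      # hit ratio
Wp = 0.3      # probability a page is dirty (modified)
Wt = 0.2      # fraction of references that are writes

simple_write_back  = Tm + 2 * (1 - h) * Tp
flagged_write_back = Tm + (1 - h) * (Tp + Wp * Tp)
write_through      = Tm + (1 - h) * Tp + Wt * (Td - Tm)

print(f"simple write-back : {simple_write_back:.4f} ms")
print(f"flagged write-back: {flagged_write_back:.4f} ms")
print(f"write-through     : {write_through:.4f} ms")
```

With these values the flagged scheme beats simple write-back (it skips write-backs of clean pages), while write-through pays the W_t(T_d − T_m) penalty on every write, which dominates when disk access is slow.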