March 2005, R. Smith - University of St Thomas - Minnesota

ENGR 330: Today's Class
Toys, er, Processor Technology
Cache Review
Magic: fully associative cache
Four Questions / 3 C's
Virtual Memory
Toys!

Processor Technology
Tubes – fragile!
Transistors – big, but promising
ICs – thank goodness for DIPs
Sorry, no first-aid kit

Direct Mapped Cache
The basis of today's designs
– A collection of high-speed RAM locations
– Broken into individually addressed "cache entries"
– Part of the RAM address chooses the cache entry ("direct mapping")
A cache entry
– "Index" is its address in the cache
– Valid bit – true if the entry contains valid RAM data
– "Tag" holds the address bits not used to select the cache entry
– Data area – where the stored data resides
– Stores multiple words (spatial locality)
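
To make the address split concrete, here is a minimal C sketch of how a direct-mapped cache might decompose an address; the 16-byte block and 1024-entry sizes are illustrative assumptions, not values from the slides.

    #include <stdint.h>

    #define BLOCK_BITS  4    /* assumed 16-byte blocks */
    #define INDEX_BITS 10    /* assumed 1024 cache entries */

    /* Split an address into block offset, cache index, and tag. */
    static void split_address(uint32_t addr, uint32_t *offset,
                              uint32_t *index, uint32_t *tag)
    {
        *offset = addr & ((1u << BLOCK_BITS) - 1);                 /* byte within the block */
        *index  = (addr >> BLOCK_BITS) & ((1u << INDEX_BITS) - 1); /* which cache entry */
        *tag    = addr >> (BLOCK_BITS + INDEX_BITS);               /* stored and compared on lookup */
    }

A lookup is a hit when the entry selected by the index is valid and its stored tag equals the tag field of the address.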

Set Associative Caches
That 2-way, 4-way, 8-way stuff
Provides multiple candidate entries per mapping
Problem: calculate size information for a set associative cache (a worked example follows below)
Attributes
– Address size
– Block size
– Number of lines
– N-way
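
A hedged worked example of the sizing problem, using made-up attributes (32-bit addresses, 64-byte blocks, 512 lines total, 4-way): 512 lines / 4 ways = 128 sets, so 7 index bits; 64-byte blocks give 6 offset bits; the tag is therefore 32 - 7 - 6 = 19 bits. Data capacity is 512 x 64 B = 32 KB, and each line also stores a 19-bit tag plus a valid bit, about 1.25 KB of overhead across the whole cache.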

Fully Associative Cache
"Association list" approach
– Accepts an address
– Returns the data
Not a RAM – stores tags and data
– Tag field = the full address minus the block-offset bits
– Data field = data block
Parallel tag field checking
– Automatically matches and retrieves the data with the matching tag
– Expensive in terms of logic
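
Hardware compares every tag in parallel; software can only model that with a loop. A rough C sketch under assumed sizes (64 entries, 16-byte blocks, both illustrative):

    #include <stddef.h>
    #include <stdint.h>
    #include <stdbool.h>

    #define NUM_ENTRIES 64     /* assumed */
    #define BLOCK_BITS   4     /* assumed 16-byte blocks */

    struct fa_entry {
        bool     valid;
        uint32_t tag;                     /* the full address minus the block-offset bits */
        uint8_t  data[1 << BLOCK_BITS];
    };

    /* Returns the matching entry, or NULL on a miss. Real hardware checks
     * every tag at once, which is what makes this design expensive in logic. */
    static struct fa_entry *fa_lookup(struct fa_entry cache[], uint32_t addr)
    {
        uint32_t tag = addr >> BLOCK_BITS;
        for (size_t i = 0; i < NUM_ENTRIES; i++)
            if (cache[i].valid && cache[i].tag == tag)
                return &cache[i];
        return NULL;
    }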

Four Questions
A general framework for memory hierarchies:
1. Where can a block be placed?
– Different schemes have different restrictions
– Some have no restrictions (fully associative)
2. How is a block found?
– Fully associative – logic does all the work in one cycle
– Direct addressing does much of the work
3. How do we choose a block to replace? (a small sketch follows below)
– Option: randomly
– Option: LRU (least recently used)
4. What happens during a write?
– Write-back
– Write-through
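
As an illustration of question 3, here is one way LRU replacement could pick a victim within a set, assuming each way keeps a last-use counter (a common teaching model, not the slides' specific design):

    #include <stddef.h>
    #include <stdint.h>

    #define WAYS 4    /* assumed 4-way set associative */

    /* Pick the way with the oldest (smallest) last-use timestamp.
     * Random replacement would instead just pick rand() % WAYS. */
    static size_t choose_victim_lru(const uint32_t last_used[WAYS])
    {
        size_t victim = 0;
        for (size_t w = 1; w < WAYS; w++)
            if (last_used[w] < last_used[victim])
                victim = w;
        return victim;
    }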

Types of Misses (Three C's)
Compulsory misses or cold start misses
– When a block is first accessed by the program
– Impossible to eliminate these
– The right block size can reduce the number
Capacity misses
– Cache can't contain all blocks needed by the program
– i.e. the program keeps pulling blocks back in after they've been replaced by other referenced blocks
– Suggests the cache isn't big enough
Conflict misses or collision misses
– When multiple blocks compete for the same set/location
– Happens in set associative and direct mapped caches
– Doesn't happen in a fully associative cache

"Virtual Memory" (VM)
The cache problem:
– Convert a RAM address into a data item
The VM problem:
– Convert a convenient RAM address into the real one
Back to the old problem: software is expensive
– Eliminate trouble caused by varying RAM addresses
– How do we "load" a program into RAM?

Memory Management Problems
Relocation
Storage Protection
Fragmentation

Assumptions
User applications pose the biggest problems
Memory management focuses on user mode
– User programs run in restricted RAM
– Restrictions may be turned off for the OS
– I/O operations use "real" addresses

#1: Base+Limit Register
Memory controlled through 2 registers
– Base register – sets the program's starting address
– Limit register – sets the program's address space size
All of the program's addresses are relocated
– If greater than the limit, then an error
– Add the base value to get the 'real' RAM address
The program can't "see" RAM outside its area
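
A minimal C sketch of the check-and-relocate step, assuming the limit register holds the size of the program's space (names and types are illustrative):

    #include <stdint.h>
    #include <stdbool.h>

    /* Translate a program address through base and limit registers.
     * Returns false where hardware would raise a protection fault. */
    static bool base_limit_translate(uint32_t logical, uint32_t base,
                                     uint32_t limit, uint32_t *physical)
    {
        if (logical >= limit)            /* outside the program's area */
            return false;
        *physical = base + logical;      /* relocate into its RAM region */
        return true;
    }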

#2: Segmentation
Similar to 80x86/Pentium "segments"
– A generalization of Base+Limit
A set of registers tied to the high address bits
– The high bits select a segment register set
– The rest of the address is processed like Base+Limit
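
A sketch of that generalization, assuming the top 3 address bits select one of 8 segment registers (the field widths are illustrative assumptions):

    #include <stdint.h>
    #include <stdbool.h>

    #define SEG_BITS     3               /* assumed: high 3 bits pick the segment */
    #define OFFSET_BITS 29

    struct segment { uint32_t base, limit; };

    /* Base+Limit, but the high bits choose which register pair applies. */
    static bool seg_translate(const struct segment segs[1 << SEG_BITS],
                              uint32_t logical, uint32_t *physical)
    {
        uint32_t seg    = logical >> OFFSET_BITS;
        uint32_t offset = logical & ((1u << OFFSET_BITS) - 1);
        if (offset >= segs[seg].limit)   /* segment violation */
            return false;
        *physical = segs[seg].base + offset;
        return true;
    }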

Fragmentation Problems
External fragmentation
Internal fragmentation

#3: Paging
All "segments" are the same (small) size
– Minimizes the fragmentation problem
– 4K, for example
– So we don't need a "limit" register
Addresses translated with a 'page table'
– High order bits select the page table entry
– The selected entry points to the page in RAM
– Low order bits are the offset into the page
The CPU must support paging
– A special register points to the current page table
– It gets changed when switching processes
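
A minimal single-level translation sketch in C, assuming 4K pages and a flat page table indexed by the virtual page number (the PTE layout is illustrative):

    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_BITS 12                 /* 4K pages */

    struct pte {
        uint32_t frame : 20;             /* physical page number */
        uint32_t valid : 1;              /* page is in RAM */
    };

    static bool page_translate(const struct pte table[], uint32_t vaddr,
                               uint32_t *paddr)
    {
        uint32_t vpn    = vaddr >> PAGE_BITS;               /* high bits: index into the table */
        uint32_t offset = vaddr & ((1u << PAGE_BITS) - 1);  /* low bits: offset within the page */
        if (!table[vpn].valid)
            return false;                                   /* page fault */
        *paddr = ((uint32_t)table[vpn].frame << PAGE_BITS) | offset;
        return true;
    }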

Virtual Memory
Uses RAM as a cache
– "Real" data is all on the hard drive
– Pages travel to RAM from the hard drive as needed
– Pages are sent to the hard drive if not being used
Page table entry (PTE) indicates options
– Page is in RAM right now ("valid")
– Page is not in RAM, but on the hard drive
– Page doesn't exist
Other PTE info
– Page has been used/read
– Page is "read only"
– Page is "dirty"
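
One hypothetical way to encode those options in a PTE; the exact bit assignments are an assumption for illustration, not a required layout:

    #include <stdint.h>

    struct vm_pte {
        uint32_t frame     : 20;  /* physical page number, meaningful only if valid */
        uint32_t valid     : 1;   /* page is in RAM right now */
        uint32_t on_disk   : 1;   /* page exists but currently lives on the hard drive */
        uint32_t used      : 1;   /* page has been read or written recently */
        uint32_t read_only : 1;   /* writes should trap */
        uint32_t dirty     : 1;   /* modified since it was loaded; must be written back */
    };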

Implications of Paging
Good things
– Programs can be larger than physical RAM
– Programs can't see each other's RAM
– Bits of RAM can be shared through mapping
Problems
– Thrashing and working sets
– Slow translation speeds
Translation Lookaside Buffer (TLB)
– Speeds up translation: yet another specialized sort of cache
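
A rough sketch of why the TLB helps: it is a tiny cache of recent translations consulted before the page table. The sizes and field names here are assumptions.

    #include <stdint.h>
    #include <stdbool.h>

    #define TLB_ENTRIES 16      /* assumed */
    #define PAGE_BITS   12      /* 4K pages */

    struct tlb_entry { bool valid; uint32_t vpn, frame; };

    /* Returns true on a TLB hit; on a miss the page table must be walked. */
    static bool tlb_lookup(const struct tlb_entry tlb[], uint32_t vaddr,
                           uint32_t *paddr)
    {
        uint32_t vpn = vaddr >> PAGE_BITS;
        for (int i = 0; i < TLB_ENTRIES; i++)
            if (tlb[i].valid && tlb[i].vpn == vpn) {
                *paddr = (tlb[i].frame << PAGE_BITS) |
                         (vaddr & ((1u << PAGE_BITS) - 1));
                return true;
            }
        return false;
    }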

Page System Sizing Questions
Size of the page table, given:
– 4K pages
– 24-bit virtual addresses
– 32-bit physical ("real") RAM addresses
– 4 bits for valid, dirty, protected, used
Size of the page table, given:
– 8K pages
– 32-bit virtual addresses
– 36-bit physical ("real") RAM addresses
– 4 bits for valid, dirty, protected, used
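
One way to work the first case, under the common assumption that each entry stores a physical page number plus the 4 status bits: 4K pages mean a 12-bit offset, so the virtual page number is 24 - 12 = 12 bits and the table has 2^12 = 4096 entries. The physical page number is 32 - 12 = 20 bits, so each entry needs about 20 + 4 = 24 bits (3 bytes), for roughly 4096 x 3 B = 12 KB. The second case works the same way, starting from a 13-bit offset.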

More Toys?
Magnetic storage
Hard Drives

That's it. Questions?

Creative Commons License
This work is licensed under the Creative Commons Attribution-Share Alike 3.0 United States License. To view a copy of this license, visit or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.