The Three C's of Misses 7.5
Compulsory Misses
- The first time a memory location is accessed, it is always a miss
- Also known as cold-start misses
- Only way to decrease the miss rate is to increase the block size
Capacity Misses
- Occur when a program is using more data than can fit in the cache
- Some misses will result simply because the cache isn't big enough
- Increasing the size of the cache solves this problem
Conflict Misses
- Occur when a block forces out another block with the same index
- Increasing associativity reduces conflict misses
- Worst in direct-mapped caches, non-existent in fully associative caches
The Cost of a Cache Miss 7.2
For a memory access, assume:
- 1 clock cycle to send the address to memory
- 25 clock cycles for each DRAM access (2 ns clock cycle, 50 ns access time)
- 1 clock cycle to send each resulting data word (this actually depends on the bus speed)
Miss access time (4-word block):
- 4 x (address + access + sending data word)
- 4 x (1 + 25 + 1) = 108 cycles for each miss
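The miss-penalty arithmetic above can be checked in a few lines. The timing constants are the slide's assumed figures (1-cycle address send, 25-cycle DRAM access, 1 cycle per returned word):

```python
# Miss penalty for a 4-word block with a plain, non-interleaved memory.
# Constants are the slide's assumptions, not measured values.
ADDR_CYCLES = 1      # send address to memory
ACCESS_CYCLES = 25   # one DRAM access (50 ns at a 2 ns clock)
XFER_CYCLES = 1      # send one data word back over the bus
WORDS_PER_BLOCK = 4

# Each word pays the full address + access + transfer cost in turn.
miss_penalty = WORDS_PER_BLOCK * (ADDR_CYCLES + ACCESS_CYCLES + XFER_CYCLES)
print(miss_penalty)  # 108
```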
Memory Interleaving 7.2
Default
- One memory, 4 bytes wide, between the cache and the bus (CPU - cache - bus - memory)
- Must finish accessing one word before starting the next access
- (1 + 25 + 1) x 4 = 108 cycles per 4-word miss
Interleaving
- Four separate memories (banks: Memory 0 through Memory 3), each 1/4 size, with addresses spread out among them
- Begin accessing one word, and while waiting, start accessing the other three words (pipelining)
- 1 + 25 + 4 x 1 = 30 cycles per 4-word miss
- Interleaving works perfectly with caches
- Sophisticated DRAMs (EDO, SDRAM, etc.) provide support for this
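The two organizations can be compared directly. In the sequential case every word pays the full address + access + transfer cost; in the interleaved case the four bank accesses overlap, so only one address send, one access latency, and the four word transfers remain on the critical path (same assumed timing constants as the previous slide):

```python
# Sequential vs. 4-way interleaved memory, using the slide's assumed timings.
ADDR, ACCESS, XFER, WORDS = 1, 25, 1, 4

# Plain memory: each of the 4 words is fetched start-to-finish in turn.
sequential = WORDS * (ADDR + ACCESS + XFER)

# Interleaved: all 4 bank accesses proceed in parallel after one address
# send; only the word transfers over the bus must happen one at a time.
interleaved = ADDR + ACCESS + WORDS * XFER

print(sequential, interleaved)  # 108 30
```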
Too Little Memory 7.4
- You're running a huge program that requires 32 MB
- Your PC has only 16 MB available...
- Rewrite your program so that it implements overlays:
  - Execute the first portion of code (fit it in the available memory)
  - When you need more memory...
    - Find some memory that isn't needed right now
    - Save it to disk
    - Use that memory for the latter portion of code
  - And so on...
- The memory is to disk as registers are to memory
- We're using the disk as an extension of memory
- Can we use the same techniques as caching?
A Memory Hierarchy 7.4
Extend the hierarchy: main memory acts like a cache for the disk
- Cache: about $20/MByte, <2 ns access time, 512 KB typical
- Memory: about $0.15/MByte, 50 ns access time, 256 MB typical
- Disk: about $0.0015/MByte, 15 ms (15,000,000 ns) access time, 40 GB typical
(Figure: CPU registers at the top; loads, instruction fetches, and stores go through the cache to main memory (DRAM), with the disk below.)
Virtual Memory 7.4
Idea: keep only the portions of a program (code, data) that are currently needed in main memory
- Currently unused data is saved on disk, ready to be brought in when needed
- Appears as a very large virtual memory (limited only by the disk size)
Advantages:
- Programs that require large amounts of memory can be run (as long as they don't need it all at once)
- Multiple programs can be in virtual memory at once
Disadvantages:
- The memory a program needs may all be on disk
- The operating system has to manage virtual memory
The Virtual Memory Concept 7.4
- Virtual memory space: all possible memory addresses (4 GB in 32-bit systems). All that can be conceived.
- Disk swap space: area on the hard disk that can be used as an extension of memory (typically 100 MB). All that can be used.
- Main memory: physical memory (typically 64 MB). All that physically exists.
The Virtual Memory Concept
- An address that can be conceived of but doesn't correspond to any memory: accessing it will produce an error.
- An address that can be accessed but currently exists only on disk: it must be read into main memory before being used. A table maps from its virtual address to its disk location.
- An address that can be accessed immediately, since it is already in memory: a table maps from its virtual address to its physical address. There will also be a back-up location on disk.
The Process 7.4
The CPU deals with virtual addresses. Steps to accessing memory with a virtual address:
1. Convert the virtual address to a physical address
   - A special table makes this easy
   - The table may indicate that the desired address is on disk, but not in physical memory
     - Read the location from the disk into memory (this may require moving something else out of memory to make room)
2. Do the memory access using the physical address
   - Check the cache first (note: the cache uses only physical addresses)
   - Update the cache if needed
Making Virtual Memory Work 7.4
V.M. is like a cache system: main memory is a cache for virtual memory
Differences:
- The miss penalty is huge (7,000,000 cycles)
  - Increase the block size to about 8 KB
  - Disk transfers have a large startup time, but data transfer is relatively fast once started
- Blocks in V.M. are called pages
- Even on misses, V.M. must provide info on the disk location
  - The V.M. system must have an entry for all possible locations
- On a hit, the V.M. system provides the physical address in main memory, not the actual data
  - Saves room (one address rather than 8 KB of data)
- V.M. systems typically have a very small miss (page fault) rate
Virtual to Physical Mapping 7.4
Both addresses split into a page number and a page offset:
- Virtual address = virtual page number + page offset
- Physical address = physical page number + page offset
Example:
- 4 GB (32-bit) virtual address space
- 32 MB (25-bit) physical address space
- 8 KB (13-bit) page size (block size)
Translation:
- A 32-bit virtual address is given to the V.M. hardware
- The virtual page number (the index) is derived from it by removing the page (block) offset
- The virtual page number is looked up in a page table (no tag - all entries are unique)
- When found, the entry is either:
  - The physical page number, if the page is in memory
  - The disk address, if it is not in memory (a page fault - may involve reading from disk)
- If not found, the address is invalid
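The split-and-lookup translation above is easy to sketch. This uses the slide's parameters (8 KB pages, so a 13-bit offset); the mapping of virtual page 2 to physical page 5 is a made-up example, and the page table is just a dict standing in for the real structure:

```python
# Sketch of virtual-to-physical translation with 8 KB (2^13-byte) pages.
PAGE_BITS = 13
PAGE_SIZE = 1 << PAGE_BITS

def translate(vaddr, page_table):
    vpn = vaddr >> PAGE_BITS           # virtual page number = upper address bits
    offset = vaddr & (PAGE_SIZE - 1)   # the page offset is unchanged by translation
    ppn = page_table[vpn]              # a missing entry (KeyError) = invalid address
    return (ppn << PAGE_BITS) | offset # physical page number + same offset

# Hypothetical mapping: virtual page 2 currently lives in physical page 5.
paddr = translate((2 << PAGE_BITS) + 0x1A4, {2: 5})
print(hex(paddr))  # 0xa1a4
```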
Virtual Memory: 8 KB page size, 16 MB memory 7.4
(Figure: a page table indexed by virtual page number. Each entry holds a valid bit and either a physical page number or a disk address. The virtual address splits into an index and a page offset; the physical address is the physical page number followed by the same offset. 4 GB / 8 KB = 2^19 = 512K entries.)
Virtual Memory Example 7.4
System with a 20-bit virtual address, 16 KB pages, and 256 KB of physical memory
- The page offset takes 14 bits, leaving 6 bits for the V.P.N. and 4 bits for the P.P.N.
- Page table entries hold (index: V.P.N.) a valid bit and either a physical page number or a disk sector address
- First access: the entry is valid, PPN = 0010, so the physical address is the PPN followed by the page offset
- Second access: the entry is invalid - a page fault to the entry's disk sector
  - Pick a page to "kick out" of memory (use LRU)
  - Read the data from sector 1239 into the freed physical page (PPN 1010) and update both page table entries
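The hit / page-fault / LRU-eviction sequence in this example can be modeled with a toy page table. The parameters follow the slide (6-bit VPN, 4-bit PPN), but every VPN, PPN, and sector number below is made up for illustration:

```python
# A toy page table with LRU replacement (20-bit V.A., 16 KB pages ->
# 6-bit VPN, 4-bit PPN). All specific values here are illustrative.

class PageTable:
    def __init__(self, entries):
        self.entries = entries   # vpn -> {"valid", "ppn", "sector"}
        self.clock = 0
        self.last_used = {}      # vpn -> time of last access, for LRU

    def access(self, vpn):
        self.clock += 1
        e = self.entries[vpn]
        if not e["valid"]:  # page fault: evict the least recently used page
            victim = min((v for v, x in self.entries.items() if x["valid"]),
                         key=lambda v: self.last_used.get(v, 0))
            e["ppn"] = self.entries[victim]["ppn"]  # reuse the victim's frame
            self.entries[victim]["valid"] = False   # victim now lives only on disk
            e["valid"] = True                       # (read e["sector"] from disk here)
        self.last_used[vpn] = self.clock
        return e["ppn"]

pt = PageTable({
    0b000001: {"valid": True,  "ppn": 0b0010, "sector": None},
    0b000010: {"valid": True,  "ppn": 0b1010, "sector": None},
    0b000011: {"valid": False, "ppn": None,   "sector": 1239},
})
ppn = pt.access(0b000001)        # hit: returns PPN 0010
fault_ppn = pt.access(0b000011)  # fault: LRU page 0b000010 evicted, its frame 1010 reused
```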
Virtual Memory Issues 7.4
A slot in the page table for every possible virtual address?
- With 8 KB (2^13) pages, there are 2^(32-13) (512K) entries for a 32-bit V.A.
- Each entry takes around 15 bits (round up to 16 bits...)
- That's 1 MB for the page table
Solutions:
- Put the page table itself in main memory rather than on the CPU chip
- Make the page table only big enough for the amount of memory being used
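The 1 MB figure follows directly from the slide's parameters (32-bit virtual addresses, 13-bit page offset, 16-bit entries):

```python
# Size of a flat page table: one entry per virtual page.
VA_BITS, PAGE_BITS, ENTRY_BYTES = 32, 13, 2  # 8 KB pages, 16-bit entries

entries = 1 << (VA_BITS - PAGE_BITS)  # 2^19 = 512K entries
table_bytes = entries * ENTRY_BYTES   # 1 MB just for the table
print(entries, table_bytes)  # 524288 1048576
```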
Write Issues 7.5
Write Through - update both disk and memory
+ Easy to implement
- Requires a write buffer
- Requires a separate disk write for every write to memory
- A write miss requires reading in the page first, then writing back the single word
Write Back - write only to main memory; write to the disk only when the block is replaced
+ Writes are fast
+ Multiple writes to a page are combined into one disk write
- Must keep track of when a page has been written (dirty bit)
- A read miss may cause a page to be replaced and written back
Individual Space 7.4
- Every process wants its own space; ideally, it would like the entire computer to itself
- Sharing the computer's memory creates problems
  - Sometimes a program will be loaded at location 4000, sometimes at some other location, etc.
- Use virtual memory to fool each process into thinking that it starts at location 0
  - The CPU uses virtual addresses - start the program at virtual page 0, even if it's not physical page 0
- Each process must have an individual page table to make this work
  - Every process's virtual page 0 must point to a different physical page, and so on
Example 7.4
Page tables for two processes, A and B (each entry, indexed by VPN: a valid bit and a PPN or disk sector address)
- A given virtual page in process A and the same virtual page in process B point to different physical pages
- Both processes can start at the same virtual location, but have different data
- Note: physical page 1100 is shared - it appears in both page tables
Protection Using Virtual Memory 7.4
We want to protect different processes from each other
- A process can't read or write any other process's memory, unless specifically allowed
Providing separate page tables fixes this problem
- Each process can only access pages through its page table
- As long as the page table doesn't point to pages belonging to other processes, no problem
- Since only the OS can write the page tables, the system is safe
Protection Example 7.4
Same page tables for processes A and B as before
- How can process A access one of process B's virtual pages?
- None of process A's page table entries point to that physical page - impossible to access it!
- Note: physical page 1100 appears in both page tables; since it is shared, protection is violated there
Shooting Ourselves in the Foot 7.4
Virtual memory access:
1. Look up the page number in the page table
2. Access memory
Each memory access becomes two accesses - even for addresses stored in the cache
Solution: cache the page table entries in a special cache
- The Translation Lookaside Buffer (TLB) is just a cache that holds recently accessed page table entries
- A TLB hit means that we don't have to actually look in the page table
TLB Design 7.4
We want the TLB to have a high hit rate
- Fortunately, pages are huge, providing very high locality
- The TLB usually has only a small number of entries (e.g., 64) and is fully associative
- Typical hit rates are 98.0 to 99.9%
The TLB should store anything needed from the page table
- Physical page number
- Valid bit, dirty bit
Warning: the TLB can violate protection after a process switch
- Flush the TLB on each process switch
Virtual Memory Benefits 7.4
Virtual memory frees the programmer from many issues
- Large programs can run in smaller systems
- It doesn't matter what else is running on the system: all programs start at a virtual address of zero and can access the entire address space
- Virtual memory protects different processes from each other
Evidence of Virtual Memory at Work 7.4
Thrashing
- If a program is just too big, it will constantly page fault to read in new pages (and throw out ones it needs)
Paging out
- If a program has been sitting idle for a long time, it is likely to be completely paged out to disk
- When you return to the program, it will start out slow as it pages all of its memory back in
Loading
- Bringing in a new program may require writing pages for an old one out to disk
Exercise
Three processes A, B and C use the following address translation tables in a system with a 20-bit address bus and a page size of 8 KBytes. For each of the following accesses, determine whether there is:
a) A fault, and what kind
b) A valid memory access
c) A valid fetch from hard disk to memory, and the new contents of the address translation table
i) Process A: 103B4
ii) Process B: 12789
iii) Process A: 03278
iv) Process C: 0A1A2
Are there any other potential memory protection faults with the address translation table contents?
(Each process's table lists, per VPN index: a valid bit, a PPN or disk sector address, and a last-used time.)