The Three C's of Misses (7.5)

Compulsory misses
- The first time a memory location is accessed, it is always a miss
- Also known as cold-start misses
- The only way to decrease the miss rate is to increase the block size

Capacity misses
- Occur when a program is using more data than can fit in the cache
- Some misses result simply because the cache isn't big enough
- Increasing the size of the cache solves this problem

Conflict misses
- Occur when a block forces out another block with the same index
- Increasing associativity reduces conflict misses
- Worst in direct-mapped caches, non-existent in fully associative caches
The cost of a cache miss (7.2)

For a memory access, assume:
- 1 clock cycle to send the address to memory
- 25 clock cycles for each DRAM access (2 ns clock cycle, 50 ns access time)
- 1 clock cycle to send each resulting data word back (this actually depends on the bus speed)

Miss access time for a 4-word block:
- 4 x (address + access + sending data word) = 4 x (1 + 25 + 1) = 108 cycles for each miss
Memory Interleaving (7.2)

Default (a single memory on a 4-byte bus):
- Must finish accessing one word before starting the next access
- (1 + 25 + 1) x 4 = 108 cycles per 4-word miss

Interleaving (four separate memories, each 1/4 the size, sharing the bus):
- Spread the addresses out among the memories
- Begin accessing one word, and while waiting, start accessing the other three words (pipelining)
- 1 + 25 + (4 x 1) = 30 cycles per 4-word miss

Interleaving works perfectly with caches. Sophisticated DRAMs (EDO, SDRAM, etc.) provide support for this.

[Figure: CPU-cache-memory diagrams contrasting a single memory on the bus with four interleaved banks (Memory 0 through Memory 3), with timing diagrams for the 108-cycle and 30-cycle cases.]
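A minimal sketch of the two timing calculations, using the assumed cycle counts from these slides (1 cycle to send the address, 25 for the DRAM access, 1 per transferred word, four banks):

```python
# Assumed per-word costs from the slides above.
ADDRESS_CYCLES, DRAM_CYCLES, TRANSFER_CYCLES = 1, 25, 1

def sequential_miss(words):
    # One memory: each word pays the full address + access + transfer cost.
    return words * (ADDRESS_CYCLES + DRAM_CYCLES + TRANSFER_CYCLES)

def interleaved_miss(words):
    # Four banks: the DRAM accesses overlap, so only the word transfers serialize.
    return ADDRESS_CYCLES + DRAM_CYCLES + words * TRANSFER_CYCLES

print(sequential_miss(4))   # 108 cycles
print(interleaved_miss(4))  # 30 cycles
```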
Too Little Memory (7.4)

- You're running a huge program that requires 32MB
- Your PC has only 16MB available...
- Rewrite your program so that it implements overlays:
  - Execute the first portion of code (fit it in the available memory)
  - When you need more memory, find some memory that isn't needed right now, save it to disk, and use that memory for the latter portion of code
  - And so on...
- Memory is to disk as registers are to memory: we're using the disk as an extension of memory
- Can we use the same techniques as caching?
A Memory Hierarchy (7.4)

Extend the hierarchy: main memory acts like a cache for the disk.

Registers <-> Cache <-> Main Memory (DRAM) <-> Disk (CPU loads, stores, and instruction fetches go through this hierarchy)

- Cache: about $20/MByte, <2 ns access time, 512KB typical
- Memory: about $0.15/MByte, 50 ns access time, 256MB typical
- Disk: about $0.0015/MByte, 15 ms (15,000,000 ns) access time, 40GB typical
Virtual Memory (7.4)

Idea: keep only the portions of a program (code, data) that are currently needed in main memory. Currently unused data is saved on disk, ready to be brought in when needed. The result appears as a very large virtual memory (limited only by the disk size).

Advantages:
- Programs that require large amounts of memory can be run (as long as they don't need it all at once)
- Multiple programs can be in virtual memory at once

Disadvantages:
- The memory a program needs may all be on disk
- The operating system has to manage virtual memory
The Virtual Memory Concept (7.4)

- Virtual memory space: all possible memory addresses (4GB in 32-bit systems). All that can be conceived.
- Disk swap space: area on the hard disk that can be used as an extension of memory (typically 100MB). All that can be used.
- Main memory: physical memory (typically 64MB). All that physically exists.
The Virtual Memory Concept (7.4)

Three kinds of virtual addresses:
- An address that can be conceived of, but doesn't correspond to any memory. Accessing it will produce an error.
- An address that can be accessed, but currently exists only on disk; it must be read into main memory before being used. A table maps from its virtual address to the disk location.
- An address that can be accessed immediately, since it is already in memory. A table maps from its virtual address to its physical address. There will also be a back-up location on disk.

[Figure: three example addresses, labeled 'Error'; 'Disk Address: 58984' / 'Not in main memory'; and 'Physical Address: 883232' / 'Disk Address: 322321'.]
The Process (7.4)

The CPU deals with virtual addresses. Steps to access memory with a virtual address:

1. Convert the virtual address to a physical address
   - A special table makes this easy
   - The table may indicate that the desired address is on disk, but not in physical memory; in that case, read the location from the disk into memory (this may require moving something else out of memory to make room)
2. Do the memory access using the physical address
   - Check the cache first (note: the cache uses only physical addresses)
   - Update the cache if needed
Making Virtual Memory Work (7.4)

V.M. is like a cache system: main memory is a cache for virtual memory.

Differences:
- The miss penalty is huge (about 7,000,000 cycles), so increase the block size to about 8KB. Disk transfers have a large startup time, but data transfer is relatively fast once started.
- Blocks in V.M. are called pages.
- Even on misses, V.M. must provide info on the disk location, so the V.M. system must have an entry for all possible locations.
- When there's a hit, the V.M. system provides the physical address in main memory, not the actual data. This saves room (one address rather than 8KB of data).
- V.M. systems typically have a miss (page fault) rate of 0.00001 - 0.0001%.
Virtual to Physical Mapping (7.4)

Example: 4GB (32-bit) virtual address space, 32MB (25-bit) physical address space, 8KB (13-bit) page size (block size).

- Virtual address: bits 31-13 are the virtual page number, bits 12-0 are the page offset.
- Physical address: bits 24-13 are the physical page number, bits 12-0 are the page offset.

Translation:
- A 32-bit virtual address is given to the V.M. hardware
- The virtual page number (the index) is derived from it by removing the page (block) offset
- The virtual page number is looked up in a page table (no tag - all entries are unique)
- When found, the entry is either:
  - The physical page number, if the page is in memory
  - The disk address, if it is not in memory (a page fault); this may involve reading from disk
- If not found, the address is invalid
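A minimal sketch of the bit manipulation for this example (13-bit page offset, so 8KB pages); the page-table contents and the addresses used are hypothetical, purely for illustration:

```python
PAGE_OFFSET_BITS = 13                      # 8KB pages
OFFSET_MASK = (1 << PAGE_OFFSET_BITS) - 1

def translate(virtual_address, page_table):
    """Map a 32-bit virtual address to a 25-bit physical address."""
    vpn = virtual_address >> PAGE_OFFSET_BITS      # drop the page offset
    offset = virtual_address & OFFSET_MASK         # keep the low 13 bits
    ppn = page_table[vpn]                          # raises KeyError if not mapped
    return (ppn << PAGE_OFFSET_BITS) | offset      # rebuild the physical address

# Hypothetical mapping: virtual page 0x12345 lives in physical page 0x3A.
page_table = {0x12345: 0x3A}
print(hex(translate(0x2468B6FF, page_table)))      # prints 0x756ff (PPN 0x3a, offset 0x16ff)
```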
Virtual Memory: 8KB page size, 16MB memory (7.4)

- Virtual address (32 bits): bits 31-13 are the virtual page number (19 bits, the index into the page table), bits 12-0 are the page offset.
- Physical address (24 bits): bits 23-13 are the physical page number (11 bits), bits 12-0 are the page offset.
- The page table has 4GB / 8KB = 2^19 = 512K entries (indexed 0, 1, 2, ..., 512K-1), each holding a valid bit and either a physical page number or a disk address.
Virtual memory example (7.4)

System with a 20-bit virtual address, 16KB pages, and 256KB of physical memory. The page offset takes 14 bits, leaving 6 bits for the virtual page number and 4 bits for the physical page number.

Page table (VPN index, valid bit, physical page number / disk address):
  000000  1  1001
  000001  0  sector 5000...
  000010  1  0010
  000011  0  sector 4323...
  000100  1  1011
  000101  1  1010
  000110  0  sector 1239...
  000111  1  0001

Access to 0000 1000 1100 1010 1010:
- VPN = 000010, valid, PPN = 0010
- Physical address: 00 1000 1100 1010 1010

Access to 0001 1001 0011 1100 0000:
- VPN = 000110, not valid: page fault to sector 1239...
- Pick a page to "kick out" of memory (use LRU). Assume LRU is VPN 000101 for this example.
- Mark VPN 000101 invalid (0, sector xxxx...), freeing its physical page 1010
- Read the data from sector 1239 into PPN 1010 and set VPN 000110 to valid with PPN 1010
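A sketch of the lookup and page-fault handling walked through above (20-bit virtual addresses, 16KB pages, so a 14-bit offset and 6-bit VPN). The table contents are the ones from the example, and the LRU victim is simply passed in, since the slide assumes it:

```python
OFFSET_BITS = 14                    # 16KB pages -> 14-bit page offset
OFFSET_MASK = (1 << OFFSET_BITS) - 1

# Page table from the example: VPN -> (valid bit, PPN or disk location).
page_table = {
    0b000000: (True,  0b1001), 0b000001: (False, "sector 5000"),
    0b000010: (True,  0b0010), 0b000011: (False, "sector 4323"),
    0b000100: (True,  0b1011), 0b000101: (True,  0b1010),
    0b000110: (False, "sector 1239"), 0b000111: (True,  0b0001),
}

def access(virtual_address, lru_vpn):
    vpn, offset = virtual_address >> OFFSET_BITS, virtual_address & OFFSET_MASK
    valid, entry = page_table[vpn]
    if not valid:
        # Page fault: evict the LRU page, reuse its physical page, read from disk.
        _, victim_ppn = page_table[lru_vpn]
        page_table[lru_vpn] = (False, "sector xxxx")          # victim now lives on disk
        print(f"page fault: reading {entry} into PPN {victim_ppn:04b}")
        page_table[vpn] = (True, victim_ppn)
        entry = victim_ppn
    return (entry << OFFSET_BITS) | offset

print(f"{access(0b0000_1000_1100_1010_1010, lru_vpn=0b000101):018b}")  # hit
print(f"{access(0b0001_1001_0011_1100_0000, lru_vpn=0b000101):018b}")  # page fault
```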
Virtual Memory Issues (7.4)

A slot in the page table for every possible virtual address?
- With 8KB (2^13) pages, there are 2^(32-13) = 512K entries for a 32-bit virtual address
- Each entry takes around 15 bits (round up to 16 bits...)
- That's 1MB for the page table

Solutions:
- Put the page table itself in main memory rather than on the CPU chip
- Make the page table only big enough for the amount of memory being used
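A quick check of that size estimate (a sketch of the arithmetic, assuming each entry is rounded up to 16 bits):

```python
# 32-bit virtual addresses, 8KB (2**13) pages, one 16-bit entry per virtual page.
entries = 2 ** (32 - 13)                 # 512K page-table entries
bytes_per_entry = 2                      # ~15 bits per entry, rounded up to 16
print(entries, entries * bytes_per_entry // 2**20, "MB")   # 524288 1 MB
```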
Write issues (7.5)

Write through - update both disk and memory
+ Easy to implement
- Requires a write buffer
- Requires a separate disk write for every write to memory
- A write miss requires reading in the page first, then writing back the single word

Write back - write only to main memory; write to the disk only when the page is replaced
+ Writes are fast
+ Multiple writes to a page are combined into one disk write
- Must keep track of whether the page has been written (dirty bit)
- A read miss may cause a page to be replaced and written back
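A small sketch of the write-back bookkeeping; the per-page record and the numbers reused from the earlier example are illustrative, just to show where the dirty bit fits:

```python
# Write back: writes go only to memory; a page's dirty bit records whether
# it must be written to disk when it is eventually replaced.
resident = {}   # vpn -> {"ppn": ..., "dirty": ..., "disk": ...}

def write_word(vpn):
    resident[vpn]["dirty"] = True                  # memory updated, disk not touched

def evict(vpn):
    page = resident.pop(vpn)
    if page["dirty"]:                              # only dirty pages cost a disk write
        print(f"writing PPN {page['ppn']} back to {page['disk']}")
    return page["ppn"]                             # physical page freed for reuse

resident[0b000101] = {"ppn": 0b1010, "dirty": False, "disk": "sector 1239"}
write_word(0b000101)
evict(0b000101)    # triggers the write-back; a clean page would be dropped silently
```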
Individual Space (7.4)

- Every process wants its own space; ideally, it would like the entire computer to itself
- Sharing the computer's memory creates problems: sometimes a program will be at location 4000, sometimes at location 5888820, etc.
- Use virtual memory to fool each process into thinking that it starts at location 0
  - The CPU uses virtual addresses - start the program at virtual page 0, even if it's not physical page 0
- Each process must have an individual page table to make this work: every process's virtual page 0 must point to a different physical page, and so on
Example (7.4)

Page table for process A (VPN index, valid bit, PPN / disk address):
  000000  1  1001
  000001  0  sector 5000...
  000010  1  1100
  000011  0  sector 4323...
  000100  1  1011
  000101  1  0101
  000110  0  sector 1239...
  000111  1  0001

Page table for process B (VPN index, valid bit, PPN / disk address):
  000000  1  0010
  000001  1  0000
  000010  1  0011
  000011  1  1100
  000100  0  sector 2311...
  000101  0  sector 158...
  000110  0  sector 555...
  000111  1  0100

Virtual page 000000 of process A points to physical page 1001, while virtual page 000000 of process B points to physical page 0010: both processes can start at location 000000, yet see different data. Note: physical page 1100 is shared (A's virtual page 000010 and B's virtual page 000011).
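A toy sketch of the idea, with one page table per process; the entries shown are a subset of the two tables above, and the 16KB page size of the earlier example system is assumed:

```python
# One page table per process: the same virtual page number can map to different
# physical pages, and two processes can deliberately share one physical page.
OFFSET_BITS = 14   # 16KB pages, as in the example system above

page_tables = {
    "A": {0b000000: 0b1001, 0b000010: 0b1100},   # a few of process A's valid entries
    "B": {0b000000: 0b0010, 0b000011: 0b1100},   # a few of process B's valid entries
}

def translate(process, vpn):
    return page_tables[process][vpn] << OFFSET_BITS   # base physical address of the page

print(f"{translate('A', 0b000000):018b}")   # A's virtual page 0 -> physical page 1001
print(f"{translate('B', 0b000000):018b}")   # B's virtual page 0 -> physical page 0010
print(translate("A", 0b000010) == translate("B", 0b000011))   # True: both reach page 1100
```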
Protection Using Virtual Memory (7.4)

- We want to protect different processes from each other: a process can't read or write any other process's memory, unless specifically allowed
- Providing separate page tables fixes this problem
  - Each process can only access pages through its page table
  - As long as the page table doesn't point to pages belonging to other processes, there is no problem
- Since only the OS can write the page tables, the system is safe
Protection Example (7.4)

(Same page tables for processes A and B as on the previous slide.)

- How can process A access process B's virtual page 000010? That page lives in physical page 0011, and none of process A's virtual pages point to physical page 0011 - it is impossible for A to access it!
- Note: since physical page 1100 is shared, protection is violated there.
Shooting Ourselves in the Foot (7.4)

Virtual memory access:
- Look up the page number in the page table
- Access memory

Each memory access becomes two accesses - even for addresses stored in the cache.

Solution: cache the page table entries in a special cache. The Translation Lookaside Buffer (TLB) is just a cache that holds recently accessed page table entries. A TLB hit means that we don't have to actually look in the page table.
TLB Design (7.4)

- We want the TLB to have a high hit rate. Fortunately, pages are huge, providing super-high locality.
- The TLB usually has only a small number of entries (e.g. 64) and is fully associative.
- Typical hit rates are 98.0 to 99.9%.
- The TLB should store anything needed from the page table: the physical page number, the valid bit, and the dirty bit.
- Warning: the TLB can violate protection after a process switch, so flush the TLB on each process switch.
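A minimal sketch of a TLB sitting in front of the page table: a small, fully associative cache of recent translations that is flushed on a process switch. The backing page table, the LRU replacement policy, and the sizes are illustrative assumptions:

```python
from collections import OrderedDict

TLB_ENTRIES = 64                     # small and fully associative, as above

tlb = OrderedDict()                  # vpn -> ppn, least recently used first
page_table = {0b000010: 0b0010}      # hypothetical backing page table

def lookup(vpn):
    if vpn in tlb:                   # TLB hit: no page-table access needed
        tlb.move_to_end(vpn)
        return tlb[vpn]
    ppn = page_table[vpn]            # TLB miss: go to the page table
    tlb[vpn] = ppn
    if len(tlb) > TLB_ENTRIES:       # replace the least recently used entry
        tlb.popitem(last=False)
    return ppn

def process_switch():
    tlb.clear()                      # old translations would violate protection

print(lookup(0b000010))   # miss: filled from the page table
print(lookup(0b000010))   # hit: served from the TLB
```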
Virtual Memory Benefits (7.4)

Virtual memory frees the programmer from many issues:
- Large programs can run in smaller systems
- It doesn't matter what else is running on the system: all programs start at a virtual address of zero and can access the entire address space
- Virtual memory protects different processes from each other
Evidence of Virtual Memory at Work (7.4)

- Thrashing: if a program is just too big, it will constantly page fault to read in new pages (and throw out ones it needs)
- Paging out: if a program has been sitting idle for a long time, it is likely to be completely paged out to disk. When you return to the program, it will start out slow as it pages all of its memory back in.
- Loading: bringing in a new program may require writing pages for an old one out to disk
Exercise

Three processes A, B and C use the following address translation tables in a system with a 20-bit address bus and a page size of 8 KBytes. For the following accesses, determine whether there is (a) a fault, and what kind, (b) a valid memory access, or (c) a valid fetch from hard disk to memory, and give the new contents of the address translation table:

i)   Process A: 103B4
ii)  Process B: 12789
iii) Process A: 03278
iv)  Process C: 0A1A2

Are there any other potential memory protection faults with the address translation table contents?

Address translation tables (for each process: V, PPN / disk addr, Last used (time)):

VPN index | Process A             | Process B             | Process C
000000    | 1  0010         (10)  | 0  Sector 3248  (90)  | 1  0100         (2)
000001    | 1  0001         (30)  | 0  Sector 4352  (77)  | 0  Sector 2814  (48)
000010    | 0  Sector 3452  (52)  | 1  0100         (32)  | 1  0000         (23)
000011    | 1  0110         (28)  | 1  0101         (56)  | 1  0011         (31)
000100    | 0  Sector 2240  (43)  | 1  0011         (5)   | 0  Sector 1112  (68)