Lecture: Large Caches, Virtual Memory


Lecture: Large Caches, Virtual Memory
Topics: shared/private caches, virtual memory, TLBs (Sections 2.4, B.4, B.5)

Intel 80-Core Prototype – Polaris
Prototype chip with an entire die of SRAM cache stacked upon the cores

Example Intel Studies
[Figure: eight cores with private L1s, four L2 caches, an interconnect, a shared L3, and memory/IO interfaces; L3 cache sizes up to 32 MB. From Zhao et al., CMP-MSI Workshop 2007.]

Intel Montecito Cache
Two cores, each with a private 12 MB L3 cache and 1 MB L2
Naffziger et al., Journal of Solid-State Circuits, 2006

Shared vs. Private Caches in Multi-Core
What are the pros/cons of a shared L2 cache?
[Figure: four cores P1–P4 with private L1s backed by a single shared L2, contrasted with four cores that each have a private L1 and a private L2.]

Shared vs. Private Caches in Multi-Core
Advantages of a shared cache:
- Space is dynamically allocated among cores
- No waste of space because of replication
- Potentially faster cache coherence (and easier to locate data on a miss)
Advantages of a private cache:
- Small L2 → faster access time
- Private bus to L2 → less contention

UCA and NUCA
- The small caches discussed so far have all had uniform cache access (UCA): the latency for any access is a constant, no matter where the data is found
- For a large multi-megabyte cache, it is expensive to limit access time by the worst-case delay: hence, non-uniform cache architecture (NUCA)

Large NUCA
Issues to be addressed for non-uniform cache access:
- Mapping
- Migration
- Search
- Replication
[Figure: a CPU attached to an array of cache banks at varying distances.]

Shared NUCA Cache
- A single tile is composed of a core, L1 caches, and a bank (slice) of the shared L2 cache
- The cache controller forwards address requests to the appropriate L2 bank and handles coherence operations
[Figure: eight tiles (Core 0–Core 7), each with an L1 I$, an L1 D$, and an L2 bank, plus a memory controller for off-chip access.]
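To make the mapping issue concrete, here is a minimal Python sketch (not from the slides) of the static, bank-interleaved mapping a shared S-NUCA L2 might use; the line size and bank count are assumptions:

```python
# Hypothetical static-NUCA (S-NUCA) mapping sketch: the home L2 bank
# for a cache line is chosen from low-order bits of the line address.
# Parameters (64 B lines, 8 banks) are illustrative assumptions.

LINE_SIZE = 64      # bytes per cache line
NUM_BANKS = 8       # one L2 bank (slice) per tile

def home_bank(phys_addr: int) -> int:
    """Return the tile/bank that owns this address's cache line."""
    line_number = phys_addr // LINE_SIZE
    return line_number % NUM_BANKS   # interleave lines across banks

# Consecutive lines map to consecutive banks, spreading load:
for addr in range(0, 64 * 10, 64):
    print(f"addr {addr:#06x} -> bank {home_bank(addr)}")
```

With this static policy a request's destination bank is known as soon as the address is; migration, search, and replication only matter in the dynamic (D-NUCA) variants.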

Virtual Memory
- Processes deal with virtual memory: they have the illusion that a very large address space is available to them
- There is only a limited amount of physical memory that is shared by all processes; a process places part of its virtual memory in this physical memory and the rest is stored on disk
- Thanks to locality, disk access is likely to be uncommon
- The hardware ensures that one process cannot access the memory of a different process

Virtual Memory and Page Tables

Address Translation
- The virtual and physical memory are broken up into pages (example: 8 KB page size → 13-bit page offset)
- The virtual page number is translated to a physical page number; the page offset bits are unchanged
[Figure: virtual address = virtual page number + 13-bit page offset; the translated physical address = physical page number + the same 13-bit offset into physical memory.]
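As a worked illustration of this split, the following toy sketch translates an address under the slide's 8 KB pages; the page-table contents are made up:

```python
# Minimal sketch of the address split above: 8 KB pages give a 13-bit
# page offset; the page table below is a toy dict, not a real format.

PAGE_SIZE = 8 * 1024          # 8 KB
OFFSET_BITS = 13              # log2(8 KB)

page_table = {0x0000: 0x2A, 0x0001: 0x07}   # VPN -> PPN (toy contents)

def translate(vaddr: int) -> int:
    vpn = vaddr >> OFFSET_BITS            # virtual page number
    offset = vaddr & (PAGE_SIZE - 1)      # unchanged by translation
    ppn = page_table[vpn]                 # page fault if missing
    return (ppn << OFFSET_BITS) | offset

print(hex(translate(0x0123)))   # VPN 0 -> PPN 0x2A -> 0x54123
```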

Memory Hierarchy Properties
- A virtual memory page can be placed anywhere in physical memory (fully associative)
- Replacement is usually LRU (since the miss penalty is huge, we can invest some effort to minimize misses)
- A page table (indexed by virtual page number) is used for translating virtual to physical page numbers
- The memory–disk hierarchy can be either inclusive or exclusive, and the write policy is writeback
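A hedged sketch of the first two properties, fully-associative placement plus LRU replacement, with an assumed four-frame physical memory:

```python
# Toy model: any page can go in any frame (fully associative), and the
# least recently used resident page is evicted on a fault. The frame
# count is an illustrative assumption.
from collections import OrderedDict

NUM_FRAMES = 4
resident = OrderedDict()       # VPN -> frame, least recently used first

def touch(vpn: int) -> bool:
    """Access a page; return True on a page fault."""
    if vpn in resident:
        resident.move_to_end(vpn)                  # most recently used
        return False
    if len(resident) == NUM_FRAMES:
        resident.popitem(last=False)               # evict LRU victim
    resident[vpn] = len(resident)                  # any free frame works
    return True

faults = sum(touch(v) for v in [1, 2, 3, 4, 1, 2, 5, 1, 2])
print(f"{faults} page faults")   # 5: pages 1 and 2 stay resident
```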

TLB
- Since the number of pages is very high, the page table capacity is too large to fit on chip
- A translation lookaside buffer (TLB) caches the virtual-to-physical page number translation for recent accesses
- A TLB miss requires us to access the page table, which may not even be found in the cache: two expensive memory look-ups to access one word of data!
- A large page size can increase the coverage of the TLB and reduce the capacity of the page table, but also increases memory waste
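To make the hit and miss paths concrete, here is a toy LRU-managed TLB in front of a made-up page table; the 64-entry size is an assumption:

```python
# Toy TLB sketch: a small LRU-managed map of recent VPN -> PPN
# translations. The 64-entry size and the page-table contents are
# illustrative assumptions, not a real TLB design.
from collections import OrderedDict

TLB_ENTRIES = 64
tlb = OrderedDict()                                # VPN -> PPN, LRU order
page_table = {vpn: vpn + 0x40 for vpn in range(256)}   # toy page table

def tlb_translate(vpn: int) -> int:
    if vpn in tlb:                         # TLB hit: no memory access
        tlb.move_to_end(vpn)
        return tlb[vpn]
    ppn = page_table[vpn]                  # TLB miss: walk the page table
    if len(tlb) == TLB_ENTRIES:
        tlb.popitem(last=False)            # evict least recently used
    tlb[vpn] = ppn
    return ppn

print(hex(tlb_translate(3)))   # miss, then cached for later hits
```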

TLB and Cache
- Is the cache indexed with the virtual or physical address?
  - To index with a physical address, we will have to first look up the TLB, then the cache → longer access time
  - Multiple virtual addresses can map to the same physical address: can we ensure that these different virtual addresses will map to the same location in cache? Else, there will be two different copies of the same physical memory word
- Does the tag array store virtual or physical addresses?
  - Since multiple virtual addresses can map to the same physical address, a virtual tag comparison can flag a miss even if the correct physical memory word is present

TLB and Cache

Virtually Indexed Caches
- Example: 24-bit virtual address, 4 KB page size → 12 bits of page offset and 12 bits of virtual page number
- A data cache that needs 16 index bits (64 KB direct-mapped, or 128 KB 2-way, …) would draw some index bits from the virtual page number, which may differ from the physical page number
- To handle this example, the cache must be designed to use only 12 index bits; for instance, make the 64 KB cache 16-way
- Page coloring can ensure that some bits of the virtual and physical address match
[Figure: virtual addresses abcdef and abbdef index the virtually indexed cache at cdef and bdef even though both map to the same page in physical memory.]
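The constraint behind this example reduces to simple arithmetic: the cache stays alias-free when the index and block-offset bits fit inside the page offset, i.e. cache size ≤ page size × associativity. A small sketch:

```python
# Hedged arithmetic for the constraint above: a virtually indexed,
# physically tagged cache avoids aliasing when (index + block offset)
# bits fit inside the page offset, i.e. cache_size <= page_size * ways.
import math

def min_ways(cache_bytes: int, page_bytes: int) -> int:
    """Smallest associativity keeping the index inside the page offset."""
    return math.ceil(cache_bytes / page_bytes)

print(min_ways(64 * 1024, 4 * 1024))    # 64 KB cache, 4 KB pages -> 16-way
print(min_ways(128 * 1024, 4 * 1024))   # 128 KB cache -> 32-way
```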

Cache and TLB Pipeline
[Figure: a virtually indexed, physically tagged cache. The virtual page number indexes the TLB while the virtual index, drawn from the page offset, reads the tag and data arrays in parallel; the TLB's physical page number is then compared against the physical tags.]

Protection
- The hardware and operating system must co-operate to ensure that different processes do not modify each other's memory
- The hardware provides special registers that can be read in user mode, but modified only by instructions in supervisor mode
- A simple solution: the physical memory is divided between processes in contiguous chunks by the OS, and the bounds are stored in special registers; the hardware checks every program access to ensure it is within bounds
- Protection bits are tracked in the TLB on a per-page basis
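A minimal sketch of the base-and-bounds check described above, with made-up register values:

```python
# Base-and-bounds sketch: the register values are assumptions; real
# hardware performs this check on every access, in parallel with the
# rest of the pipeline.
BASE, BOUND = 0x4000, 0x2000   # assumed contents of the special registers

def check_access(addr: int) -> int:
    """Raise on an out-of-bounds access, else return the physical address."""
    if addr >= BOUND:
        raise MemoryError(f"access {addr:#x} outside process bounds")
    return BASE + addr          # relocate relative to the base register

print(hex(check_access(0x100)))   # ok: 0x4100
# check_access(0x3000)            # would raise MemoryError
```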

Superpages
- If a program's working set size is 16 MB and the page size is 8 KB, there are 2K frequently accessed pages; a 128-entry TLB will not suffice
- By increasing the page size to 128 KB, TLB misses will be eliminated; disadvantages: memory waste, increase in page fault penalty
- Can we change the page size at run-time?
- Note that a single page has to be contiguous in physical memory
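The slide's numbers, worked out: TLB coverage is entries × page size, so 128 entries cover only 1 MB with 8 KB pages but the full 16 MB working set with 128 KB pages:

```python
# Worked version of the numbers above: TLB coverage = entries x page size.
def tlb_coverage(entries: int, page_bytes: int) -> int:
    return entries * page_bytes

MB = 1024 * 1024
working_set = 16 * MB
for page in (8 * 1024, 128 * 1024):
    cov = tlb_coverage(128, page)
    print(f"page {page // 1024:>3} KB: coverage {cov // MB:>2} MB, "
          f"covers working set: {cov >= working_set}")
# 8 KB pages:   128 x 8 KB   =  1 MB  (< 16 MB -> TLB thrashing)
# 128 KB pages: 128 x 128 KB = 16 MB  (covers the working set)
```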

Superpages Implementation
- At run-time, build superpages if you find that contiguous virtual pages are being accessed at the same time
- For example, virtual pages 64–79 may be frequently accessed; coalesce these pages into a single superpage of size 128 KB that has a single entry in the TLB
- The physical superpage has to be in contiguous physical memory: the 16 physical pages have to be moved so they are contiguous
[Figure: 16 scattered physical pages remapped into one contiguous physical superpage.]
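One hypothetical way such a policy might spot candidates (an illustration, not the slides' mechanism): scan for fully-accessed runs of 16 contiguous virtual pages:

```python
# Hypothetical promotion scan: look for runs of 16 contiguous, recently
# accessed virtual pages that could be coalesced into one 128 KB
# superpage (16 x 8 KB). The detection heuristic is an assumption.
SUPERPAGE_PAGES = 16

def superpage_candidates(accessed_vpns: set[int]) -> list[int]:
    """Return base VPNs whose whole 16-page run was accessed."""
    candidates = []
    for base in {v - v % SUPERPAGE_PAGES for v in accessed_vpns}:
        if all(base + i in accessed_vpns for i in range(SUPERPAGE_PAGES)):
            candidates.append(base)
    return sorted(candidates)

print(superpage_candidates(set(range(64, 80)) | {100, 101}))   # [64]
```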

Ski Rental Problem
- Promoting a series of contiguous virtual pages into a superpage reduces TLB misses, but has a cost: copying physical memory into contiguous locations
- Page usage statistics can determine if pages are good candidates for superpage promotion, but if the cost of a TLB miss is x and the cost of copying pages is Nx, when do you decide to form a superpage?
- If ski rentals cost $50 and new skis cost $500, when do I decide to buy new skis? If I rent 10 times and then buy skis, I'm guaranteed to not spend more than twice the optimal amount
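The rent-then-buy rule, sketched with the slide's costs: keep paying TLB misses until the accumulated miss cost equals the copy cost N·x, then promote; worst-case spending is at most twice optimal:

```python
# Sketch of the 2-competitive rent-then-buy rule applied to superpage
# promotion: keep "renting" (paying TLB misses of cost x) until the
# accumulated miss cost equals the copy cost N*x, then promote.
def should_promote(misses_so_far: int, copy_cost_in_misses: int) -> bool:
    """Promote once the miss cost paid equals the cost of copying (N*x)."""
    return misses_so_far >= copy_cost_in_misses

# Ski analogy: $50 rentals, $500 skis -> buy after 10 rentals.
# Worst case total = 10*$50 + $500 = $1000, at most 2x the optimal $500.
N = 500 // 50
print([should_promote(m, N) for m in (5, 9, 10, 12)])
# [False, False, True, True]
```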
