CS162 Operating Systems and Systems Programming Midterm Review


CS162 Operating Systems and Systems Programming Midterm Review March 9, 2013 http://inst.eecs.berkeley.edu/~cs162

Synchronization, Critical section

Definitions
Synchronization: using atomic operations to ensure cooperation between threads.
Mutual Exclusion: ensuring that only one thread does a particular thing at a time; one thread excludes the others while doing its task.
Critical Section: a piece of code that only one thread can execute at once. A critical section is the result of mutual exclusion; critical section and mutual exclusion are two ways of describing the same thing.
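As a concrete illustration (not from the slides), here is a minimal pthreads sketch of a critical section protected by a lock; the shared counter and the function name are made up for the example:

  #include <pthread.h>

  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
  static int shared_counter = 0;          /* hypothetical shared state */

  void *worker(void *arg) {
      pthread_mutex_lock(&lock);          /* enter critical section    */
      shared_counter++;                   /* only one thread at a time */
      pthread_mutex_unlock(&lock);        /* exit critical section     */
      return NULL;
  }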

Locks: using interrupts
Key idea: maintain a lock variable and impose mutual exclusion only during operations on that variable.

  int value = FREE;

  Acquire() {
    disable interrupts;
    if (value == BUSY) {
      put thread on wait queue;
      go to sleep();   // Enable interrupts?
    } else {
      value = BUSY;
    }
    enable interrupts;
  }

  Release() {
    disable interrupts;
    if (anyone on wait queue) {
      take thread off wait queue;
      place on ready queue;
    } else {
      value = FREE;
    }
    enable interrupts;
  }

Better locks: using test&set

  test&set (&address) {      /* most architectures */
    result = M[address];
    M[address] = 1;
    return result;
  }

  int guard = 0;
  int value = FREE;

  Acquire() {
    // Short busy-wait time
    while (test&set(guard));
    if (value == BUSY) {
      put thread on wait queue;
      go to sleep() & guard = 0;
    } else {
      value = BUSY;
      guard = 0;
    }
  }

  Release() {
    // Short busy-wait time
    while (test&set(guard));
    if (anyone on wait queue) {
      take thread off wait queue;
      place on ready queue;
    } else {
      value = FREE;
    }
    guard = 0;
  }
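As an aside (not part of the slide), test&set corresponds to real atomic primitives such as C11's atomic_flag; a minimal spinlock sketch:

  #include <stdatomic.h>

  static atomic_flag guard = ATOMIC_FLAG_INIT;

  void spin_acquire(void) {
      /* atomic_flag_test_and_set returns the old value, like test&set above */
      while (atomic_flag_test_and_set(&guard))
          ;   /* short busy-wait */
  }

  void spin_release(void) {
      atomic_flag_clear(&guard);
  }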

Does this busy wait? If so, how long does it wait? Why is this better than the interrupt version? Miscellaneous: Remember that a thread can be context-switched while holding a lock.

Semaphores
Definition: a semaphore has a non-negative integer value and supports the following two operations:
P(): an atomic operation that waits for the semaphore to become positive, then decrements it by 1. Think of this as the wait() operation.
V(): an atomic operation that increments the semaphore by 1, waking up a waiting P, if any. Think of this as the signal() operation.
You cannot read the integer value! The only methods are P() and V().
Unlike a lock, there is no owner (thus no priority donation). The thread that calls V() may never call P().
Example: Producer/consumer
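A minimal POSIX sketch of the two operations (sem_wait/sem_post play the roles of P()/V(); the initial value of 1 is just an example making it a binary semaphore):

  #include <semaphore.h>

  sem_t sem;

  void example(void) {
      sem_init(&sem, 0, 1);   /* initial value 1                          */
      sem_wait(&sem);         /* P(): wait until positive, then decrement */
      /* ... do work ... */
      sem_post(&sem);         /* V(): increment, waking a waiter if any   */
      sem_destroy(&sem);
  }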

Buffered Producer/Consumer
Correctness constraints:
Consumer must wait for producer to fill slots, if empty (scheduling constraint).
Producer must wait for consumer to make room in buffer, if all full (scheduling constraint).
Only one thread can manipulate the buffer queue at a time (mutual exclusion).
General rule of thumb: use a separate semaphore for each constraint:

  Semaphore fullSlots;   // consumer's constraint
  Semaphore emptySlots;  // producer's constraint
  Semaphore mutex;       // mutual exclusion

Full Solution to Bounded Buffer

  Semaphore fullSlots = 0;          // Initially, no coke
  Semaphore emptySlots = bufSize;   // Initially, num empty slots
  Semaphore mutex = 1;              // No one using machine

  Producer(item) {
    emptySlots.P();    // Wait until space
    mutex.P();         // Wait until slot free
    Enqueue(item);
    mutex.V();
    fullSlots.V();     // Tell consumers there is more coke
  }

  Consumer() {
    fullSlots.P();     // Check if there's a coke
    mutex.P();         // Wait until machine free
    item = Dequeue();
    mutex.V();
    emptySlots.V();    // Tell producer need more
    return item;
  }

Condition Variables
Condition Variable: a queue of threads waiting for something inside a critical section.
Key idea: allow sleeping inside the critical section by atomically releasing the lock at the time we go to sleep.
Contrast to semaphores: can't wait inside a critical section.
Operations:
Wait(&lock): atomically release the lock and go to sleep. Re-acquire the lock later, before returning.
Signal(): wake up one waiter, if any.
Broadcast(): wake up all waiters.
Rule: must hold the lock when doing condition variable operations!

Complete Monitor Example (with condition variable)
Is this a Hoare or Mesa monitor? Here is an (infinite) synchronized queue:

  Lock lock;
  Condition dataready;
  Queue queue;

  AddToQueue(item) {
    lock.Acquire();             // Get Lock
    queue.enqueue(item);        // Add item
    dataready.signal();         // Signal any waiters
    lock.Release();             // Release Lock
  }

  RemoveFromQueue() {
    lock.Acquire();             // Get Lock
    while (queue.isEmpty()) {
      dataready.wait(&lock);    // If nothing, sleep
    }
    item = queue.dequeue();     // Get next item
    lock.Release();             // Release Lock
    return(item);
  }
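The same synchronized queue expressed with POSIX threads (a sketch, not from the slides; the linked-list queue of ints is an assumed stand-in for the slide's Queue type). Note the while loop around the wait, which is what Mesa semantics on the next slide require:

  #include <pthread.h>
  #include <stdlib.h>

  typedef struct node { int item; struct node *next; } node_t;

  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t  dataready = PTHREAD_COND_INITIALIZER;
  static node_t *head = NULL, *tail = NULL;      /* simple unbounded queue */

  void AddToQueue(int item) {
      node_t *n = malloc(sizeof *n);
      n->item = item;
      n->next = NULL;
      pthread_mutex_lock(&lock);                 /* Get lock           */
      if (tail) tail->next = n; else head = n;   /* Add item           */
      tail = n;
      pthread_cond_signal(&dataready);           /* Signal any waiters */
      pthread_mutex_unlock(&lock);               /* Release lock       */
  }

  int RemoveFromQueue(void) {
      pthread_mutex_lock(&lock);
      while (head == NULL)                       /* Mesa: re-check after wait */
          pthread_cond_wait(&dataready, &lock);
      node_t *n = head;
      head = n->next;
      if (head == NULL) tail = NULL;
      int item = n->item;
      pthread_mutex_unlock(&lock);
      free(n);
      return item;
  }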

Mesa vs. Hoare monitors
Need to be careful about the precise definition of signal and wait. Consider a piece of our dequeue code:

  while (queue.isEmpty()) {
    dataready.wait(&lock);    // If nothing, sleep
  }
  item = queue.dequeue();     // Get next item

Why didn't we do this?

  if (queue.isEmpty()) {
    dataready.wait(&lock);    // If nothing, sleep
  }
  item = queue.dequeue();     // Get next item

Answer: it depends on the type of scheduling.
Hoare-style (most textbooks): the signaler gives the lock and CPU to the waiter, and the waiter runs immediately. The waiter gives the lock and processor back to the signaler when it exits the critical section or if it waits again.
Mesa-style (most real operating systems): the signaler keeps the lock and processor; the waiter is placed on the ready queue with no special priority. Practically, you need to check the condition again after wait.

Reader/Writer
Two types of lock: read lock and write lock (i.e., shared lock and exclusive lock).
Shared locks are an example of where you might want to do broadcast/wakeAll on a condition variable. If you do it in other situations, correctness is generally preserved, but it is inefficient: one thread gets in and then the rest of the woken threads go back to sleep.

Read/Writer
What if we remove this line? (The slide points this question at two different lines of the code below.)

  Reader() {
    // check into system
    lock.Acquire();
    while ((AW + WW) > 0) {
      WR++;
      okToRead.wait(&lock);
      WR--;
    }
    AR++;
    lock.Release();

    // read-only access
    AccessDbase(ReadOnly);

    // check out of system
    lock.Acquire();
    AR--;
    if (AR == 0 && WW > 0)
      okToWrite.signal();
    lock.Release();
  }

  Writer() {
    // check into system
    lock.Acquire();
    while ((AW + AR) > 0) {
      WW++;
      okToWrite.wait(&lock);
      WW--;
    }
    AW++;
    lock.Release();

    // read/write access
    AccessDbase(ReadWrite);

    // check out of system
    lock.Acquire();
    AW--;
    if (WW > 0) {
      okToWrite.signal();
    } else if (WR > 0) {
      okToRead.broadcast();
    }
    lock.Release();
  }

Read/Writer Revisited
What if we turn signal to broadcast? The code is identical to the previous slide, except that the reader's check-out now wakes writers with a broadcast:

  // check out of system (Reader)
  lock.Acquire();
  AR--;
  if (AR == 0 && WW > 0)
    okToWrite.broadcast();
  lock.Release();

Read/Writer Revisited
What if we turn okToWrite and okToRead into a single condition variable okContinue?

  Reader() {
    // check into system
    lock.Acquire();
    while ((AW + WW) > 0) {
      WR++;
      okContinue.wait(&lock);
      WR--;
    }
    AR++;
    lock.Release();

    // read-only access
    AccessDbase(ReadOnly);

    // check out of system
    lock.Acquire();
    AR--;
    if (AR == 0 && WW > 0)
      okContinue.signal();
    lock.Release();
  }

  Writer() {
    // check into system
    lock.Acquire();
    while ((AW + AR) > 0) {
      WW++;
      okContinue.wait(&lock);
      WW--;
    }
    AW++;
    lock.Release();

    // read/write access
    AccessDbase(ReadWrite);

    // check out of system
    lock.Acquire();
    AW--;
    if (WW > 0) {
      okToWrite.signal();
    } else if (WR > 0) {
      okContinue.broadcast();
    }
    lock.Release();
  }

Read/Writer Revisited
Same code as on the previous slide (single okContinue condition variable). Consider this schedule:
R1 arrives.
W1 and R2 arrive while R1 reads.
R1 signals R2.

Read/Writer Revisited
Need to change to broadcast! Why? The code is the same as above, except the reader's check-out now broadcasts:

  // check out of system (Reader)
  lock.Acquire();
  AR--;
  if (AR == 0 && WW > 0)
    okContinue.broadcast();
  lock.Release();

Deadlock

Four requirements for Deadlock
Mutual exclusion: only one thread at a time can use a resource.
Hold and wait: a thread holding at least one resource is waiting to acquire additional resources held by other threads.
No preemption: resources are released only voluntarily by the thread holding the resource, after the thread is finished with it.
Circular wait: there exists a set {T1, ..., Tn} of waiting threads such that T1 is waiting for a resource held by T2, T2 is waiting for a resource held by T3, ..., and Tn is waiting for a resource held by T1.

Circular wait: see Sp11, #3.
Livelock: state constantly changing, but still no progress made. E.g., a system that deadlocks, then detects the deadlock, so it restarts everyone; then they all deadlock again, over and over.
Livelock and deadlock both cause starvation: deadlock => starvation, but not the other way around.

Resource Allocation Graph Examples
Recall: request edge: a directed edge Ti → Rj; assignment edge: a directed edge Rj → Ti.
[Figures: three resource allocation graphs over threads T1-T4 and resources R1-R4: a simple resource allocation graph, an allocation graph with deadlock, and an allocation graph with a cycle but no deadlock.]

Deadlock Detection Algorithm
Only one of each type of resource => just look for loops.
More general deadlock detection algorithm: let [X] represent an m-ary vector of non-negative integers (quantities of resources of each type):
[FreeResources]: current free resources of each type
[RequestX]: current requests from thread X
[AllocX]: current resources held by thread X
See if tasks can eventually terminate on their own:

  [Avail] = [FreeResources]
  Add all nodes to UNFINISHED
  do {
    done = true
    Foreach node in UNFINISHED {
      if ([Requestnode] <= [Avail]) {
        remove node from UNFINISHED
        [Avail] = [Avail] + [Allocnode]
        done = false
      }
    }
  } until(done)

Nodes left in UNFINISHED => deadlocked.
[Figure: example allocation graph over T1-T4 and R1, R2.]
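A compilable C sketch of the same algorithm (the array sizes and names are made up for illustration, not part of the slide):

  #include <stdbool.h>
  #include <string.h>

  #define NTHREADS 4   /* hypothetical sizes for the example */
  #define NRES     2

  /* Returns true if any thread is deadlocked; fills deadlocked[]. */
  bool detect_deadlock(int free_res[NRES],
                       int request[NTHREADS][NRES],
                       int alloc[NTHREADS][NRES],
                       bool deadlocked[NTHREADS]) {
      int avail[NRES];
      bool unfinished[NTHREADS];
      memcpy(avail, free_res, sizeof(avail));        /* [Avail] = [FreeResources] */
      for (int t = 0; t < NTHREADS; t++) unfinished[t] = true;

      bool done;
      do {
          done = true;
          for (int t = 0; t < NTHREADS; t++) {
              if (!unfinished[t]) continue;
              bool can_run = true;
              for (int r = 0; r < NRES; r++)
                  if (request[t][r] > avail[r]) { can_run = false; break; }
              if (can_run) {
                  /* Thread t could finish: release its resources. */
                  unfinished[t] = false;
                  for (int r = 0; r < NRES; r++) avail[r] += alloc[t][r];
                  done = false;
              }
          }
      } while (!done);

      bool any = false;
      for (int t = 0; t < NTHREADS; t++) {
          deadlocked[t] = unfinished[t];   /* nodes left UNFINISHED are deadlocked */
          any = any || unfinished[t];
      }
      return any;
  }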

Example: Sp11, #2

Banker's algorithm
A method of deadlock avoidance. Each process specifies the maximum resources it will ever request. A process can initially request less than its max and later ask for more. Before a request is granted, the Banker's algorithm checks that the system is still "safe" if the requested resources are granted. Roughly: some process could request its specified max and eventually finish; when it finishes, that process frees up its resources. A system is safe if there exists some sequence by which processes can request their max, terminate, and free up their resources so that all other processes can finish (in the same way).
Example: Sp04 midterm, problem 4
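A sketch of how the safety check relates to the detection loop above (an assumption of this review, not the slide's code): substitute each thread's worst-case remaining need (max minus current allocation) for its current request, reusing the hypothetical arrays from the previous sketch:

  bool state_is_safe(int free_res[NRES],
                     int max[NTHREADS][NRES],
                     int alloc[NTHREADS][NRES]) {
      int need[NTHREADS][NRES];
      bool stuck[NTHREADS];
      for (int t = 0; t < NTHREADS; t++)
          for (int r = 0; r < NRES; r++)
              need[t][r] = max[t][r] - alloc[t][r];   /* worst-case remaining need */
      /* Safe iff every thread could eventually finish, i.e. nothing is left
         "unfinished" when requests are taken to be the remaining needs.      */
      return !detect_deadlock(free_res, need, alloc, stuck);
  }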

Memory Multiplexing, Address Translation

Important Aspects of Memory Multiplexing
Controlled overlap: processes should not collide in physical memory; conversely, we would like the ability to share memory when desired (for communication).
Protection: prevent access to the private memory of other processes. Different pages of memory can be given special behavior (read-only, invisible to user programs, etc.). Kernel data is protected from user programs, and programs are protected from themselves.
Translation: the ability to translate accesses from one address space (virtual) to a different one (physical). When translation exists, the processor uses virtual addresses and physical memory uses physical addresses. Side effects: can be used to avoid overlap, and can be used to give a uniform view of memory to programs.

Why Address Translation?
[Figure: two programs, each with its own virtual address space (code, data, heap, stack), are mapped by per-process translation maps (Translation Map 1, Translation Map 2) into disjoint regions of a single physical address space that also holds OS code, OS data, and the OS heap & stacks.]

Addr. Translation: Segmentation vs. Paging
[Figure: with segmentation, the virtual address is split into a segment # and an offset; the segment # indexes a table of (base, limit, valid) entries, the offset is checked against the limit (error if too large) and added to the base to form the physical address. With paging, the virtual address is split into a virtual page # and an offset; the virtual page # indexes the page table (via PageTablePtr) to an entry holding a physical page # and permission bits (V, R, W), permissions are checked (access error if violated), and the physical page # is concatenated with the offset to form the physical address.]

Review: Address Segmentation
[Figure: the virtual address is split into a 2-bit segment # and an offset. A 4-entry segment table of base/limit pairs maps the code, data, heap, and stack segments of the virtual memory view into separate regions of the physical memory view.]

Review: Address Segmentation
What happens if the stack grows to 1110 0000?

Review: Address Segmentation
No room to grow!! Buffer overflow error.

Review: Paging
[Figure: the virtual address is split into a page # and an offset. A flat page table maps the pages of the code, data, heap, and stack regions of the virtual memory view onto scattered physical pages; unused entries are null.]

Review: Paging
What happens if the stack grows to 1110 0000?

Review: Paging
Allocate new pages wherever there is room!
Challenge: the table size is equal to the # of pages in virtual memory!

Design a Virtual Memory system
Would you choose Segmentation or Paging?
Paging advantages:
-Easy to allocate: physical memory is allocated from a free list of frames; external fragmentation is not a problem.
-Easy to "page out" chunks of programs: all chunks are the same size (the page size); use a valid bit to detect references to "paged-out" pages.
Paging disadvantages:
-Can still have internal fragmentation: a process may not use memory in exact multiples of pages.
-Memory reference overhead: 2 references per address lookup.
-The memory required to hold page tables can be large.

Design a Virtual Memory system How would you choose your Page Size?

Design a Virtual Memory system
How would you choose your page size? Consider: page table size, TLB size, internal fragmentation, and disk access.

Design a Virtual Memory system You have a 32 bit virtual and physical memory and 2 KB pages. How many bits would you use for the index and how many for the offset?

Design a Virtual Memory system
You have a 32-bit virtual memory and 2 KB pages. Since 2 KB = 2^11 bytes, bits 10-0 of the virtual address are the byte offset (11 bits) and bits 31-11 are the virtual page number (21 bits).
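In code, the split would look like this (a small sketch; the helper names are illustrative):

  #include <stdint.h>

  #define PAGE_SHIFT 11                        /* 2 KB = 2^11-byte pages */
  #define PAGE_SIZE  (1u << PAGE_SHIFT)

  uint32_t vpn_of(uint32_t vaddr)    { return vaddr >> PAGE_SHIFT; }      /* 21-bit VPN    */
  uint32_t offset_of(uint32_t vaddr) { return vaddr & (PAGE_SIZE - 1); }  /* 11-bit offset */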

Design a Virtual Memory system How many bytes are required for a page table entry?

Design a Virtual Memory system
How many bytes are required for a page table entry? With a 32-bit physical address and 2 KB pages, the physical page number (PPN) is 32 - 11 = 21 bits; adding valid (V), referenced (R), and dirty (D) bits gives 24 bits, i.e., 3 bytes per entry.

Design a Virtual Memory system If you only had 16 MB of physical memory, how many bytes are required for a page table entry?

Design a Virtual Memory system
If you only had 16 MB of physical memory, how many bytes are required for a page table entry? 16 MB = 2^24 bytes, so the PPN is 24 - 11 = 13 bits; adding the V, R, and D bits gives 16 bits, i.e., 2 bytes per entry.

Design a Virtual Memory system How large would the page table be?

Design a Virtual Memory system
How large would the page table be? With 2^21 entries of 2 bytes each, the total size is 2^21 x 2 B = 4 MB.
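A quick sanity check of the arithmetic (illustrative C, not from the slide):

  #include <stdio.h>

  int main(void) {
      unsigned long entries = 1ul << 21;           /* 2^21 virtual pages       */
      unsigned long pte     = 2;                   /* 2-byte PTE (13 + 3 bits) */
      printf("%lu MB\n", (entries * pte) >> 20);   /* prints 4                 */
      return 0;
  }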

Review: Two-Level Paging
[Figure: the virtual address is split into a level-1 page #, a level-2 page #, and an offset. The level-1 page table points to per-region level-2 page tables (null for unused regions), and the level-2 tables map the code, data, heap, and stack pages into physical memory.]

Review: Two-Level Paging
[Figure: the same two-level structure, showing that level-2 tables exist only for the regions actually in use.]
In the best case, the total size of the page tables ≈ the number of pages used by the program. But it requires one additional memory access!
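A sketch of what a two-level lookup looks like in software (the 11/10-bit index split, struct layout, and names are assumptions for illustration, not the slide's exact numbers):

  #include <stdint.h>
  #include <stddef.h>

  #define PAGE_FAULT 0xFFFFFFFFu                   /* assumed error value */

  typedef struct { uint32_t ppn : 21, valid : 1; } pte_t;   /* level-2 entry */

  /* l1_table[i] points to a level-2 table, or is NULL if that region of the
     address space is unused (the "null" entries in the figure). */
  uint32_t translate(pte_t **l1_table, uint32_t vaddr) {
      uint32_t l1  = vaddr >> 21;                  /* top 11 bits: level-1 index  */
      uint32_t l2  = (vaddr >> 11) & 0x3FF;        /* next 10 bits: level-2 index */
      uint32_t off = vaddr & 0x7FF;                /* low 11 bits: page offset    */

      pte_t *l2_table = l1_table[l1];
      if (l2_table == NULL || !l2_table[l2].valid)
          return PAGE_FAULT;                       /* would trap to the OS        */

      return ((uint32_t)l2_table[l2].ppn << 11) | off;   /* physical address */
  }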

Design a Virtual Memory system
You have the same 32-bit virtual address space with 2 KB pages. If you had a full 2-level page table, how much memory would be consumed in total?

Design a Virtual Memory system
If every level-2 table is allocated, the level-2 tables together still hold 2^21 entries x 2 B = 2^22 bytes (4 MB), plus the level-1 table on top of that, so a full 2-level table consumes slightly more memory than the flat 4 MB table.

Review: Inverted Table
[Figure: an inverted page table maps hash(virtual page #) = physical page #; only the pages actually in use have entries.]
Total size of the page table ≈ the number of pages used by the program. But the hash is more complex.

Address Translation Comparison
Segmentation: Advantage: fast context switching (segment mapping maintained by CPU). Disadvantage: external fragmentation.
Paging (single-level page table): Advantage: no external fragmentation. Disadvantages: large size (table size ~ virtual memory); internal fragmentation.
Paged segmentation, two-level pages, and inverted table: Advantage: table size ~ memory used by program. Disadvantage: multiple memory references per page access; for the inverted table, the hash function is also more complex.

Caches, TLBs

Review: Sources of Cache Misses
Compulsory (cold start): the first reference to a block. A "cold" fact of life: there is not a whole lot you can do about it. Note: when running "billions" of instructions, compulsory misses are insignificant.
Capacity: the cache cannot contain all the blocks accessed by the program. Solution: increase the cache size.
Conflict (collision): multiple memory locations are mapped to the same cache location. Solutions: increase the cache size, or increase associativity (e.g., use a 2-way set associative cache instead of a direct-mapped cache).
Two others:
Coherence (invalidation): another process (e.g., I/O) updates memory, so cached copies must be invalidated to avoid inconsistency between memory and cache.
Policy: due to a non-optimal replacement policy.
Keep in mind that the miss rate is only one part of the equation: you also have to worry about cache access time and miss penalty. Do NOT optimize miss rate alone.

Direct Mapped Cache
The cache index selects a cache block; the "byte select" selects a byte within the cache block; the cache tag fully identifies the cached data. Example: 32 B blocks. Data with the same "cache tag" shares the same cache entry, causing conflict misses.
[Figure: the address is split into cache tag, cache index, and byte select fields; the index picks one (valid bit, cache tag, 32-byte data block) entry, the stored tag is compared against the address tag, and a match produces a hit.]
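To make the field split concrete, a small sketch (the 4-bit index width is an assumed example geometry, not necessarily the slide's):

  #include <stdint.h>

  #define BLOCK_BITS 5    /* 32-byte blocks: byte select = addr[4:0]          */
  #define INDEX_BITS 4    /* assumed 16-line cache: index = addr[8:5]         */

  uint32_t byte_sel(uint32_t addr) { return addr & ((1u << BLOCK_BITS) - 1); }
  uint32_t index_of(uint32_t addr) { return (addr >> BLOCK_BITS) & ((1u << INDEX_BITS) - 1); }
  uint32_t tag_of(uint32_t addr)   { return addr >> (BLOCK_BITS + INDEX_BITS); }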

Set Associative Cache
N-way set associative: N entries per cache index; N direct-mapped caches operate in parallel.
Example: two-way set associative cache. The cache index selects a set from the cache; the two tags in the set are compared to the input tag in parallel; data is selected based on the tag result. If neither tag matches the incoming address tag, it is a cache miss.
[Figure: two banks of (valid, cache tag, cache data block) entries share the same cache index; parallel comparators drive a mux that selects the matching cache block and an OR gate that produces the hit signal.]

Fully Associative Cache
Fully associative: every block can hold any line; the address does not include a cache index. The cache tags of all cache entries are compared in parallel. Example: with 32 B blocks we need N 27-bit comparators, and we still have the byte select to choose a byte within the block.
A fully associative cache is the N-way set associative cache carried to the extreme, where N is the number of cache entries: no address bits are used as an index, one comparator is needed per entry, and the lookup is very hardware-intensive (usually limited to 64 or fewer entries). Since multiple memory locations never collide on a cache index, conflict misses are zero by definition; capacity misses remain (e.g., with 64 entries, bringing in the 65th distinct block must evict one of the first 64).

Where does a Block Get Placed in a Cache?
Example: block 12 of a 32-block address space placed in an 8-block cache.
Direct mapped: block 12 (01100) can go only into block 4 (12 mod 8); the address splits as tag 01, index 100.
Set associative (2-way, 4 sets): block 12 can go anywhere in set 0 (12 mod 4); the address splits as tag 011, index 00.
Fully associative: block 12 can go anywhere; the whole 01100 is the tag.
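The same arithmetic as a quick sketch:

  #include <stdio.h>

  int main(void) {
      int block = 12;                                   /* block address from the slide  */
      printf("direct mapped slot: %d\n", block % 8);    /* 12 mod 8 = 4                  */
      printf("2-way set (of 4):   %d\n", block % 4);    /* 12 mod 4 = 0                  */
      /* fully associative: any of the 8 slots may hold block 12 */
      return 0;
  }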

Review: Caching Applied to Address Translation
Problem: address translation is expensive (especially multi-level). Solution: cache address translations (the TLB).
Instruction accesses spend a lot of time on the same page (since accesses are sequential); stack accesses have definite locality of reference; data accesses have less page locality, but still some.
[Figure: the CPU issues a virtual address; if the translation is cached in the TLB, the physical address goes straight to physical memory; otherwise the MMU translates and the result is saved in the TLB. The data read or write itself proceeds untranslated.]

TLB organization How big does TLB actually have to be? Usually small: 128-512 entries Not very big, can support higher associativity TLB usually organized as fully-associative cache Lookup is by Virtual Address Returns Physical Address What happens when fully-associative is too slow? Put a small (4-16 entry) direct-mapped cache in front Called a “TLB Slice” When does TLB lookup occur? Before cache lookup? In parallel with cache lookup?
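A toy software model of a fully associative TLB lookup (the entry count and struct fields are illustrative; a real TLB does the comparison in parallel hardware):

  #include <stdint.h>
  #include <stdbool.h>

  #define TLB_ENTRIES 128                /* "usually small: 128-512 entries" */

  typedef struct { bool valid; uint32_t vpn, ppn; } tlb_entry_t;
  static tlb_entry_t tlb[TLB_ENTRIES];

  /* Fully associative: compare the VPN against every entry ("in parallel"
     in hardware; a loop in this model). Returns true on a TLB hit. */
  bool tlb_lookup(uint32_t vpn, uint32_t *ppn_out) {
      for (int i = 0; i < TLB_ENTRIES; i++) {
          if (tlb[i].valid && tlb[i].vpn == vpn) {
              *ppn_out = tlb[i].ppn;
              return true;
          }
      }
      return false;                      /* miss: walk the page table */
  }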

Reducing translation time further
As described, the TLB lookup is in series with the cache lookup. Machines with TLBs go one step further: they overlap the TLB lookup with the cache access. This works because the offset is available early.
[Figure: the virtual page number goes to the TLB lookup (returning a valid bit, access rights, and the physical page number) while the page offset passes through unchanged; together they form the physical address.]

Overlapping TLB & Cache Access
Here is how this might work with a 4K cache:
[Figure: the 20-bit page # goes to the TLB (associative lookup) while the 12-bit displacement (10-bit index + "00" word-within-block bits, 4-byte words, 1K entries) indexes the 4K cache; the TLB's physical address is then compared against the cache tag for hit/miss.]
What if the cache size is increased to 8 KB? The overlap is not complete; you need to do something else (see CS152/252).
Another option: virtual caches, where the tags in the cache are virtual addresses and translation only happens on cache misses.
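A back-of-envelope check of why the 4 KB case works (illustrative numbers based on the figure, assuming 4 KB pages):

  /* With 4 KB pages, the low 12 address bits are the untranslated page offset.
     A 4 KB direct-mapped cache with 4-byte blocks needs 10 index + 2 byte bits
     = 12 bits, all inside the offset, so the cache can be indexed while the
     TLB translates the upper 20 bits. An 8 KB cache would need 13 bits, one of
     which must come from the translated page number, so overlap is incomplete. */
  #include <stdio.h>

  int main(void) {
      int page_offset_bits = 12;                  /* 4 KB pages        */
      int cache_bits_4k = 10 + 2;                 /* index + byte bits */
      int cache_bits_8k = 11 + 2;
      printf("4 KB cache overlaps fully: %d\n", cache_bits_4k <= page_offset_bits);
      printf("8 KB cache overlaps fully: %d\n", cache_bits_8k <= page_offset_bits);
      return 0;
  }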

Putting Everything Together

Paging & Address Translation
[Figure: a virtual address (virtual P1 index, virtual P2 index, offset) is translated through a two-level page table (PageTablePtr, 1st-level table, 2nd-level table) into a physical address (physical page # + offset) used to access physical memory.]

Translation Look-aside Buffer
[Figure: the same two-level translation, with a TLB caching virtual-to-physical page translations so that the page-table walk is skipped on a TLB hit.]

Caching
[Figure: the full path: the virtual address is translated via the TLB (backed by the two-level page table), and the resulting physical address, split into tag, index, and byte fields, is then looked up in a physical cache before going to physical memory.]