
Lecture 11: PA2, Locks, and Condition Variables (CVs)

Lab 3: Demand Paging
Implement the following syscalls: xmmap, xmunmap, vcreate, vgetmem/vfreemem, srpolicy.
Deadline: March, 10:00 PM

Demand Paging – the OS Perspective
From the OS perspective:
- Evict pages to disk (the backing store) when memory is full; pages are loaded back from disk when referenced again.
- References to evicted pages cause a TLB miss; the page table entry (PTE) is invalid, which causes a fault.
- The OS allocates a page frame and reads the page from disk; when the I/O completes, the OS fills in the PTE, marks it valid, and restarts the faulting process.
- Dirty vs. clean pages: only dirty pages need to be written back to disk; clean pages do not, but you need to know where on disk to read them from again.

Demand Paging – the Process Perspective
From the process perspective:
- Demand paging is also used when a process first starts up.
- When a process is created, it has a brand new page table with all valid bits off and no pages in memory.
- When the process starts executing, instructions fault on code and data pages; faulting stops once the necessary code/data pages are in memory.
- Only the code and data actually needed by the process have to be loaded, and this set changes over time.
- When the process terminates, all related pages are reclaimed by the OS.

Physical Memory Layout (figure; highest addresses first):
- Virtual heap: pages 4096 and beyond (8M-4G)
- Backing stores (8M)
- Free frames (4M)
- Kernel memory
- Kernel memory (HOLE)
- Kernel memory
- Xinu text, data, bss

Backing Stores
- There are 16 backing stores in total.
- APIs: get_bs/release_bs, read_bs/write_bs
- Emulated by physical memory; a skeleton is already given.
- You may want to add some sanity checks!
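
As a quick sanity check of the API before wiring it into the pager, a smoke test might look like the sketch below. The exact prototypes come from the skeleton you are given; the argument orders of get_bs/write_bs/read_bs/release_bs, NBPG, and SYSERR are assumptions here, so adjust to your handout.

    /* Hypothetical sketch of exercising the backing-store API. */
    #define NBPG 4096                     /* bytes per page in Xinu      */

    void bs_smoke_test(void) {
        char wbuf[NBPG], rbuf[NBPG];
        int bs_id = 5;                    /* any id in 0..15             */
        int i;

        for (i = 0; i < NBPG; i++)
            wbuf[i] = (char)(i & 0xff);

        if (get_bs(bs_id, 1) == SYSERR)   /* reserve 1 page              */
            return;
        write_bs(wbuf, bs_id, 0);         /* store page 0                */
        read_bs(rbuf, bs_id, 0);          /* read it back                */
        /* rbuf should now equal wbuf; add your own sanity checks here   */
        release_bs(bs_id);
    }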

Other Issues
- The NULL process: no private heap.
- Global page table entries: the entire 16M of physical memory, identity mapping.
- Page fault ISR: paging/pfintr.S, paging/pfint.c
- Support data structures: an inverted page table (see the sketch below).
- Helper functions, e.g., finding the backing store for a virtual address.
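
The support structures often look roughly like the sketch below: an inverted page table with one entry per physical frame, plus a helper that maps a faulting virtual address back to its backing store. All names and fields here are illustrative assumptions, not the handout's definitions.

    /* Hypothetical inverted page table: one entry per physical frame. */
    typedef struct {
        int fr_status;    /* FR_FREE / FR_PAGE / FR_TBL / FR_DIR          */
        int fr_pid;       /* process owning this frame                    */
        int fr_vpno;      /* virtual page number mapped into this frame   */
        int fr_refcnt;    /* e.g., #valid PTEs in a page-table frame      */
        int fr_dirty;     /* must be written back on eviction?            */
    } fr_map_t;

    fr_map_t frm_tab[NFRAMES];    /* NFRAMES = number of free frames       */

    /* Given a process and a faulting virtual address, find which backing
     * store and which page within it hold the data; assumes the pid's
     * mappings (from xmmap/vcreate) are recorded in a per-process table. */
    int bsm_lookup(int pid, unsigned long vaddr, int *bs_id, int *bs_page);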

Intel System Programming
- Outer page table (page directory): 1024 page directory entries per page directory.
- Page table: 1024 page table entries per page table.
- Page: 4 KB.
- PDBR (Page Directory Base Register, CR3): points to the start address of the page directory (outer page table).
- TLB: lookups in the in-memory page tables are performed only when the TLBs do not contain the translation for the requested page.
- Invalidation: TLB entries are automatically invalidated any time the CR3 register is loaded.

From Boot
1. Initialize (zero out the values):
   - backing stores (create data structures)
   - frames (create data structures)
   - install the page fault handler
2. Create a new page table for the null process:
   - create a page directory (outer page table)
   - initialize a 1:1 mapping for the first 4096 pages
   - allocate 4 page tables (4 x 1024 pages) and set each page table entry to the identity-mapped physical address (page numbers 0 through 4095)
   - these page tables should be shared between processes
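
A minimal sketch of step 2, assuming a hypothetical get_frame() that returns a free physical frame number and treating directory/table entries as raw 32-bit words; the real skeleton may use different types and helpers.

    #define NBPG 4096                     /* bytes per page                        */

    typedef unsigned long pte_t;          /* raw 32-bit directory/table entry      */

    pte_t *glob_pt[4];                    /* the 4 shared (global) page tables     */

    /* Build the identity mapping for the first 4 x 1024 = 4096 pages. */
    void init_global_page_tables(void) {
        int t, e;
        for (t = 0; t < 4; t++) {
            glob_pt[t] = (pte_t *)(get_frame() * NBPG);   /* one frame per table  */
            for (e = 0; e < 1024; e++) {
                unsigned long pageno = t * 1024UL + e;    /* 0 .. 4095            */
                glob_pt[t][e] = (pageno * NBPG) | 0x3;    /* frame addr | P | W   */
            }
        }
    }

    /* Create a page directory whose first 4 entries point at the shared tables. */
    pte_t *create_page_directory(void) {
        pte_t *pd = (pte_t *)(get_frame() * NBPG);
        int i;
        for (i = 0; i < 1024; i++)
            pd[i] = 0;                                    /* not present          */
        for (i = 0; i < 4; i++)                           /* first 16 MB shared   */
            pd[i] = ((unsigned long)glob_pt[i]) | 0x3;
        return pd;
    }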

From Boot (continued)
3. Enable paging:
   - set bit 31 of the CR0 register
   - make sure PDBR (CR3) is set first, because all subsequent memory accesses will be virtual memory addresses
4. Creating a new process (e.g., main):
   - create a page directory (same as for the null process)
   - share the first 4096 pages with the null process
5. Context switch:
   - every process has a separate page directory
   - before ctxsw(), load CR3 with the process's PDBR
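
A minimal sketch of steps 3 and 5 using GCC-style inline assembly; the lab skeleton usually provides equivalent assembly routines (e.g., under paging/), so treat these as illustrations rather than the required implementation.

    /* Load the page directory base; this also flushes the TLB. */
    static inline void write_cr3(unsigned long pdbr) {
        asm volatile("movl %0, %%cr3" : : "r"(pdbr) : "memory");
    }

    /* Turn on paging by setting PG (bit 31) in CR0. */
    static inline void enable_paging(void) {
        unsigned long cr0;
        asm volatile("movl %%cr0, %0" : "=r"(cr0));
        cr0 |= 0x80000000;
        asm volatile("movl %0, %%cr0" : : "r"(cr0) : "memory");
    }

    /* Boot order matters: CR3 must point at a valid page directory
     * before paging is enabled, e.g.:
     *     write_cr3((unsigned long)null_proc_pd);
     *     enable_paging();
     */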

Using Virtual Memory
1. Allocate pages in a backing store.
2. Map them to virtual pages using xmmap(). For example, xmmap(A, backingstore, 10) maps virtual pages A, A+1, A+2, ..., A+9 to consecutive locations in the backing store.
3. Then try accessing the virtual address.
4. If the page is not present, a page fault is generated.
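
Put together, a user of this interface might look like the sketch below; the backing-store id, virtual page number, and the SYSERR return-value convention are assumptions to check against your handout.

    /* Hypothetical end-to-end use of xmmap(). */
    void vm_demo(void) {
        int bs_id = 4;                            /* one of the 16 backing stores  */
        int vpage = 5000;                         /* a virtual page >= 4096        */
        char *addr = (char *)(vpage * 4096UL);    /* first byte of that page       */

        if (get_bs(bs_id, 10) == SYSERR)          /* reserve 10 pages              */
            return;
        if (xmmap(vpage, bs_id, 10) == SYSERR) {  /* map vpages 5000..5009         */
            release_bs(bs_id);
            return;
        }

        *addr = 'X';    /* first touch: page fault brings the page into a frame    */
        /* ... use the mapped memory ... */

        xmunmap(vpage);                           /* unmap; dirty pages written back */
        release_bs(bs_id);
    }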

Page Fault
1. The address that caused the page fault is the content of the CR2 register.
2. Search for the page table entry. Two cases:
   a) the second-level page table does not exist
   b) the second-level page table exists, but the page table entry is not present
   How do we know which case? Check the P (present) flag of the page directory/table entry.

Page Fault - 2
Case a):
- Allocate a frame and initialize it (zero out the page-table frame).
- Update the page directory entry with the base address of the new page-table frame.
- Now this case becomes case b).

Page Fault - 3
Case b):
- Locate the backing store id of the faulted page and the page number within that backing store.
- Find a free frame to hold the page from the backing store:
  - if one is found, use the free frame
  - if not, evict a page frame (page replacement algorithm)
- Update the page table entry for the page, and possibly for the evicted page frame.
A combined sketch of cases a) and b) follows.
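
Below is a combined sketch of the fault handler; every helper (read_cr2, current_page_directory, get_frame, bsm_lookup, write_cr3) is a placeholder name, and error handling and eviction bookkeeping are omitted.

    /* Hypothetical outline of the page fault handler. */
    void pfint_handler(void) {
        unsigned long vaddr = read_cr2();              /* faulting address (CR2)   */
        unsigned long pdidx = vaddr >> 22;             /* page directory index     */
        unsigned long ptidx = (vaddr >> 12) & 0x3ff;   /* page table index         */
        unsigned long *pd = current_page_directory();
        unsigned long *pt;
        int frame, bs_id, bs_page;

        /* Case a): second-level page table missing -> create and zero it. */
        if ((pd[pdidx] & 0x1) == 0) {                  /* P bit clear              */
            frame = get_frame();
            pt = (unsigned long *)(frame * 4096UL);
            memset(pt, 0, 4096);                       /* all entries not present  */
            pd[pdidx] = (frame * 4096UL) | 0x3;        /* present + writable       */
        }
        pt = (unsigned long *)(pd[pdidx] & ~0xfffUL);

        /* Case b): bring the faulted page in from its backing store. */
        bsm_lookup(currpid, vaddr, &bs_id, &bs_page);  /* which store, which page  */
        frame = get_frame();                           /* free frame or eviction   */
        read_bs((char *)(frame * 4096UL), bs_id, bs_page);
        pt[ptidx] = (frame * 4096UL) | 0x3;

        write_cr3((unsigned long)pd);                  /* flush stale TLB entries  */
    }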

Using Virtual Memory (continued)
1. Allocate pages in a backing store.
2. Map them to virtual pages using xmmap(). For example, xmmap(A, backingstore, 10) maps virtual pages A, A+1, A+2, ..., A+9 to consecutive locations in the backing store.
3. Then try accessing the virtual address.
4. If the page is not present, a page fault is generated.
5. Finally: flush the TLB contents by reloading CR3 with the page directory address.

A virtual address contains a page directory offset, a page table offset, and a byte offset:
  bits 31-22: page table number (page directory index)
  bits 21-12: page number (page table index)
  bits 11-0:  offset within the page

Page Directory/Table Entry Format:
  bits 31-12: PFA   - page frame address
  bits 11-9:  Avail - available to the OS
  bit 8:      0     - must be 0
  bit 7:      L     - PTE: must be 0; directory entry: 4MB page
  bit 6:      D     - dirty (PTE only; documented as undefined in a directory entry)
  bit 5:      A     - accessed
  bit 4:      PCD   - page cache disable (data on this page cannot be cached)
  bit 3:      PWT   - page write transparent (tell the external cache to use a write-through strategy for this page)
  bit 2:      U     - user accessible
  bit 1:      W     - writeable
  bit 0:      P     - present
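
In code, the address split and the entry layout can be expressed roughly as below; note that C bit-field ordering is implementation-defined, so this struct matches the hardware layout only on the usual GCC/x86 configuration, and your skeleton likely ships its own definitions.

    /* Decode a 32-bit virtual address. */
    #define PD_INDEX(va)   (((unsigned long)(va) >> 22) & 0x3ff)  /* bits 31-22 */
    #define PT_INDEX(va)   (((unsigned long)(va) >> 12) & 0x3ff)  /* bits 21-12 */
    #define PG_OFFSET(va)  ((unsigned long)(va) & 0xfff)          /* bits 11-0  */

    /* One 4-byte page directory/table entry, low bit first. */
    typedef struct {
        unsigned int pt_pres  : 1;   /* bit 0: P, present                       */
        unsigned int pt_write : 1;   /* bit 1: W, writeable                     */
        unsigned int pt_user  : 1;   /* bit 2: U, user accessible               */
        unsigned int pt_pwt   : 1;   /* bit 3: PWT, write-through               */
        unsigned int pt_pcd   : 1;   /* bit 4: PCD, cache disable               */
        unsigned int pt_acc   : 1;   /* bit 5: A, accessed                      */
        unsigned int pt_dirty : 1;   /* bit 6: D, dirty (PTE only)              */
        unsigned int pt_mbz   : 1;   /* bit 7: L, must be zero in a PTE         */
        unsigned int pt_zero  : 1;   /* bit 8: must be zero                     */
        unsigned int pt_avail : 3;   /* bits 9-11: available to the OS          */
        unsigned int pt_base  : 20;  /* bits 12-31: page frame address >> 12    */
    } pt_t;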

Last Lecture
- Controlling interrupts
- Test-and-set (atomic exchange)
- Compare-and-swap
- Load-linked and store-conditional
- Fetch-and-add and ticket locks

typedef struct __lock_t {
    int ticket;
    int turn;
} lock_t;

void lock_init(lock_t *lock) {
    lock->ticket = 0;
    lock->turn = 0;
}

void lock(lock_t *lock) {
    int myturn = FetchAndAdd(&lock->ticket);
    while (lock->turn != myturn)
        ; // spin
}

void unlock(lock_t *lock) {
    FetchAndAdd(&lock->turn);
}

Sleeping Instead of Spinning
On Solaris, the OS provides two calls:
- park() puts the calling thread to sleep.
- unpark(threadID) wakes the particular thread designated by threadID.

typedef struct __lock_t {
    int flag;
    int guard;
    queue_t *q;
} lock_t;

void lock_init(lock_t *m) {
    m->flag = 0;
    m->guard = 0;
    queue_init(m->q);
}

void lock(lock_t *m) {
    while (TestAndSet(&m->guard, 1) == 1)
        ; // acquire guard lock by spinning
    if (m->flag == 0) {
        m->flag = 1; // lock is acquired
        m->guard = 0;
    } else {
        queue_add(m->q, gettid());
        setpark();
        m->guard = 0;
        park();
    }
}

void unlock(lock_t *m) {
    while (TestAndSet(&m->guard, 1) == 1)
        ; // acquire guard lock by spinning
    if (queue_empty(m->q))
        m->flag = 0; // let go of lock; no one wants it
    else
        unpark(queue_remove(m->q)); // hold lock (for next thread!)
    m->guard = 0;
}

Different Support on Linux
On Linux, the OS provides two calls:
- futex_wait(address, expected) puts the calling thread to sleep, assuming the value at address equals expected. If it is not equal, the call returns immediately.
- futex_wake(address) wakes one thread that is waiting on the queue.

void lock(int *m) {
    int v;
    /* Bit 31 was clear, we got the mutex (fastpath) */
    if (atomic_bit_test_set(m, 31) == 0)
        return;
    atomic_increment(m);
    while (1) {
        if (atomic_bit_test_set(m, 31) == 0) {
            atomic_decrement(m);
            return;
        }
        /* We have to wait now. First make sure the futex value
           we are monitoring is truly negative (i.e. locked). */
        v = *m;
        if (v >= 0)
            continue;
        futex_wait(m, v);
    }
}

void unlock(int *m) {
    /* Adding 0x80000000 to the counter results in 0 if and
       only if there are no other interested threads */
    if (atomic_add_zero(m, 0x80000000))
        return;
    /* There are other threads waiting for this mutex,
       wake one of them up. */
    futex_wake(m);
}

Lock Usage Examples
- Concurrent counters
- Concurrent linked lists
- Concurrent queues
- Concurrent hash tables
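
The simplest of these, a concurrent counter, is just a value plus one lock; a minimal pthread sketch:

    #include <pthread.h>

    typedef struct {
        int value;
        pthread_mutex_t lock;
    } counter_t;

    void counter_init(counter_t *c) {
        c->value = 0;
        pthread_mutex_init(&c->lock, NULL);
    }

    void counter_increment(counter_t *c) {
        pthread_mutex_lock(&c->lock);   /* one lock guards all accesses */
        c->value++;
        pthread_mutex_unlock(&c->lock);
    }

    int counter_get(counter_t *c) {
        pthread_mutex_lock(&c->lock);
        int rc = c->value;
        pthread_mutex_unlock(&c->lock);
        return rc;
    }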

Concurrency Objectives
- Mutual exclusion (e.g., A and B don't run at the same time): solved with locks.
- Ordering (e.g., B runs after A): solved with condition variables.

Condition Variables
- CVs are more like channels than variables: B waits for a signal on the channel before running, and A sends the signal when it is time for B to run.
- A CV also has a queue of waiting threads.
- A CV is usually PAIRED with some kind of state variable.

Broken CVs
- wait(cond_t *cv): puts the caller to sleep (and on the queue).
- signal(cond_t *cv): wakes a single waiting thread (if >= 1 thread is waiting); if there is no waiting thread, just returns without doing anything.

When to Call wait
Option 1:
    if (!ready)
        wait(&cv);
    lock(&mutex);
    // critical section
    unlock(&mutex);

Option 2:
    lock(&mutex);
    // critical section
    if (!ready)
        wait(&cv);
    unlock(&mutex);

Correct CVs
- wait(cond_t *cv, mutex_t *lock): assumes the lock is held when wait() is called; puts the caller to sleep and releases the lock (atomically); when awoken, reacquires the lock before returning.
- signal(cond_t *cv): wakes a single waiting thread (if >= 1 thread is waiting); if there is no waiting thread, just returns without doing anything.
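
The POSIX condition-variable API follows exactly these semantics; the canonical usage pattern pairs the CV with a state variable and a mutex:

    #include <pthread.h>

    pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
    int ready = 0;                         /* the paired state variable   */

    void wait_for_ready(void) {
        pthread_mutex_lock(&m);
        while (!ready)                     /* re-check state after waking */
            pthread_cond_wait(&cv, &m);    /* releases m while sleeping   */
        pthread_mutex_unlock(&m);
    }

    void make_ready(void) {
        pthread_mutex_lock(&m);
        ready = 1;                         /* change state ...            */
        pthread_cond_signal(&cv);          /* ... then nudge the waiter   */
        pthread_mutex_unlock(&m);
    }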

Ordering Example: Join

pthread_t p1, p2;
printf("main: begin [balance = %d]\n", balance);
Pthread_create(&p1, NULL, mythread, "A");
Pthread_create(&p2, NULL, mythread, "B");
// join waits for the threads to finish
Pthread_join(p1, NULL);
Pthread_join(p2, NULL);
printf("main: done\n [balance: %d]\n [should: %d]\n", balance, max*2);
return 0;

Implementing Join with CVs (attempt 1)

void thread_exit() {
    Mutex_lock(&m);        // a
    Cond_signal(&c);       // b
    Mutex_unlock(&m);      // c
}

void thread_join() {
    Mutex_lock(&m);        // x
    Cond_wait(&c, &m);     // y
    Mutex_unlock(&m);      // z
}

Implementing Join with CVs (attempt 2)

void thread_exit() {
    done = 1;              // a
    Cond_signal(&c);       // b
}

void thread_join() {
    Mutex_lock(&m);        // w
    if (done == 0)         // x
        Cond_wait(&c, &m); // y
    Mutex_unlock(&m);      // z
}

Good Rule of Thumb
- Keep state in addition to CVs! CVs are used to nudge threads when state changes; if the state is already as needed, don't wait for a nudge.
- Always do wait and signal while holding the lock!
A join that follows both rules is sketched below.
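
Applying both rules to the attempts above: keep the done state variable, and perform wait and signal while holding the lock (a sketch using standard pthread calls):

    #include <pthread.h>

    int done = 0;
    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t  c = PTHREAD_COND_INITIALIZER;

    void thread_exit(void) {
        pthread_mutex_lock(&m);
        done = 1;                        // record the state change
        pthread_cond_signal(&c);         // nudge the joiner while holding the lock
        pthread_mutex_unlock(&m);
    }

    void thread_join(void) {
        pthread_mutex_lock(&m);
        while (done == 0)                // while, not if: re-check state on wakeup
            pthread_cond_wait(&c, &m);   // sleeps with m released, reacquires on return
        pthread_mutex_unlock(&m);
    }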