Processes and Virtual Memory

1 Processes and Virtual Memory

2 Outline
Interaction of processes with the virtual memory system
Page sharing and memory-mapped files

3 Processes and Virtual Memory
We have seen that the virtual memory system implements three main abstractions:
  Address space, physical memory management, swap
The OS implements the virtual memory hierarchy using these abstractions
The rest of the OS invokes the virtual memory system when the address space of the current process changes:
  Process creation (fork)
  Process execution (execv)
  Process termination (exit)
  Context switch
  Memory allocation or deallocation (sbrk, stack)

4 Fork
Fork creates a new address space
Copies the parent's address space structure
  All regions have the same locations, sizes, and permissions as in the parent's structure
Creates a new page table
  All valid pages must be copied from the parent to the child:
    Allocate memory frames for the text, data, heap, and stack regions
    Copy contents from the parent's regions
    Create a valid PTE for each allocated frame
  If a parent's page is in swap, create a shared swap page

5 Execv
Execv starts running a new program
Destroy the old address space structure
  Free all mapped page frames in the page table
    Must check reference counts in the coremap: if frames are shared, free a frame only when its reference count drops to zero
  Free all swap regions used by the process
  Free the space used by the address space structure and page table
Create a new address space structure
  Set the sizes of the text and data regions according to the executable
  Set the sizes of the heap and stack according to defaults
  Initialize a new page table with all invalid entries

6 Exit, Context Switch
Exit terminates a process
  The same as destroying the old address space in execv
Context switch
  Need to change the currently active page table
  Hardware-managed TLB (x86): change the page table register, flush the TLB
  Software-managed TLB (os161): flush the TLB; TLB misses are handled in software

7 Memory Allocation or Deallocation
When the stack or heap grows, a page is requested, the OS allocates a new page, and the thread continues
[Figure: virtual address space from page 0 to page 2^20 - 1, with text, data, and heap at the bottom, the stack at the top, and unallocated pages in between]

8 Heap Region
A user-level malloc implementation manages heap memory using a bitmap, free list, etc.
Malloc services allocation requests from a free pool
When the pool runs out of memory, malloc requests more heap memory from the OS using the sbrk() system call
sbrk(increment)
  Increases the heap size by increment bytes
  Grows the heap region associated with the address space
  Initializes the appropriate PTEs
  Returns the previous program break (the old end of the heap)

9 Stack Region
The stack contains local variables, parameters, and return addresses
Stack operations involve basic machine instructions such as push and pop
Unlike the heap, we cannot use a system call to grow the stack
The OS takes advantage of page faults to grow the stack
  When the faulting address is *close* to the stack, extend the stack region and run the standard page fault handler code
  How do you know whether an address is "close" to the stack?
Why is the stack grown automatically, while the heap requires a system call?
  Both heap and stack could be grown automatically, but requiring a system call for the heap allows the OS to detect errors

10 Page Sharing
By default, processes do not share any memory
  Each process has its own address space (page table)
  Strong isolation, but slow communication via system calls
Threads share all memory
  They have the same address space (same page table)
  Fast communication via memory accesses, but poor isolation (bugs in one thread can affect all threads)
Paging allows processes to share memory at page granularity
  Multiple pages share memory when they are mapped to the same frame

11 Page Sharing Benefits
Allows fast communication via shared memory pages
Provides good isolation; only share what is needed
  E.g., the child processes of a web server may wish to share only some data (e.g., the web server's data cache)
Applications
  Sharing text regions
  Copy-on-write pages
  Memory-mapped files

12 Sharing Text Regions
Multiple instances of a program, or programs using the same dynamically loaded libraries, can share all the pages in the text region
Sharing is achieved by mapping pages to the same frames
Note: must update all sharing pages (i.e., their PTEs) when the frame is evicted

13 Sharing Text Regions
[Figure: two thread address spaces whose page tables map their text (rx) pages to the same frames in physical memory; each stack (rw) and data (rw) page maps to a private frame]
Static vs. dynamic libraries: with dynamic libraries, the library's text pages can be shared across applications; static libraries duplicate the code in each executable

14 Copy-on-Write (COW) Page Sharing
The fork system call copies the parent's address space to the child
  Copying all pages is expensive
  However, processes can't tell the difference between copying and sharing unless pages are modified
With copy-on-write (COW), the child shares pages with the parent until the pages are modified

15 Copy-on-Write Implementation
Initialize a new page table for the child on fork()
  The child's page table is a copy of the parent's page table, but it shares the parent's page frames; the pages themselves are not copied
Mark all writeable page table entries in both page tables temporarily as "read-only"
  These pages are called COW pages
When a process modifies a COW page, this causes a "read-only" protection fault
  We take advantage of this protection fault to copy the page on the first write

16 Copy-on-Write Implementation
On protection fault:
  Allocate a new frame
  Copy the original frame to the new frame
  Remap the page in the address space from the old frame to the new frame
  Make the page in the address space writable and update the TLB entry
  Resume execution
Evicting shared pages:
  Must update all sharing pages to point to swap when the frame is evicted
On a protection fault, when the faulting page is made writeable, can the original page be made writable too?
  Check the number of references on the original frame (the number of pages that map to it); if there is only one reference left, make the corresponding page writeable as well, since that saves an extra fault

17 Memory-Mapped Files
Memory-mapped files allow accessing and sharing a file using a memory interface
  Threads read and write files using memory load/store instructions rather than read/write system calls
The mmap system call maps a file at a given offset contiguously within an address space
mmap(addr, length, prot, flags, fd, offset)
  addr: virtual address of the mapped region
  length: length of the mapped region
  prot: protection flags (readable, writeable, executable)
  flags: mapping options (e.g., shared vs. private)
  fd: descriptor of the file
  offset: offset in the file
After mmap, accessing addr + N refers to offset + N in file fd

18 Example of Memory-Mapped File
Any part of the file can be mapped to an arbitrary region in the address space
[Figure: virtual address space from page 0 to page 2^20 - 1, with text, data, and heap at the bottom and the stack at the top; a region of a file is mapped into the unallocated area between heap and stack. Shared libraries are mapped similarly.]

19 Memory-Mapped Files
Memory-mapped file operation:
  File data is loaded into memory on a page fault (demand paging)
  When a dirty page is evicted, the page frame is written back to the file
  Essentially, the file is used as the backing store instead of the swap area

20 Shared Memory
When threads map the same file region, they can share file data by reading and writing the mapped memory region
Page table entries can have different protection in each process
  E.g., one thread has read enabled, another has write enabled
Memory can be mapped at the same or at different virtual addresses in each process
  Different virtual addresses allow more flexibility, but shared pointers into the region become invalid

21 Summary
The virtual memory system is invoked when the address space of the current process changes
  TLB and page fault handling
  Process creation, process termination, context switch
  Memory allocation or deallocation
  Loading dynamic libraries
Paging enables sharing pages between processes
  Reduces memory footprint because pages can be shared
  Enables copy-on-write, memory-mapped files, and shared-memory applications

22 Think Time
Since processes can share memory with mmap(), what is the difference between processes and threads?
  mmap allows fine-grained sharing of selected parts of a process's address space, while threads share the entire address space.
Does the OS use page tables for its own memory?
  Normally, the OS accesses its memory using a simple translation mechanism that doesn't require page tables. However, the OS uses user page tables to copy data in or out of a user process when a system call is made.
Do the different processors in an SMP use a single page table or different page tables?
  Each processor/core uses its own page table because it has its own MMU. Cores can use the same page table, e.g., when different threads of one process are running on different cores.
Why is it hard to program with shared data structures that are mapped at different virtual addresses in two processes?
  Data structures often contain pointers, and pointer values are virtual addresses. A pointer value will not make sense if the data structure is mapped at a different virtual address in another process.
Why is it more efficient to access a memory-mapped file than to use a read or write system call?
  A read or write system call copies data between kernel and user space (kernel to user for read, user to kernel for write). A memory-mapped file access does not require this copy: on a page fault, data is copied from disk directly into the user program's memory, without requiring a file buffer in the kernel.

23 Kernel Address Space
Typically, the OS lies in low physical memory
How does the OS access this memory? Several options:
  Paging is turned off in kernel mode
    The OS can access physical memory directly, including page tables
    Problem: the OS needs to simulate paging, i.e., do address translation in software, to copy user parameters in or out on a system call, deal with the dirty bit, etc. (When paging is off the TLB is not used, so it does not need to be flushed unless a context switch occurs.)
  The OS uses a separate address space
    The OS sets up its own page table that maps to low physical memory
  The OS is mapped into the address space of each thread
    The OS uses the page table of the current thread to access its memory, or
    Hardware allows OS virtual addresses to bypass the TLB and page tables

24 OS Uses Separate Address Space
Pros: clean design; every process has the entire address space available
Cons: on system call entry/exit, requires switching the MMU context
  Changing active page tables (i.e., a TLB flush) is expensive
  Copying system call parameters in/out requires traversing page tables in software
[Diagram: P1 invokes a system call (thread switch + MMU switch); the system call handler invokes thread_yield (thread switch); later thread_yield returns in the handler and P2 returns from its system call (thread switch + MMU switch). CS = context switch.]

25 OS Mapped to Thread Address Space
The OS uses the current thread's address space
  Typically, the OS is mapped to high addresses in the virtual address space of each process
If the OS executes in the address space of the current thread, how does it protect itself?
  MMU hardware provides protection bits: the OS is located in a region that is accessible in privileged mode only (accessing this region in user mode causes a protection fault)
[Figure: one address space containing both user and system addresses, with text, data, and stack regions; PC and SP point into the user portion]

26 OS Mapped to Thread Address Space
Pros: on system call entry/exit, no MMU switch is needed
  Copying system call parameters in/out can reuse the paging hardware
Cons: the address space available to processes is reduced
  The page table of each process must be set up to map the OS code
  Alternatively, hardware may allow the OS to bypass page tables, e.g., MIPS
Notes on the diagram:
  In this method, if we make a system call and return to the same thread, no MMU switch occurs.
  Two thread switches happen in the diagram. In the first (at the mode switch), the user thread state is stored on P1's kernel stack; this state allows the process to resume execution in user space. In the second (when switching from P1's thread to P2's thread, which also switches the MMU context), the kernel thread state is stored in process 1's thread PCB (process control block); this state allows a thread in kernel mode to resume execution (e.g., when thread_yield returns).
  Interrupts can occur in user mode or kernel mode; in both cases, thread state is saved on the current thread's kernel stack.
[Diagram: P1 invokes a system call (thread switch); the system call handler invokes thread_yield (thread switch + MMU switch); later thread_yield returns and P2 returns from its system call. CS = context switch.]

