CS 162 Discussion Section Week 6 3/6 – 3/7
Today’s Section
- Administrivia (5 min)
- Quiz (5 min)
- Lecture Review (15 min)
- Worksheet and Discussion (25 min)
Administrivia
- Initial design due Thurs 3/6 at 11:59pm: submit proj2-initial-design
- Please also sign up for a design review if you haven’t already!
- Midterm 1 is 3/12, 4:00-5:30pm in 245 Li Ka Shing (A-L) and 105 Stanley (M-Z)
  - Covers lectures 1-12, readings, handouts, projects 1 & 2
Lecture Review
Demand Paging
- Modern programs require a lot of physical memory
  - DRAM per system is growing faster than 25%-30%/year
- But they don’t use all of their memory all of the time
  - 90-10 rule: programs spend 90% of their time in 10% of their code
  - Wasteful to require all of a user’s code to be in memory
- Solution: use main memory as a cache for disk/SSD
[Figure: caching hierarchy — processor core, main memory (DRAM), secondary storage (disk and SSD)]
Demand Paging Mechanisms
- The PTE helps us implement demand paging
  - Valid: page in memory; the PTE points at the physical page
  - Not valid: page not in memory; use info in the PTE to find it on disk when necessary
- Suppose a user references a page with an invalid PTE?
  - The Memory Management Unit (MMU) traps to the OS; the resulting trap is a “page fault”
- What does the OS do on a page fault?
  - Choose an old page to replace
  - If the old page was modified (“D=1”), write its contents back to disk
  - Change its PTE and any cached TLB entry to be invalid
  - Load the new page into memory from disk
  - Update the page table entry and invalidate the TLB entry for the new page
  - Continue the thread from the original faulting location
- The TLB entry for the new page will be loaded when the thread is continued!
- While pulling pages off disk for one process, the OS runs another process from the ready queue
  - The suspended process sits on a wait queue
Steps in Handling a Page Fault
What Factors Lead to Misses?
- Compulsory misses: pages that have never been paged into memory before
  - How might we remove these misses? Prefetching: load pages into memory before they are needed. Requires predicting the future somehow! More later.
- Capacity misses: not enough memory; must somehow increase the effective size
  - One option: increase the amount of DRAM (not a quick fix!)
  - Another option: if multiple processes are in memory, adjust the percentage of memory allocated to each one
- Conflict misses: technically, conflict misses don’t exist in virtual memory, since it is a “fully associative” cache
- Policy misses: pages were in memory, but were kicked out prematurely because of the replacement policy
  - How to fix? A better replacement policy
Page Replacement Policies
- Why do we care about the replacement policy?
  - Replacement is an issue with any cache, but it is particularly important with pages: the cost of being wrong is high, since we must go to disk. We must keep important pages in memory, not toss them out.
- FIFO (First In, First Out): throw out the oldest page
  - Fair: every page lives in memory for the same amount of time
  - Bad: throws out heavily used pages instead of infrequently used ones
- MIN (Minimum): replace the page that won’t be used for the longest time
  - Great, but we can’t really know the future…
  - Makes a good comparison case, however
- RANDOM: pick a random page for every replacement
  - Typical solution for TLBs; simple hardware
  - Unpredictable
Compare and Contrast
Consider the reference string A B C D A B C D A B C D under FIFO, MIN, and LRU.
Implementing LRU & Second Chance
- Perfect LRU:
  - Timestamp each page on every reference
  - Keep a list of pages ordered by time of reference
  - Too expensive to implement in reality, for many reasons
- Second Chance algorithm: approximate LRU
  - Replace an old page, not necessarily the oldest page
  - FIFO with a “use” bit
- Details:
  - A “use” bit per physical page, set when the page is accessed
  - On a page fault, check the page at the head of the queue
    - If use bit = 1: clear the bit and move the page to the tail (give the page a second chance!)
    - If use bit = 0: replace the page
  - Moving pages to the tail is still complex
Clock Algorithm
- A more efficient implementation of the Second Chance algorithm
  - Arrange physical pages in a circle with a single clock hand
- Details, on a page fault:
  - Check the use bit: 1 means used recently, so clear it and leave the page alone; 0 means the page is a candidate for replacement
  - Advance the clock hand (not in real time)
- Will it always find a page, or loop forever?
  - It always terminates: even if every use bit starts at 1, the sweep clears bits as it goes, so the hand finds a 0 use bit within two passes around the circle.
User → Kernel (System Call)
- Can’t let the inmate (user) get out of the padded cell on its own
  - That would defeat the purpose of protection!
  - So how does the user program get back into the kernel?
- System call: a voluntary procedure call into the kernel
  - Hardware provides a controlled user → kernel transition
- Can any kernel routine be called? No! Only specific ones
  - A system call ID is encoded into the system call instruction
  - The index forces a well-defined interface with the kernel
- Examples:
  - I/O: open, close, read, write, lseek
  - Files: delete, mkdir, rmdir, chown
  - Process: fork, exit, join
  - Network: socket create, select
User → Kernel (Exceptions: Traps and Interrupts)
- A system call instruction causes a synchronous exception (or “trap”)
  - In fact, it is often called a software “trap” instruction
- Other sources of synchronous exceptions:
  - Divide by zero, illegal instruction, bus error (bad address, e.g. an unaligned access)
  - Segmentation fault (address out of range)
  - Page fault
- Interrupts are asynchronous exceptions
  - Examples: timer, disk ready, network, etc.
  - Interrupts can be disabled; traps cannot!
- Summary: on a system call, exception, or interrupt:
  - Hardware enters kernel mode with interrupts disabled
  - It saves the PC, then jumps to the appropriate handler in the kernel
  - On some processors (x86), the processor also saves registers, changes stacks, etc.
How Does the User Deal with Timing?
- Blocking interface: “wait”
  - When requesting data (e.g., the read() system call), put the process to sleep until the data is ready
  - When writing data (e.g., the write() system call), put the process to sleep until the device is ready for the data
- Non-blocking interface: “don’t wait”
  - Returns quickly from a read or write request with a count of the bytes successfully transferred to the kernel
  - A read may return nothing; a write may write nothing
- Asynchronous interface: “tell me later”
  - When requesting data, take a pointer to the user’s buffer and return immediately; later, the kernel fills the buffer and notifies the user
  - When sending data, take a pointer to the user’s buffer and return immediately; later, the kernel takes the data and notifies the user
I/O Device Notifying the OS
- The OS needs to know when:
  - An I/O device has completed an operation
  - An I/O operation has encountered an error
- I/O interrupt: the device generates an interrupt whenever it needs service
  - Pro: handles unpredictable events well
  - Con: interrupts have relatively high overhead
- Polling: the OS periodically checks a device-specific status register
  - The I/O device puts completion information in the status register
  - Pro: low overhead
  - Con: may waste many cycles on polling if I/O operations are infrequent or unpredictable
- Actual devices combine both polling and interrupts
  - For instance, a high-bandwidth network adapter: interrupt for the first incoming packet, then poll for following packets until the hardware queues are empty
Worksheet…