Networks and Operating Systems: Exercise Session 3
Salvatore di Girolamo (TA)
Physical vs Virtual Memory
What is the difference between physical and logical addresses?
- Physical: address as seen by the memory unit; it refers to an actual address in memory.
- Logical: address issued by the processor; it does not refer to an actual existing address but to an abstract memory location.
How are they translated? When are logical addresses assigned? When are physical addresses assigned?
Segmentation
Generalize base + limit:
- Physical memory divided into segments
- Logical address = (segment id, offset)
Segment identifier supplied by:
- Explicit instruction reference
- Explicit processor segment register
- Implicit instruction or process state
[Figure: translation of logical address (s, d): s indexes the segment table; d is checked against the segment's limit, then added to its base to form the physical address; if the check fails: trap, addressing error]
Segment table: each entry has
- base: starting physical address of the segment
- limit: length of the segment
Segment-table base register (STBR): location of the current segment table in memory
Segment-table length register (STLR): size of the current segment table
Segmentation summary
- Fast context switch: simply reload STBR/STLR
- Fast translation: 2 loads, 2 compares; the segment table can be cached
- Segments can easily be shared: a segment can appear in multiple segment tables
- Physical layout must still be contiguous: (external) fragmentation is still a problem
Paging
Solves the contiguous physical memory problem: a process can always fit if there is available free memory.
- Divide physical memory into frames; size is a power of two, e.g., 4096 bytes
- Divide logical memory into pages of the same size
For a program of n pages in size:
1. Find and allocate n frames
2. Load the program
3. Set up the page table to translate logical pages to physical frames
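The steps above can be sketched as a minimal translation routine. This is an illustrative model, not any real MMU: the page table is a flat Python list mapping VPN to PFN, and all names are made up for the example.

```python
PAGE_SIZE = 4096  # 2**12, so the page offset is the low 12 bits

def translate(logical_addr, page_table):
    """Translate a logical address to a physical one via a flat page table.

    `page_table` is a list indexed by virtual page number (VPN); each
    entry is a physical frame number (PFN), or None if unmapped.
    """
    vpn = logical_addr // PAGE_SIZE      # which logical page
    offset = logical_addr % PAGE_SIZE    # position inside the page
    pfn = page_table[vpn]
    if pfn is None:
        raise MemoryError("page fault: page %d not mapped" % vpn)
    return pfn * PAGE_SIZE + offset

# A 3-page program: logical pages 0, 1, 2 placed in frames 5, 2, 7.
page_table = [5, 2, 7]
print(hex(translate(0x1004, page_table)))  # page 1, offset 4 -> 0x2004
```

Note that the offset passes through unchanged; only the page number is remapped.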
Recall: P6 Page tables (32-bit)
Logical address: VPN (20 bits) | VPO (12 bits); the VPN is split into VPN1 and VPN2 for a two-level walk.
- PDBR (Page Directory Base Register) points to the page directory
- VPN1 indexes the page directory; a PDE with p=1 points to a page table
- VPN2 indexes the page table; a PTE with p=1 gives the PFN
- Physical address: PFN (20 bits) | PFO (12 bits)
Pages, page directories, and page tables are all 4 kB.
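As a concrete sketch of the split, assuming the standard x86 layout where the 20-bit VPN divides into two 10-bit indexes (the function name is illustrative):

```python
def split_p6(vaddr):
    """Split a 32-bit P6 virtual address into (VPN1, VPN2, VPO).

    VPN1 (10 bits) indexes the page directory, VPN2 (10 bits) the page
    table, and VPO (12 bits) is the offset within the 4 kB page.
    """
    vpo  = vaddr & 0xFFF          # low 12 bits
    vpn2 = (vaddr >> 12) & 0x3FF  # next 10 bits
    vpn1 = (vaddr >> 22) & 0x3FF  # top 10 bits
    return vpn1, vpn2, vpo

print([hex(x) for x in split_p6(0xDEADBEEF)])  # ['0x37a', '0x2db', '0xeef']
```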
x86-64 paging
Virtual address: VPN1 | VPN2 | VPN3 | VPN4 (9 bits each) | VPO (12 bits); 48 bits = 256 TiB. PAE (Physical Address Extension) allows 52-bit physical addresses (4 PiB).
Four-level walk: PDBR -> Page Map Level 4 Table (PML4E) -> Page Directory Pointer Table (PDPE) -> Page Directory Table (PDE) -> Page Table (PTE).
Physical address: PFN (40 bits) | PFO (12 bits).
What is the main problem here? Every logical memory access needs multiple (up to four) additional physical memory accesses, one per level of the walk.
Solution? Cache page table entries (in the TLB).
Translating with the P6 TLB
1. Partition the VPN into TLBT (TLB tag, 16 bits) and TLBI (TLB index, 4 bits).
2. Is the PTE for this VPN cached in set TLBI?
3. Yes (TLB hit): check permissions, build the physical address.
4. No (TLB miss): read the PTE (and the PDE, if not cached) from memory and build the physical address; a partial TLB hit occurs when only the PDE is cached.
Virtual address: VPN (20 bits) | VPO (12 bits). Physical address: PFN | PFO.
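The steps above can be sketched as follows. The TLB is modeled as a plain dict keyed by (set index, tag); a real TLB is set-associative hardware, so this is only a behavioral sketch with illustrative names.

```python
def tlb_lookup(vaddr, tlb):
    """Look up a 32-bit virtual address in a 16-set TLB model.

    `tlb` maps (TLBI, TLBT) -> PFN. Returns the physical address,
    or None on a miss (when the page tables would have to be walked).
    """
    vpo = vaddr & 0xFFF
    vpn = vaddr >> 12          # 20-bit virtual page number
    tlbi = vpn & 0xF           # low 4 VPN bits select one of 16 sets
    tlbt = vpn >> 4            # remaining 16 bits are the tag
    pfn = tlb.get((tlbi, tlbt))
    if pfn is None:
        return None            # TLB miss: walk PDE + PTE in memory
    return (pfn << 12) | vpo

tlb = {(0x4, 0x1234): 0x56}    # one hypothetical cached mapping
print(hex(tlb_lookup(0x12344ABC, tlb)))  # hit -> 0x56abc
```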
Protection: P6 Page Table Entry (PTE)
Layout, high to low bits: page physical base address (31-12) | Avail (11-9) | G | D | A | CD | WT | U/S | R/W | P
- Page base address: 20 most significant bits of the physical page address (forces pages to be 4 kB aligned)
- Avail: available for system programmers
- G: global page (don't evict from TLB on task switch)
- D: dirty (set by MMU on writes)
- A: accessed (set by MMU on reads and writes)
- CD: cache disabled or enabled
- WT: write-through or write-back cache policy for this page
- U/S: user/supervisor
- R/W: read/write
- P: page is present in physical memory (1) or not (0)
Protection information typically includes: readable, writeable, executable (can fetch to i-cache). Reference bits are used for demand paging.
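A short decoder for the flags listed above. The bit positions follow the standard x86 PTE layout (P=0, R/W=1, U/S=2, WT=3, CD=4, A=5, D=6, G=8); the function and flag names are illustrative.

```python
# Bit positions of the P6 PTE flags (standard x86 layout).
PTE_P, PTE_RW, PTE_US, PTE_WT, PTE_CD, PTE_A, PTE_D, PTE_G = 0, 1, 2, 3, 4, 5, 6, 8

def decode_pte(pte):
    """Return (physical base address, flag dict) for a 32-bit PTE."""
    flags = {
        "present":  bool(pte & (1 << PTE_P)),
        "writable": bool(pte & (1 << PTE_RW)),
        "user":     bool(pte & (1 << PTE_US)),
        "accessed": bool(pte & (1 << PTE_A)),
        "dirty":    bool(pte & (1 << PTE_D)),
        "global":   bool(pte & (1 << PTE_G)),
    }
    base = pte & 0xFFFFF000   # top 20 bits: 4 kB-aligned page base
    return base, flags

base, flags = decode_pte(0x00ABC067)
print(hex(base), flags["present"], flags["dirty"])  # 0xabc000 True True
```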
Difference between user- and kernel-level threads?
Kernel-level threads: implemented in kernel space; the OS sees and schedules them.
- + Good for frequently blocking applications (one blocking thread does not block the process)
- + CPU time can be assigned according to the number of threads
- - Significant memory overhead in the kernel
- - Slow management operations
User-level threads: managed by user-level libraries; much cheaper than kernel threads.
- + Fast switching
- + No OS support required
- - A blocking call can block the entire process
- - Scheduling can be inefficient (the kernel is unaware of the threads)
Threads: user- to kernel-level thread mapping
Three models: one-to-one, many-to-one, many-to-many.
[Figure: each model shown with user-level threads mapped onto kernel threads running on CPU 0 and CPU 1; the process address space (text, data, BSS) is shared, while each thread has its own stack]
Scheduling
Life of a scheduler: what to schedule? When to schedule? For how long?
- The scheduler needs the CPU to decide what to schedule
- Any time spent in the scheduler is "wasted" time
- Want to minimize the overhead of decisions, to maximize utilization of the CPU
- But low overhead is no good if your scheduler picks the "wrong" things to run!
Trade-off between scheduler complexity/overhead and quality of the resulting schedule.
When to schedule? When:
1. A running process blocks (or calls yield()), e.g., initiates blocking I/O or waits on a child
2. A blocked process unblocks (I/O completes)
3. A running or waiting process terminates
4. An interrupt occurs (I/O or timer)
Cases 2 or 4 can involve preemption.
Scheduling: Preemption
Difference between preemptive and non-preemptive scheduling?
Preemptive: processes are dispatched and descheduled without warning. How? Timer interrupt, page fault, etc.
Non-preemptive: each process explicitly gives up the CPU. How? Starts I/O, executes a yield() call, etc.
What could be the problem with non-preemptive scheduling?
Round-robin
Simplest interactive algorithm: run all runnable tasks for a fixed quantum in turn.
Advantages:
- Easy to implement
- Easy to understand and analyze
- Higher turnaround time than SJF, but better response
Disadvantages:
- It's rarely what you want
- Treats all tasks the same
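The quantum-in-turn behavior can be sketched with a ready queue (a minimal model: all tasks runnable from the start, no arrivals; names are illustrative; relies on Python 3.7+ dict insertion order):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Run tasks {name: burst} round-robin; return the schedule as a
    list of (name, run_time) slices."""
    queue = deque(bursts.items())   # FIFO ready queue
    schedule = []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        schedule.append((name, run))
        if remaining > run:
            queue.append((name, remaining - run))  # back of the queue
    return schedule

print(round_robin({"A": 5, "B": 2}, quantum=2))
# [('A', 2), ('B', 2), ('A', 2), ('A', 1)]
```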
Priority
Very general class of scheduling algorithms:
- Assign every task a priority
- Dispatch the highest-priority runnable task
- Priorities can be dynamically changed
- Schedule processes with the same priority using round-robin, FCFS, etc.
[Figure: one queue of runnable tasks per priority level, from priority 1 up to priority 100]
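A minimal sketch of "dispatch highest priority, FCFS among equals", using a heap keyed by (priority, arrival index); smaller number means higher priority, as in the exercise later, and all names are illustrative:

```python
import heapq

def priority_dispatch(tasks):
    """Dispatch order for [(priority, name), ...] tasks.

    Smaller number = higher priority; ties fall back to FCFS
    (the insertion index acts as the tie-breaker).
    """
    heap = [(prio, i, name) for i, (prio, name) in enumerate(tasks)]
    heapq.heapify(heap)
    order = []
    while heap:
        _prio, _i, name = heapq.heappop(heap)
        order.append(name)
    return order

print(priority_dispatch([(3, "A"), (1, "B"), (3, "C")]))  # ['B', 'A', 'C']
```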
Scheduling: First-come first-served
Simplest algorithm!

Task  Execution time
A     24
B     3
C     3

What about the waiting time? Tasks arrive in order A, B, C:
A (0-24), B (24-27), C (27-30)
Waiting times: 0, 24, 27
Avg. = (0 + 24 + 27)/3 = 17
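The waiting-time arithmetic generalizes: under FCFS, each task waits for the sum of the bursts ahead of it. A small sketch (function name is illustrative):

```python
def fcfs_waiting(bursts):
    """Waiting time per task under FCFS, given bursts in arrival order."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # wait = total time of tasks ahead
        elapsed += burst
    return waits

waits = fcfs_waiting([24, 3, 3])        # arrival order A, B, C
print(waits, sum(waits) / len(waits))   # [0, 24, 27] 17.0
```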
Scheduling: First-come first-served
Different arrival order:

Task  Execution time
B     3
C     3
A     24

What about the waiting time? B (0-3), C (3-6), A (6-30)
Waiting times: A = 6, B = 0, C = 3
Avg. = (0 + 3 + 6)/3 = 3
Much better, but unpredictable.
Scheduling: Shortest-job first
Always run the process with the shortest execution time.
Optimal: minimizes waiting time (and hence turnaround time).

Task  Execution time
A     6
B     8
C     7
D     3

Resulting order: D, A, C, B.
What to do if new jobs arrive (interactive load)?
Shortest Remaining Time First: new, short jobs may preempt longer jobs already running.
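For the table above, running shortest-first gives the D, A, C, B order and the minimal average wait. A sketch (all jobs present at t=0; names are illustrative):

```python
def sjf_waiting(bursts):
    """Waiting time per job when jobs run shortest-first (all at t=0)."""
    waits, elapsed = {}, 0
    for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
        waits[name] = elapsed   # waits for all shorter jobs ahead
        elapsed += burst
    return waits

waits = sjf_waiting({"A": 6, "B": 8, "C": 7, "D": 3})
print(waits)  # {'D': 0, 'A': 3, 'C': 9, 'B': 16}; average = 7.0
```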
RT Scheduling: Rate-monotonic scheduling
Schedule periodic tasks by always running the task with the shortest period first.
Static (offline) scheduling algorithm.
Suppose:
- m tasks
- Ci is the execution time of the i-th task
- Pi is the period of the i-th task
Then RMS will find a feasible schedule if:
  sum_{i=1..m} Ci/Pi <= m (2^(1/m) - 1)
(Proof is beyond the scope of this course.)
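The utilization bound above is easy to check mechanically (a sketch; the function name is illustrative, and note the test is sufficient, not necessary):

```python
def rms_schedulable(tasks):
    """Sufficient RMS test: tasks = [(C_i, P_i), ...] is feasible
    if total utilization <= m * (2**(1/m) - 1)."""
    m = len(tasks)
    utilization = sum(c / p for c, p in tasks)
    return utilization <= m * (2 ** (1 / m) - 1)

# Two tasks: the bound is 2*(2**0.5 - 1) ~ 0.828
print(rms_schedulable([(1, 4), (2, 6)]))  # 0.25 + 0.33 = 0.58 -> True
print(rms_schedulable([(1, 2), (2, 4)]))  # utilization 1.0   -> False
```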
RT Scheduling: Earliest Deadline First
Schedule the task with the earliest deadline first (duh..).
Dynamic, online; tasks don't actually have to be periodic.
More complex: O(n) scheduling decisions.
EDF will find a feasible schedule if:
  sum_{i=1..m} Ci/Pi <= 1
Which is very handy. Assuming zero context-switch time...
Scheduling: exercise

Process  Burst time  Priority
P1       10          3
P2       1           1
P3       2           3
P4       1           4
P5       5           2

Notes:
- SJF: equal-burst-length processes are scheduled in FCFS order
- Priority: non-preemptive; a smaller priority number means higher priority; equal-priority processes are scheduled in FCFS order
- RR: quantum = 1

Time:  1  2  3  4  5  6  7  8  9  10 11 12 13 14 15 16 17 18 19
FCFS:  P1 P1 P1 P1 P1 P1 P1 P1 P1 P1 P2 P3 P3 P4 P5 P5 P5 P5 P5
SJF:   P2 P4 P3 P3 P5 P5 P5 P5 P5 P1 P1 P1 P1 P1 P1 P1 P1 P1 P1
RR:    P1 P2 P3 P4 P5 P1 P3 P5 P1 P5 P1 P5 P1 P5 P1 P1 P1 P1 P1
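The RR row can be checked mechanically with a per-tick simulator (a sketch assuming all processes arrive at t=0 and Python 3.7+ dict insertion order; the burst times are the ones from the exercise):

```python
from collections import deque

def rr_ticks(bursts, quantum=1):
    """Per-time-unit round-robin schedule, all arrivals at t=0."""
    queue = deque(bursts.items())
    ticks = []
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)
        ticks += [name] * run           # one entry per time unit
        if left > run:
            queue.append((name, left - run))
    return ticks

bursts = {"P1": 10, "P2": 1, "P3": 2, "P4": 1, "P5": 5}
print(" ".join(rr_ticks(bursts)))
# P1 P2 P3 P4 P5 P1 P3 P5 P1 P5 P1 P5 P1 P5 P1 P1 P1 P1 P1
```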