Networks and Operating Systems: Exercise Session 3


Salvatore di Girolamo (TA)

Physical vs Virtual Memory
What is the difference between physical and logical addresses?
- Physical: the address as seen by the memory unit; it refers to an actual location in memory.
- Logical: the address issued by the processor; it does not refer to an actual existing location but to an abstract memory location.
How are they translated? When are logical addresses assigned? When are physical addresses assigned?

Segmentation
Generalizes base + limit: physical memory is divided into segments, and a logical address is a pair (segment id, offset).
The segment identifier can be supplied by:
- an explicit instruction reference
- an explicit processor segment register
- implicitly, by the instruction or process state
Segment table: each entry holds a base (the starting physical address of the segment) and a limit (the length of the segment). Translation checks offset < limit (otherwise trap: addressing error) and computes physical address = base + offset.
Segment-table base register (STBR): location of the current segment table in memory.
Segment-table length register (STLR): current size of the segment table.
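The base/limit check above can be sketched as follows; the segment table contents here are hypothetical, chosen only to illustrate the translation and the addressing-error trap:

```python
# Sketch of segmentation-based address translation. The segment table
# below is made up for illustration (segment id -> (base, limit)).

class SegmentationFault(Exception):
    pass

segment_table = {
    0: (0x1000, 0x400),   # e.g., a code segment
    1: (0x8000, 0x1000),  # e.g., a data segment
}

def translate(segment_id, offset):
    """Return the physical address for (segment_id, offset)."""
    if segment_id not in segment_table:
        raise SegmentationFault("invalid segment id")
    base, limit = segment_table[segment_id]
    if offset >= limit:                     # trap: addressing error
        raise SegmentationFault("offset exceeds segment limit")
    return base + offset

print(hex(translate(1, 0x20)))  # -> 0x8020
```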

Segmentation summary
- Fast context switch: simply reload STBR/STLR
- Fast translation: 2 loads, 2 compares; the segment table can be cached
- Segments can easily be shared: a segment can appear in multiple segment tables
- But the physical layout of each segment must still be contiguous
- (External) fragmentation is still a problem

Paging
Solves the contiguous physical memory problem: a process can always fit if there is enough free memory.
- Divide physical memory into frames; the size is a power of two, e.g., 4096 bytes
- Divide logical memory into pages of the same size
For a program of n pages:
1. Find and allocate n frames
2. Load the program
3. Set up the page table to translate logical pages to physical frames
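A minimal sketch of the page-number/offset split with 4 KiB pages; the page-table contents are made up for illustration:

```python
# Single-level paging translation with 4 KiB pages. The page table
# (virtual page number -> physical frame number) is hypothetical.

PAGE_SIZE = 4096          # 2**12
OFFSET_BITS = 12

page_table = {0: 5, 1: 2, 2: 7}

def translate(vaddr):
    vpn = vaddr >> OFFSET_BITS          # virtual page number
    offset = vaddr & (PAGE_SIZE - 1)    # offset passes through unchanged
    pfn = page_table[vpn]               # missing entry would be a page fault
    return (pfn << OFFSET_BITS) | offset

print(hex(translate(0x1ABC)))  # VPN 1 maps to frame 2, so -> 0x2abc
```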

Recall: P6 page tables (32-bit)
The 32-bit logical address splits into a 20-bit VPN and a 12-bit VPO. The VPN splits further into VPN1 and VPN2 (10 bits each): VPN1 indexes the page directory, VPN2 indexes a page table. The PDBR (page directory base register) locates the page directory; a PDE with p=1 points to a page table; a PTE with p=1 gives the PFN, which is combined with the page offset (PFO = VPO) to address the data page. Pages, page directories, and page tables are all 4 KB.

x86-64 paging
The 48-bit virtual address splits into four 9-bit indices (VPN1-VPN4) and a 12-bit VPO. Translation walks four tables, starting from the PDBR: the Page Map Level 4 Table (PML4E), the Page Directory Pointer Table (PDPE), the Page Directory Table (PDE), and the Page Table (PTE). The resulting 40-bit PFN is combined with the 12-bit PFO to form the physical address.
48 bits cover 256 TiB of virtual address space; PAE (Physical Address Extension) allows 52-bit physical addresses (4 PiB).
What is the main problem here? Every logical memory access needs additional physical memory accesses, one per table level.
Solution? Cache page table entries (in the TLB).
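The four-level address split can be sketched as below; the example address is arbitrary, and only the bit arithmetic follows the slide:

```python
# Split a 48-bit x86-64 virtual address into four 9-bit table indices
# plus a 12-bit page offset.

def split_vaddr(vaddr):
    """Return (vpn1, vpn2, vpn3, vpn4, offset) for a 48-bit address."""
    offset = vaddr & 0xFFF
    vpns = []
    v = vaddr >> 12
    for _ in range(4):
        vpns.append(v & 0x1FF)   # each level indexes with 9 bits
        v >>= 9
    vpn4, vpn3, vpn2, vpn1 = vpns  # vpns[0] is the lowest-level index
    return vpn1, vpn2, vpn3, vpn4, offset

print(split_vaddr(0x7F_FFFF_F000))
```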

Translating with the P6 TLB
(TLBT, TLBI = TLB tag and TLB index)
1. Partition the 20-bit VPN into a 16-bit TLBT and a 4-bit TLBI.
2. Is the PTE for this VPN cached in set TLBI?
- Yes (TLB hit): check permissions, build the physical address from the PFN and the offset.
- No (TLB miss): read the PTE (and the PDE, if it is not cached either; PDE cached but PTE not is a partial TLB hit) from memory via page table translation, then build the physical address.
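A sketch of the set-indexed lookup, assuming a 16-set TLB with one entry per set for simplicity; the cached entry is invented for illustration:

```python
# P6-style TLB lookup: the 20-bit VPN splits into a 4-bit set index
# (TLBI, low bits) and a 16-bit tag (TLBT, high bits).

TLB_SETS = 16   # 4 index bits

# Hypothetical contents, one entry per set: set index -> (tag, pfn)
tlb = {3: (0x00AB, 0x1234)}

def tlb_lookup(vpn):
    """Return the cached PFN on a hit, or None on a miss."""
    tlbi = vpn & (TLB_SETS - 1)   # low 4 bits select the set
    tlbt = vpn >> 4               # remaining 16 bits are the tag
    entry = tlb.get(tlbi)
    if entry is not None and entry[0] == tlbt:
        return entry[1]           # TLB hit
    return None                   # TLB miss: walk the page tables

vpn = (0x00AB << 4) | 3
assert tlb_lookup(vpn) == 0x1234
```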

Protection: P6 Page Table Entry (PTE)
Layout (bit 31 down to bit 0): page physical base address (bits 31-12), Avail (bits 11-9), G, D, A, CD, WT, U/S, R/W, P.
- Page base address: 20 most significant bits of the physical page address (forces pages to be 4 KB aligned)
- Avail: available for system programmers
- G: global page (don't evict from TLB on task switch)
- D: dirty (set by MMU on writes)
- A: accessed (set by MMU on reads and writes)
- CD: cache disabled or enabled
- WT: write-through or write-back cache policy for this page
- U/S: user/supervisor
- R/W: read/write
- P: page is present in physical memory (1) or not (0)
Protection information typically includes: readable, writeable, executable (can fetch to i-cache). Reference bits are used for demand paging.
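Decoding these fields is plain bit masking. The sketch below assumes the standard x86 bit positions (P at bit 0, R/W at 1, U/S at 2, WT at 3, CD at 4, A at 5, D at 6, G at 8); the example PTE value is made up:

```python
# Decode a P6-style 32-bit page table entry into its fields.

def decode_pte(pte):
    return {
        "base": pte & 0xFFFFF000,       # bits 31-12: physical page base
        "G":   (pte >> 8) & 1,          # global page
        "D":   (pte >> 6) & 1,          # dirty
        "A":   (pte >> 5) & 1,          # accessed
        "CD":  (pte >> 4) & 1,          # cache disabled
        "WT":  (pte >> 3) & 1,          # write-through
        "U/S": (pte >> 2) & 1,          # user/supervisor
        "R/W": (pte >> 1) & 1,          # read/write
        "P":    pte       & 1,          # present
    }

pte = 0x12345000 | (1 << 6) | (1 << 1) | 1   # dirty, writable, present
fields = decode_pte(pte)
assert fields["base"] == 0x12345000
assert fields["D"] == 1 and fields["R/W"] == 1 and fields["P"] == 1
```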

Difference between user-level and kernel-level threads?
Kernel-level threads: implemented in kernel space; the OS sees and schedules them.
- Good for frequently blocking applications (a blocking thread does not block the whole process)
- CPU time can be assigned according to the number of threads
- Slow management operations (creation and switching require system calls)
- Significant memory overhead in the kernel
User-level threads: managed by user-level libraries; much cheaper than kernel threads.
- Fast switching, fast management, no OS support required
- Scheduling can be inefficient (the kernel is unaware of the threads)
- A blocking system call can block the entire process

Threads: mapping user-level to kernel-level threads
Three models: one-to-one, many-to-one, many-to-many.
[Diagrams: each model shows user-space threads mapped onto kernel threads running on CPU 0 and CPU 1. The address-space view shows text, data, and BSS shared by all threads, with a separate stack per thread (thread 1, 2, 3 stacks).]

Scheduling
Life of a scheduler: what to schedule? when to schedule? for how long?
The scheduler needs the CPU to decide what to schedule, so any time spent in the scheduler is "wasted" time. We want to minimize the overhead of scheduling decisions to maximize CPU utilization; but low overhead is no good if the scheduler picks the "wrong" things to run!
Trade-off: scheduler complexity/overhead vs. quality of the resulting schedule.

When to schedule?
1. A running process blocks (or calls yield()), e.g., it initiates blocking I/O or waits on a child
2. A blocked process unblocks, e.g., its I/O completes
3. A running or waiting process terminates
4. An interrupt occurs (I/O or timer)
Cases 2 and 4 can involve preemption.

Scheduling: Preemption
Difference between preemptive and non-preemptive scheduling?
Preemptive: processes are dispatched and descheduled without warning. How? Timer interrupt, page fault, etc.
Non-preemptive: each process explicitly gives up the CPU to the scheduler. How? Starts I/O, executes a yield() call, etc.
What could be the problem with non-preemptive scheduling? A process that never blocks or yields can monopolize the CPU.

Round-robin
Simplest interactive algorithm: run all runnable tasks in turn, for a fixed quantum each.
Advantages:
- Easy to implement, understand, and analyze
- Higher turnaround time than SJF, but better response time
Disadvantages:
- It's rarely what you want: it treats all tasks the same
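The quantum-by-quantum behavior can be sketched with a simple queue; the task names and burst times below are illustrative:

```python
# Minimal round-robin simulation: each task runs for at most one
# quantum per turn, then goes to the back of the queue if unfinished.

from collections import deque

def round_robin(bursts, quantum):
    """bursts: dict task -> remaining time. Returns the run order."""
    queue = deque(bursts.keys())
    remaining = dict(bursts)
    order = []
    while queue:
        task = queue.popleft()
        order.append(task)
        remaining[task] -= min(quantum, remaining[task])
        if remaining[task] > 0:
            queue.append(task)     # not finished: back of the queue
    return order

print(round_robin({"A": 3, "B": 1, "C": 2}, quantum=1))
```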

Priority
A very general class of scheduling algorithms:
- Assign every task a priority
- Dispatch the highest-priority runnable task
- Priorities can be changed dynamically
- Schedule processes with the same priority using round-robin, FCFS, etc.
[Diagram: one queue of runnable tasks per priority level, from priority 100 down to priority 1.]
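One common way to implement this is a heap keyed on priority, with an insertion counter so that equal priorities fall back to FCFS; the task names and priority values below are hypothetical:

```python
# Priority scheduler sketch using a heap; a smaller number means a
# higher priority here.

import heapq
from itertools import count

class PriorityScheduler:
    def __init__(self):
        self._heap = []
        self._counter = count()   # tie-breaker: FCFS among equal priorities

    def add(self, priority, task):
        heapq.heappush(self._heap, (priority, next(self._counter), task))

    def dispatch(self):
        """Remove and return the highest-priority runnable task."""
        return heapq.heappop(self._heap)[2]

s = PriorityScheduler()
s.add(3, "editor")
s.add(1, "interrupt handler")
s.add(3, "compiler")
assert s.dispatch() == "interrupt handler"
assert s.dispatch() == "editor"      # FCFS among the priority-3 tasks
```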

Scheduling: First-come first-served
Simplest algorithm!

Task  Execution time
A     24
B     3
C     3

What about the waiting time? Waiting times: A = 0, B = 24, C = 27.
Avg. = (0 + 24 + 27) / 3 = 17
Timeline: A [0, 24), B [24, 27), C [27, 30)

Scheduling: First-come first-served
Different arrival order:

Task  Execution time
B     3
C     3
A     24

What about the waiting time? Waiting times: A = 6, B = 0, C = 3.
Avg. = (0 + 3 + 6) / 3 = 3
Much better, but unpredictable: the average depends entirely on the arrival order.
Timeline: B [0, 3), C [3, 6), A [6, 30)
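The waiting-time arithmetic on these two slides can be checked mechanically:

```python
# FCFS waiting times: each task waits for the total burst time of
# everything that arrived before it.

def fcfs_waiting_times(bursts):
    """bursts: list of (task, burst) in arrival order -> dict of waits."""
    waits, elapsed = {}, 0
    for task, burst in bursts:
        waits[task] = elapsed
        elapsed += burst
    return waits

w1 = fcfs_waiting_times([("A", 24), ("B", 3), ("C", 3)])
w2 = fcfs_waiting_times([("B", 3), ("C", 3), ("A", 24)])
assert sum(w1.values()) / 3 == 17   # long job first: average wait 17
assert sum(w2.values()) / 3 == 3    # long job last: average wait 3
```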

Scheduling: Shortest-job first
Always run the process with the shortest execution time.

Task  Execution time
A     6
B     8
C     7
D     3

Optimal: minimizes waiting time (and hence turnaround time). Resulting order: D, A, C, B.
What to do if new jobs arrive (interactive load)? Shortest Remaining Time First: new, short jobs may preempt longer jobs already running.
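The SJF order and average waiting time for this task set can be computed directly:

```python
# Non-preemptive SJF: sort by burst time, then accumulate waits.

def sjf(bursts):
    """bursts: dict task -> burst time. Returns (run order, avg wait)."""
    order = sorted(bursts, key=lambda t: bursts[t])  # shortest first
    wait, elapsed = 0, 0
    for task in order:
        wait += elapsed          # this task waited for all earlier ones
        elapsed += bursts[task]
    return order, wait / len(bursts)

order, avg = sjf({"A": 6, "B": 8, "C": 7, "D": 3})
assert order == ["D", "A", "C", "B"]
assert avg == (0 + 3 + 9 + 16) / 4   # = 7
```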

RT Scheduling: Rate-monotonic scheduling
Schedule periodic tasks by always running the task with the shortest period first.
Static (offline) scheduling algorithm.
Suppose there are m tasks, where Ci is the execution time of the i'th task and Pi is its period. Then RMS will find a feasible schedule if:
  C1/P1 + C2/P2 + ... + Cm/Pm <= m * (2^(1/m) - 1)
(Proof is beyond the scope of this course.)
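The utilization bound can be checked in a few lines; the task sets below are invented examples:

```python
# RMS schedulability test: a periodic task set (Ci, Pi) is guaranteed
# schedulable if sum(Ci/Pi) <= m * (2**(1/m) - 1).

def rms_schedulable(tasks):
    """tasks: list of (C, P). True if the bound guarantees a schedule."""
    m = len(tasks)
    utilization = sum(c / p for c, p in tasks)
    bound = m * (2 ** (1 / m) - 1)
    return utilization <= bound

# For two tasks the bound is 2 * (sqrt(2) - 1), about 0.828
assert rms_schedulable([(1, 4), (2, 6)])       # U ~ 0.583: guaranteed
assert not rms_schedulable([(2, 4), (3, 6)])   # U = 1.0: no guarantee
```

Note the test is sufficient but not necessary: a set that fails the bound may still happen to be schedulable.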

RT Scheduling: Earliest Deadline First
Schedule the task with the earliest deadline first.
Dynamic, online; tasks don't actually have to be periodic.
More complex: O(n) per scheduling decision.
EDF will find a feasible schedule if:
  C1/P1 + C2/P2 + ... + Cm/Pm <= 1
which is very handy (assuming zero context-switch time).
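The dispatch decision itself is just a minimum over deadlines; the ready set below is hypothetical:

```python
# EDF dispatch sketch: among ready tasks, pick the one with the
# earliest absolute deadline.

def edf_pick(ready):
    """ready: dict task -> absolute deadline. Returns the task to run."""
    return min(ready, key=ready.get)

assert edf_pick({"A": 12, "B": 7, "C": 30}) == "B"
```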

Scheduling: exercise

Process  Burst time  Priority
P1       10          3
P2       1           1
P3       2           3
P4       1           4
P5       5           2

Notes:
- SJF: equal-burst-length processes are scheduled in FCFS order
- Priority: non-preemptive; a small priority number means high priority; equal-priority processes are scheduled in FCFS order
- RR: quantum = 1

Schedules (time slots 1-19):
FCFS: P1 P1 P1 P1 P1 P1 P1 P1 P1 P1 P2 P3 P3 P4 P5 P5 P5 P5 P5
SJF:  P2 P4 P3 P3 P5 P5 P5 P5 P5 P1 P1 P1 P1 P1 P1 P1 P1 P1 P1
RR:   P1 P2 P3 P4 P5 P1 P3 P5 P1 P5 P1 P5 P1 P5 P1 P1 P1 P1 P1
Source: https://prof.hti.bfh.ch/myf1/opsys1/Exercises/Chap5/Problems1.sol.html
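The FCFS and RR rows can be reproduced from the burst times alone (quantum 1 for RR; Python 3.7+ dict order stands in for arrival order):

```python
# Reproduce the FCFS and round-robin schedules of the exercise from
# the burst times on the slide.

from collections import deque

bursts = {"P1": 10, "P2": 1, "P3": 2, "P4": 1, "P5": 5}

def fcfs(bursts):
    order = []
    for task, burst in bursts.items():
        order += [task] * burst          # run each task to completion
    return order

def rr(bursts, quantum=1):
    queue, remaining, order = deque(bursts), dict(bursts), []
    while queue:
        task = queue.popleft()
        run = min(quantum, remaining[task])
        order += [task] * run
        remaining[task] -= run
        if remaining[task] > 0:
            queue.append(task)           # unfinished: back of the queue
    return order

assert fcfs(bursts)[:11] == ["P1"] * 10 + ["P2"]
assert rr(bursts)[:8] == ["P1", "P2", "P3", "P4", "P5", "P1", "P3", "P5"]
```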