Chien-Chung Shen CIS/UD


Review of Final Exam
Chien-Chung Shen, CIS/UD (cshen@udel.edu)

Semaphore Usage: Summary
Mutual exclusion: a binary semaphore used as a mutex lock.
Controlled access to a resource consisting of a finite number of instances: a counting semaphore, initialized to the number of instances available.
Synchronization: for two concurrently running threads T1 and T2 with statements S1 and S2, respectively, require that S2 execute only after S1 has completed (even on one CPU):
Semaphore s = 0;
T1: S1; signal(s);
T2: wait(s); S2;
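The S1-before-S2 ordering pattern above can be sketched with Python's `threading.Semaphore` (thread function names are illustrative; `release`/`acquire` play the roles of `signal`/`wait`):

```python
import threading

s = threading.Semaphore(0)   # initialized to 0: acquire() blocks until release()
log = []

def t1():
    log.append("S1")         # statement S1
    s.release()              # signal(s): allow T2 to proceed

def t2():
    s.acquire()              # wait(s): blocks until T1 has executed S1
    log.append("S2")         # statement S2

# Start T2 first on purpose: the semaphore still enforces S1 before S2
b = threading.Thread(target=t2)
a = threading.Thread(target=t1)
b.start(); a.start()
a.join(); b.join()
print(log)                   # always ['S1', 'S2'], regardless of start order
```

Because `s` starts at 0, T2's `acquire` cannot succeed until T1's `release`, so the order is enforced even though T2 is started first.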

Bounded Buffer with Multiple Producers/Consumers
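A minimal sketch of the bounded buffer with multiple producers and consumers, using the standard three-semaphore pattern (one mutex, two counting semaphores); the buffer size, item values, and thread counts are illustrative:

```python
import threading
from collections import deque

N = 4                               # buffer capacity (illustrative)
buf = deque()
mutex = threading.Semaphore(1)      # binary semaphore: protects the buffer
empty = threading.Semaphore(N)      # counting semaphore: free slots
full = threading.Semaphore(0)       # counting semaphore: filled slots

def producer(items):
    for item in items:
        empty.acquire()             # wait for a free slot
        mutex.acquire()
        buf.append(item)
        mutex.release()
        full.release()              # announce one more filled slot

consumed = []

def consumer(n):
    for _ in range(n):
        full.acquire()              # wait for a filled slot
        mutex.acquire()
        consumed.append(buf.popleft())
        mutex.release()
        empty.release()             # announce one more free slot

producers = [threading.Thread(target=producer,
                              args=([p * 10 + i for i in range(5)],))
             for p in range(2)]
consumers = [threading.Thread(target=consumer, args=(5,)) for _ in range(2)]
for t in producers + consumers:
    t.start()
for t in producers + consumers:
    t.join()
print(sorted(consumed))             # every produced item is consumed exactly once
```

The interleaving varies between runs, but the semaphores guarantee no item is lost or duplicated and the buffer never exceeds N items.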

Buddy Allocation
To satisfy a request, the allocator searches for free space by recursively dividing a free block in half until it reaches the smallest block that is still big enough to accommodate the request (William Stallings: Operating Systems).
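The recursive-halving step can be sketched as follows (a toy function, not a full allocator: it only computes which power-of-two block size the request would receive):

```python
def buddy_split(block_size, request, min_size=1):
    """Recursively halve a free block until halving again would make it
    too small for the request; return the block size handed out."""
    if block_size < request:
        return None                 # this block cannot satisfy the request
    half = block_size // 2
    if half >= request and half >= min_size:
        return buddy_split(half, request, min_size)   # split and recurse
    return block_size               # smallest power-of-two block that fits

print(buddy_split(256, 33))   # 64: 256 -> 128 -> 64; 32 would be too small
print(buddy_split(256, 64))   # 64: an exact power-of-two fit
print(buddy_split(16, 33))    # None: block smaller than the request
```

A real buddy allocator also tracks free lists per size class and coalesces freed "buddy" blocks back into larger ones; this sketch shows only the splitting rule.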

Hard Links
$ ln Chapter3 Chapter3.hard
$ ls -il        (show attributes of files)
Two names for the same file.

Get Info about Files
[cisc361:/usa/cshen/361 1077] echo hello > foo
[cisc361:/usa/cshen/361 1078] more foo
hello
[cisc361:/usa/cshen/361 1079] stat foo
  File: 'foo'
  Size: 6    Blocks: 3    IO Block: 1048576    regular file
Device: 29h/41d    Inode: 73985    Links: 1
Access: (0664/-rw-rw-r--)    Uid: (4157/cshen)    Gid: (4157/cshen)
Access: 2018-12-02 23:06:59.243498589 -0400
Modify: 2018-12-02 23:08:42.979511362 -0400
Change: 2018-12-02 23:08:42.979511362 -0400
Birth: -
[cisc361:/usa/cshen/361 1080] ls -i foo
73985 foo
All the information about a file is stored in its (persistent) inode structure.

Soft/Symbolic Links
$ ln -s Chapter3 Chapter3.soft
A soft link is itself a file, containing the pathname of the file that it symbolically links to.
Three file types: regular file (-), directory (d), symbolic link (l).

Symbolic (Soft) Links
A symbolic link is actually a file itself, of a different type, containing the pathname of the linked-to file.
d: directory; -: regular file; l: symbolic link
Dangling references are possible (the linked-to file may be removed while the link file remains).
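The hard-link and soft-link behavior from the last few slides can be reproduced with Python's `os` module (the file names mirror the shell examples; creating a symlink assumes a POSIX system):

```python
import os
import tempfile

d = tempfile.mkdtemp()
target = os.path.join(d, "Chapter3")
with open(target, "w") as f:
    f.write("hello\n")

hard = os.path.join(d, "Chapter3.hard")
os.link(target, hard)               # like: $ ln Chapter3 Chapter3.hard
soft = os.path.join(d, "Chapter3.soft")
os.symlink(target, soft)            # like: $ ln -s Chapter3 Chapter3.soft

# Hard link: two directory entries naming the same inode; link count is 2
print(os.stat(target).st_ino == os.stat(hard).st_ino)   # True
print(os.stat(target).st_nlink)                          # 2

# Soft link: a separate file (type "l") whose content is a pathname
print(os.path.islink(soft))                              # True
print(os.readlink(soft) == target)                       # True
```

Note that `os.stat` follows symlinks by default; `os.lstat` would report on the link file itself, whose size is the length of the stored pathname.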

Fundamental Issues
A, B, and C are located at the corners of an isosceles triangle. A lights a torch; upon seeing it, B and C press buzzers. What would A, B, and C hear?
Observers' views of the global state of the system depend on their observation points; the reason is that communication is not instantaneous.
One fundamental issue of distributed systems is the lack of a global system state.
Two different watches may read different times (e.g., 12:01 PM vs. 11:57 AM): real clocks drift.

Fundamental Issues
Since we cannot count on simultaneous observations of global states in distributed systems, we need to find a property on which we can depend.
Distributed systems are causal: the cause precedes the effect; the sending of a message precedes the receipt of that message.

Space-Time Diagram
(Figure: events of several processes plotted against time.) Are events p1 and r4 causally related or concurrent? What about p3 and q3?


Lamport Timestamps Example
Events occurring at three processors; the local logical clocks are initialized to 0. (Figure: a space-time diagram annotated with the Lamport timestamp of each event.)
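A minimal sketch of the Lamport clock update rules (function names are illustrative): increment the local clock on a send and attach the timestamp; on a receive, take the maximum of the local clock and the message timestamp, then increment.

```python
def lamport_send(clock):
    """Local event + send: tick the clock; the message carries the new value."""
    clock += 1
    return clock, clock          # (new local clock, timestamp on the message)

def lamport_recv(clock, msg_ts):
    """Receive: jump past both the local clock and the message timestamp."""
    return max(clock, msg_ts) + 1

# Two processes whose clocks start at 0
p, q = 0, 0
p, ts = lamport_send(p)          # p sends:    p = 1, message stamped 1
q = lamport_recv(q, ts)          # q receives: q = max(0, 1) + 1 = 2
q, ts = lamport_send(q)          # q replies:  q = 3, message stamped 3
p = lamport_recv(p, ts)          # p receives: p = max(1, 3) + 1 = 4
print(p, q)                      # 4 3
```

The update rule guarantees that if event a causally precedes event b, then timestamp(a) < timestamp(b); the converse does not hold, which is why concurrent events can still get ordered timestamps.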

Round Robin
Instead of running jobs to completion, RR runs a job for a (small) time slice (the scheduling quantum) and then switches to the next job in the ready queue; it repeatedly does so until all jobs are finished.
Time slicing: the length of the time slice must be a multiple of the timer-interrupt period.
Example: SJF average response time = 5; RR average response time = 1.
The length of the time slice is critical: for response time, the shorter the better; but if it is too short, the overhead of context switching dominates.
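The response-time figures above can be reproduced with a small simulation, assuming the classic workload of three jobs of length 5 that all arrive at time 0 and an RR quantum of 1 (the workload is an assumption; the slide gives only the resulting averages):

```python
def avg_response_sjf(lengths):
    """SJF, all jobs arriving at t=0: run to completion in sorted order.
    Response time = time of first run minus arrival time (here, 0)."""
    t, total = 0, 0
    for n in sorted(lengths):
        total += t
        t += n
    return total / len(lengths)

def avg_response_rr(lengths, quantum=1):
    """Round robin: record when each job first gets the CPU."""
    first_run = {}
    remaining = list(lengths)
    ready = list(range(len(lengths)))
    t = 0
    while ready:
        j = ready.pop(0)
        first_run.setdefault(j, t)        # first time job j runs
        run = min(quantum, remaining[j])
        remaining[j] -= run
        t += run
        if remaining[j] > 0:
            ready.append(j)               # back to the tail of the queue
    return sum(first_run.values()) / len(lengths)

print(avg_response_sjf([5, 5, 5]))        # 5.0  = (0 + 5 + 10) / 3
print(avg_response_rr([5, 5, 5], 1))      # 1.0  = (0 + 1 + 2) / 3
```

With quantum 1, every job gets its first run within the first cycle, which is exactly why RR shines on response time while paying for it in context switches.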

Incorporating I/O
A process is blocked while waiting for I/O completion; if nothing else runs, this is a poor use of the CPU.
When the I/O completes, an interrupt is raised, and the OS runs and moves the process that issued the I/O from the blocked state back to the ready state.
Overlapping CPU and I/O yields higher CPU utilization: interactive jobs get run frequently, and while they are performing I/O, other CPU-intensive jobs run (e.g., a job B that has no I/O).

Summary - History is the Guide
Rule 1: If Priority(A) > Priority(B), A runs (B doesn't).
Rule 2: If Priority(A) = Priority(B), A and B run in RR.
Rule 3: When a job enters the system, it is placed at the highest priority (the topmost queue).
Rule 4: Once a job uses up its time allotment at a given level (regardless of how many times it has given up the CPU), its priority is reduced (i.e., it moves down one queue).
Rule 5: After some time period S, move all the jobs in the system to the topmost queue.
Instead of demanding a priori knowledge of a job, MLFQ observes the execution of a job and prioritizes it accordingly. It achieves the best of both worlds: it delivers excellent overall performance (similar to SJF/STCF) for interactive jobs, and it is fair and makes progress for long-running CPU-intensive workloads.
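The five rules can be sketched as a toy simulator (the quanta, allotment, boost period, and job mix below are all illustrative, not values from the text):

```python
from collections import deque

def mlfq(jobs, quanta=(1, 2), allot=2, boost=10):
    """Toy MLFQ: jobs is {name: run time}; queue 0 is the highest priority.
    Returns the schedule as a list of (start_time, job, run_length)."""
    nq = len(quanta)
    prio = {j: 0 for j in jobs}          # Rule 3: new jobs enter the top queue
    used = {j: 0 for j in jobs}          # CPU time used at the current level
    remaining = dict(jobs)
    order = deque(jobs)                  # Rule 2: rotation gives RR in a level
    t, next_boost, schedule = 0, boost, []
    while any(remaining[j] > 0 for j in jobs):
        if boost and t >= next_boost:    # Rule 5: periodic priority boost
            prio = {j: 0 for j in jobs}
            used = {j: 0 for j in jobs}
            next_boost += boost
        # Rules 1-2: run a job from the highest non-empty priority level
        runnable = [j for j in order if remaining[j] > 0]
        top = min(prio[j] for j in runnable)
        job = next(j for j in runnable if prio[j] == top)
        run = min(quanta[top], remaining[job])
        schedule.append((t, job, run))
        t += run
        remaining[job] -= run
        used[job] += run
        order.remove(job)
        order.append(job)                # rotate for round robin
        # Rule 4: allotment used up at this level -> move down one queue
        if used[job] >= allot and prio[job] < nq - 1:
            prio[job], used[job] = prio[job] + 1, 0
    return schedule

sched = mlfq({"A": 4, "B": 2})
print(sched)   # [(0,'A',1), (1,'B',1), (2,'A',1), (3,'B',1), (4,'A',2)]
```

Both jobs start at the top queue and alternate in RR; after using their allotment, each is demoted, and A finishes at the lower level with its longer quantum.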

Working Set
When memory is oversubscribed (i.e., the memory demands of the set of running processes exceed the available physical memory), the system will constantly be paging: thrashing.
Working set W(t, Δ) = the set of pages that a process has referenced in the last Δ virtual time units.
Admission control: given a set of processes, the system decides not to run a subset of them, in the hope that the working sets (the pages being actively used) of the reduced set of processes fit in memory, so those processes can make progress.
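The definition of W(t, Δ) translates directly into code: take the distinct pages in the last Δ references (the reference string below is illustrative):

```python
def working_set(refs, t, delta):
    """W(t, delta): the set of distinct pages referenced in the last
    delta virtual time units. refs[i] is the page referenced at time i."""
    window = refs[max(0, t - delta + 1): t + 1]
    return set(window)

refs = [1, 2, 1, 3, 2, 4, 4, 4]
print(working_set(refs, 4, 3))   # {1, 2, 3}: pages referenced at times 2, 3, 4
print(working_set(refs, 7, 3))   # {4}: the process has settled on one page
```

The size of the working set over time is what admission control compares against available frames: if the sum of the working-set sizes of the admitted processes exceeds physical memory, some process should be suspended rather than allowed to thrash.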