Outline for Today. Objectives: Interrupts (continued); Lottery Scheduling. Announcements: BRING CARDS TO SHUFFLE.

Role of Interrupts in I/O. So, the program needs to access an I/O device… Start an I/O operation (special instructions or memory-mapped I/O). The device controller performs the operation asynchronously (in parallel with CPU processing), moving data between the controller's buffer and the device. With DMA, data is transferred between the controller's buffer and memory without CPU involvement. An interrupt signals I/O completion when the device is done.

Interrupt Handling: how the CPU handles an interrupt.
The device raises the interrupt line; the CPU detects this, stops the current operation, disables interrupts, enters kernel mode, saves the current program counter, jumps to a predefined location, and saves any other processor state needed to continue at the interrupted instruction.
For each interrupt number, it jumps to the address of the appropriate interrupt service routine [do_IRQ()].
Interrupt context: the handler runs on the kernel stack of whatever was interrupted and cannot sleep.
The handlers registered on this line do what needs to be done [handle_IRQ_event()]. Unless SA_INTERRUPT was specified at registration, interrupts are re-enabled during handler execution; if the line is shared, all registered handlers are invoked in turn.
Finally, the saved state at the interrupted instruction is restored [ret_from_intr()] and control returns to user mode.
(Figure: a user program's instruction stream (ld, add, st, mul, beq, sub, bne) interrupted and routed through do_IRQ(), handle_IRQ_event(), and ret_from_intr().)
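As a concrete illustration of the registration side of this path, here is a minimal sketch in the style of the Linux API described above. The device name "mydev", its IRQ number, and the per-device struct are made up for the example, and the exact flag names have varied across kernel versions.

#include <linux/interrupt.h>

struct mydev { int irq; };                      /* hypothetical per-device state */

static irqreturn_t mydev_interrupt(int irq, void *dev_id)
{
        struct mydev *dev = dev_id;
        (void)dev;
        /* Interrupt context: runs on the kernel stack of whatever was
         * interrupted and must not sleep; do the minimum here (ack the
         * device, grab data) and defer heavier work to a bottom half. */
        return IRQ_HANDLED;                     /* tell the core this handler ran */
}

static int mydev_setup(struct mydev *dev)
{
        /* Flags of 0 give the default behaviour described on the slide:
         * interrupts are re-enabled while the handler runs unless
         * SA_INTERRUPT (later IRQF_DISABLED) is requested. */
        return request_irq(dev->irq, mydev_interrupt, 0, "mydev", dev);
}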

Interrupt Control
local_irq_disable() and local_irq_enable(): disable/enable all interrupts on this processor.
local_irq_save(flags) and local_irq_restore(flags): save and disable interrupts on this processor, then restore the previous interrupt state.
disable_irq(irq), disable_irq_nosync(irq), enable_irq(irq): affect a particular interrupt line.
Informational: irqs_disabled() – are local interrupts disabled? in_interrupt() – in interrupt context or process context? in_irq() – executing an interrupt handler?
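A minimal sketch of how the save/restore pair above is typically used to protect a short, non-sleeping critical section; the shared variable and helper name are hypothetical.

#include <linux/irqflags.h>

static unsigned long pending_mask;              /* hypothetical shared state */

static void mark_pending(unsigned long bit)
{
        unsigned long flags;

        local_irq_save(flags);                  /* disable local interrupts, remember prior state */
        pending_mask |= bit;                    /* short critical section, no sleeping */
        local_irq_restore(flags);               /* restore whatever interrupt state we entered with */
}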

Bottom Half Processing: deferring work that is too heavyweight for the interrupt handler. Mechanisms:
Softirqs: "soft interrupts", statically defined (32 max.) action functions that can run concurrently on an SMP; a softirq is pending when its bit is set in a 32-bit mask (usually set in the associated interrupt handler via raise_softirq()); softirqs run with interrupts enabled, so proper locking is required.
Tasklets: dynamically created functions built on top of softirqs; two tasklets of the same type cannot run concurrently; implemented as lists of tasklet_struct hooked to two of the softirqs.
Workqueues: implemented as kernel-based "worker" threads with a process context of their own, and thus allowed to sleep.
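For example, a driver might defer work to a workqueue roughly as sketched below, since the worker thread has process context and may sleep. The device and function names are hypothetical, and the workqueue macros have changed slightly across kernel versions.

#include <linux/workqueue.h>

static void mydev_work_fn(struct work_struct *work)
{
        /* Process context: may sleep, e.g. allocate with GFP_KERNEL or
         * wait on the device, which the top-half handler cannot do. */
}

static DECLARE_WORK(mydev_work, mydev_work_fn);

/* Called from the interrupt handler (top half): just queue the work
 * and return quickly. */
static void mydev_defer_work(void)
{
        schedule_work(&mydev_work);
}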

Lottery Scheduling Waldspurger and Weihl (OSDI 94)

Claims Goal: responsive control over the relative rates of computation Support for modular resource management Generalizable to diverse resources Efficient implementation of proportional-share resource management: consumption rates of resources by active computations are proportional to relative shares allocated

Basic Idea: resource rights are represented by lottery tickets, which are abstract, relative (their value varies dynamically with contention), and uniform (they handle heterogeneous resources). Responsiveness: adjusting the relative number of tickets is reflected immediately in the next lottery. At allocation time, hold a lottery; the resource goes to the computation holding the winning ticket.

Fairness: the expected resource allocation is proportional to the number of tickets held, and the actual allocation converges to it over time. Notation: w = number of wins, t = tickets held by a client, T = total tickets, n = number of lotteries. Throughput: the expected number of lotteries won by a client is E[w] = n p, where p = t/T (the number of wins follows a binomial distribution). Response time: the expected number of lotteries to wait for the first win is E[n] = 1/p (geometric distribution). No starvation: any client with t > 0 has p > 0. For example, a client holding 2 of 10 tickets has p = 0.2, expects 20 wins in 100 lotteries, and waits 5 lotteries on average for its first win.

Example: List-based Lottery. Clients hold 10, 2, 5, 1, and 2 tickets (total 20). The list is walked while summing ticket counts: 10, 12, 17, … A draw of Random(0, 19) = 15 first falls below the partial sum 17, so the third client (5 tickets) wins.
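A small user-space sketch of this list-based draw, with the slide's ticket counts hard-coded for illustration:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Walk the ticket list, accumulating counts until the running sum
 * exceeds the random draw; the client at that position wins. */
static int draw_winner(const int *tickets, int nclients, int total)
{
        int r = rand() % total;                 /* e.g. Random(0, 19) = 15 */
        int sum = 0;

        for (int i = 0; i < nclients; i++) {
                sum += tickets[i];              /* partial sums: 10, 12, 17, ... */
                if (r < sum)
                        return i;               /* 15 < 17: third client wins */
        }
        return nclients - 1;                    /* unreachable if total is correct */
}

int main(void)
{
        int tickets[] = { 10, 2, 5, 1, 2 };     /* the slide's example, total 20 */

        srand((unsigned)time(NULL));
        printf("winner: client %d\n", draw_winner(tickets, 5, 20));
        return 0;
}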

Bells and Whistles. Ticket transfers: tickets are objects that can be explicitly passed in messages; can be used to solve priority inversion. Ticket inflation: creating more tickets, used among mutually trusting clients to dynamically adjust ticket allocations. Currencies: "local" control, with exchange rates between currencies. Compensation tickets: to maintain a client's share when it does I/O; a client that uses only a fraction f of its quantum has its tickets inflated by 1/f in the next lottery.

Kernel Objects (figure): a currency object has a name, backing tickets that fund it (e.g., 1000 base), an active amount (e.g., 300), and a list of the tickets it has issued.
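These objects might be sketched in C roughly as follows; the field names are assumptions made for illustration, not the paper's actual kernel definitions.

struct currency;

struct ticket {
        struct currency *currency;      /* currency the ticket is denominated in */
        int              amount;        /* e.g. 300 */
};

struct currency {
        const char     *name;           /* e.g. "base", "alice" */
        struct ticket **backing;        /* tickets (in other currencies) that fund it, e.g. 1000 base */
        int             n_backing;
        int             active;         /* sum of amounts of currently issued tickets */
        struct ticket **issued;         /* tickets denominated in this currency */
        int             n_issued;
};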

(Figure: example currency graph. The base currency (3000 total) backs alice with 2000 base and bob with 1000 base, giving exchange rates of 1 bob = 20 base and 1 alice = 5 base. alice funds task1 and task2, bob funds task3, and each task currency in turn funds its threads; for example, 1 task2 = 0.4 alice = 2 base.)

(Figure: the same currency graph after additional tickets are issued in the alice currency. The extra funding inflates alice, so 1 alice drops from 5 base to 3.33 base and 1 task2 from 2 base to 1.33 base, while bob's rate is unaffected: 1 bob = 20 base.)

Example: List-based Lottery with currencies (figure). The ticket list holds tickets denominated in different currencies (1 base, 2 bob, 5 task3, 2 bob, 10 task2), each converted to base units before summing; a draw of Random(0, 2999) over the 3000-base total, e.g. 1500, selects the winner.

Compensation. A holds 400 base tickets, B holds 400 base tickets. A runs its full 100 msec quantum; B yields at 20 msec, i.e., B uses only 1/5 of its allotted time. B therefore gets 400/(1/5) = 2000 base at each subsequent lottery for the rest of this quantum, via a compensation ticket valued at 2000 - 400 = 1600 base.
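The arithmetic above amounts to the small helper sketched here (hypothetical function name):

/* A client that used only fraction f of its quantum has its t tickets
 * inflated to t/f until it next wins, i.e. it receives a compensation
 * ticket worth t/f - t (400/0.2 - 400 = 1600 base in the example). */
static int compensation_ticket(int tickets, double fraction_used)
{
        return (int)((double)tickets / fraction_used) - tickets;
}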

Ticket Transfer: in a synchronous RPC between client and server, the client creates a ticket in its own currency and sends it to the server to fund the server's currency; on reply, the transfer ticket is destroyed.

Control Scenarios. Dynamic control: conditionally and dynamically grant tickets (adaptability). Resource abstraction barriers are supported by currencies, which insulate tasks from one another.

UI: shell interface commands mktkt, rmtkt, mkcur, rmcur; fund, unfund; lstkt, lscur, fundx.

Relative Rate Accuracy Figure 4: Relative Rate Accuracy. For each allocated ratio, the observed ratio is plotted for each of three 60 second runs. The gray line indicates the ideal where the two ratios are identical.

Fairness Over Time: 8-second time windows over a 200-second execution. Figure 5: Fairness Over Time. Two tasks executing the Dhrystone benchmark with a 2 : 1 ticket allocation. Averaged over the entire run, the two tasks executed 25378 and 12619 iterations/sec., for an actual ratio of 2.01 : 1.

Client-Server Query Processing Rates Figure 7: Query Processing Rates. Three clients with an 8 : 3 : 1 ticket allocation compete for service from a multithreaded database server. The observed throughput and response time ratios closely match this allocation.

Controlling Video Rates. Figure 8: Controlling Video Rates. Three MPEG viewers are given an initial A : B : C = 3 : 2 : 1 allocation, which is changed to 3 : 1 : 2 at the time indicated by the arrow. The total number of frames displayed is plotted for each viewer. The actual frame rate ratios were 1.92 : 1.50 : 1 and 1.92 : 1 : 1.53, respectively, due to distortions caused by the X server.

Insulation. Figure 9: Currencies Insulate Loads. Currencies A and B are identically funded. Tasks A1 and A2 are respectively allocated tickets worth 100:A and 200:A. Tasks B1 and B2 are respectively allocated tickets worth 100:B and 200:B. Halfway through the experiment, task B3 is started with an allocation of 300:B. The resulting inflation is locally contained within currency B, and affects neither the progress of tasks in currency A, nor the aggregate A : B progress ratio.

Other Kinds of Resources. Claim: lottery scheduling can be used for any resource where queueing occurs. Controlling relative waiting times for mutex locks: the mutex currency is funded out of the currencies of the waiting threads; the holder gets an inheritance ticket in addition to its own funding, which is passed on (via lottery) to the next holder on release. Space sharing: an inverse lottery in which the loser is the victim (e.g., in page replacement decisions, or processor-node preemption in MP partitioning).

Lock Funding (figure): each waiting thread funds the lock currency with a ticket, and the lock issues a backing ticket (bt) to the holding thread, so the holder runs with its own funding plus the waiters' funding.

Lock Funding on release (figure): a lottery among the waiting threads picks the new holder; the backing ticket moves to the new holding thread, while the old holder reverts to its own funding.

Mutex Waiting Times. Figure 11: Mutex Waiting Times. Eight threads compete to acquire a lottery-scheduled mutex. The threads are divided into two groups (A, B) of four threads each, with the ticket allocation A : B = 2 : 1. For each histogram, the solid line indicates the mean and the dashed lines indicate one standard deviation about the mean. The ratio of average waiting times is A : B = 1 : 2.11; the mutex acquisition ratio is 1.80 : 1.