CS162 Operating Systems and Systems Programming Lecture 3 Thread Dispatching January 30, 2008 Prof. Anthony D. Joseph http://inst.eecs.berkeley.edu/~cs162

Recall: Modern Process with Multiple Threads
Process: operating system abstraction to represent what is needed to run a single, multithreaded program. Two parts:
- Multiple Threads: each thread is a single, sequential stream of execution
- Protected Resources: main memory state (contents of the Address Space) and I/O state (e.g., file descriptors)
Why separate the concept of a thread from that of a process?
- Discuss the “thread” part of a process (concurrency)
- Separate from the “address space” part (protection)
Heavyweight Process ≡ Process with one thread

Recall: Single and Multithreaded Processes
- Threads encapsulate concurrency: the “active” component of a process
- Address spaces encapsulate protection, keeping a buggy program from trashing the system: the “passive” component of a process

Recall: Classification
By number of address spaces and number of threads per address space:
- One address space, one thread: MS/DOS, early Macintosh
- One address space, many threads: embedded systems (Geoworks, VxWorks, JavaOS, etc.), JavaOS, Pilot (PC)
- Many address spaces, one thread per AS: traditional UNIX
- Many address spaces, many threads per AS: Mach, OS/2, Linux, Win 95?, Mac OS X, Win NT to XP, Solaris, HP-UX
Real operating systems have either one or many address spaces, and one or many threads per address space.
Did Windows 95/98/ME have real memory protection? No: users could overwrite process tables/system DLLs.

Goals for Today
- Further understanding threads
- Thread dispatching
- Beginnings of thread scheduling
Note: Some slides and/or pictures in the following are adapted from slides ©2005 Silberschatz, Galvin, and Gagne. Many slides generated from my lecture notes by Kubiatowicz.

Recall: Execution Stack Example
A(int tmp) { if (tmp<2) B(); printf(tmp); }
B() { C(); }
C() { A(2); }
A(1);
Stack frames, oldest to newest: A: tmp=1, ret=exit; B: ret=A+2; C: ret=B+1; A: tmp=2, ret=C+1. The stack pointer sits at the newest frame, and the stack grows downward.
- Stack holds temporary results
- Permits recursive execution
- Crucial to modern languages
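For concreteness, here is a minimal, runnable C version of the slide's example; the printf format, forward declarations, and main() wrapper are additions for compilability:

```c
#include <stdio.h>

void B(void);
void C(void);

void A(int tmp) {
    if (tmp < 2)
        B();
    printf("%d\n", tmp);   /* deepest frame prints 2 first, then 1 */
}

void B(void) { C(); }
void C(void) { A(2); }

int main(void) {
    A(1);                  /* A(1) -> B -> C -> A(2), four stacked frames */
    return 0;
}
```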

MIPS: Software Conventions for Registers
0: zero (constant 0)
1: at (reserved for assembler)
2-3: v0-v1 (expression evaluation and function results)
4-7: a0-a3 (arguments)
8-15: t0-t7 (temporary: caller saves; callee can clobber)
16-23: s0-s7 (callee saves; callee must save)
24-25: t8-t9 (temporary, cont’d)
26-27: k0-k1 (reserved for OS kernel)
28: gp (pointer to global area)
29: sp (stack pointer)
30: fp (frame pointer)
31: ra (return address, set by HW)
Before calling a procedure: save caller-saves regs, save v0, v1, save ra.
After return, assume: callee-saves regs OK; gp, sp, fp OK (restored!); other things trashed.

Single-Threaded Example Imagine the following C program: main() { ComputePI(“pi.txt”); PrintClassList(“clist.text”); } What is the behavior here? Program would never print out class list Why? ComputePI would never finish

Use of Threads
Version of program with threads:
main() { CreateThread(ComputePI(“pi.txt”)); CreateThread(PrintClassList(“clist.text”)); }
What does “CreateThread” do? Start an independent thread running the given procedure.
What is the behavior here? Now you would actually see the class list; this should behave as if there are two separate CPUs.
(Figure: timeline showing the two threads interleaved across CPU1 and CPU2 over time.)
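A hedged sketch of the two-thread version using POSIX threads instead of the slide's CreateThread; ComputePI and PrintClassList here are stand-in stubs, not real implementations:

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Stubs standing in for the slide's hypothetical procedures. */
void *ComputePI(void *arg) {
    (void)arg;
    for (;;)                      /* never finishes: keeps "computing digits" */
        sleep(1);
    return NULL;
}

void *PrintClassList(void *arg) {
    printf("class list from %s\n", (const char *)arg);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, ComputePI, "pi.txt");
    pthread_create(&t2, NULL, PrintClassList, "clist.text");
    pthread_join(t2, NULL);       /* the class list appears even though t1 never finishes */
    return 0;
}
```

Compile with -pthread; the point is simply that the second thread makes progress even though the first never returns.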

Memory Footprint of Two-Thread Example
If we stopped this program and examined it with a debugger, we would see:
- Two sets of CPU registers
- Two sets of stacks
Questions:
- How do we position the stacks relative to each other?
- What maximum size should we choose for the stacks?
- What happens if threads violate this?
- How might you catch violations?
(Figure: address space containing Code, Global Data, Heap, Stack 1, and Stack 2.)
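One modern answer to the stack-size and violation questions (not the slide's own mechanism): POSIX threads let you choose an explicit stack size and a guard region, so overflowing the stack faults loudly instead of silently corrupting a neighboring stack. A minimal sketch:

```c
#include <pthread.h>

void *worker(void *arg) {
    (void)arg;
    /* ... thread body ... */
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    pthread_t tid;

    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 1 << 20);  /* cap this thread's stack at 1 MB */
    pthread_attr_setguardsize(&attr, 1 << 12);  /* guard region below the stack: overflow faults */
    pthread_create(&tid, &attr, worker, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}
```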

Administrivia: Time for Project Signup
Project signup: watch the “Group/Section Assignment Link”
- 4-5 members to a group
- Everyone in the group must be able to actually attend the same section
- The sections assigned to you by Telebears are temporary!
- Only submit once per group!
- Everyone in the group must have logged into their cs162-xx accounts once before you register the group
- Make sure that you select at least 2 potential sections
- Due date: Thursday (1/31) by 11:59pm
Sections: go to your desired section this week (Thurs/Fri)
Section | Time | Location | TA
101 | Th 10:00-11:00A | 45 Evans | Barret
102 | Th 11:00-12:00P | 85 Evans |
103 | Th 4:00-5:00P | 320 Soda | Man-Kit
104 | F 2:00-3:00P | 310 Soda | Manu
105 | F 3:00-4:00P | 405 Soda |

Administrivia (2)
Cs162-xx accounts: make sure you got an account form. If you haven’t logged in yet, you need to do so.
Email addresses: we need an email address from you. If you haven’t given us one already, you should get prompted when you log in again (or type “register”).
Nachos reader: required! TBA: Copy Central at the corner of Hearst & Euclid. Includes lectures and printouts of all of the code.
Start Project 1: go to the Nachos page and start reading up. Note that all the Nachos code is printed in your reader.

Per-Thread State
Each thread has a Thread Control Block (TCB):
- Execution state: CPU registers, program counter, pointer to stack
- Scheduling info: state (more later), priority, CPU time
- Accounting info
- Various pointers (for implementing scheduling queues)
- Pointer to the enclosing process (PCB)?
- Etc. (add stuff as you find a need)
In Nachos, “Thread” is a class that includes the TCB.
The OS keeps track of TCBs in protected memory: in an array, or a linked list, or …
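To make the list above concrete, here is a hypothetical TCB sketched as a C struct; the field names and types are assumptions for illustration, not the Nachos Thread class:

```c
#include <stdint.h>

typedef enum { THREAD_NEW, THREAD_READY, THREAD_RUNNING,
               THREAD_WAITING, THREAD_TERMINATED } thread_state_t;

typedef struct tcb {
    /* Execution state */
    uintptr_t regs[32];      /* saved general-purpose registers */
    uintptr_t pc;            /* saved program counter */
    uintptr_t sp;            /* saved stack pointer */

    /* Scheduling and accounting info */
    thread_state_t state;
    int priority;
    uint64_t cpu_time_used;

    /* Pointers */
    struct tcb *queue_next;  /* link for ready/wait queues */
    struct pcb *process;     /* enclosing process (PCB), if any */
} tcb_t;
```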

Lifecycle of a Thread (or Process)
As a thread executes, it changes state:
- new: the thread is being created
- ready: the thread is waiting to run
- running: instructions are being executed
- waiting: the thread is waiting for some event to occur
- terminated: the thread has finished execution
“Active” threads are represented by their TCBs; TCBs are organized into queues based on their state.

Ready Queue and Various I/O Device Queues
Thread not running ⇒ its TCB is in some scheduler queue.
- Separate queue for each device/signal/condition
- Each queue can have a different scheduler policy
(Figure: head/tail pointers for the Ready Queue and for per-device queues such as Tape Unit 0, Disk Unit 2, and Ethernet Network 0, each linking TCBs that hold registers and other state.)
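A minimal sketch of such a head/tail queue of TCBs in C; the names are assumptions, and a real kernel would keep one of these per device or condition plus one for the ready queue:

```c
#include <stddef.h>

/* Minimal TCB stand-in: a real TCB also holds registers, state, etc. */
struct tcb { struct tcb *next; };

/* One queue object per ready queue or device/condition queue. */
struct tcb_queue { struct tcb *head, *tail; };

/* Append a TCB at the tail (FIFO order). */
static void enqueue(struct tcb_queue *q, struct tcb *t) {
    t->next = NULL;
    if (q->tail) q->tail->next = t;
    else         q->head = t;
    q->tail = t;
}

/* Remove and return the TCB at the head, or NULL if the queue is empty. */
static struct tcb *dequeue(struct tcb_queue *q) {
    struct tcb *t = q->head;
    if (t) {
        q->head = t->next;
        if (!q->head) q->tail = NULL;
    }
    return t;
}
```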

Dispatch Loop
Conceptually, the dispatching loop of the operating system looks as follows:
Loop { RunThread(); ChooseNextThread(); SaveStateOfCPU(curTCB); LoadStateOfCPU(newTCB); }
This is an infinite loop; one could argue that this is all that the OS does.
Should we ever exit this loop? When would that be? Only on an emergency crash of the operating system, called “panic()”.
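The same loop written out as a C sketch; RunThread, ChooseNextThread, SaveStateOfCPU, and LoadStateOfCPU are the slide's hypothetical routines, declared here only so the shape of the loop is clear:

```c
struct tcb;                               /* thread control block (opaque here) */

extern struct tcb *curTCB;                /* currently running thread */
extern void        RunThread(void);       /* run curTCB until it yields, blocks, or is preempted */
extern struct tcb *ChooseNextThread(void);
extern void        SaveStateOfCPU(struct tcb *t);
extern void        LoadStateOfCPU(struct tcb *t);

void dispatcher_loop(void) {
    for (;;) {                            /* never exits, except on panic() */
        RunThread();
        struct tcb *newTCB = ChooseNextThread();
        SaveStateOfCPU(curTCB);
        LoadStateOfCPU(newTCB);
        curTCB = newTCB;
    }
}
```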

Running a Thread
Consider the first portion: RunThread(). How do I run a thread?
- Load its state (registers, PC, stack pointer) into the CPU
- Load its environment (virtual memory space, etc.)
- Jump to the PC
How does the dispatcher get control back?
- Internal events: the thread returns control voluntarily
- External events: the thread gets preempted
OSes have almost human characteristics: unpredictable, hard to understand, … Different things share the same CPU: one thread, then another. Similar to schizophrenia, like the movie Sybil: one body shared by several people; say we start with Dave Patterson. Threads are like the personalities of the CPU: first one thread/personality uses the CPU, then another, …

Internal Events
- Blocking on I/O: the act of requesting I/O implicitly yields the CPU
- Waiting on a “signal” from another thread: the thread asks to wait and thus yields the CPU
- Thread executes a yield(): the thread volunteers to give up the CPU
computePI() { while(TRUE) { ComputeNextDigit(); yield(); } }
Yield is for really nice people. Ever see two people at the supermarket checkout line? You first, no you first, …
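A user-level analogue of the slide's loop, assuming POSIX sched_yield() as the yield primitive and a hypothetical ComputeNextDigit() routine:

```c
#include <sched.h>

extern void ComputeNextDigit(void);   /* hypothetical: produces one more digit of pi */

void computePI(void) {
    for (;;) {
        ComputeNextDigit();
        sched_yield();                /* voluntarily let another ready thread run */
    }
}
```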

Stack for Yielding Thread
(Figure: ComputePI’s stack growing downward through yield, kernel_yield (trap to OS), run_new_thread, and switch.)
How do we run a new thread?
run_new_thread() { newThread = PickNewThread(); switch(curThread, newThread); ThreadHouseKeeping(); /* next lecture */ }
How does the dispatcher switch to a new thread?
- Save anything the next thread may trash: PC, regs, stack
- Maintain isolation for each thread

What do the stacks look like?
Consider the following code blocks:
proc A() { B(); }
proc B() { while(TRUE) { yield(); } }
Suppose we have 2 threads, S and T. Each ends up with the same stack, growing downward: A, B (in the while loop), yield, run_new_thread, switch.

BREAK

Saving/Restoring State (often called “Context Switch”)
Switch(tCur,tNew) {
  /* Unload old thread */
  TCB[tCur].regs.r7 = CPU.r7;
  …
  TCB[tCur].regs.r0 = CPU.r0;
  TCB[tCur].regs.sp = CPU.sp;
  TCB[tCur].regs.retpc = CPU.retpc; /* return addr */
  /* Load and execute new thread */
  CPU.r7 = TCB[tNew].regs.r7;
  CPU.r0 = TCB[tNew].regs.r0;
  CPU.sp = TCB[tNew].regs.sp;
  CPU.retpc = TCB[tNew].regs.retpc;
  return; /* Return to CPU.retpc */
}

Switch Details
How many registers need to be saved/restored?
- MIPS 4k: 32 Int (32b), 32 Float (32b)
- Pentium: 14 Int (32b), 8 Float (80b), 8 SSE (128b), …
- Sparc (v7): 8 regs (32b), 16 Int regs (32b) * 8 windows = 136 (32b) + 32 Float (32b)
- Itanium: 128 Int (64b), 128 Float (82b), 19 Other (64b)
retpc is where the return should jump to; in reality, this is implemented as a jump.
There is a real implementation of switch in Nachos: see switch.s. Normally, switch is implemented in assembly! Of course, it’s magical! But you should be able to follow it.

Switch Details (continued)
What if you make a mistake in implementing switch? Suppose you forget to save/restore register 4:
- You get intermittent failures depending on when the context switch occurred and whether the new thread uses register 4
- The system will give wrong results without warning
Can you devise an exhaustive test for switch code? No! Too many combinations and interleavings.
Cautionary tale: for speed, the Topaz kernel saved one instruction in switch(). Carefully documented! Only works as long as kernel size < 1MB. What happened? Time passed, people forgot. Later, they added features to the kernel (no one removes features!), and very weird behavior started happening.
Moral of the story: design for simplicity.

What happens when a thread blocks on I/O?
(Figure: CopyFile’s stack growing downward through read, kernel_read (trap to OS), run_new_thread, and switch.)
What happens when a thread requests a block of data from the file system?
- User code invokes a system call
- The read operation is initiated
- Run a new thread/switch
Thread communication is similar: wait for signal/join, networking.

Answer: Utilize External Events
What happens if a thread never does any I/O, never waits, and never yields control? Could the ComputePI program grab all resources and never release the processor? What if it didn’t print to the console? We must find a way for the dispatcher to regain control!
Answer: utilize external events.
- Interrupts: signals from hardware or software that stop the running code and jump to the kernel
- Timer: like an alarm clock that goes off every so many milliseconds
If we make sure that external events occur frequently enough, we can ensure the dispatcher runs.
Patterson’s a nice guy, so he gives up the body after using it for a while and lets John Kubiatowicz have it. But Kubi’s not so nice, so he won’t give up control… If you want to wake up for a final, you set your alarm clock, or ask your roommate to pour water over your head; the OS does the same.

Example: Network Interrupt
(Figure: an external interrupt arrives in the middle of a user-mode instruction stream (add, subi, slli, …, lw, lw, add, sw), causing a pipeline flush; hardware disables all interrupts, saves the PC, and enters supervisor mode. The “interrupt handler” then raises priority, re-enables all interrupts, saves registers, and dispatches to the handler, which transfers the network packet from hardware to kernel buffers; it then restores registers, clears the current interrupt, disables all interrupts, restores priority, and executes RTI, after which the PC is restored and execution resumes in user mode.)
An interrupt is a hardware-invoked context switch:
- No separate step to choose what to run next
- Always run the interrupt handler immediately

Use of Timer Interrupt to Return Control
Solution to our dispatcher problem: use the timer interrupt to force scheduling decisions.
Timer interrupt routine:
TimerInterrupt() { DoPeriodicHouseKeeping(); run_new_thread(); }
I/O interrupt: same as the timer interrupt, except that DoPeriodicHouseKeeping() is replaced by ServiceIO().
(Figure: some routine’s stack growing downward through TimerInterrupt, run_new_thread, and switch.)
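At user level, a threads package might approximate this with a periodic SIGALRM whose handler forces a scheduling decision. This is only a hedged sketch (a real kernel receives timer interrupts directly, and calling into a scheduler from a signal handler needs care); run_new_thread() is the hypothetical routine from the slides:

```c
#include <signal.h>
#include <string.h>
#include <sys/time.h>

extern void run_new_thread(void);        /* hypothetical scheduler entry point from the slides */

static void timer_handler(int sig) {
    (void)sig;
    /* DoPeriodicHouseKeeping() would go here */
    run_new_thread();
}

void start_timer_preemption(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = timer_handler;
    sigaction(SIGALRM, &sa, NULL);        /* install the "timer interrupt" handler */

    struct itimerval it;
    memset(&it, 0, sizeof it);
    it.it_interval.tv_usec = 10000;       /* re-fire every 10 ms */
    it.it_value.tv_usec    = 10000;       /* first fire in 10 ms */
    setitimer(ITIMER_REAL, &it, NULL);
}
```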

Choosing a Thread to Run
How does the dispatcher decide what to run?
- Zero ready threads: the dispatcher loops. An alternative is to create an “idle thread”, which can put the machine into low-power mode.
- Exactly one ready thread: easy.
- More than one ready thread: use a scheduling policy.
Possible policies:
- LIFO (last in, first out): put ready threads on the front of the list, remove from the front
- Pick one at random
- FIFO (first in, first out): put ready threads on the back of the list, pull them from the front. This is fair and is what Nachos does.
- Priority queue: keep the ready list sorted by the TCB priority field
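For the priority option, a tiny sketch of picking the highest-priority TCB from the ready list; the field names and the "larger value is more important" convention are assumptions:

```c
#include <stddef.h>

/* Illustrative ready-list node. */
struct tcb {
    struct tcb *next;
    int priority;             /* assumption: larger value = more important */
};

/* Scan the ready list and return the highest-priority TCB (NULL if empty). */
struct tcb *pick_highest_priority(struct tcb *ready_head) {
    struct tcb *best = ready_head;
    for (struct tcb *t = ready_head; t != NULL; t = t->next)
        if (t->priority > best->priority)
            best = t;
    return best;
}
```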

Summary
- The state of a thread is contained in the TCB: registers, PC, stack pointer. States: New, Ready, Running, Waiting, or Terminated.
- Multithreading provides the simple illusion of multiple CPUs: switch registers and stack to dispatch a new thread, and provide a mechanism to ensure the dispatcher regains control.
- The switch routine can be very expensive if there are many registers, and must be very carefully constructed!
- Many scheduling options: the decision of which thread to run is complex enough for a complete lecture.