CS 162 Discussion Section Week 2

Who am I? Haoyuan (HY) Li Office Hours: 1pm-3pm 465E SODA Research: Data center systems

Administrivia Get to know Nachos Start project 1

Project 1 Can be found on the course website – Under the heading “Projects and Nachos” Stock Nachos has an incomplete thread system. Your job is to – complete it, and – use it to solve several synchronization problems

Project 1 Grading Design docs [40 points] – First draft [10 points] – Design review [10 points] – Final design doc [20 points] Code [60 points]

Design Document Overview of the project as a whole along with its parts Header must contain the following info – Project Name and # – Group Members' Names and IDs – Section # – TA Name

Design Document Structure Each part of the project should be explained using the following structure Overview Correctness Constraints Declarations Descriptions Testing Plan

True/False 1. Threads within the same process share the same heap and stack. 2. Preemptive multithreading requires threads to give up the CPU using the yield() system call. 3. Despite the overhead of context switching, multithreading can provide speed-up even on a single-core CPU.

Short Answers 1. What is the OS data structure that represents a running process? 2. What are some of the similarities and differences between interrupts and system calls? What roles do they play in preemptive and non-preemptive multithreading?

Why Processes & Threads? Goals: Multiprogramming – run multiple applications concurrently; Protection – don't want a bad application to crash the system! Solution: Process – unit of execution and allocation; Virtual Machine abstraction – give each process the illusion it owns the machine (i.e., CPU, memory, and I/O device multiplexing). Challenge: Process creation & switching are expensive, and we need concurrency within the same app (e.g., a web server). Solution: Thread – decouple allocation and execution; run multiple threads within the same process.
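
The "multiple threads within the same process" idea can be seen directly with POSIX threads. The sketch below is not from the slides; it assumes pthreads and a mutex, and shows two threads updating the same global variable, i.e., sharing one address space while being scheduled independently.

    /* Sketch (not from the slides): two POSIX threads in one process update
       the same global counter -- threads share the address space, so they
       must synchronize, but creating and switching them is cheap. */
    #include <pthread.h>
    #include <stdio.h>

    static int shared_counter = 0;                 /* lives in shared process memory */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);             /* shared data needs synchronization */
            shared_counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);   /* two threads, one address space */
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", shared_counter);  /* 200000: both saw the same global */
        return 0;
    }

Compile with something like gcc -pthread; contrast this with fork(), where the child would get its own private copy of shared_counter.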

Putting it together: Process A (Unix) process = resources (memory; I/O state, e.g., file and socket contexts; CPU state: PC, registers, ...) plus a sequential stream of instructions, e.g.: A(int tmp) { if (tmp<2) B(); printf(tmp); } B() { C(); } C() { A(2); } A(1); …

Putting it together: Processes Processes 1…N, each with its own CPU state, I/O state, and memory; the OS CPU scheduler runs 1 process at a time on a 1-core CPU. Switch overhead: high – CPU state: low – Memory/I/O state: high. Process creation: high. Protection – CPU: yes – Memory/I/O: yes. Sharing overhead: high (involves at least a context switch)
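
To make the "Protection – Memory/I/O: yes" point concrete, here is a minimal sketch (not from the slides) using fork(): after the fork, parent and child have separate address spaces, so the child's write never reaches the parent.

    /* Sketch: fork() gives the child a separate (copy-on-write) address space,
       so its write to x is invisible to the parent -- processes are isolated. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int x = 1;
        pid_t pid = fork();              /* relatively expensive: new process, new address space */
        if (pid == 0) {                  /* child */
            x = 42;                      /* changes only the child's copy */
            return 0;
        }
        waitpid(pid, NULL, 0);           /* wait for the child to finish */
        printf("parent still sees x = %d\n", x);   /* prints 1, not 42 */
        return 0;
    }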

Putting it together: Threads Processes 1…N, each with its own I/O state and memory plus multiple threads (each thread with its own CPU state); the OS CPU scheduler runs 1 thread at a time on a 1-core CPU. Switch overhead: low (only CPU state). Thread creation: low. Protection – CPU: yes – Memory/I/O: no. Sharing overhead: low (thread switch overhead is low)

Putting it together: Multi-Cores Same setup, but the CPU has 4 cores (cores 1–4), so 4 threads run at a time. Switch overhead: low (only CPU state). Thread creation: low. Protection – CPU: yes – Memory/I/O: no. Sharing overhead: low (thread switch overhead is low)

Putting it together: Hyper-Threading Each of the 4 cores exposes 2 hardware threads (hyper-threading), so 8 threads run at a time. Switch overhead between hardware threads: very low (done in hardware). Contention for the cache may hurt performance

Thread State State shared by all threads in a process/address space – Content of memory (global variables, heap) – I/O state (file system, network connections, etc.) State “private” to each thread – Kept in the TCB (Thread Control Block) – CPU registers (including the program counter) – Execution stack – what is this? Execution Stack – Parameters, temporary variables – Return PCs are kept while called procedures are executing
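
As an illustration of what "kept in the TCB" might look like, here is a hypothetical TCB layout in C. The field names and sizes are assumptions for illustration only, not Nachos's (or any real kernel's) actual structure.

    /* Hypothetical TCB sketch: per-thread ("private") state lives here;
       globals, heap, and I/O state stay in the enclosing process. */
    #include <stddef.h>
    #include <stdint.h>

    enum thread_state { READY, RUNNING, BLOCKED, FINISHED };

    struct tcb {
        uint64_t regs[32];          /* saved general-purpose registers */
        uint64_t pc;                /* saved program counter */
        uint64_t sp;                /* saved stack pointer */
        void    *stack_base;        /* this thread's private execution stack */
        size_t   stack_size;
        enum thread_state state;    /* where the thread is in its lifecycle */
        struct tcb *next;           /* link in the scheduler's ready queue */
    };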

What is the difference between a Thread and a Process?

When and how do you switch between processes?

Scheduling states

Dispatch Loop Conceptually, the dispatching loop of the operating system looks as follows: Loop { RunThread(); ChooseNextThread(); SaveStateOfCPU(curTCB); LoadStateOfCPU(newTCB); } This is an infinite loop –One could argue that this is all that the OS does
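
A runnable way to see the SaveStateOfCPU/LoadStateOfCPU step is a tiny user-level dispatcher built on POSIX ucontext (available on Linux/glibc, though deprecated in newer POSIX). This is an assumed sketch, not how Nachos or a real kernel does it; swapcontext() plays the role of saving the current TCB's CPU state and loading the next one's.

    /* Sketch: a miniature dispatch loop over two user-level "threads". */
    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t sched_ctx, thread_ctx[2];
    static char stacks[2][65536];
    static int current = 0;

    static void worker(void) {
        for (int i = 0; i < 3; i++) {
            printf("thread %d, step %d\n", current, i);
            swapcontext(&thread_ctx[current], &sched_ctx);   /* "yield": SaveStateOfCPU(curTCB) */
        }
    }

    int main(void) {
        for (int t = 0; t < 2; t++) {
            getcontext(&thread_ctx[t]);                      /* initialize the context */
            thread_ctx[t].uc_stack.ss_sp = stacks[t];
            thread_ctx[t].uc_stack.ss_size = sizeof stacks[t];
            thread_ctx[t].uc_link = &sched_ctx;              /* where to go if worker returns */
            makecontext(&thread_ctx[t], worker, 0);
        }
        for (int round = 0; round < 3; round++)              /* the dispatch loop */
            for (current = 0; current < 2; current++)
                swapcontext(&sched_ctx, &thread_ctx[current]);  /* LoadStateOfCPU(newTCB) */
        return 0;
    }

The output interleaves the two workers, one step per dispatch, which is exactly the ChooseNextThread/RunThread alternation the slide describes.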

Context Switching

Recall: an OS needs to mediate access to resources: how do we share the CPU? Strategy 1: force everyone to cooperate – a thread willingly gives up the CPU by calling yield(), which calls into the scheduler, which context-switches to another thread – what if a thread never calls yield()? Strategy 2: use pre-emption – at a timer interrupt, the scheduler gains control and context-switches as appropriate
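
Strategy 2 relies on a timer interrupt. A rough user-level stand-in (an illustration using POSIX signals, not the kernel's actual mechanism) is a periodic SIGALRM: even though the loop below never calls yield(), the handler keeps regaining control, which is where a pre-emptive scheduler would context-switch.

    /* Sketch: a periodic SIGALRM standing in for the timer interrupt.
       The CPU-bound loop never yields, yet the handler runs anyway. */
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/time.h>

    static volatile sig_atomic_t ticks = 0;

    static void on_timer(int sig) {
        (void)sig;
        ticks++;                          /* a real scheduler would switch threads here */
    }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_timer;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGALRM, &sa, NULL);

        struct itimerval tv = { { 0, 10000 }, { 0, 10000 } };  /* fire every 10 ms */
        setitimer(ITIMER_REAL, &tv, NULL);

        while (ticks < 100) { /* busy loop that never calls yield() */ }
        printf("timer fired %d times despite no yield()\n", (int)ticks);
        return 0;
    }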

From Lecture: Two Thread Yield Consider the following code blocks: proc A() { B(); } proc B() { while(TRUE) { yield(); } } Suppose we have 2 threads: Threads S and T. Each thread's stack (stack growth downward) ends up as: A, B(while), yield, run_new_thread, switch – the same for Thread S and Thread T

Detour: Interrupt Controller Interrupts invoked with interrupt lines from devices Interrupt controller chooses interrupt request to honor – Mask enables/disables interrupts – Priority encoder picks highest enabled interrupt – Software interrupt set/cleared by software – Interrupt identity specified with ID line CPU can disable all interrupts with internal flag Non-maskable interrupt line (NMI) can't be disabled [Figure: device lines (timer, network, software interrupt, NMI) feed the controller's interrupt mask and priority encoder, which raise Interrupt/IntID to the CPU; the CPU has an Int Disable flag]

Max Number of Processes, Threads? Windows? Linux? Mac OS? How Many Threads and Processes are Enough?
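
There is no single answer; the practical caps come from kernel configuration, per-user limits, and available memory. On Linux specifically (an assumption; Windows and Mac OS expose their limits differently), one place to look is procfs, as in this small sketch:

    /* Sketch, Linux-specific: read the kernel's configured caps from procfs.
       These, plus per-user rlimits and memory, bound what you can create. */
    #include <stdio.h>

    static void show(const char *path) {
        char buf[64];
        FILE *f = fopen(path, "r");
        if (f && fgets(buf, sizeof buf, f))
            printf("%s = %s", path, buf);
        if (f) fclose(f);
    }

    int main(void) {
        show("/proc/sys/kernel/pid_max");      /* max PID value: rough cap on processes */
        show("/proc/sys/kernel/threads-max");  /* system-wide cap on threads */
        return 0;
    }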

Questions / Examples about Processes and Threads?

Review: Execution Stack Example Stack holds function arguments, return address Permits recursive execution Crucial to modern languages A(int tmp) { if (tmp<2) B(); printf(tmp); } B() { C(); } C() { A(2); } A(1); exit; addrX: addrY: addrU: addrV: addrZ:
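
For reference, here is the slide's example filled out into a compilable C program (forward declarations and a main() added); running it prints 2 and then 1, matching the stack walk-through in the slides that follow.

    /* The slide's A/B/C example, made runnable. */
    #include <stdio.h>

    void A(int tmp);                 /* forward declarations for the mutual recursion */
    void B(void);
    void C(void);

    void A(int tmp) {
        if (tmp < 2)
            B();                     /* return address ("addrX") pushed here */
        printf("%d\n", tmp);
    }

    void B(void) { C(); }            /* return address ("addrY") */
    void C(void) { A(2); }           /* return address ("addrU") */

    int main(void) {
        A(1);                        /* return address ("addrZ") */
        return 0;
    }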

Review: Execution Stack Example Stack holds function arguments, return address Permits recursive execution Crucial to modern languages A(int tmp) { if (tmp<2) B(); printf(tmp); } B() { C(); } C() { A(2); } A(1); exit; Stack Pointer Stack Growth A: tmp=1 ret=addrZ addrX: addrY: addrU: addrV: addrZ:

Review: Execution Stack Example Stack holds function arguments, return address Permits recursive execution Crucial to modern languages A(int tmp) { if (tmp<2) B(); printf(tmp); } B() { C(); } C() { A(2); } A(1); exit; Stack Pointer Stack Growth A: tmp=1 ret=addrZ B: ret=addrY addrX: addrY: addrU: addrV: addrZ:

Review: Execution Stack Example Stack holds function arguments, return address Permits recursive execution Crucial to modern languages A(int tmp) { if (tmp<2) B(); printf(tmp); } B() { C(); } C() { A(2); } A(1); exit; Stack Pointer Stack Growth A: tmp=1 ret=addrZ B: ret=addrY C: ret=addrU addrX: addrY: addrU: addrV: addrZ:

Review: Execution Stack Example Stack holds function arguments, return address Permits recursive execution Crucial to modern languages A(int tmp) { if (tmp<2) B(); printf(tmp); } B() { C(); } C() { A(2); } A(1); exit; A: tmp=2 ret=addrV Stack Pointer Stack Growth A: tmp=1 ret=addrZ B: ret=addrY C: ret=addrU addrX: addrY: addrU: addrV: addrZ:

Review: Execution Stack Example A(int tmp) { if (tmp<2) B(); printf(tmp); } B() { C(); } C() { A(2); } A(1); exit; A: tmp=2 ret=addrV Stack Pointer Stack Growth A: tmp=1 ret=addrZ B: ret=addrY C: ret=addrU addrX: addrY: addrU: addrV: addrZ: Output: 2

Review: Execution Stack Example A(int tmp) { if (tmp<2) B(); printf(tmp); } B() { C(); } C() { A(2); } A(1); exit; Stack Pointer Stack Growth A: tmp=1 ret=addrZ B: ret=addrY C: ret=addrU addrX: addrY: addrU: addrV: addrZ: Output: 2

Review: Execution Stack Example A(int tmp) { if (tmp<2) B(); printf(tmp); } B() { C(); } C() { A(2); } A(1); exit; Stack Pointer Stack Growth A: tmp=1 ret=addrZ B: ret=addrY addrX: addrY: addrU: addrV: addrZ: Output: 2

Review: Execution Stack Example A(int tmp) { if (tmp<2) B(); printf(tmp); } B() { C(); } C() { A(2); } A(1); exit; Stack Pointer Stack Growth A: tmp=1 ret=addrZ addrX: addrY: addrU: addrV: addrZ: Output: 2 1

Review: Execution Stack Example A(int tmp) { if (tmp<2) B(); printf(tmp); } B() { C(); } C() { A(2); } A(1); exit; addrX: addrY: addrU: addrV: addrZ: Output: 2 1