CS 162 Discussion Section Week 2 (9/16 – 9/20)

Who am I? Kevin Klues. Office Hours: 12:30pm-2:30pm, Soda 7th Floor, Alcove 2

Today’s Section
- Talk about the course projects (15 min)
  - Overall goals, grading, version control
  - Project 1 details
- Review of Lectures 3 and 4 (10 min)
- Worksheet and discussion (20 min)

Project Goals
- Learn to work in teams
- Use good engineering practices:
  - Version control, collaboration
  - Requirements specification
  - Design document
  - Implementation
  - Testing
  - [Performance, reliability, ...] analysis
- Understand lecture concepts at the implementation level

Project Grading
- Design docs [40 points]
  - First draft [10 points]
  - Design review [10 points]
  - Final design doc [20 points]
- Code [60 points]

Good Project Lifetime
- Day 0: Project released on course webpage
- Days 1-13: Team meets, discusses, and breaks up work on the design and any necessary prototyping
- Day 14: Initial design document due; team reviews the document with the TA
- Day 15: Implementation begins
- Day 20: Implementation is finished; team switches to writing test cases; design doc has been updated to reflect the implementation
- Day 21: Iteration and performance analysis
- Day 23: Team puts finishing touches on the write-up and gets to bed early

Design Documents
- Overview of the project as a whole, along with each of its subparts
- Header must contain the following info:
  - Project name and #
  - Group members' names and IDs
  - Section #
  - TA name
- Example docs are on the course webpage under: Projects and Nachos -> General Project Information

Design Document Structure
Each part of the project should be explained using the following structure:
- Overview
- Correctness constraints
- Declarations
- Descriptions
- Testing plan

Design Doc Length
- Keep it under 15 pages
- Points will be docked if it is too long!

Design Reviews
- Design reviews
  - Schedule a time (outside of section) with your section TA to meet and discuss your design
  - Every member must attend
  - The review will test that every member understands the design
- YOU are responsible for testing your code
  - We provide access to a simple autograder
  - But your project is graded against a much more extensive autograder

Project 1: Thread Programming
- Can be found on the course website, under the heading “Projects and Nachos”
- Stock Nachos has an incomplete thread system. Your job is to:
  - Complete it, and
  - Use it to solve several synchronization problems

Version Control for the Projects
- The course provides an SVN repo and a private GitHub repo for every group
- Use whichever you prefer
- Access: svn: git:

Project Questions?

Quiz
- (True/False) Each thread owns its own stack and heap.
- (True/False) Hardware provides better (higher-level) primitives than atomic load and store for constructing synchronization tools.
- (True/False) Correct threaded programs don't need to work for all interleavings of thread instruction sequences.
- (True/False) Timer interrupts are an example of non-preemptive multithreading.
- (Short Answer) What is an operation that either runs to completion or not at all called?

Lecture Review

Putting it together: Processes
[Diagram: the OS CPU scheduler multiplexes one CPU core among Process 1 ... Process N, one process at a time; each process has its own CPU state, I/O state, and memory]
- Switch overhead: high (CPU state: low; memory/I/O state: high)
- Process creation: high
- Protection: CPU: yes; memory/I/O: yes
- Sharing overhead: high (involves at least a context switch)
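
To make the “every process has its own memory” point concrete, here is a minimal sketch (not from the slides) using POSIX fork(): after the fork, the parent and child each have their own copy of the variable, so the child's write never reaches the parent.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int counter = 0;               /* lives in this process's private memory */
        pid_t pid = fork();            /* create a second process */
        if (pid < 0) { perror("fork"); exit(1); }

        if (pid == 0) {                /* child: has its own copy of counter */
            counter = 42;
            printf("child:  counter = %d\n", counter);   /* prints 42 */
            exit(0);
        }
        waitpid(pid, NULL, 0);         /* parent waits for the child to finish */
        printf("parent: counter = %d\n", counter);       /* still prints 0 */
        return 0;
    }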

Putting it together: Threads
[Diagram: the OS CPU scheduler multiplexes one CPU core among threads, one thread at a time; each of Process 1 ... Process N contains many threads, each with its own CPU state, while all threads in a process share that process's I/O state and memory]
- Switch overhead: low (only CPU state)
- Thread creation: low
- Protection: CPU: yes; memory/I/O: no
- Sharing overhead: low (thread switch overhead is low)
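
For contrast with the fork() sketch above, a minimal pthreads sketch (again, not from the slides): both threads run in the same address space, so the second thread's write to the shared variable is visible to main. Compile with -pthread.

    #include <pthread.h>
    #include <stdio.h>

    static int counter = 0;                        /* shared by all threads in the process */

    static void *child(void *arg) {
        counter = 42;                              /* write is visible to every other thread */
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, child, NULL);     /* new thread, same address space */
        pthread_join(t, NULL);                     /* wait for it to finish */
        printf("main: counter = %d\n", counter);   /* prints 42 */
        return 0;
    }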

Why Processes & Threads?
- Goals:
  - Multiprogramming: run multiple applications concurrently
  - Protection: don't want a bad application to crash the system!
- Solution:
  - Process: unit of execution and allocation
  - Virtual machine abstraction: give the process the illusion it owns the machine (i.e., CPU, memory, and I/O device multiplexing)
- Challenge:
  - Process creation & switching are expensive
  - Need concurrency within the same app (e.g., a web server)
- Solution:
  - Thread: decouple allocation and execution
  - Run multiple threads within the same process

Dispatch Loop
Conceptually, the dispatching loop of the operating system looks as follows:

    Loop {
        RunThread();
        ChooseNextThread();
        SaveStateOfCPU(curTCB);
        LoadStateOfCPU(newTCB);
    }

- This is an infinite loop
- One could argue that this is all that the OS does

Yielding through Internal Events
- Blocking on I/O
  - The act of requesting I/O implicitly yields the CPU
- Waiting on a “signal” from another thread
  - Thread asks to wait and thus yields the CPU
- Thread executes a yield()
  - Thread volunteers to give up the CPU

    computePI() {
        while (TRUE) {
            ComputeNextDigit();
            yield();
        }
    }

  - Note that yield() must be called by the programmer frequently enough!

Review: Two Thread Yield Example
Consider the following code blocks:

    proc A() {
        B();
    }

    proc B() {
        while (TRUE) {
            yield();
        }
    }

Suppose we have two threads, S and T, each running A().
[Diagram: the call stacks of Thread S and Thread T, each containing the frames A, B(while), yield, run_new_thread, switch, and kernel_yield; the switch in one thread's stack resumes the other thread]

Why allow cooperating threads?
- People cooperate; computers help/enhance people's lives, so computers must cooperate
  - By analogy, the non-reproducibility/non-determinism of people is a notable problem for “carefully laid plans”
- Advantage 1: Share resources
  - One computer, many users
  - One bank balance, many ATMs (what if ATMs were only updated at night?)
  - Embedded systems (robot control: coordinate arm & hand)
- Advantage 2: Speedup
  - Overlap I/O and computation
  - Multiprocessors: chop up the program into parallel pieces
- Advantage 3: Modularity
  - Chop a large problem up into simpler pieces
  - To compile, for instance, gcc calls cpp | cc1 | cc2 | as | ld
  - Makes the system easier to extend

Definitions
- Synchronization: using atomic operations to ensure cooperation between threads
  - For now, only loads and stores are atomic
  - We'll show that it is hard to build anything useful with only reads and writes
- Critical section: a piece of code that only one thread can execute at once
- Mutual exclusion: ensuring that only one thread executes the critical section
  - One thread excludes the others while doing its task
  - Critical section and mutual exclusion are two ways of describing the same thing
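
To see why plain loads and stores are not enough, here is a small illustrative sketch (not from the slides): counter = counter + 1 is really a load, an add, and a store, so when the two threads interleave between the load and the store an update is lost, and the final count usually comes out below 2,000,000. (Compile with gcc -O0 -pthread so each increment really goes through memory.)

    #include <pthread.h>
    #include <stdio.h>

    #define N_ITERS 1000000

    static int counter = 0;                    /* shared; updated with plain loads/stores */

    static void *worker(void *arg) {
        for (int i = 0; i < N_ITERS; i++)
            counter = counter + 1;             /* load, add, store: NOT atomic */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", counter);     /* expected 2000000, usually less */
        return 0;
    }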

Better Implementation of Locks by Disabling Interrupts
Key idea: maintain a lock variable and impose mutual exclusion only during operations on that variable

    int value = FREE;

    Acquire() {
        disable interrupts;
        if (value == BUSY) {
            put thread on wait queue;
            Go to sleep();
            // Enable interrupts?
        } else {
            value = BUSY;
        }
        enable interrupts;
    }

    Release() {
        disable interrupts;
        if (anyone on wait queue) {
            take thread off wait queue;
            Put at front of ready queue;
        } else {
            value = FREE;
        }
        enable interrupts;
    }
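
For comparison with the race sketch above, here is the same critical section once it is bracketed by a lock; pthread_mutex is used only as a stand-in for the slide's Acquire()/Release(), purely for illustration.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int counter = 0;

    static void *worker(void *arg) {
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);     /* Acquire(): only one thread past this point */
            counter = counter + 1;         /* critical section */
            pthread_mutex_unlock(&lock);   /* Release(): let the next waiter in */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", counter);   /* now reliably 2000000 */
        return 0;
    }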

How to Re-enable After Sleep()?
Since interrupts are disabled when you call sleep():
- It is the responsibility of the next thread to re-enable interrupts
- When the sleeping thread wakes up, it returns to acquire and re-enables interrupts
[Diagram: interleaved timeline of Thread A and Thread B showing disable ints, sleep, sleep return, enable ints, context switch, yield, yield return, and disable int as control passes back and forth between the two threads]

Worksheet…

Quiz
- (True/False) Each thread owns its own heap and stack.
- (True/False) Hyper-threading involves only 1 hardware thread, but many virtual threads.
- (True/False) Locks can be constructed by enabling/disabling interrupts.
- (True/False) Finer-grained sharing leads to an increase in concurrency, which leads to better performance.
- (Short Answer) What is the section of code between lock.acquire() and lock.release() called?