Discussion Week 2 TA: Kyle Dewey. Overview Concurrency Process level Thread level MIPS - switch.s Project #1.


Discussion Week 2 TA: Kyle Dewey

Overview Concurrency Process level Thread level MIPS - switch.s Project #1

Process Level UNIX/Linux: fork() Windows: CreateProcess()

fork()/waitpid() Example

while ( true ) { fork(); } // a "fork bomb": every process loops forever creating new processes until the system runs out of resources

Threading Overview

User-space Threads OS does not know about them Handle their own scheduling If one blocks, all block Cannot exploit SMP

Blocking Example (diagram: User Threads 1-3 running inside Process 1, with Process 2 and the OS alongside)

Thread Standpoint (diagram: the same system as the threads see it: User Threads 1-3 within Process 1, plus Process 2 and the OS)

OS Standpoint (diagram: the OS sees only Process 1 and Process 2, not the user threads inside them)

Blocking OS only sees a process OS blocks the process, in turn blocking all user-space threads

SMP As far as the OS can tell, each process has only a single thread Without kernel assistance, this cannot be changed Only one thread means only one CPU (diagram: Process 1, Process 2, and the OS)

Kernel-Assisted OS has knowledge of threads OS schedules them Act like individual processes sharing an address space

General Pros/Cons Kernel threads can exploit SMP Kernel threads will not cause all threads to block User-space threads are lightweight Context switch is cheap Likely far less code

These are the concepts!

Then implementation happened...

Question: Do Pthreads threads run in user-space or are they kernel-assisted?

Answer: Yes.

Pthreads Really just a standard with a number of possible implementations An implementation can be kernel-assisted or run in user-space On most OSes it is kernel-assisted

Pthreads Example

Java Threads Again, merely a standard Most implement as kernel-assisted threads

Java Example
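Again, the transcript omits the slide's code. A minimal sketch using java.lang.Thread (class and thread names are illustrative):

```java
public class JavaThreadExample {
    public static void main(String[] args) throws InterruptedException {
        // A Runnable defines the work; Thread is the (usually
        // kernel-assisted) thread that executes it.
        Runnable task = () -> System.out.println(
            Thread.currentThread().getName() + " running");
        Thread t1 = new Thread(task, "worker-1");
        Thread t2 = new Thread(task, "worker-2");
        t1.start();               // start() actually creates the thread
        t2.start();
        t1.join();                // wait for both threads to finish
        t2.join();
        System.out.println("main done");
    }
}
```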

Kernel Thread Implementation OS can implement threads however it likes Pthreads and Java are libraries built on top of the threading primitives provided by the OS

Linux vs. Windows Linux provides the clone() system call Threads are actually processes, often referred to as "lightweight processes" Windows provides CreateThread()

NACHOS Threads Kernel-assisted Cannot currently handle interrupts or preemption correctly Similar to MS-DOS...until project 2

MS-DOS/NACHOS One thread of execution One process can run OS is more like a large, complex software library

Thread Primitives Fork() - acts much like pthread_create Yield() - gives up the CPU for any other available threads Sleep() - like yield, but calling thread is blocked Finish() - terminates calling thread

For Project 1 Fork() creates, but does not immediately start running, a new thread Though there is no I/O, Sleep() can still be called to block while waiting for a critical region to clear

NACHOS Threads

Concurrency Looks easy Really hard to get right Really hard No seriously, borderline impossible

Race Condition Different results are possible based on different process/thread orderings The ordering may happen to be correct most of the time, which makes the bug hard to reproduce

Deadlock Two processes/threads each wait for the other to do something While waiting, neither can do the thing the other is waiting for A potential outcome of a race condition

Critical Region A point in code where the ordering matters Almost always this is some state that is shared between processes/threads Client: connect to server:port1, connect to server:port2, do something with both Server: accept from port1, accept from port2, do something with both

Fixing the Problem Do not share state Only share read-only state Carefully regulate write access to shared state

Regulation A critical region can be manipulated by only one thread at a time Need a way to enforce that at most one thread is in such a region at any point in time

Solving in Java Java provides the synchronized keyword for blocks Only one thread at a time may access a block marked with the synchronized keyword
int x = 0;
public synchronized void set( int y ) { x = y; }
public int get() { return x; }

Who cares about Java? Many concurrency primitives work exactly like this, just with a little more work One call upon entrance to critical region, another upon exit The entrance and exit are implicit through blocks with Java

Semaphores Simply a shared integer One call decrements, another increments By convention, 0 is locked, and values > 0 are unlocked Values < 0 mean the semaphore is not working!

Semaphores Increment/decrement are atomic - they are uninterruptible The highest possible number it can hold is equal to the max number of callers to the region it protects

Example
int x = 0;
Semaphore s;
public void set( int y ) {
  s.decrement(); // wait/P/down
  x = y;
  s.increment(); // signal/V/up
}
public int get() { return x; }

Project 1 Task 1 Experiment according to instructions Explain the execution of multithreaded code Add semaphores and contrast the difference

Project 1 Task 2 Implement locks - essentially semaphores with a maximum of one caller at a time Given all the semaphore code to look at Hint hint it is a special case of a semaphore

Project 1 Task 3 Implement conditions Require a correct Lock implementation Allows a group of threads to synchronize on a given section of code Can enforce that all must be at the same point of execution Block until this is true

Project 1 Task 4 Identify and describe a race condition in a given section of code Fix the race condition using semaphores Fix it another way using locks and/or conditions