CS 5204 Operating Systems Lecture 5


CS 5204 Operating Systems Lecture 5 Godmar Back

Announcements
- Project/survey paper handout on Wednesday

Motivation
- Q: Is Lauer/Needham relevant to current systems?
- Which model should we pick for which application? Both are available on current systems.
- Must understand implementation trade-offs on contemporary systems, in addition to programming-model trade-offs.

Implementing Threads
Issues:
- Who maintains thread state/stack space?
- How are threads mapped onto CPUs?
- How is coordination/synchronization implemented?
- How do threads interact with I/O?
- How do threads interact with existing APIs such as signals?
- How do threads interact with language runtimes (e.g., garbage collectors)?
- How do we terminate threads safely?

Managing Stack Space
- Stacks require contiguous virtual address space, typically separated by guard regions
  - leads to virtual address space fragmentation (example: 32-bit Linux)
- What size should a stack have?
- How to detect stack overflow? Ignore vs. software checks vs. hardware (guard pages)
- Related: how to implement
  - obtaining the local thread id ("pthread_self()")
  - thread-local storage (TLS)
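A minimal sketch of the POSIX-level knobs behind these questions: picking a stack size, requesting a guard region so overflow is detected in hardware, and using compiler-supported TLS. The 1 MiB / 64 KiB sizes and the worker function are illustrative choices, not values from the lecture.

    /* Sketch: stack size, guard pages, and TLS with POSIX threads. */
    #include <pthread.h>
    #include <stdio.h>

    static __thread int tls_counter;      /* one copy per thread (TLS) */

    static void *worker(void *arg) {
        (void)arg;
        tls_counter++;                    /* touches only this thread's copy */
        printf("thread %lu: tls_counter = %d\n",
               (unsigned long)pthread_self(), tls_counter);
        return NULL;
    }

    int main(void) {
        pthread_attr_t attr;
        pthread_t tid;

        pthread_attr_init(&attr);
        pthread_attr_setstacksize(&attr, 1 << 20);   /* 1 MiB stack */
        /* Guard pages at the end of the stack turn an overflow into a
         * fault ("hardware" detection) instead of silent corruption. */
        pthread_attr_setguardsize(&attr, 1 << 16);
        pthread_create(&tid, &attr, worker, NULL);
        pthread_join(tid, NULL);
        pthread_attr_destroy(&attr);
        return 0;
    }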

Nonpreemptive Threads
- Also known as coroutines
- CPU switches at well-defined points ("yield", or synchronization points: "lock", "wait")
- Low context-switch cost, similar to a procedure call
- Advantages:
  - Can make integrating garbage collection easier
  - Can allow for very fine-grained resource control (Capriccio)
  - Can be implemented without kernel support
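To make the "switch at well-defined points" idea concrete, here is a minimal sketch of a coroutine built on the POSIX ucontext API; the names (yield_to_main, coroutine) and the 64 KiB stack are my own choices, not from the lecture.

    /* Two contexts: main and one coroutine. Control moves only at the
     * explicit swapcontext() calls -- the defining property of
     * nonpreemptive threads. */
    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, co_ctx;
    static char co_stack[64 * 1024];          /* coroutine's private stack */

    static void yield_to_main(void) {
        swapcontext(&co_ctx, &main_ctx);      /* voluntary switch point */
    }

    static void coroutine(void) {
        for (int i = 0; i < 3; i++) {
            printf("coroutine step %d\n", i);
            yield_to_main();
        }
    }

    int main(void) {
        getcontext(&co_ctx);
        co_ctx.uc_stack.ss_sp = co_stack;
        co_ctx.uc_stack.ss_size = sizeof co_stack;
        co_ctx.uc_link = &main_ctx;           /* where to go if coroutine returns */
        makecontext(&co_ctx, coroutine, 0);

        for (int i = 0; i < 3; i++) {
            printf("main: resuming coroutine\n");
            swapcontext(&main_ctx, &co_ctx);  /* run until the coroutine yields */
        }
        return 0;
    }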

Nonpreemptive Threads (cont'd)
- Disadvantages:
  - Can increase latency
  - Hard to extend to multiprocessor machines
  - Makes termination of uncooperative threads hard (why?)
- Note: using nonpreemptive threads does not negate the need for locks – why?

Preemptive Threads
- CPU can switch at any time
- Advantages:
  - Allows for quasi-parallelism (latency benefits)
- Disadvantages:
  - Higher context-switch cost: more state to save
  - Requires kernel support
  - Can make scheduling & GC control harder

Example: x86
- Nonpreemptive = C calling conventions:
  - Caller-saved: eax, ecx, edx + floating point
  - Callee-saved: ebx, esi, edi, esp, ebp, eip, for a jmpbuf size of 6*4 = 24 bytes
- Preemptive = save entire state: all registers + 108 bytes for floating-point context
- Note: context switch cost = save/restore state cost + scheduling overhead + lost locality cost
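The following sketch shows what "save only the callee-saved registers" looks like for a cooperative switch on 32-bit x86 (GCC/Linux assumed): four registers are pushed on the outgoing thread's stack, esp is stored in its context, and eip is carried implicitly by the return address, matching the six 4-byte slots counted above. switch_to and struct context are my own names; seeding the stack of a brand-new thread is not shown.

    struct context { void *esp; };       /* all we keep per thread */

    void switch_to(struct context *from, struct context *to);

    __asm__(
        ".globl switch_to\n"
        "switch_to:\n"
        "    pushl %ebp\n"                /* callee-saved registers ... */
        "    pushl %ebx\n"
        "    pushl %esi\n"
        "    pushl %edi\n"
        "    movl 20(%esp), %eax\n"       /* 'from' context */
        "    movl %esp, (%eax)\n"         /* save outgoing stack pointer */
        "    movl 24(%esp), %eax\n"       /* 'to' context */
        "    movl (%eax), %esp\n"         /* switch stacks */
        "    popl %edi\n"                 /* restore incoming registers */
        "    popl %esi\n"
        "    popl %ebx\n"
        "    popl %ebp\n"
        "    ret\n");                     /* 'returns' into the other thread (eip) */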

On Termination
- How will you clean up shared state if you have to terminate a thread?
- Strategies:
  - Avoid shared state where possible
  - Disable termination
  - Use cleanup handlers (try/finally, pthread_cleanup)
- Example: a thread body that moves packets between two shared queues; if it is killed between the dequeue and the enqueue, the in-flight packet is lost

      Queue q1, q2; // shared

      // runs forever; must be terminated from outside
      thread_body() {
          while (true) {
              Packet p = q1.dequeue();
              q2.enqueue(p);
          }
      }

      // alternative: cooperative shutdown via a 'done' flag
      thread_body() {
          while (!done) {
              Packet p = q1.dequeue();
              q2.enqueue(p);
          }
      }
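A hedged sketch of the cleanup-handler strategy using POSIX cancellation: pthread_cleanup_push registers a handler that runs if the thread is cancelled while it holds a dequeued packet. The queue and packet functions (queue_get, queue_put, packet_free) are hypothetical stand-ins for whatever the real program uses.

    #include <pthread.h>

    extern void *queue_get(void *q);           /* hypothetical blocking dequeue */
    extern void queue_put(void *q, void *p);   /* hypothetical enqueue */
    extern void packet_free(void *p);          /* hypothetical destructor */

    static void drop_packet(void *p) {
        packet_free(p);        /* don't leak the in-flight packet on cancel */
    }

    void *thread_body(void *arg) {
        void **q = arg;        /* q[0] = input queue, q[1] = output queue */
        for (;;) {
            void *p = queue_get(q[0]);          /* a cancellation point */
            pthread_cleanup_push(drop_packet, p);
            queue_put(q[1], p);                 /* may also block/cancel */
            pthread_cleanup_pop(0);             /* 0 = don't run handler now */
        }
    }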

User-level Threads (aka 1:N)
- Kernel sees one thread per process
- Scheduling + synchronization done in user space
- Potentially fast context switches; fast locks (if nonpreemptive! – see the sketch below)
- Threads are lightweight
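Why locks are nearly free with nonpreemptive user-level threads: since a switch can only happen at an explicit yield, an uncontended acquire is an ordinary flag test with no atomic instructions and no kernel trap. This is only a sketch; thread_yield() is a hypothetical call into the user-level scheduler.

    typedef struct { int held; } ulock_t;      /* initialize held = 0 */

    extern void thread_yield(void);            /* hypothetical scheduler entry */

    void ulock_acquire(ulock_t *l) {
        while (l->held)        /* another thread holds it: give up the CPU */
            thread_yield();
        l->held = 1;           /* no preemption between the test and the set */
    }

    void ulock_release(ulock_t *l) {
        l->held = 0;
    }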

User-level Threads (cont'd)
- Drawbacks:
  - I/O blocks the entire process
  - Often nonpreemptive (although that can be an advantage)
  - Both of these can be remedied with signals:
    - virtual timers for preemption (sketched below)
    - asynchronous I/O signals
    - but this is expensive and often fragile
  - Not multiprocessor capable
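A sketch of the "virtual timer" remedy, assuming POSIX setitimer and sigaction: SIGVTALRM fires periodically as the process consumes CPU time, giving a user-level thread library a hook at which it could preempt the running thread. Here the handler merely sets a flag; the names on_tick and sched_yield_requested are invented for the example.

    #include <signal.h>
    #include <sys/time.h>

    static volatile sig_atomic_t sched_yield_requested;

    static void on_tick(int sig) {
        (void)sig;
        sched_yield_requested = 1;   /* a real library would switch threads here */
    }

    static void start_preemption(void) {
        struct sigaction sa;
        sa.sa_handler = on_tick;
        sa.sa_flags = 0;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGVTALRM, &sa, NULL);

        /* deliver SIGVTALRM every 10 ms of CPU time used by the process */
        struct itimerval tv = {
            .it_interval = { .tv_sec = 0, .tv_usec = 10000 },
            .it_value    = { .tv_sec = 0, .tv_usec = 10000 },
        };
        setitimer(ITIMER_VIRTUAL, &tv, NULL);
    }

    int main(void) {
        start_preemption();
        for (;;) {                       /* stand-in for running user threads */
            if (sched_yield_requested) {
                sched_yield_requested = 0;
                /* user-level scheduler would pick the next thread here */
            }
        }
    }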

Kernel-level Threads (aka 1:1)
- Kernel manages threads
- I/O blocks only the current thread
- Context switch requires a kernel trap; synchronization may require kernel traps as well
- OS timer interrupts provide preemption; the kernel scheduler schedules threads
- Allows use of SMPs

M:N Model
- Implemented in Solaris; an implementation for Linux existed in NGPT
- Idea: create a small number M of LWPs ("light-weight processes") onto which a larger number N of user-level threads are scheduled
- Only create an LWP if all LWPs are currently blocked; time out unused LWPs

Lightweight Processes (M:N)
(figure) Source: Multithreading in the Solaris Operating Environment, Sun 2002

Drawbacks of the M:N Model
- Solaris discarded M:N scheduling over LWPs; Linux never introduced it (NPTL won over NGPT)
- Why:
  - Automatic concurrency control is hard: how many LWPs should be allocated?
  - Experience showed limited gain from faster context switches in user mode
  - _schedlock contention
  - Signal implementation is difficult: needed a manager thread for asynchronous signals

Linux NPTL
- Recent 1:1 model
- Enabled by kernel changes:
  - introduction of task groups in the kernel (e.g., exit_group)
  - scalable scheduling facilities (O(1) scheduler)
- Fast-path synchronization in user mode:
  - futex ("fast userspace mutex") avoids the need to enter the kernel in the common case
  - FUTEX_WAIT / FUTEX_WAKE (see the sketch below)
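To illustrate the futex fast path, here is a hedged sketch of a simple lock in the spirit of what NPTL does (the three-state protocol follows Ulrich Drepper's "Futexes Are Tricky"; this is not NPTL's actual code). State 0 = unlocked, 1 = locked, 2 = locked with possible waiters; the uncontended case is a single compare-and-swap in user mode, and the kernel is entered only via FUTEX_WAIT/FUTEX_WAKE when there is contention. Assumes Linux and C11 atomics.

    #include <linux/futex.h>
    #include <stdatomic.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static long futex(atomic_int *addr, int op, int val) {
        return syscall(SYS_futex, addr, op, val, NULL, NULL, 0);
    }

    void lock(atomic_int *m) {             /* *m must start at 0 */
        int c = 0;
        /* fast path: uncontended acquire, no kernel involvement */
        if (atomic_compare_exchange_strong(m, &c, 1))
            return;
        /* slow path: advertise contention (state 2), then sleep */
        if (c != 2)
            c = atomic_exchange(m, 2);
        while (c != 0) {
            futex(m, FUTEX_WAIT, 2);       /* block only while value is 2 */
            c = atomic_exchange(m, 2);
        }
    }

    void unlock(atomic_int *m) {
        if (atomic_exchange(m, 0) == 2)    /* waiters possible: wake one */
            futex(m, FUTEX_WAKE, 1);
    }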

Outlook
- 1996 talk by John Ousterhout: "Why Threads Are A Bad Idea"
  - Threads are hard to program (synchronization, deadlock)
  - Threads break abstractions
  - Threads are hard to make fast
  - Threads aren't well supported
  - Conclusion: use threads only when their power is needed, for true CPU concurrency – otherwise use a single-threaded, event-based model
- SEDA & Capriccio (and others) followed

Summary
Implementation issues:
- Stack management
- Preemptive vs. nonpreemptive ("cooperative multitasking")
- User-level vs. kernel-level models
- I/O management & signal implementation