CHAPTER 7 CONCURRENT SOFTWARE Copyright © 2000, Daniel W. Lewis. All Rights Reserved.

Program Organization of a Foreground/Background System

[Diagram: the main program starts, initializes, and then waits for interrupts; each of three hardware interrupts invokes its own ISR (Task #1, #2, #3), and each ISR ends with an IRET.]

Foreground/Background System

Most of the actual work is performed in the "foreground" ISRs, with each ISR handling a particular hardware event. The main program performs initialization and then enters a "background" loop that waits for interrupts to occur. This organization allows the system to respond to external events with a predictable amount of latency.

ISR with Long Execution Time

[Flowchart: on Input Ready, the ISR inputs the data, processes it, and then spins in a waiting loop ("Output Device Ready?") before outputting the data, executing STI, sending the EOI command to the PIC, and returning with IRET.]

Removing the Waiting Loop from the ISR

[Flowchart: at startup the background initializes a FIFO queue. On Input Ready, the ISR inputs and processes the data, enqueues the result, executes STI, sends the EOI command to the PIC, and returns with IRET. The background loop repeatedly checks "Data Enqueued?" and, when the output device is ready, dequeues and outputs the data.]

Interrupt-Driven Output

[Flowchart: the input ISR inputs and processes the data, enqueues it in the FIFO queue, executes STI, sends the EOI command to the PIC, and returns with IRET. A separate output ISR, invoked on Output Ready, checks "Data Enqueued?"; if so, it dequeues and outputs the data, then sends the EOI command to the PIC and returns with IRET.]

Kick Starting Output

[Flowchart: the input ISR inputs and processes the data, enqueues it, and then "kick starts" output with CALL SendData before STI, EOI, and IRET. The Output Ready ISR likewise calls SendData before EOI and IRET. The SendData subroutine first checks "Output Device Busy?"; if busy it simply returns (RET). Otherwise it checks "Data Enqueued?"; if so it sets the busy flag, dequeues the data, and outputs it, and if not it clears the busy flag.]

Preventing Interrupt Overrun (ISR Busy Flag)

[Flowchart: on Input Ready, the ISR checks "ISR Busy Flag Set?" If set, this interrupt is simply ignored (interrupts are re-enabled by the IRET). Otherwise the ISR sets the busy flag, inputs the data, sends the EOI command to the PIC (removing the interrupt request that invoked this ISR), and executes STI to allow interrupts from any device; once interrupts are re-enabled, interrupts from lower-priority devices (and this device too) are allowed. The ISR then processes the data, writes the result to the output queue, kick starts output, clears the busy flag, and returns with IRET.]

Preventing Interrupt Overrun (PIC Mask Bit)

[Flowchart: on Input Ready, the ISR sets the mask bit for this device in the 8259 PIC, disabling future interrupts from this device. It then inputs the data, sends the EOI command to the PIC (removing the interrupt request that invoked this ISR and allowing interrupts from lower-priority devices), and executes STI to allow interrupts from higher-priority devices. The ISR then processes the data, writes the result to the output queue, kick starts output, clears the mask bit for this device in the 8259 PIC (re-enabling future interrupts from this device), and returns with IRET.]

Moving Work into the Background

Move non-time-critical work (such as updating a display) into a background task. The foreground ISR writes data to a queue; the background removes and processes it. This is an alternative to ignoring one or more interrupts as the result of input overrun.

Limitations

The best possible performance requires moving as much work as possible into the background. The background then becomes a collection of queues and the routines that process their data. This optimizes the latency of the individual ISRs, but the background now begs for a managed allocation of processor time.

Multi-Threaded Architecture

[Diagram: each ISR feeds a queue that is serviced by its own background thread; the threads run on top of a multi-threaded run-time function library (the real-time kernel).]

Thread Design

Threads usually perform some initialization and then enter an infinite processing loop. At the top of the loop, the thread relinquishes the processor while it waits for data to become available, an external event to occur, or a condition to become true.

Concurrent Execution of Independent Threads

Each thread runs as if it had its own CPU, separate from those of the other threads. Threads are designed, programmed, and behave as if they were the only thread running. Partitioning the background into a set of independent threads simplifies each thread, and thus reduces total program complexity.

Each Thread Maintains Its Own Stack and Register Contents

[Diagram: the context of each thread (1 through N) consists of its own stack plus its own copy of the CPU registers: CS:EIP, SS:ESP, EAX, EBX, ..., EFlags.]

Concurrency

Only one thread runs at a time while the others are suspended. The processor switches from one thread to another so quickly that all threads appear to run simultaneously; the threads are said to run concurrently. The programmer assigns a priority to each thread, and the scheduler uses these priorities to determine which thread to run next.

Real-Time Kernel

Threads call a library of run-time routines (known as the real-time kernel) that manages the system's resources. The kernel provides mechanisms to switch between threads, and for coordination, synchronization, communication, and prioritization.

Context Switching

Each thread has its own stack and a special region of memory referred to as its context. A context switch from thread A to thread B first saves all CPU registers in context A, and then reloads all CPU registers from context B. Since the CPU registers include SS:ESP and CS:EIP, reloading context B reactivates thread B's stack and resumes execution where thread B left off when it was last suspended.

Context Switching

[Diagram: thread A executes while thread B is suspended; saving context A and restoring context B suspends A and resumes B; later, saving context B and restoring context A switches back.]

Non-Preemptive Multi-Tasking

Threads call a kernel routine to perform the context switch. The thread relinquishes control of the processor, thus allowing another thread to run. The context-switch call is often referred to as a yield, and this form of multi-tasking is often referred to as cooperative multi-tasking.

Non-Preemptive Multi-Tasking

When an external event occurs, the processor may be executing a thread other than the one designed to process the event. The first opportunity to execute the needed thread will not occur until the current thread reaches its next yield, and when that yield does occur, other threads may be scheduled to run first. In most cases, this makes it impossible or extremely difficult to predict the maximum response time of a non-preemptive multi-tasking system.

Non-Preemptive Multi-Tasking

The programmer must call the yield routine frequently, or else system response time may suffer. Yields must be inserted in any loop where a thread is waiting for some external condition. A yield may also be needed inside other loops that take a long time to complete (such as reading or writing a file), or distributed periodically throughout a lengthy computation.

Context Switching in a Non-Preemptive System

[Flowchart: after thread initialization, the thread enters its processing loop; whenever it must wait, it yields to the other threads before continuing with its data processing. On each yield, the scheduler selects the highest-priority thread that is ready to run; if that is not the current thread, the current thread is suspended and the new thread is resumed.]

Preemptive Multi-Tasking

Hardware interrupts trigger the context switch. When an external event occurs, a hardware ISR is invoked. After servicing the interrupt request, the ISR raises the priority of the thread that processes the associated data, switches context to the highest-priority thread that is ready to run, and returns to it. This significantly improves system response time.

Preemptive Multi-Tasking

Preemption eliminates the programmer's obligation to include explicit calls to the kernel to perform context switches within the various background threads. The programmer no longer needs to worry about how frequently the context-switch routine is called; it is called only when needed - i.e., in response to external events.

Preemptive Context Switching

[Diagram: thread A is executing (thread B suspended) when a hardware interrupt occurs; the ISR processes the interrupt request and then performs a context switch. The scheduler selects the highest-priority thread that is ready to run; if that is not the current thread, the current thread is suspended and the new thread is resumed, so the IRET returns to thread B while thread A remains suspended.]

Critical Sections

Critical section: a code sequence whose proper execution is based on the assumption that it has exclusive access to the shared resources it uses during the execution of the sequence. Critical sections must be protected against preemption, or else the integrity of the computation may be compromised.

Atomic Operations

Atomic operations are those that execute to completion without preemption. Critical sections must be made atomic, either by:

Disabling interrupts for their duration, or

Acquiring exclusive access to the shared resource through arbitration before entering the critical section and releasing it on exit.

Threads, ISRs, and Sharing

1. Between a thread and an ISR: data corruption may occur if the thread's critical section is interrupted to execute the ISR.
2. Between two ISRs: data corruption may occur if the critical section of one ISR can be interrupted to execute the other ISR.
3. Between two threads: data corruption may occur unless execution of their critical sections is coordinated.

Shared Resources

A similar situation applies to other kinds of shared resources - not just shared data. Consider two or more threads that want to simultaneously send data to the same (shared) disk, printer, network card, or serial port. If access is not arbitrated so that only one thread uses the resource at a time, the data streams might get mixed together, producing nonsense at the destination.

Uncontrolled Access to a Shared Resource (the Printer)

[Diagram: thread A sends "HELLO\n" and thread B sends "goodbye" to the same shared printer; without arbitration the printer receives the interleaved nonsense "HgoELodLO bye".]

Protecting Critical Sections

Non-preemptive system: the programmer has explicit control over where and when a context switch occurs - except for ISRs! Preemptive system: the programmer has no control over the time and place of a context switch.

Protection options:

Disabling interrupts
Spin lock
Mutex
Semaphore

Disabling Interrupts

The overhead required to disable (and later re-enable) interrupts is negligible, which makes this a good choice for short critical sections. Disabling interrupts during the execution of a long critical section, however, can significantly degrade system response time.

Spin Locks

[Flowchart: loop while "Flag set?"; once the flag is clear, set the flag, execute the critical section, then clear the flag. If the flag is set, another thread is currently using the shared resource and will clear the flag when done.]

Spin-lock in C:

    do {
        disable() ;       /* test-and-set must be atomic */
        ok = !flag ;
        flag = TRUE ;
        enable() ;
    } while (!ok) ;

    /* critical section */

    flag = FALSE ;

Spin-lock in assembly:

    L1: MOV   AL,1
        XCHG  [_flag],AL    ; atomic exchange = test-and-set
        OR    AL,AL
        JNZ   L1            ; flag was already set: spin

        ; critical section

        MOV   BYTE [_flag],0

Spin Locks vs. Semaphores

A non-preemptive system requires a kernel call inside the spin-lock loop to let other threads run. Context switching to and from a spinning thread can be a significant overhead (saving and restoring each thread's registers and stack). Semaphores eliminate these context switches by suspending the waiting thread until the flag is released.