Copyright © 2000, Daniel W. Lewis. All Rights Reserved. CHAPTER 7 CONCURRENT SOFTWARE

Program Organization of a Foreground/Background System [Diagram: the main program starts, initializes, and then waits for interrupts; each of Task #1, Task #2, and Task #3 has its own ISR, entered by an interrupt and ending with IRET.]

Foreground/Background System Most of the actual work is performed in the "foreground" ISRs, with each ISR processing a particular hardware event. The main program performs initialization and then enters a "background" loop that waits for interrupts to occur. This allows the system to respond to external events with a predictable amount of latency.
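A minimal sketch of this organization in C, in the style of the examples later in this chapter; Install_ISR(), Process_Data(), DEVICE_VECTOR, DATA_PORT, PIC_PORT, and EOI are illustrative names rather than anything defined by the text:

    void interrupt Device_ISR(void)           /* foreground: one ISR per hardware event */
    {
        BYTE8 data = inportb(DATA_PORT) ;     /* service the device */
        Process_Data(data) ;                  /* the real work happens here */
        outportb(PIC_PORT, EOI) ;             /* send EOI command to the PIC */
    }

    void main(void)
    {
        Install_ISR(DEVICE_VECTOR, Device_ISR) ;  /* initialization */
        enable() ;                                /* allow interrupts */
        for (;;) ;                                /* background loop: wait for interrupts */
    }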

Task State and Serialization

    unsigned int byte_counter ;

    void Send_Request_For_Data(void)
    {
        outportb(CMD_PORT, RQST_DATA_CMD) ;
        byte_counter = 0 ;
    }

    void interrupt Process_One_Data_Byte(void)
    {
        BYTE8 data = inportb(DATA_PORT) ;
        switch (++byte_counter)
        {
            case 1: Process_Temperature(data) ; break ;
            case 2: Process_Altitude(data) ;    break ;
            case 3: Process_Humidity(data) ;    break ;
            ……
        }
    }

ISR with Long Execution Time [Flowchart: the ISR, entered on Input Ready, inputs the data and processes it, then loops testing Output Device Ready? until the answer is Yes before outputting the data; it also executes STI and sends the EOI command to the PIC before returning with IRET.]

Removing the Waiting Loop from the ISR [Flowchart: the ISR, entered on Input Ready, executes STI, inputs the data, processes it, and enqueues the result; it then sends the EOI command to the PIC and returns with IRET. The background, after initializing the FIFO queue, loops testing Data Enqueued? and Output Device Ready?; when both are Yes it dequeues the data and outputs it.]
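The same structure sketched in C; enqueue(), dequeue(), queue_not_empty(), output_device_ready(), init_queue(), Process(), and the port names are illustrative helpers built on a simple FIFO, not functions from the text:

    void interrupt Input_ISR(void)            /* ISR no longer waits on the output device */
    {
        BYTE8 data = inportb(DATA_PORT) ;     /* input data */
        enqueue(Process(data)) ;              /* process data and enqueue the result */
        outportb(PIC_PORT, EOI) ;             /* send EOI command to the PIC */
    }

    void Background(void)
    {
        init_queue() ;                        /* initialize the FIFO queue */
        for (;;)                              /* background loop */
            if (queue_not_empty() && output_device_ready())
                outportb(OUT_PORT, dequeue()) ;   /* dequeue data and output it */
    }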

Interrupt-Driven Output [Flowchart: the input ISR, entered on Input Ready, executes STI, inputs the data, processes it, and enqueues the result in the FIFO queue; it then sends the EOI command to the PIC and returns with IRET. The output ISR, entered on Output Ready, tests Data Enqueued?; if Yes it dequeues the data and outputs it, then sends the EOI command to the PIC and returns with IRET.]

Kick Starting Output [Flowchart: the input ISR, entered on Input Ready, executes STI, inputs the data, processes it, and enqueues the result; if the output device is not busy (No!) it calls the SendData subroutine (the kick start), then sends the EOI command to the PIC and returns with IRET. The output ISR, entered on Output Ready, calls SendData, sends the EOI command to the PIC, and returns with IRET. The SendData subroutine tests Data Enqueued?; if Yes it sets the busy flag, dequeues the data, and outputs it; otherwise it clears the busy flag; either way it returns with RET.]
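One way to write the kick-start logic in C, following the flowchart above; the FIFO helpers and port names are again illustrative:

    static volatile int busy = FALSE ;        /* output device busy flag */

    void SendData(void)                       /* called from both ISRs */
    {
        if (queue_not_empty()) {
            busy = TRUE ;                     /* set busy flag */
            outportb(OUT_PORT, dequeue()) ;   /* dequeue data and output it */
        } else
            busy = FALSE ;                    /* clear busy flag */
    }

    void interrupt Input_ISR(void)
    {
        BYTE8 data = inportb(DATA_PORT) ;     /* input data */
        enqueue(Process(data)) ;              /* process data and enqueue the result */
        if (!busy) SendData() ;               /* kick start if the output device is idle */
        outportb(PIC_PORT, EOI) ;             /* send EOI command to the PIC */
    }

    void interrupt Output_Ready_ISR(void)
    {
        SendData() ;                          /* send the next byte, or clear the busy flag */
        outportb(PIC_PORT, EOI) ;             /* send EOI command to the PIC */
    }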

Preventing Interrupt Overrun [Flowchart: the ISR, entered on Input Ready, first tests ISR Busy Flag Set?; if Yes it ignores this interrupt (interrupts are re-enabled by the IRET). Otherwise it sets the ISR busy flag, inputs the data, and sends the EOI command to the PIC, which removes the interrupt request that invoked this ISR; when interrupts get re-enabled (see the STI that follows), interrupts are allowed from lower priority devices (and from this device too). The STI allows interrupts from any device. The ISR then processes the data, writes the result to the output queue, kick starts the output, clears the ISR busy flag, and returns with IRET.]

Preventing Interrupt Overrun [Flowchart: the ISR, entered on Input Ready, inputs the data and sets the mask bit for this device in the 8259 PIC, disabling future interrupts from this device. It then sends the EOI command to the PIC, which removes the interrupt request that invoked this ISR and allows interrupts from lower priority devices, and executes STI to allow interrupts from higher priority devices. The ISR processes the data, writes the result to the output queue, kick starts the output, then clears the mask bit for this device in the 8259 PIC to enable future interrupts from this device, and returns with IRET.]
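A hedged C sketch of this masking scheme, assuming a PC-style master 8259 whose interrupt mask register is read and written at port 0x21, and representing this device's IRQ line with IRQ_BIT; these specifics are conventional for PC hardware but are not given on the slide:

    #define PIC_IMR  0x21                     /* master 8259 interrupt mask register */
    #define IRQ_BIT  (1 << 4)                 /* hypothetical IRQ line for this device */

    void interrupt Input_ISR(void)
    {
        BYTE8 data = inportb(DATA_PORT) ;                 /* input data */
        outportb(PIC_IMR, inportb(PIC_IMR) | IRQ_BIT) ;   /* set mask bit: disable this device */
        outportb(PIC_PORT, EOI) ;                         /* send EOI command to the PIC */
        enable() ;                                        /* STI: allow interrupts from other devices */
        Process_And_Kick_Start(data) ;                    /* the long part of the work */
        outportb(PIC_IMR, inportb(PIC_IMR) & ~IRQ_BIT) ;  /* clear mask bit: re-enable this device */
    }                                                     /* IRET resumes the interrupted code */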

Moving Work into Background Move non-time-critical work (such as updating a display) into a background task. The foreground ISR writes data to a queue, and the background task removes and processes it. This is an alternative to ignoring one or more interrupts as the result of input overrun.

Limitations Best possible performance requires moving as much as possible into the background. The background becomes a collection of queues and associated routines to process the data. This optimizes the latency of the individual ISRs, but the background begs for a managed allocation of processor time.

Multi-Threaded Architecture [Diagram: each ISR places data in a queue that is consumed by a background thread; the background threads are supported by a multi-threaded run-time function library (the real-time kernel).]

Thread Design Threads usually perform some initialization and then enter an infinite processing loop. At the top of the loop, the thread relinquishes the processor while it waits for data to become available, an external event to occur, or a condition to become true.
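The typical shape of such a thread, sketched here with the µC/OS-II queue service listed at the end of this chapter; InputQ, Initialize(), and Process_Message() are illustrative:

    extern OS_EVENT *InputQ ;                 /* created elsewhere with OSQCreate() */

    void My_Thread(void *data)                /* started by OSTaskCreate() */
    {
        BYTE8 err ;
        void *msg ;

        Initialize() ;                        /* one-time initialization */
        for (;;) {                            /* infinite processing loop */
            msg = OSQPend(InputQ, 0, &err) ;  /* relinquish the processor until data is available */
            Process_Message(msg) ;            /* then do the work */
        }
    }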

Concurrent Execution of Independent Threads Each thread runs as if it had its own CPU separate from those of the other threads. Threads are designed, programmed, and behave as if they are the only thread running. Partitioning the background into a set of independent threads simplifies each thread, and thus reduces total program complexity.

Each Thread Maintains Its Own Stack and Register Contents [Diagram: the context of each thread, from Thread 1 through Thread N, holds its own copy of the registers (CS:EIP, SS:ESP, EAX, EBX, ..., EFlags) and its own stack.]

Concurrency Only one thread runs at a time while the others are suspended. The processor switches from one thread to another so quickly that it appears all threads are running simultaneously: threads run concurrently. The programmer assigns a priority to each thread, and the scheduler uses this to determine which thread to run next.

Real-Time Kernel Threads call a library of run-time routines (known as the real-time kernel) that manages resources. The kernel provides mechanisms to switch between threads and to handle coordination, synchronization, communication, and priority.

Context Switching Each thread has its own stack and a special region of memory referred to as its context. A context switch from thread "A" to thread "B" first saves all CPU registers in context A, and then reloads all CPU registers from context B. Since the CPU registers include SS:ESP and CS:EIP, reloading context B reactivates thread B's stack and returns to where it left off when it was last suspended.

Context Switching [Diagram: while Thread A is executing, Thread B is suspended; the kernel saves context A and restores context B, after which Thread B is executing and Thread A is suspended; a later switch saves context B and restores context A.]

Non-Preemptive Multi-Tasking Threads call a kernel routine to perform the context switch. The thread relinquishes control of the processor, thus allowing another thread to run. The context switch call is often referred to as a yield, and this form of multi-tasking is often referred to as cooperative multi-tasking.

Non-Preemptive Multi-Tasking When an external event occurs, the processor may be executing a thread other than the one designed to process the event. The first opportunity to execute the needed thread will not occur until the current thread reaches its next yield. When the yield does occur, other threads may be scheduled to run first. In most cases, this makes it impossible or extremely difficult to predict the maximum response time of non-preemptive multi-tasking systems.

Non-Preemptive Multi-Tasking The programmer must call the yield routine frequently, or else system response time may suffer. Yields must be inserted in any loop where a thread is waiting for some external condition. A yield may also be needed inside other loops that take a long time to complete (such as reading or writing a file), or distributed periodically throughout a lengthy computation.
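For example, using Multi-C's MtCYield() (listed under Scheduling Services later in this chapter); the wait condition and record-processing routine are placeholders:

    void Report_Thread(void)
    {
        int i ;

        while (!data_available())             /* waiting for an external condition: */
            MtCYield() ;                      /* yield on every pass of the loop */

        for (i = 0; i < N_RECORDS; i++) {     /* lengthy computation: */
            process_record(i) ;               /* yield periodically so that other */
            if ((i % 64) == 0)                /* threads still get processor time */
                MtCYield() ;
        }
    }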

Context Switching in a Non-Preemptive System [Flowchart: Start; Thread Initialization; then a loop: if the thread must wait, it yields to other threads (the scheduler selects the highest priority thread that is ready to run; if that is not the current thread, the current thread is suspended and the new thread resumed); otherwise it performs its data processing and repeats.]

Preemptive Multi-Tasking Hardware interrupts trigger the context switch. When an external event occurs, a hardware ISR is invoked. The ISR gets the data from the I/O device and makes a kernel call to enqueue it, causing the state of the thread that is pending on the queue to change from pending to ready. The ISR then calls the scheduler to context switch to the highest priority thread that is ready to run. This significantly improves system response time.
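Sketched with the µC/OS-II ISR and queue services listed later in this chapter; InputQ and the port names are placeholders. OSIntExit() invokes the scheduler, so a higher-priority thread made ready by the post can preempt the interrupted one:

    extern OS_EVENT *InputQ ;                 /* created elsewhere with OSQCreate() */

    void interrupt Input_ISR(void)
    {
        static BYTE8 data ;                   /* posted by address, so it must outlive the ISR */

        OSIntEnter() ;                        /* tell the kernel an ISR has started */
        data = inportb(DATA_PORT) ;           /* get the data from the I/O device */
        OSQPost(InputQ, &data) ;              /* kernel call: the pending thread becomes ready */
        outportb(PIC_PORT, EOI) ;             /* send EOI command to the PIC */
        OSIntExit() ;                         /* scheduler: switch to the highest-priority ready thread */
    }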

Preemptive Multi-Tasking Eliminates the programmer's obligation to include explicit calls to the kernel to perform context switches within the various background threads. The programmer no longer needs to worry about how frequently the context switch routine is called; it is called only when needed, i.e., in response to external events.

Preemptive Context Switching [Diagram: Thread A is executing and Thread B is suspended; a hardware interrupt invokes the ISR, which processes the interrupt request and performs a context switch; the scheduler selects the highest priority thread that is ready to run, and if that is not the current thread, the current thread is suspended and the new thread resumed; after the IRET, Thread B is executing and Thread A is suspended.]

Critical Sections Critical section: a code sequence whose proper execution is based on the assumption that it has exclusive access to the shared resources that it is using during the execution of the sequence. Critical sections must be protected against preemption, or else the integrity of the computation may be compromised.

Atomic Operations Atomic operations are those that execute to completion without preemption. Critical sections must be made atomic, either by disabling interrupts for their duration, or by acquiring exclusive access to the shared resource through arbitration before entering the critical section and releasing it on exit.
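A short critical section made atomic by the first option, using the disable()/enable() pair that appears in the spin-lock code later in this chapter (µC/OS-II provides OS_ENTER_CRITICAL()/OS_EXIT_CRITICAL() for the same purpose); the shared tick counter is illustrative:

    unsigned long tick_count ;                /* shared with a timer ISR */

    unsigned long Get_Ticks(void)
    {
        unsigned long copy ;

        disable() ;                           /* enter the critical section */
        copy = tick_count ;                   /* a multi-byte read must not be interrupted part-way */
        enable() ;                            /* leave the critical section */
        return copy ;
    }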

Threads, ISRs, and Sharing 1. Between a thread and an ISR: Data corruption may occur if the thread's critical section is interrupted to execute the ISR. 2. Between 2 ISRs: Data corruption may occur if the critical section of one ISR can be interrupted to execute the other ISR. 3. Between 2 threads: Data corruption may occur unless execution of their critical sections is coordinated.

Shared Resources A similar situation applies to other kinds of shared resources - not just shared data. Consider two or more threads that want to simultaneously send data to the same (shared) disk, printer, network card, or serial port. If access is not arbitrated so that only one thread uses the resource at a time, the data streams might get mixed together, producing nonsense at the destination.

Uncontrolled Access to a Shared Resource (the Printer) [Diagram: Thread A sends "HELLO\n" and Thread B sends "goodbye" to the shared printer at the same time; the interleaved output comes out as gibberish such as "HgoELodLO bye".]

Protecting Critical Sections Non-preemptive system: the programmer has explicit control over where and when a context switch occurs. –Except for ISRs! Preemptive system: the programmer has no control over the time and place of a context switch. Protection options: –Disabling interrupts –Spin lock –Mutex –Semaphore

Disabling Interrupts The overhead required to disable (and later re-enable) interrupts is negligible. –Good for short critical sections. Disabling interrupts during the execution of a long critical section can significantly degrade system response time.

Spin Locks [Flowchart: Flag set? If Yes, try again (spin); if No, set the flag, execute the critical section, then clear the flag. If the flag is set, another thread is currently using the shared memory and will clear the flag when done.]

Spin-lock in C:
    do {
        disable() ;
        ok = !flag ;
        flag = TRUE ;
        enable() ;
    } while (!ok) ;
    /* ... critical section ... */
    flag = FALSE ;

Spin-lock in assembly:
    L1:  MOV   AL,1
         XCHG  [_flag],AL
         OR    AL,AL
         JNZ   L1
         ; ... critical section ...
         MOV   BYTE [_flag],0

Spin Locks vs. Semaphores A non-preemptive system requires a kernel call inside the spin-lock loop to let other threads run. Context switching during a spin lock can be a significant overhead (saving and restoring each thread's registers and stack). Semaphores eliminate these repeated context switches: a thread pending on a semaphore is not scheduled again until the semaphore is released.

Semaphores [Diagram: Semaphore "Pend"; Critical Section; Semaphore "Post". The kernel suspends this thread if another thread has possession of the semaphore; this thread does not get to run again until the other thread releases the semaphore with a "post" operation.]
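Using the µC/OS-II semaphore services listed later in this chapter, the pattern on this slide looks roughly like this; PrinterSem and Send_To_Printer() are illustrative:

    OS_EVENT *PrinterSem ;                    /* created once at startup: PrinterSem = OSSemCreate(1) ; */

    void Print_Report(char *text)
    {
        BYTE8 err ;

        OSSemPend(PrinterSem, 0, &err) ;      /* "pend": suspend if another thread holds the semaphore */
        Send_To_Printer(text) ;               /* critical section */
        OSSemPost(PrinterSem) ;               /* "post": release the semaphore */
    }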

Kernel Services: Initialization, Threads, Scheduling, Priorities, Interrupt Routines, Semaphores, Mailboxes, Queues, Time.

Initialization Services
Multi-C: n/a
µC/OS-II: OSInit() ; OSStart() ;

Thread Services
Multi-C:
    ECODE MtCCoroutine(void (*fn)(…)) ;
    ECODE MtCSplit(THREAD **new, MTCBOOL *old) ;
    ECODE MtCStop(THREAD *) ;
µC/OS-II:
    BYTE8 OSTaskCreate(void (*fn)(void *), void *data, void *stk, BYTE8 prio) ;
    BYTE8 OSTaskDel(BYTE8 prio) ;
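A hedged usage sketch of OSTaskCreate() as declared above; the stack array, its size, and the priority value are arbitrary choices for illustration, and passing the high end of the array assumes a stack that grows downward:

    #define MOTOR_PRIO  10
    static DWORD32 MotorStack[128] ;          /* this task's private stack */

    void Motor_Task(void *data)
    {
        for (;;) {
            /* ... thread body ... */
        }
    }

    void Create_Tasks(void)
    {
        OSTaskCreate(Motor_Task, (void *) 0,
                     (void *) &MotorStack[127], MOTOR_PRIO) ;
    }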

Scheduling Services
Multi-C:
    ECODE MtCYield(void) ;
µC/OS-II:
    void OSSchedLock(void) ;
    void OSSchedUnlock(void) ;
    BYTE8 OSTimeTick(BYTE8 old, BYTE8 new) ;
    void OSTimeDly(WORD16) ;

Priority Services
Multi-C:
    ECODE MtCGetPri(THREAD *, MTCPRI *) ;
    ECODE MtCSetPri(THREAD *, MTCPRI) ;
µC/OS-II:
    BYTE8 OSTaskChangePrio(BYTE8 old, BYTE8 new) ;

ISR Services
Multi-C: n/a
µC/OS-II:
    OS_ENTER_CRITICAL() ;
    OS_EXIT_CRITICAL() ;
    void OSIntEnter(void) ;
    void OSIntExit(void) ;

Semaphore Services
Multi-C:
    ECODE MtCSemaCreate(SEMA_INFO **) ;
    ECODE MtCSemaWait(SEMA_INFO *, MTCBOOL *) ;
    ECODE MtCSemaReset(SEMA_INFO *) ;
    ECODE MtCSemaSet(SEMA_INFO *) ;
µC/OS-II:
    OS_EVENT *OSSemCreate(WORD16) ;
    void OSSemPend(OS_EVENT *, WORD16, BYTE8 *) ;
    BYTE8 OSSemPost(OS_EVENT *) ;

Mailbox Services
Multi-C: n/a
µC/OS-II:
    OS_EVENT *OSMboxCreate(void *msg) ;
    void *OSMboxPend(OS_EVENT *, WORD16, BYTE8 *) ;
    BYTE8 OSMboxPost(OS_EVENT *, void *) ;

Queue Services
Multi-C:
    ECODE MtCReceive(void *msgbfr, int *msgsize) ;
    ECODE MtCSend(THREAD *, void *msg, int size, int pri) ;
    ECODE MtCASend(THREAD *, void *msg, int size, int pri) ;
µC/OS-II:
    OS_EVENT *OSQCreate(void **start, BYTE8 size) ;
    void *OSQPend(OS_EVENT *, WORD16, BYTE8 *) ;
    BYTE8 OSQPost(OS_EVENT *, void *) ;
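A hedged sketch pairing OSQCreate()/OSQPend() in a consumer thread with an OSQPost() made from an input ISR (as in the preemptive multi-tasking example earlier); the queue size, storage, and message handling are illustrative:

    #define Q_SIZE 16
    static void *QStorage[Q_SIZE] ;           /* pointer storage managed by the queue */
    OS_EVENT    *InputQ ;

    void Consumer_Task(void *data)
    {
        BYTE8 err ;
        void *msg ;

        InputQ = OSQCreate(QStorage, Q_SIZE) ;    /* create the message queue */
        for (;;) {
            msg = OSQPend(InputQ, 0, &err) ;      /* block until an ISR or thread posts a message */
            Handle_Message(msg) ;
        }
    }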

Time Services
Multi-C: n/a
µC/OS-II:
    DWORD32 OSTimeGet(void) ;
    void OSTimeSet(DWORD32) ;