Inter-Thread Comm & Synch - ITC


TI BIOS Inter-Thread Comm & Synch - ITC
14 November 2018
Dr. Veton Këpuska

Introduction
Semaphores (SEM) have been used in prior chapters to synchronize threads, and are a common approach in many systems. While there are times when the SEM is a good choice, there are other situations in which their use can lead to problems. In this chapter, a range of inter-thread communication and synchronization examples is considered, employing several BIOS thread-communication APIs, including:
LCK
MBX
QUE
ATM
MSGQ

Inter-Thread Comm & Synch
ATM - Atomic Functions
SEM - Semaphore
LCK - Lock
MBX - Mailbox
QUE - Queue
MSGQ - Message Queue

Critical Section Resource Protection
Pass a copy of the resource to the 2nd thread
 + No possibility of conflict; allows both threads to use it concurrently
 - Doubles storage requirements, adds copy time
Assign concurrent threads to the same priority (FIFO execution)
 + No possibility of conflict, no memory/time overhead, easy
 - Forces the involved threads to be the same priority
Disable interrupts during critical sections (see the sketch after this list)
 + Needed if a hardware interrupt shares the data
 - Affects the response time of all threads in the system
Disable the SWI/TSK scheduler during the critical section
 + Assures one SWI/TSK finishes with the resource before another begins
 - Affects the response times of all other SWIs and TSKs
Notes: This is a (long) list of ways to protect one thread's access to a shared resource at critical times. Passing a copy of the resource prevents cross-usage, but adds time and size, and may not keep things from getting out of sequence. The scheduler techniques (same priority, disabling interrupts or the scheduler) were all seen in prior chapters; they should be a first choice, since they create a 'built-in' solution with known impacts on system performance. In this chapter, additional BIOS APIs specific to inter-thread communication, data passing, and synchronization are considered.

Critical Section Resource Protection (continued)
Raise the priority of the SWI/TSK during the critical section
 * Set the priority to the highest priority that may access the resource
 + Equal-priority SWIs/TSKs run in FIFO order, avoiding competition
 - Can affect response times of SWIs/TSKs of intervening priority
Use atomic functions on shared resources
 + Imposes minimal jitter on interrupt latencies
 - Only allows minimal actions on shared resources
Regulate access rights via semaphores
 + No conflict, and no memory/time overhead of passing a copy
 - Multiple-semaphore schemes can introduce problems
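As a quick illustration of two of the scheduler-based protections listed above (disabling interrupts, and disabling the task scheduler), here is a minimal sketch, assuming the standard DSP/BIOS calls HWI_disable/HWI_restore and TSK_disable/TSK_enable; the function and variable names are hypothetical.

/* Minimal sketch: two scheduler-based protections from the list above.
 * Assumes the standard DSP/BIOS calls HWI_disable/HWI_restore (interrupt
 * masking) and TSK_disable/TSK_enable (task scheduler lock).
 * touchShared* and sharedValue are hypothetical names. */
#include <std.h>
#include <hwi.h>
#include <tsk.h>

volatile Int sharedValue = 0;

Void touchSharedWithHwi(Void)        /* resource is also written by an HWI */
{
    Uns oldMask;

    oldMask = HWI_disable();         /* mask interrupts around the access */
    sharedValue++;                   /* keep the critical section short */
    HWI_restore(oldMask);
}

Void touchSharedAmongTsks(Void)      /* resource is shared only among TSKs */
{
    TSK_disable();                   /* no task switch can occur here */
    sharedValue++;
    TSK_enable();
}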


ATM – Atomic Functions

DSP/BIOS Atomic Functions
Allow a thread to manipulate variables without interrupt intervention
Are C-callable functions optimized in assembly
Allow reliable use of global variables shared between SWI and HWI
ATM_dec, ATM_inc - good for inter-thread counters
ATM_clear, ATM_set - counters (to start/restart); generic pass of a value between threads
ATM_and, ATM_or - perform boolean operations on shared variables; good for event flags, etc.
[Diagram: a TSK performs x = x+1 as a load/add/store sequence (LD reg,x ; ADD reg,1 ; ST x,reg); if an HWI that writes x = 0 interrupts between the load and the store, the HWI's update is lost.]
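A minimal sketch of a counter shared between an HWI and a TSK, using the ATM calls named above. The behavior assumed here (ATM_inc adds one atomically; ATM_clear atomically reads the old value, zeroes the variable, and returns the old value) follows the usual DSP/BIOS ATM convention; the names blockCount, myHwi, and myTsk are hypothetical.

/* Minimal sketch: counter shared between HWI and TSK, updated atomically
 * so the read-modify-write cannot be torn by the interrupt. */
#include <std.h>
#include <atm.h>

volatile Int blockCount = 0;      /* shared between HWI and TSK */

Void myHwi(Void)                  /* hardware ISR: one more block is ready */
{
    ATM_inc(&blockCount);         /* atomic increment */
}

Void myTsk(Void)                  /* task: consume whatever has accumulated */
{
    Int pending;

    while (TRUE) {
        pending = ATM_clear(&blockCount);   /* read the count and zero it in one step */
        if (pending > 0) {
            /* ... process 'pending' blocks ... */
        }
        /* in a real system the task would also block (e.g. on a SEM)
           instead of spinning */
    }
}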


SEM – Semaphores

Synchronization Semaphore
Case 1: A and B have the same priority, and A is ready before B.
 A (the precondition for B) runs first and calls SEM_post(semObj); when B later runs and reaches SEM_pend(semObj), the semaphore has already been posted, so B continues without blocking.
Case 2: B is higher priority than A, and B is ready before A.
 B runs first; the part of B that does not depend on A executes, then B blocks at SEM_pend(semObj). A runs, completes the precondition, and calls SEM_post(semObj); B then preempts A and continues with the part that depends on A.
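A minimal sketch of the second case above, assuming semObj is created statically (for example via the configuration tool) with an initial count of 0; the task names are hypothetical.

#include <std.h>
#include <sem.h>
#include <sys.h>

extern SEM_Obj semObj;              /* statically created, initial count = 0 */

Void taskA(Void)                    /* lower priority: the precondition */
{
    /* ... produce the data B needs ... */
    SEM_post(&semObj);              /* signal B that the data is ready */
}

Void taskB(Void)                    /* higher priority: the consumer */
{
    /* ... work that does not depend on A ... */
    SEM_pend(&semObj, SYS_FOREVER); /* block here until A posts */
    /* ... work that depends on A ... */
}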

Semaphores and Priority
Both B and C depend on A
B pends on the semaphore first, then C
When A posts, B runs first because it pended first
Semaphores use a FIFO queue for pending tasks!

Semaphores and Priority (timeline)
[Diagram: B (priority 1) runs the work that does not depend on A, then blocks at SEM_pend(&semObj). An interrupt readies C (priority 2), which runs its independent work and also blocks at SEM_pend(&semObj). A (priority 1), the precondition for B and C, then runs and calls SEM_post(&semObj); B is made ready ahead of the higher-priority C because it pended first, and A is preempted.]

Mutual Exclusion Semaphore
Two or more tasks need concurrent access to a serially reusable resource.
Mutual exclusion using a semaphore:
 Initialize the semaphore count to 1
 Pend before accessing the resource - locks out the other task(s)
 Post after accessing the resource - allows the other task(s) in

Void task0()
{
    SEM_pend(&semMutex, SYS_FOREVER);
    /* critical section */
    SEM_post(&semMutex);
}

Void task1()
{
    SEM_pend(&semMutex, SYS_FOREVER);
    /* critical section */
    SEM_post(&semMutex);
}

[Diagram: Task0 (priority 1) takes the semaphore and enters its critical section; Task1 (priority 2) preempts, but blocks at SEM_pend(&semMutex). When Task0 posts, Task1 resumes its critical section and Task0 is preempted.]
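If the mutex semaphore is created at run time rather than statically, the key point is the initial count of 1 (resource free). A minimal sketch, assuming the DSP/BIOS SEM_create(count, attrs) call; the handle name and the use of default attributes are assumptions.

#include <std.h>
#include <sem.h>

SEM_Handle semMutexHandle;

Void appInit(Void)
{
    semMutexHandle = SEM_create(1, NULL);   /* count = 1: resource initially free */
}

The tasks would then call SEM_pend(semMutexHandle, SYS_FOREVER) and SEM_post(semMutexHandle) around the critical section, exactly as in the statically configured example above.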

Mutual Exclusion Semaphore
Problems may occur when tasks compete for more than one resource:
 Priority inversion
 Deadlock

Priority Inversion
High-priority tasks can block while waiting for lower-priority tasks to relinquish a semaphore.
[Diagram: A (low priority) takes the mutex (initially 1) with pend(mutex). An interrupt readies B (high priority); B preempts and blocks at pend(mutex). Intermediate-priority tasks C and D then run ahead of A, delaying A's post(mutex) and therefore delaying B, even though B outranks C and D.]

Priority Inversion
"The failure turned out to be a case of priority inversion."
 — Mars Pathfinder Flight Software Cognizant Engineer

Inversion Solution: Priority Inheritance
Elevate the task's priority before it accesses the resource
Lower the task's priority after it accesses the resource
[Diagram: A raises its priority with setpri(A, newpri) before pend(mutex). An interrupt readies B, but B (and the intermediate tasks C and D) cannot run in between; A finishes the critical section, calls post(mutex) and setpri(A, oldpri), and B then pends on the mutex and runs.]
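A minimal sketch of the manual priority-boost version of this idea, assuming the DSP/BIOS calls TSK_setpri (which returns the previous priority) and TSK_self; RESOURCE_CEILING_PRI and semRes are hypothetical names.

#include <std.h>
#include <sem.h>
#include <sys.h>
#include <tsk.h>

#define RESOURCE_CEILING_PRI  8     /* highest priority that uses the resource */

extern SEM_Obj semRes;

Void useResource(Void)
{
    Int oldPri;

    oldPri = TSK_setpri(TSK_self(), RESOURCE_CEILING_PRI);  /* boost */
    SEM_pend(&semRes, SYS_FOREVER);
    /* ... critical section: use the shared resource ... */
    SEM_post(&semRes);
    TSK_setpri(TSK_self(), oldPri);                          /* restore */
}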

Question: Do we even need a semaphore in this situation? No: once the task has raised itself to the highest priority that uses the resource, equal-priority tasks run in FIFO order and cannot preempt it inside the critical section. However, do not call TSK_yield() inside the critical section if the semaphore is removed.

Deadlock
Also known as deadly embrace: tasks cannot complete because they have blocked each other.
TaskA and TaskB both require resources 1 and 2, and neither task releases a resource until it has finished with both.

static Void TaskA()
{
    SEM_pend(&res_1, SYS_FOREVER);
    SEM_pend(&res_2, SYS_FOREVER);   /* TaskA may get stuck here */
    /* use resources 1 and 2 */
    SEM_post(&res_1);
    SEM_post(&res_2);
}

static Void TaskB()
{
    SEM_pend(&res_2, SYS_FOREVER);
    SEM_pend(&res_1, SYS_FOREVER);   /* TaskB may get stuck here */
    /* use resources 1 and 2 */
    SEM_post(&res_2);
    SEM_post(&res_1);
}

Deadlock Conditions
Mutual exclusion: access to the shared resource is protected with a mutual-exclusion SEM
Circular pend: a circular chain of tasks holds resources that are needed by others in the chain (cyclic processing)
Multiple pend and wait: tasks lock more than one resource at a time
Preemption: tasks needing mutual exclusion are at different priorities

Deadlock: Detect, Recover, Eliminate
Deadlock is difficult to detect:
 It may happen infrequently
 Use timeouts in blocking APIs and monitor the timeout via SYS_error (see the sketch below)
 Monitor SWIs with the implicit STS objects
Recovery is not easy:
 Reset the system
 Roll back to a pre-deadlock state
Best solution:
 Careful design
 Rigorous testing
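A minimal sketch of the timeout-based detection mentioned above: pend with a finite timeout and report a possible deadlock instead of blocking forever. The 1000-tick timeout, the semaphore name, and the choice of SYS_error/SYS_ETIMEOUT for reporting are illustrative assumptions.

#include <std.h>
#include <sem.h>
#include <sys.h>

extern SEM_Obj res_1;

Void guardedUse(Void)
{
    if (SEM_pend(&res_1, 1000)) {        /* wait at most 1000 system ticks */
        /* ... use the resource ... */
        SEM_post(&res_1);
    } else {
        /* timed out: possible deadlock or starvation */
        SYS_error("res_1 pend timed out", SYS_ETIMEOUT);
    }
}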

Deadlock: Detect, Recover, Eliminate
Eliminating deadlock: remove one of the four conditions
 Mutual exclusion: make resources sharable
 Circular pend: acquire resources in one agreed order (see the sketch below)
 Multiple pend and wait: lock only one resource at a time, or lock all resources that will be used up front (starvation potential)
 Preemption: assign tasks that need mutual exclusion to the same priority
Better: use the more sophisticated BIOS APIs that follow
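A minimal sketch of breaking the circular-pend condition in the earlier TaskA/TaskB example: both tasks acquire the semaphores in the same fixed order (res_1 before res_2), so a circular wait cannot form.

#include <std.h>
#include <sem.h>
#include <sys.h>

extern SEM_Obj res_1, res_2;

static Void TaskA(Void)
{
    SEM_pend(&res_1, SYS_FOREVER);
    SEM_pend(&res_2, SYS_FOREVER);
    /* use resources 1 and 2 */
    SEM_post(&res_2);
    SEM_post(&res_1);
}

static Void TaskB(Void)
{
    SEM_pend(&res_1, SYS_FOREVER);   /* same order as TaskA: res_1 first */
    SEM_pend(&res_2, SYS_FOREVER);
    /* use resources 1 and 2 */
    SEM_post(&res_2);
    SEM_post(&res_1);
}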


LCK – Lock

Nested Semaphore Calls: LCK
Using a SEMaphore with a nested pend yields a permanent block:

Void Task_A()
{
    SEM_pend(&semUser, SYS_FOREVER);
    funcInner();
    SEM_post(&semUser);
}

Void funcInner()
{
    SEM_pend(&semUser, SYS_FOREVER);    /* unrecoverable blocking call */
    /* use resource guarded by sem */
    SEM_post(&semUser);
}

Using LCK (Lock) with a nested pend avoids the blockout, because a lock can be taken again by the task that already owns it:

Void Task_A()
{
    LCK_pend(&lckUser, SYS_FOREVER);
    funcInner();
    LCK_post(&lckUser);
}

Void funcInner()
{
    LCK_pend(&lckUser, SYS_FOREVER);
    /* use resource guarded by lck */
    LCK_post(&lckUser);
}

Nested Semaphore Calls: LCK
Note: the BIOS MEM manager and selected RTS functions internally use LCK, which can cause a TSK switch.
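For completeness, a minimal sketch of creating a lock at run time, assuming the DSP/BIOS LCK_create(attrs) call (NULL selects default attributes); with a dynamically created lock the pend/post calls take the handle directly. The names lckUser and appInit are illustrative.

#include <std.h>
#include <lck.h>
#include <sys.h>

LCK_Handle lckUser;

Void appInit(Void)
{
    lckUser = LCK_create(NULL);              /* NULL -> default attributes */
}

Void useGuardedResource(Void)
{
    LCK_pend(lckUser, SYS_FOREVER);          /* nests safely within the same task */
    /* ... use the resource guarded by lckUser ... */
    LCK_post(lckUser);
}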


MBX – Mailbox

Mailbox
API: MBX_create, MBX_delete, MBX_pend, MBX_post
NOTE: The MBX API is not related to the mailbox component of the SWI object!
 + A mailbox message can be any desired structure
 + Semaphore signaling is built in (for both read and write)
 + Allows multiple readers and/or writers
 - Fixed depth of messaging
 - Copy based: two copies are made, in and out of the MBX

Example: Passing Buffer Info Via a Mailbox
MBX_post - add a message to the end of the mailbox; blocks if the MBX is already full.
MBX_pend - get the next message from the mailbox; blocks until mail is received or the timeout expires.
Arguments (both calls): mailbox handle, pointer to the message to put/get, timeout.

typedef struct MsgObj {
    Int  len;
    Int *addr;
} MsgObj;

Void writer(Void)
{
    MsgObj msg;
    Int myBuf[SIZE];
    ...
    msg.addr = myBuf;
    msg.len  = SIZE * sizeof(Int);
    MBX_post(&mbx, &msg, SYS_FOREVER);
}

Void reader(Void)
{
    MsgObj mail;
    Int size, *buf;
    ...
    MBX_pend(&mbx, &mail, SYS_FOREVER);
    buf  = mail.addr;
    size = mail.len;
}

Creating Mailbox Objects
Message Size = MADUs per message; Mailbox Length = maximum number of messages queued.
Mailbox object creation via GCONF: [configuration dialog shown in the original slide]
Mailbox object creation via TCONF:

MBX.OBJMEMSEG = prog.get("ISRAM");
var myMBX = MBX.create("myMBX");
myMBX.comment = "my MBX";
myMBX.messageSize = 1;
myMBX.length = 1;
myMBX.elementSeg = prog.get("IRAM");

Dynamic mailbox object creation:

hMbx = MBX_create(msgsize, mbxlen, attrs);
MBX_delete(hMbx);

struct MBX_Attrs {
    Int segid;
};
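A minimal sketch of the dynamic-creation path in C, assuming the MBX_create(msgsize, mbxlen, attrs) call shown above with NULL for default attributes; MSG_COUNT, hMbx, and the reuse of the MsgObj layout from the buffer-passing example are assumptions.

#include <std.h>
#include <mbx.h>

typedef struct MsgObj {     /* same layout as the buffer-passing example above */
    Int  len;
    Int *addr;
} MsgObj;

#define MSG_COUNT 5         /* hypothetical mailbox depth */

MBX_Handle hMbx;

Void appInit(Void)
{
    hMbx = MBX_create(sizeof(MsgObj), MSG_COUNT, NULL);  /* NULL -> default attrs */
    if (hMbx == NULL) {
        /* creation failed (out of memory in the chosen segment) */
    }
}

With a dynamically created mailbox, the post/pend calls take the handle directly, e.g. MBX_post(hMbx, &msg, SYS_FOREVER).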


QUE - Queue

QUE - Queue
 + Any number of messages can be passed
 + A message can be anything desired (beginning with a QUE_Elem)
 + Atomic APIs are provided to assure correct sequencing
 - No semaphore signaling built in

Queues: QUE
A QUE message can be anything you like, as long as it starts with a QUE_Elem.
QUE_Elem is a pair of pointers that BIOS uses to manage a doubly linked list.
Items queued are NOT copied - only the QUE_Elem pointers are managed!

Queues: QUE

typedef struct QUE_Elem {
    struct QUE_Elem *next;
    struct QUE_Elem *prev;
} QUE_Elem;

struct MyMessage {
    QUE_Elem elem;      /* first field must be a QUE_Elem */
    Int x[1000];        /* the array/structure being sent - not copy based! */
} Message1;

QUE_put(hQue, &msg3);   - add a message to the end of the queue (writer)
elem = QUE_get(hQue);   - get the message from the front of the queue (reader)

[Diagram: the QUE_Obj links msg1, msg2, and msg3 in a doubly linked list; QUE_put appends msg3 at the tail, and QUE_get removes msg1 from the head.]

QUE API Summary
QUE_put      - Add a message to the end of the queue (atomic write)
QUE_get      - Get the message from the front of the queue (atomic read)
QUE_enqueue  - Non-atomic QUE_put
QUE_dequeue  - Non-atomic QUE_get
QUE_head     - Returns a pointer to the head of the queue (no dequeue performed)
QUE_empty    - Returns TRUE if the queue has no messages
QUE_next     - Returns the next element in the queue
QUE_prev     - Returns the previous element in the queue
QUE_insert   - Inserts an element into the queue in front of the specified element
QUE_remove   - Removes the specified element from the queue
QUE_new      - ...
QUE_create   - Create a queue
QUE_delete   - Delete a queue

"Synchronous QUE" Option
Talker:
 QUE_put(&myQ, msg);
 SEM_post(&myQSem);
Listener:
 SEM_pend(&myQSem, SYS_FOREVER);
 msg = QUE_get(&myQ);
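A slightly fuller sketch of this pattern: a QUE carries the messages (by reference, no copy) and a SEM counts how many are available. Both objects are assumed to be created statically, the message type begins with a QUE_Elem as required, and the payload field and function names follow the slide or are assumptions.

#include <std.h>
#include <que.h>
#include <sem.h>
#include <sys.h>

typedef struct MyMessage {
    QUE_Elem elem;          /* required first field for QUE */
    Int      value;         /* payload (hypothetical) */
} MyMessage;

extern QUE_Obj myQ;         /* statically created queue */
extern SEM_Obj myQSem;      /* statically created semaphore, initial count = 0 */

Void talker(MyMessage *msg)
{
    msg->value = 42;                      /* fill in the payload */
    QUE_put(&myQ, msg);                   /* enqueue by reference */
    SEM_post(&myQSem);                    /* signal: one more message available */
}

Void listener(Void)
{
    MyMessage *msg;

    SEM_pend(&myQSem, SYS_FOREVER);       /* wait until a message exists */
    msg = (MyMessage *)QUE_get(&myQ);     /* dequeue it */
    /* ... process msg->value ... */
}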

Inter-Thread Comm & Synch - ITC
END