Half-Sync/Half-Async (HSHA) and Leader/Followers (LF) Patterns

Half-Sync/Half-Async (HSHA) and Leader/Followers (LF) Patterns
E81 CSE 532S: Advanced Multi-Paradigm Software Development
Venkita Subramonian, Chris Gill, Nick Haddad, and Steve Donahue
Department of Computer Science and Engineering, Washington University, St. Louis
cdgill@cse.wustl.edu

HSHA and LF Patterns
- Both are (architectural) concurrency patterns
- Both decouple asynchronous and synchronous processing
  - Synchronous layer may reduce programming complexity and overhead (e.g., fewer context switches, cache misses)
  - Asynchronous layer improves responsiveness to events
- Key differences
  - HSHA dedicates a thread to handle input, others to do work
  - LF lets threads take turns rotating between those roles

HSHA and LF Context
- A concurrent system with asynchronous events
- Performs both synchronous and asynchronous services
  - Synchronous: copy data from one container to another
  - Asynchronous: socket or input stream reads and writes
- Both kinds of services must interact
  - Threads share event sources
  - A thread may handle events for other threads
  - A thread may also handle its own events
- Efficient processing of events is important

HSHA and LF Design Problem
- Asynchronous processing makes a system responsive
  - E.g., a dedicated input thread (or reactive socket handling)
  - Services may map directly to asynchronous mechanisms
    - E.g., hardware interrupts, signals, asynchronous I/O, etc.
- Synchronous processing may be simpler: easier to design, implement, and (especially) debug
- How to bring these paradigms together?
- How to achieve high-performance multi-threading?
  - Service requests arrive from many sources
  - Concurrency overhead must be minimized
    - Context switches, locking and unlocking, copying, etc.
  - Threads must avoid race conditions on event sources
    - And on messages, buffers, and other event-processing artifacts

HSHA Solution
- Decompose the architecture into two service layers
  - Synchronous layer: long-duration operations (e.g., computation)
  - Asynchronous layer: short-duration operations (e.g., event notification, filtering, classification)
- Add a queueing layer between them to facilitate communication (see the sketch below)
  - The asynchronous layer puts events into (and gets events from) the queue; the synchronous layer gets events from (and puts events into) it
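
To make the layering concrete, here is a minimal C++ sketch of one way it might look; the Event and EventQueue names (and the put/get interface) are illustrative assumptions, not taken from any particular framework. One asynchronous thread puts short-duration event notifications into the queue, and one synchronous thread blocks on get() to do the longer-duration work.

```cpp
// Minimal HSHA sketch (names like Event and EventQueue are illustrative only).
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

struct Event { std::string payload; };

// Queueing layer: the only point of contact between the two service layers.
class EventQueue {
public:
    void put(Event e) {                        // called by the asynchronous layer
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(e)); }
        cv_.notify_one();
    }
    Event get() {                              // called by the synchronous layer (blocks)
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        Event e = std::move(q_.front());
        q_.pop();
        return e;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<Event> q_;
};

int main() {
    EventQueue queue;
    // Asynchronous layer: short-duration event notification, hands work off quickly.
    std::thread async_layer([&] {
        for (int i = 0; i < 3; ++i)
            queue.put(Event{"event " + std::to_string(i)});
    });
    // Synchronous layer: long-duration computation, blocks on get() in its own thread.
    std::thread sync_layer([&] {
        for (int i = 0; i < 3; ++i)
            std::cout << "processing " << queue.get().payload << "\n";
    });
    async_layer.join();
    sync_layer.join();
}
```

The queueing layer is the only point of contact between the two layers, which is what lets each layer use the concurrency style that suits it.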

LF Solution
- Threads in a pool take turns accessing event sources
  - Waiting threads (followers) queue up for work
  - The thread whose turn it is acts as the “leader”
    - May dispatch events to other appropriate (passive) objects
    - May hand off events to other threads (e.g., active objects), i.e., “sorting the mail” until it finds its own work to do
- The leader thread eventually takes an event and processes it
  - At which point it leaves the event source and …
  - … when it’s done processing, becomes a follower again …
  - … but meanwhile, another thread becomes leader
- Protocol for choosing and activating a new leader (see the sketch below)
  - Can be sophisticated (queue w/ cond_t) or simple (mutex)
  - Should be appropriately efficient
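
The sketch below illustrates the simple mutex-based protocol mentioned above; it is only a rough illustration, and the shared counter is a stand-in for a real event source such as a handle set or listening socket. Whichever thread holds leader_mutex is the leader; releasing the mutex after taking an event implicitly promotes whichever follower acquires it next.

```cpp
// Minimal Leader/Followers sketch using the simple "mutex as the leader token"
// protocol: the shared counter below is just a stand-in for a real event source.
#include <iostream>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

std::mutex leader_mutex;          // whichever thread holds this is the leader
int next_event = 0;               // stand-in event source, guarded by leader_mutex
constexpr int kTotalEvents = 10;

void pool_thread(int id) {
    for (;;) {
        int event;
        {
            // Followers queue up here; acquiring the lock makes this thread the leader.
            std::lock_guard<std::mutex> leader(leader_mutex);
            event = next_event++;                 // leader takes the next event
            if (event >= kTotalEvents) return;    // lock released on return
        }   // releasing the lock here promotes whichever follower acquires it next
        // The former leader now processes its event while the new leader waits for work.
        std::string msg = "thread " + std::to_string(id) +
                          " handles event " + std::to_string(event) + "\n";
        std::cout << msg;
    }
}

int main() {
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i) pool.emplace_back(pool_thread, i);
    for (auto& t : pool) t.join();
}
```

A mutex is the cheapest way to rotate leadership; a queue plus a condition variable can be worth the extra machinery when the order of promotion matters (e.g., FIFO fairness or priorities).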

Example: “Half-Sync/Half-Reactive” HSHA Variant
- Notice the asynchronous, queueing, and synchronous layers (HSHA)
- Generalizes the dedicated input thread to the multi-connection case
- Easy to multi-thread the synchronous layer, as shown below
- Ask Chris about priority inversion issues
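
As a rough, POSIX-specific sketch of that structure (an illustration under assumed details, not the diagram from the original slide): a single reactor thread demultiplexes two pipes, standing in for client connections, with poll(), and hands the data across a small queueing layer to a pool of synchronous worker threads.

```cpp
// POSIX-specific sketch of the Half-Sync/Half-Reactive variant: one reactor
// thread demultiplexes all handles with poll(); two pipes stand in for client
// connections, and a pool of workers drains a small queueing layer.
#include <poll.h>
#include <unistd.h>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// Queueing layer between the reactive and synchronous layers.
std::mutex qm;
std::condition_variable qcv;
std::queue<std::string> work;

void put(std::string s) {
    { std::lock_guard<std::mutex> lk(qm); work.push(std::move(s)); }
    qcv.notify_one();
}

std::string get() {
    std::unique_lock<std::mutex> lk(qm);
    qcv.wait(lk, [] { return !work.empty(); });
    std::string s = std::move(work.front());
    work.pop();
    return s;
}

int main() {
    int a[2], b[2];                  // two pipes stand in for two connections
    if (pipe(a) != 0 || pipe(b) != 0) return 1;

    // Reactive (asynchronous) layer: a single thread waits on all handles at once.
    std::thread reactor([&] {
        std::vector<pollfd> pfds = {{a[0], POLLIN, 0}, {b[0], POLLIN, 0}};
        int remaining = 2;
        while (remaining > 0) {
            poll(pfds.data(), pfds.size(), -1);
            for (auto& p : pfds) {
                if (p.revents & POLLIN) {
                    char buf[64];
                    ssize_t n = read(p.fd, buf, sizeof(buf));
                    if (n > 0) put(std::string(buf, n));  // hand off across the queue
                    p.events = 0;                         // demo only: one read per connection
                    --remaining;
                }
            }
        }
    });

    // Synchronous layer: worker threads block on the queue, not on the handles.
    std::vector<std::thread> workers;
    for (int i = 0; i < 2; ++i)
        workers.emplace_back([] {
            std::string msg = "worker got: " + get() + "\n";
            std::cout << msg;
        });

    write(a[1], "data from connection A", 22);
    write(b[1], "data from connection B", 22);
    reactor.join();
    for (auto& w : workers) w.join();
}
```

Note how the workers never touch the handles: everything they see has already crossed the queue, which is exactly where the data-passing overhead discussed on the next slide comes from.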

Limitations of the HS/HR Approach
- Data passing overhead from the I/O thread to a worker thread
  - Dynamic memory allocation
  - Synchronization operations
  - Blocking factors
  - Context switches
- May see unnecessary latency
  - Best case and common case may differ if work is heterogeneous
- May suffer more cache misses
  - Due to multiple threads touching the same data

Example Revised: Applying LF Pattern
- Allocate a pool of application threads, as with HSHR
- However, don’t separate the synchronous/reactive layers
  - Allow threads from the pool down into the reactor
  - Threads from the pool take turns being the leader
- The leader gets an event and looks at its destination info (ACT, handle, etc.); see the sketch below
  - If it’s for the leader, the leader just handles it
    - The leader leaves the reactive layer, the threads “elect” a new leader, the new leader enters the reactor, and the other threads block on CV(s)
  - Otherwise, the leader queues the event and notifies the queue’s CV(s)
    - Thread(s) wake up, access the queue, get their events, and handle them
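
The sketch below is a rough illustration of that hand-off, with simplifications that are mine rather than the slide’s: a single mutex and condition variable guard all of the per-thread mailboxes, events are pre-tagged with an owner id standing in for the ACT/handle information, and a plain queue stands in for the reactor. The leader “sorts the mail” into other threads’ mailboxes and notifies the CV, steps down so another thread can lead, and only then processes its own event.

```cpp
// Rough sketch of the leader's "sorting the mail" hand-off. Simplifications:
// one mutex and one condition variable guard all per-thread mailboxes, and a
// pre-filled queue of owner-tagged events stands in for the reactor.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <optional>
#include <queue>
#include <string>
#include <thread>
#include <vector>

struct Event { int owner; int data; };             // owner plays the role of the ACT/handle

constexpr int kThreads = 3;

std::mutex m;                                      // guards all shared state below
std::condition_variable cv;                        // followers block on this CV
std::queue<Event> source;                          // stand-in for the reactor's event source
std::vector<std::optional<Event>> mailbox(kThreads);
bool leader_taken = false;
bool done = false;

void process(int id, const Event& e) {
    std::string msg = "thread " + std::to_string(id) +
                      " handles event " + std::to_string(e.data) + "\n";
    std::cout << msg;
}

void pool_thread(int id) {
    std::unique_lock<std::mutex> lk(m);
    for (;;) {
        // Followers block until they have mail, can become leader, or it is time to quit.
        cv.wait(lk, [&] { return mailbox[id].has_value() || !leader_taken || done; });
        if (mailbox[id]) {                         // an earlier leader queued an event for us
            Event e = *mailbox[id];
            mailbox[id].reset();
            lk.unlock(); process(id, e); lk.lock();
            continue;
        }
        if (done) return;
        leader_taken = true;                       // this thread is now the leader
        std::optional<Event> mine;
        while (!mine && !source.empty()) {         // "sorting the mail"
            Event e = source.front(); source.pop();
            if (e.owner == id) mine = e;           // found our own work
            else { mailbox[e.owner] = e; cv.notify_all(); }   // hand off and notify
        }
        if (!mine && source.empty()) done = true;  // nothing left to lead on
        leader_taken = false;                      // step down: another thread can lead
        cv.notify_all();
        if (mine) { lk.unlock(); process(id, *mine); lk.lock(); }
    }
}

int main() {
    for (int i = 0; i < 9; ++i) source.push(Event{i % kThreads, i});  // pre-tagged events
    std::vector<std::thread> pool;
    for (int i = 0; i < kThreads; ++i) pool.emplace_back(pool_thread, i);
    for (auto& t : pool) t.join();
}
```

Because the former leader processes its event outside the lock while the new leader is already waiting for work, this avoids the extra queueing hop of HS/HR for events the leader can handle itself.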