1
Proactor Pattern
Venkita Subramonian & Christopher Gill
E81 CSE 532S: Advanced Multi-Paradigm Software Development
Department of Computer Science and Engineering, Washington University, St. Louis
2
Proactor
An architectural pattern for asynchronous, decoupled operation initiation and completion
- In contrast to the Reactor architectural pattern, whose initiation and completion are synchronous and coupled
  - I.e., reactive initiation completes when the handler call returns
  - Except for reactive completion only, e.g., for a connector
- Proactor separates initiation and completion further
  - Without multi-threading overhead/complexity
  - Performs additional bookkeeping to match initiations up with completions
- Dispatches a service handler upon completion
  - The asynch handler does post-operation processing
  - Still separates application from infrastructure
- A small departure vs. our discussion of other patterns: we'll focus on using rather than implementing the proactor
  - I.e., much of the implementation is already given by the OS
3
Context
- Asynchronous operations are used by the application
- The application thread should not block
- The application needs to know when an operation completes
- Decoupling application from infrastructure is useful
- Reactive performance is insufficient
- Multi-threading incurs excessive overhead or programming-model complexity
Notes:
- An application must not block indefinitely waiting on any single source
- Unnecessary utilization of the CPU(s) should be avoided
- Minimal modification and maintenance effort should be required to integrate new or enhanced services
- The application should be shielded from the complexity of multi-threading and synchronization mechanisms
4
Design Forces
- Separation of application from infrastructure
- Flexibility to add new application components
- Performance benefits of concurrency
  - Reactive has coarse interleaving (handlers)
  - Multi-threaded has fine interleaving (instructions)
- Complexity of multi-threading
  - Concurrency hazards: deadlock, race conditions
  - Coordination of multiple threads
- Performance issues with multi-threading
  - Synchronization re-introduces coarser granularity
  - Overhead of thread context switches
  - Sharing resources across multiple threads
5
Compare Reactor vs. Proactor Side by Side
[Diagram: Reactor and Proactor side by side. Reactor: the application performs accept/read/write synchronously; handle_events dispatches handle_event on an Event Handler via its Handle. Proactor: the application initiates ASYNCH accept/read/write; the Proactor's handle_events dispatches handle_event on a Completion Handler, again via a Handle, when the operation completes.]
6
Proactor in a nutshell
[Diagram: Application, Completion Handler1, Completion Handler2, Proactor, I/O completion port, OS (or AIO emulation)]
1. Application creates completion handlers
2. Application registers the handlers and initiates asynch_io operations, each tagged with an ACT (ACT1, ACT2)
3. Handles are associated with the I/O completion port
4. Application calls handle events on the Proactor
5. Proactor waits on the I/O completion port
6. OS (or AIO emulation) posts a completion event carrying the operation's ACT
7. The operation is complete
8. Proactor dispatches handle_event on the completion handler matching the ACT
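The nutshell sequence above can be sketched in a few dozen lines of standard C++. This is an illustrative stand-in, not ACE or a real OS completion port: `MiniProactor`, `CompletionEvent`, and the demo names are all hypothetical, and a thread plays the role of the OS posting a completion event tagged with an ACT.

```cpp
#include <condition_variable>
#include <functional>
#include <map>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// A completion event, as the "OS" would post it to the completion port:
// the ACT (asynchronous completion token) matches it back to its initiation.
struct CompletionEvent {
    int act;            // token recorded at initiation time
    std::string result; // payload produced by the asynchronous operation
};

class MiniProactor {
public:
    // Step 2: register a completion handler under an ACT.
    void register_handler(int act, std::function<void(const std::string&)> h) {
        std::lock_guard<std::mutex> g(m_);
        handlers_[act] = std::move(h);
    }
    // Step 6: the "OS" posts a completion event to the port.
    void post(CompletionEvent ev) {
        { std::lock_guard<std::mutex> g(m_); port_.push(std::move(ev)); }
        cv_.notify_one();
    }
    // Steps 4, 5, 8: wait on the port, then dispatch the matching handler.
    void handle_events() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !port_.empty(); });
        CompletionEvent ev = std::move(port_.front());
        port_.pop();
        auto h = handlers_.at(ev.act); // bookkeeping: ACT -> handler
        lk.unlock();
        h(ev.result);                  // post-operation processing
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<CompletionEvent> port_;
    std::map<int, std::function<void(const std::string&)>> handlers_;
};

// Demo: initiate one "read"; the initiator never blocks on the I/O itself.
std::string demo_one_read() {
    MiniProactor proactor;
    std::string seen;
    proactor.register_handler(/*act=*/1,
        [&seen](const std::string& r) { seen = r; });
    // The asynchronous work runs elsewhere and posts its completion.
    std::thread os([&proactor] { proactor.post({1, "HTTP/1.0 200 OK"}); });
    proactor.handle_events(); // a single event-loop step
    os.join();
    return seen;
}
```

Note how the initiating side only registers and then waits on one port; the matching of completions to handlers is the extra bookkeeping the slide mentions.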
7
Motivating Example: A Web Server
- Remote clients use the server to record status information
- Logging records are written to various output devices
- Clients and server use a connection-oriented protocol, such as TCP
- Clients and server are bound to transport endpoints that uniquely identify them
- Multiple clients can access the server simultaneously; each client maintains its own connection with the logging server
- A new client connection request is indicated by a CONNECT event
- A request to process logging records is indicated by a READ event
- The logging records and connection requests that clients issue can arrive concurrently at the logging server
May incur the following liabilities:
- May be inefficient and non-scalable due to context switching, synchronization, and data movement among CPUs
- May require complex concurrency control schemes that are not available on all operating systems or have non-portable semantics
- May be better to align the threading strategy to available resources
8
First Approach: Reactive (1/2)
[Diagram: Web Server (Acceptor, HTTP Handler, Reactor) and Web Browser]
1. Acceptor registers itself with the Reactor
2. Web Server runs handle events (the Reactor's event loop)
3. Web Browser connects
4. Connection request is delivered to the Acceptor
5. Acceptor creates an HTTP Handler
6. HTTP Handler registers for socket read
9
First Approach: Reactive (2/2)
[Diagram: Web Server (Acceptor, HTTP Handler, Reactor), Web Browser, File System]
1. Web Browser sends GET /etc/passwd
2. Socket read ready
3. Read request
4. Parse request
5. Register for file read
6. File read ready
7. Read file
8. Register for socket write
9. Socket write ready
10. Send file
10
Analysis of the Reactive Approach
- The application-supplied acceptor creates and registers handlers: a factory
- Single-threaded: one handler runs at a time
- Concurrency
  - Good with small jobs (e.g., TCP/IP stream fragments)
  - With large jobs?
11
A Second Approach: Multi-Threaded
- Acceptor spawns, e.g., a thread per connection, instead of registering a handler with a reactor
- Handlers are active
  - Multi-threaded
  - Highly concurrent
  - May be physically parallel
- Concurrency hazards
  - Any shared resources between handlers
  - Locking / blocking costs
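A minimal sketch of this thread-per-connection alternative, using only the standard library (the function name and the string-based "connections" are illustrative, not part of any real server):

```cpp
#include <string>
#include <thread>
#include <vector>

// Thread-per-connection: the "acceptor" spawns one thread per connection
// and each handler runs actively inside its own thread.
std::vector<std::string> serve_all(const std::vector<std::string>& requests) {
    std::vector<std::string> replies(requests.size());
    std::vector<std::thread> per_conn;
    for (std::size_t i = 0; i < requests.size(); ++i)
        per_conn.emplace_back([&, i] {               // one thread per connection
            replies[i] = "handled:" + requests[i];   // handler work, possibly parallel
        });
    // Each thread writes a distinct element, so no lock is needed here --
    // but any genuinely shared resource would need synchronization.
    for (auto& t : per_conn) t.join();
    return replies;
}
```

The hazards on the slide show up the moment two handlers touch the same resource: this sketch avoids them only because each thread writes a disjoint slot.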
12
A Third Approach: Proactive
- The acceptor/handler registers itself with the OS, not with a separate dispatcher: it acts as a completion dispatcher itself
- The OS performs the work
  - E.g., accepts a connection
  - E.g., reads a file
  - E.g., writes a file
- The OS tells the completion dispatcher when it's done
  - Accepting a connection
  - Performing I/O
13
Proactor Dynamics
Participants: Application, Asynch Operation, Asynch Operation Processor, Completion Dispatcher, Completion Handler
1. Asynch operation initiated: the application invokes it via the Asynch Operation Processor, which executes it
2. The operation runs asynchronously
3. The operation completes
4. The Completion Dispatcher is notified and dispatches handle_event
5. The completion handler runs (post-operation processing)
Notes (the corresponding Reactor dynamics, for comparison):
1. Application registers a concrete event handler with the reactor
2. Application indicates the event(s) for which the event handler will be notified
3. The reactor instructs each event handler to provide its internal handle; this identifies event sources to the demultiplexer and OS
4. Application starts the reactor's event loop
5. The reactor creates a handle set from the handles of all registered handlers
6. The reactor calls the demultiplexer to wait for events
7. The call to the demultiplexer returns, indicating that some handle is "ready"
8. The reactor uses the ready handles as keys to locate the appropriate handler(s)
9. The reactor iteratively dispatches the handler's hook method(s); the hook methods carry out services
14
Asynch I/O Factory classes
ACE_Asynch_Read_Stream
- Initialization prior to initiating a read: open()
- Initiate an asynchronous read: read()
- (Attempt to) halt an outstanding read: cancel()
ACE_Asynch_Write_Stream
- Initialization prior to initiating a write: open()
- Initiate an asynchronous write: write()
- (Attempt to) halt an outstanding write: cancel()
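The open()/read()/cancel() protocol can be illustrated with a small stand-in class. This is not ACE's API: `AsyncReadStream`, its callback type, and the simulated data source are all hypothetical, and a worker thread stands in for the OS's asynchronous I/O layer.

```cpp
#include <atomic>
#include <functional>
#include <string>
#include <thread>

// Stand-in for an asynch-read factory's three-call protocol:
// open() binds a completion hook, read() initiates an operation that
// completes later, cancel() asks for the outstanding read to be dropped.
class AsyncReadStream {
public:
    using Callback = std::function<void(bool ok, std::string data)>;

    void open(Callback completion_hook) { hook_ = std::move(completion_hook); }

    // Initiation returns immediately; completion arrives on another thread.
    void read(std::string simulated_data) {
        worker_ = std::thread([this, d = std::move(simulated_data)] {
            // A real AIO layer would copy data from the kernel here.
            if (cancelled_) hook_(false, "");  // report cancellation
            else            hook_(true, d);   // report success plus data
        });
    }

    // Best-effort, as with real async I/O: cancel() may lose the race
    // against an operation that has already completed.
    void cancel() { cancelled_ = true; }

    ~AsyncReadStream() { if (worker_.joinable()) worker_.join(); }

private:
    Callback hook_;
    std::atomic<bool> cancelled_{false};
    std::thread worker_;
};

// Demo: one read; the destructor joins, so the hook has run by then.
std::string demo_read() {
    std::string out;
    {
        AsyncReadStream s;
        s.open([&out](bool ok, std::string d) { if (ok) out = d; });
        s.read("log record");
    }
    return out;
}
```

The "(attempt to)" wording on the slide is the key point: cancellation of an already-in-flight operation is inherently racy, which the `cancelled_` flag only approximates.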
15
Asynchronous Event Handler Interface
ACE_Handler
- Proactive handler, distinct from the reactive ACE_Event_Handler
- Return the handle for the underlying stream: handle()
- Read completion hook: handle_read_stream()
- Write completion hook: handle_write_stream()
- Timer expiration hook: handle_time_out()
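The hook-per-completion-type shape of this interface can be sketched as follows. The class names and the string-recording demo are illustrative, not ACE code; only the hook names mirror the slide.

```cpp
#include <cstddef>
#include <string>

// Hypothetical analogue of a proactive handler base class: one object
// receives a distinct hook for each kind of completion.
class CompletionHandler {
public:
    virtual ~CompletionHandler() = default;
    virtual void handle_read_stream(const std::string& data) {}
    virtual void handle_write_stream(std::size_t bytes_sent) {}
    virtual void handle_time_out() {}
};

// A concrete handler that records which hook last fired.
class LoggingHandler : public CompletionHandler {
public:
    std::string last_event;
    void handle_read_stream(const std::string& data) override {
        last_event = "read:" + data;
    }
    void handle_write_stream(std::size_t bytes_sent) override {
        last_event = "wrote:" + std::to_string(bytes_sent);
    }
    void handle_time_out() override { last_event = "timeout"; }
};

// The dispatcher only sees the abstract type and picks the hook
// that matches the completed operation.
std::string demo_dispatch() {
    LoggingHandler h;
    CompletionHandler* base = &h;
    base->handle_read_stream("GET /index.html");
    return h.last_event;
}
```

Default no-op bodies let a concrete handler override only the completions it actually initiates, which is why a write-only handler need not implement the read hook.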
16
Proactor Interface (C++NPV2 Section 8.5)
Lifecycle Management
- Initialize proactor instance: ACE_Proactor(), open()
- Shut down proactor: ~ACE_Proactor(), close()
- Singleton accessor: instance()
Event Loop Management
- Event loop step: handle_events()
- Event loop: proactor_run_event_loop()
- Shut down event loop: proactor_end_event_loop()
- Event loop completion: proactor_event_loop_done()
Timer Management
- Start/stop timers: schedule_timer(), cancel_timer()
I/O Operation Facilitation
- Input: create_asynch_read_stream()
- Output: create_asynch_write_stream()
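The relationship between a single event-loop step and the full loop can be sketched with a stand-in class. `LoopingProactor` and its queue of posted completions are hypothetical; only the step/run/end/done split mirrors the interface above.

```cpp
#include <atomic>
#include <functional>
#include <mutex>
#include <queue>

// Sketch of the event-loop management surface: handle_events() performs
// one dispatch step; run_event_loop() iterates until end_event_loop().
class LoopingProactor {
public:
    void post(std::function<void()> completion) {
        std::lock_guard<std::mutex> g(m_);
        q_.push(std::move(completion));
    }
    // One step: dispatch at most one queued completion; report if any ran.
    bool handle_events() {
        std::function<void()> job;
        {
            std::lock_guard<std::mutex> g(m_);
            if (q_.empty()) return false;
            job = std::move(q_.front());
            q_.pop();
        }
        job();
        return true;
    }
    void run_event_loop() {
        done_ = false;
        while (!done_ && handle_events()) {}
    }
    void end_event_loop() { done_ = true; }
    bool event_loop_done() const { return done_; }
private:
    std::mutex m_;
    std::queue<std::function<void()>> q_;
    std::atomic<bool> done_{false};
};

// Demo: three completions; the third one ends the loop from inside a handler.
int demo_loop() {
    LoopingProactor p;
    int dispatched = 0;
    p.post([&] { ++dispatched; });
    p.post([&] { ++dispatched; });
    p.post([&] { ++dispatched; p.end_event_loop(); });
    p.run_event_loop();
    return dispatched;
}
```

Exposing the single step separately lets an application interleave event dispatching with its own work, while the run/end pair supports the common dedicated-loop case.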
17
Proactor Consequences
Benefits
- Separation of application and concurrency concerns
- Potential portability and performance increases
- Encapsulated concurrency mechanisms: separate lanes, so no inherent need for synchronization
- Separation of threading and concurrency policies
Liabilities
- Difficult to debug
- Opaque and non-portable completion dispatching
- Controlling outstanding operations: ordering and correct cancellation are notoriously difficult