The Publisher-Subscriber Interface
Timm Morten Steinbeck
Technical Computer Science
Kirchhoff Institute for Physics, University Heidelberg

The Problem
- Create an interface to transport event data between the several analysis steps needed for the ALICE Level 3 Trigger.
- Communication is presumed to be primarily local.
- Transport of the actual data should be kept to a minimum: only descriptors should be sent between processes.
- One data source should support multiple destinations.
- Two kinds of data destinations: data processing and monitoring.

The Approach
- Two kinds of processes: publisher and subscriber.
- Subscribers announce interest in events to the publisher (subscription).
- The publisher informs its subscribers of new events when they arrive.
- Subscribers inform the publisher when they are done working on an event.
- An event is freed when all subscribers are finished with it.
[Diagram: publisher and subscribers exchanging "Subscribe", "New Event", and "Event Done" messages]

Publisher-Subscriber Principle
[Diagram: publisher-subscriber pairs illustrating the "Subscribe", "New Event", and "Event Done" message flow]
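To make the message flow concrete, the sketch below models the two roles in a single process with plain method calls; in the real framework the publisher and subscribers are separate processes communicating via named pipes, and the class and method names used here (Publisher, Subscriber, Subscribe, NewEvent, EventDone, FreeEvent) are illustrative assumptions, not the actual API.

    // Minimal single-process sketch of the publisher-subscriber roles
    // (illustrative names, not the real framework API).
    #include <cstdint>
    #include <map>
    #include <set>

    using EventID = std::uint64_t;

    // A subscriber is told about new events and later reports that it is done.
    class Subscriber {
    public:
        virtual ~Subscriber() = default;
        virtual void NewEvent(EventID id) = 0;   // called by the publisher
    };

    // The publisher tracks which subscribers are still working on each event;
    // an event is freed only once all of them have announced "Event Done".
    class Publisher {
    public:
        void Subscribe(Subscriber* s)   { subscribers_.insert(s); }
        void Unsubscribe(Subscriber* s) { subscribers_.erase(s); }

        void AnnounceEvent(EventID id) {
            if (subscribers_.empty()) { FreeEvent(id); return; }
            pending_[id] = subscribers_;                 // all still have to finish
            for (Subscriber* s : subscribers_) s->NewEvent(id);
        }

        void EventDone(Subscriber* s, EventID id) {
            auto it = pending_.find(id);
            if (it == pending_.end()) return;
            it->second.erase(s);
            if (it->second.empty()) {                    // last subscriber finished
                FreeEvent(id);
                pending_.erase(it);
            }
        }

    private:
        void FreeEvent(EventID /*id*/) { /* release the event's storage */ }

        std::set<Subscriber*> subscribers_;
        std::map<EventID, std::set<Subscriber*>> pending_;
    };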

Publisher-Subscriber Communication
- Communication between publisher & subscriber via named pipes.
- Event data is kept in a shared memory area.
- Only event descriptors are sent via the named pipes.
- Descriptors describe the location of the data in shared memory.
- An event descriptor can describe multiple data blocks.
[Diagram: the publisher sends the descriptor for event M (blocks 0..n) through the pipe; the data blocks themselves remain in shared memory]
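A rough sketch of the publisher side of this mechanism, assuming POSIX named pipes (mkfifo) and System V shared memory; the pipe name and the simplified one-block descriptor are made up for illustration, and error handling is omitted.

    // Sketch: the event data lives in shared memory, only a small fixed-size
    // descriptor travels through the named pipe (simplified to one data block).
    #include <cstddef>
    #include <cstdint>
    #include <cstring>
    #include <fcntl.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <sys/stat.h>
    #include <unistd.h>

    struct SimpleDescriptor {
        std::uint64_t eventID;
        int           shmID;    // System V shared memory segment ID
        std::size_t   offset;   // start of the block inside the segment
        std::size_t   size;     // size of the block in bytes
    };

    int main() {
        // Publisher side: put the event data into shared memory ...
        int shmID = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
        char* mem = static_cast<char*>(shmat(shmID, nullptr, 0));
        const char payload[] = "event data";
        std::memcpy(mem, payload, sizeof(payload));

        // ... and send only the descriptor through a named pipe.
        mkfifo("/tmp/pubsub_demo", 0600);             // illustrative pipe name
        int fd = open("/tmp/pubsub_demo", O_WRONLY);  // blocks until a subscriber reads
        SimpleDescriptor d{1, shmID, 0, sizeof(payload)};
        write(fd, &d, sizeof(d));                     // a few dozen bytes, not the data

        close(fd);
        shmdt(mem);
        shmctl(shmID, IPC_RMID, nullptr);  // mark segment for removal (the real
                                           // system frees it only after all
                                           // subscribers are done)
        return 0;
    }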

Event Descriptors
Event descriptors contain:
- The ID of the event
- The number of data blocks described
For each data block:
- The size of the block
- The shared memory ID
- The starting offset in the shared memory
- The type of data
- The ID of the originating node
[Diagram: descriptor for event M with block count n; each block entry holds shm ID, offset, size, datatype, and producer ID and points to the corresponding data block in shared memory]
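The listed fields translate directly into a small struct; the names, types, and the trailing-array layout below are guesses for illustration, not the framework's real descriptor format.

    // Illustrative layout of an event descriptor with a variable number of
    // data blocks (field names and types are assumptions).
    #include <cstdint>

    struct BlockDescriptor {
        std::uint64_t size;        // size of the block in bytes
        std::int32_t  shmID;       // ID of the shared memory segment
        std::uint64_t offset;      // starting offset inside the segment
        std::uint32_t dataType;    // type of the data in the block
        std::uint32_t producerID;  // ID of the originating node
    };

    struct EventDescriptor {
        std::uint64_t   eventID;     // the ID of the event ("M")
        std::uint32_t   blockCount;  // number of data blocks described ("n")
        BlockDescriptor blocks[1];   // in memory: blockCount entries follow
    };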

Persistent and Transient Subscribers
Two kinds of subscribers are supported: persistent and transient.
- Persistent subscribers get all events.
- The publisher frees events only when all persistent subscribers are finished.
- Transient subscribers can specify event selection criteria (event number modulo, trigger words).
- Transient subscribers can have events canceled by the publisher.
[Diagram: "New Event" / "Event Done" / "Free Event" flow for persistent subscribers; "New Event" / "Cancel Event" flow for transient subscribers]
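The selection criteria of a transient subscriber could be expressed as a small predicate like the one below; the modulo and trigger-word checks follow the slide, while the struct itself and its field names are assumptions.

    // Sketch of transient-subscriber event selection: accept every n-th event
    // and/or require certain trigger-word bits (illustrative, not the real API).
    #include <cstdint>

    struct EventSelection {
        std::uint32_t eventModulo;  // accept events with ID % eventModulo == 0 (1 = all)
        std::uint32_t triggerMask;  // required trigger-word bits (0 = no requirement)

        bool Accepts(std::uint64_t eventID, std::uint32_t triggerWord) const {
            if (eventModulo > 1 && (eventID % eventModulo) != 0) return false;
            if (triggerMask != 0 && (triggerWord & triggerMask) != triggerMask) return false;
            return true;
        }
    };

    // Example: a monitoring subscriber that looks at every 100th event.
    // EventSelection sel{100, 0};  bool take = sel.Accepts(eventID, triggerWord);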

Analysis Components
Analysis processes are made up of three parts:
- A subscriber as data input for new event data.
- Processing code that works on the data in shared memory and possibly writes some new output data into shared memory.
- A publisher to make the resulting output data available.
[Diagram: subscriber receives "New Event", analysis code reads the event input data and writes the event output data in shared memory, publisher announces the output as a new event]
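Put together, the main loop of such a component might look roughly like the sketch below; the interfaces and the pluggable analysis function are placeholders, not the framework's actual classes.

    // Sketch of the three-part analysis component:
    // input subscriber -> processing code -> output publisher (placeholder API).
    #include <cstddef>
    #include <cstdint>
    #include <functional>

    struct DataBlock   { const void* data; std::size_t size; };  // input, in shm
    struct OutputBlock { void* data; std::size_t size; };        // output, in shm

    class InputSubscriber {
    public:
        virtual ~InputSubscriber() = default;
        virtual bool WaitForEvent(std::uint64_t& id, DataBlock& in) = 0;
        virtual void EventDone(std::uint64_t id) = 0;            // release the input
    };

    class OutputPublisher {
    public:
        virtual ~OutputPublisher() = default;
        virtual OutputBlock AllocateBlock(std::size_t size) = 0; // in shared memory
        virtual void AnnounceEvent(std::uint64_t id, const OutputBlock& out) = 0;
    };

    // The analysis code reads the input block and fills the output block,
    // returning the number of bytes it actually wrote.
    using AnalysisCode = std::function<std::size_t(const DataBlock&, OutputBlock&)>;

    void RunAnalysisComponent(InputSubscriber& in, OutputPublisher& out,
                              const AnalysisCode& process) {
        std::uint64_t id;
        DataBlock input;
        while (in.WaitForEvent(id, input)) {
            OutputBlock output = out.AllocateBlock(input.size);
            output.size = process(input, output);  // work directly in shared memory
            out.AnnounceEvent(id, output);          // publish the result
            in.EventDone(id);                       // tell the upstream publisher
        }
    }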

Data Transport Components
A number of components exist to transport event data and to build an analysis chain:
- Event Scatterer: distributes events in round-robin fashion for load balancing.
- Event Gatherer: merges blocks from multiple event data streams into one event.
- Event Collector: merges multiple event data streams into one stream.
- Bridge: transports event data over the network between computers.

Event Scatterer
- One subscriber input.
- Multiple publisher outputs.
- Incoming events are distributed to the output publishers in round-robin fashion.
[Diagram: one subscriber input fanning "New Event" announcements out to multiple publishers]
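The scatterer's core is essentially a round-robin counter over its output publishers, as in this sketch (the callback-style Output type is an assumption):

    // Sketch of the event scatterer: one input, several outputs, each incoming
    // event goes to the next output in round-robin order (illustrative API).
    #include <cstddef>
    #include <cstdint>
    #include <functional>
    #include <vector>

    class EventScatterer {
    public:
        using Output = std::function<void(std::uint64_t /*eventID*/)>;

        explicit EventScatterer(std::vector<Output> outputs)
            : outputs_(std::move(outputs)) {}         // assumes at least one output

        // Called by the input subscriber for every new event.
        void OnNewEvent(std::uint64_t eventID) {
            outputs_[next_](eventID);                 // announce downstream
            next_ = (next_ + 1) % outputs_.size();    // round-robin load balancing
        }

    private:
        std::vector<Output> outputs_;
        std::size_t next_ = 0;
    };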

Event Gatherer
- Multiple subscribers attached to different publishers.
- One output publisher.
- Data blocks from incoming events are merged into one outgoing event.
[Diagram: blocks 0 and 1 of event M arrive via separate subscribers and are merged into a single event M published by the output publisher]
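A sketch of the gatherer's bookkeeping, under the assumption that the parts of one event carry the same event ID and that the gatherer knows how many input streams contribute:

    // Sketch of the event gatherer: collect the blocks of an event arriving on
    // several input streams and publish the merged event once all inputs have
    // contributed (matching by event ID and a fixed input count are assumptions).
    #include <cstddef>
    #include <cstdint>
    #include <functional>
    #include <map>
    #include <vector>

    struct BlockRef { int shmID; std::uint64_t offset, size; };  // block in shm

    class EventGatherer {
    public:
        using Publish = std::function<void(std::uint64_t, std::vector<BlockRef>)>;

        EventGatherer(std::size_t inputCount, Publish publish)
            : inputCount_(inputCount), publish_(std::move(publish)) {}

        // Called by one of the input subscribers when its part of an event arrives.
        void OnNewEvent(std::uint64_t eventID, const std::vector<BlockRef>& blocks) {
            Pending& p = pending_[eventID];
            p.blocks.insert(p.blocks.end(), blocks.begin(), blocks.end());
            if (++p.arrived == inputCount_) {             // all inputs contributed
                publish_(eventID, std::move(p.blocks));   // one merged outgoing event
                pending_.erase(eventID);
            }
        }

    private:
        struct Pending { std::size_t arrived = 0; std::vector<BlockRef> blocks; };
        std::size_t inputCount_;
        Publish publish_;
        std::map<std::uint64_t, Pending> pending_;
    };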

Event Collector
- Multiple subscribers attached to different publishers.
- One output publisher.
- Each incoming event is published unchanged by the output publisher.
[Diagram: events from several input subscribers forwarded unchanged through one output publisher]

Bridge
The bridge consists of two programs:
- SubscriberBridgeHead
  - Contains the subscriber.
  - Gets input data from a publisher.
  - Sends the data over the network.
- PublisherBridgeHead
  - Reads the data from the network.
  - Publishes the data again.
[Diagram: on node A (data producer) the SubscriberBridgeHead receives "New Event" and ships the data over the network; on node B the PublisherBridgeHead republishes it to the data consumer]

Bridge Networking
- The bridge networking code is based on abstract communication classes.
- Different communication classes for (short) message transfers and (big) data block transfers.
- A prototype implementation currently exists for SCI.
- Further implementations possible for TCP/IP, the Scheduled Transfer Protocol (STP), raw (Gigabit) Ethernet, and other (future) system area networks.
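The split into message and block transfers could look like the abstract interfaces sketched here; the class and method names are illustrative, and only the SCI backend exists as a prototype according to the slide.

    // Sketch of the abstract communication layer: separate classes for short
    // control messages and for big data blocks, so that concrete backends
    // (SCI, TCP/IP, STP, raw Ethernet, ...) can be plugged in underneath.
    #include <cstddef>

    // Short, latency-critical transfers, e.g. event descriptors and control messages.
    class MessageCommunication {
    public:
        virtual ~MessageCommunication() = default;
        virtual bool Connect(const char* address) = 0;
        virtual bool SendMessage(const void* msg, std::size_t size) = 0;
        virtual std::size_t ReceiveMessage(void* buffer, std::size_t maxSize) = 0;
    };

    // Large, throughput-critical transfers, i.e. the event data blocks themselves.
    class BlockCommunication {
    public:
        virtual ~BlockCommunication() = default;
        virtual bool Connect(const char* address) = 0;
        virtual bool SendBlock(const void* data, std::size_t size) = 0;
        virtual std::size_t ReceiveBlock(void* buffer, std::size_t maxSize) = 0;
    };

    // A concrete backend (e.g. an SCI or a TCP implementation) derives from
    // these interfaces and implements the virtual methods.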

TCP/IP Bridge
A second approach exists for TCP/IP bridging:
- The publisher-subscriber interface has been ported to TCP (from named pipes + shared memory) by Paul Starzetz.
- A Pipe-TCP Bridge application can convert between the pipe/shared-memory and the TCP publisher-subscriber code.
[Diagram: pipe publishers/subscribers on nodes A and B connected via TCP publisher/subscriber bridge processes exchanging "New Event" over TCP]

Analysis Chain Framework
- Building of arbitrary analysis chains is possible.
- The chain setup is determined at program start.
- No recompilation of source code is needed.
- Chain functionality can be tested on one node.
- Performance tests or production runs on multiple nodes.
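Determining the chain at program start without recompilation points to a configuration-driven setup; the tiny sketch below illustrates the idea with a made-up text format and component names, which are not the framework's real configuration mechanism.

    // Sketch of a configuration-driven chain setup: component types are chosen
    // by name when the program starts, so changing the chain needs no
    // recompilation (format, names, and factory are assumptions).
    #include <iostream>
    #include <memory>
    #include <sstream>
    #include <string>
    #include <vector>

    struct Component {
        virtual ~Component() = default;
        virtual void Run() = 0;
    };

    struct Scatterer : Component { void Run() override { /* ... */ } };
    struct Gatherer  : Component { void Run() override { /* ... */ } };
    struct Bridge    : Component { void Run() override { /* ... */ } };

    std::unique_ptr<Component> MakeComponent(const std::string& type) {
        if (type == "scatterer") return std::make_unique<Scatterer>();
        if (type == "gatherer")  return std::make_unique<Gatherer>();
        if (type == "bridge")    return std::make_unique<Bridge>();
        return nullptr;
    }

    int main() {
        // One component per line; here read from a string instead of a file.
        std::istringstream config("scatterer\ngatherer\nbridge\n");
        std::vector<std::unique_ptr<Component>> chain;
        for (std::string line; std::getline(config, line);) {
            if (auto c = MakeComponent(line)) chain.push_back(std::move(c));
            else std::cerr << "unknown component: " << line << '\n';
        }
        for (auto& c : chain) c->Run();   // start the configured chain
        return 0;
    }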

First Benchmarks
Benchmarks of the basic publisher-subscriber communication via named pipes:
- Platform: 733 MHz dual Pentium III PC.
- Minimal event data size (1 block, 4 bytes).
- One publisher, no subscriber: 95 µs/msg.
- One publisher, one subscriber, no data processing*: 1.3 ms/msg.
* But full descriptor processing.

Status
- Implementation of most needed components is finished.
- The code is now being tested and debugged.
- Further performance tests are ongoing.
- Integration tests with analysis code are also in progress.