DPS for DC2 Summary

1 DPS for DC2 Summary

Model Implementation
– Pipeline & Slice in Python and C++
  Stage loop, Policy configuration, and event handling in Python; MPI environment and communications in C++
  Executable scripts (run by mpiexec): runPipeline.py, runSlice.py
  Pipeline and Slice configured from the same Policy file
– Clipboard, Queue, Stage in Python (see the sketch after this slide)
  One Clipboard per Pipeline/Slice used in DC2
– New: generic Stages
  InputStage, OutputStage, EventStage, SymLinkStage
– Model elements not completed
  Complete C++ implementation
  Pipeline-Slice communication of data (DataProperty's)
  Full Queue capabilities
  Clipboard metadata: a less ad hoc mechanism (schema?)
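To make the model concrete, here is a minimal Python sketch of the Stage and Clipboard classes and the Stage loop. The method names (preprocess(), process(), postprocess()) follow the Stage API named on slide 3; the class bodies themselves are illustrative assumptions, not the actual dps code.

# Minimal sketch of the Stage/Clipboard model; method names follow the
# slides, but the bodies are hypothetical, not the actual dps code.

class Clipboard:
    """Key/value workspace handed between Stages (one per Pipeline/Slice in DC2)."""
    def __init__(self):
        self._items = {}

    def put(self, key, value):
        self._items[key] = value

    def get(self, key):
        return self._items[key]


class Stage:
    """Base class for pipeline stages; subclasses override the three hooks."""
    def __init__(self, policy=None):
        self.policy = policy or {}   # stage-specific Policy parameters

    def preprocess(self, clipboard):
        pass

    def process(self, clipboard):
        pass

    def postprocess(self, clipboard):
        pass


def runStageLoop(stages, clipboard):
    """One visit through the Stage loop for a single Slice."""
    for stage in stages:
        stage.preprocess(clipboard)
        stage.process(clipboard)
        stage.postprocess(clipboard)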

2 DPS for DC2 Summary (cont.)

Key Features
– Events handled prior to Stage execution
  Policy designates the stages that require a trigger event
  Pipeline receives events from external sources and relays them to the Slices
– MPI communications are collective (see the sketch after this slide)
  All Slices need to be present, running through the Stage loop
  Slices process each Stage in sync: MPI_Bcast, MPI_Barrier
– Exception handling in important places
  Exceptions from stage preprocess(), process(), postprocess() are caught
  If one Slice catches an exception, the others are undisturbed
– Multiple visits supported
– Shutdown event implemented
  Clean shutdown of the MPI environment/Slices at the end of the Stage loop
  Todo: a "no more data" event on the same topic as the trigger events
– Logging integrated into Pipeline/Slice
– Memory management (Clipboard cleanup) stabilized
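The collective pattern can be illustrated with a short mpi4py sketch (DC2 implements this layer in C++, so this is only a rendering of the idea): rank 0 stands in for the Pipeline, broadcasting the trigger event to the Slices, and a barrier keeps every Slice on the same Stage. The helper names, stage names, and event contents are placeholders.

# Hypothetical mpi4py rendering of the collective Stage loop; DC2's MPI
# layer is C++, so this only illustrates the pattern described above.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

def receive_external_event(stage_name):
    # Stand-in for the Pipeline's subscription to the external event source.
    return {"stage": stage_name, "visit": 1}

def process(stage_name, clipboard):
    # Stand-in for Stage.process(); a real Stage reads and writes the clipboard.
    clipboard[stage_name] = "done"

def run_visit(stage_names):
    clipboard = {}
    for name in stage_names:
        # Pipeline (rank 0) receives the trigger event and broadcasts it;
        # Slices block in bcast until it arrives.  (In DC2 the Policy marks
        # which stages need an event; here every stage gets one.)
        event = receive_external_event(name) if rank == 0 else None
        event = comm.bcast(event, root=0)
        clipboard["triggerEvent"] = event
        try:
            process(name, clipboard)
        except Exception as exc:
            # Only this Slice is affected; it still reaches the barrier,
            # so the other Slices proceed undisturbed.
            print(f"Slice {rank}: stage {name} raised {exc!r}")
        comm.Barrier()   # all Slices advance to the next Stage in sync

if __name__ == "__main__":     # run with: mpiexec -n <N> python sketch.py
    run_visit(["InputStage", "ImageProcessStage", "OutputStage"])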

3 DPS: DC2 and Beyond

Results
– Three parallel Pipelines executing application Stages
– Reasonable stability observed (~36 Slices across 6 nodes)
– Performance: e.g., utilization of 8 cores?

Open Questions
– Stage API: preprocess(), process(), postprocess()
  Has this model been useful (validated)?
– Direct MPI communications
  Finer-grained communication between Pipeline and Slices?
  Avoid events and collective operations? Restart a Slice that disappears?
– Slice/CCD mapping (one possible strategy is sketched after this slide)
  Should these mapping strategies be an integral part of dps?
– High-level scripts to run pipelines: run.sh, startPipeline.py?
  Should the dc2pipe/ scripts be incorporated into dps?
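For the Slice/CCD mapping question, one plausible strategy is a round-robin assignment of CCDs to Slices by MPI rank. This is purely illustrative and is not necessarily the mapping DC2 used.

# Hypothetical round-robin Slice-to-CCD mapping, for illustration only.
def ccds_for_slice(rank, num_slices, all_ccds):
    """Return the CCD ids the Slice with this MPI rank should process."""
    return [ccd for i, ccd in enumerate(all_ccds) if i % num_slices == rank]

# Example: 36 Slices over 36 hypothetical CCD ids gives one CCD per Slice.
all_ccds = [f"ccd{n:02d}" for n in range(36)]
print(ccds_for_slice(0, 36, all_ccds))   # ['ccd00']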

4 Event System

[Architecture diagram: pipeline nodes lsst1, lsst2, lsst3, ..., lsstN connected to an Event System built on ActiveMQ, Mule, and MySQL.]
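A short publisher sketch shows how an external source might inject a trigger event into the diagrammed system. It assumes the ActiveMQ broker has its STOMP connector enabled on the default port and uses the third-party stomp.py package; the broker host and topic name are placeholders, not values from the talk.

# Hypothetical trigger-event publisher for the Event System above; assumes
# ActiveMQ's STOMP connector on its default port 61613 and stomp.py.
import json
import stomp

conn = stomp.Connection([("lsst1", 61613)])     # placeholder broker host
conn.connect(wait=True)
conn.send(destination="/topic/triggerEvent",    # placeholder topic name
          body=json.dumps({"visit": 1, "ccd": "ccd00"}))
conn.disconnect()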