

Interactive European Grid Environment for HEP Application with Real Time Requirements
Lukasz Dutka 1, Krzysztof Korcyl 2, Krzysztof Zielinski 1,3, Jacek Kitowski 1,3, Renata Slota 3, Wlodzimierz Funika 3, Kazimierz Balos 1, Lukasz Skital 1, Bartosz Kryza 1, Jan Pieczykolan 1
1 ACK Cyfronet AGH; 2 Institute of Nuclear Physics, Polish Academy of Sciences; 3 University of Science and Technology AGH
Cracow Grid Workshop 2006 – Krakow

Outline
 int.eu.grid mission
 HEP application real time requirements
 int.eu.grid approach to support HEP
 Summary

Example: Ultrasound Computer Tomography
A new method of medical imaging: an image is reconstructed by numerical techniques from data measured by an ultrasound scanner surrounding the object of interest. The application requires analyzing about 20 GB of data, which would take on the order of one month on a single workstation. Researchers demand resources!
int.eu.grid vision: “Interoperable production-level e-Infrastructure for demanding interactive applications to impact the daily work of researchers”
 Distributed parallel (MPI) interactive computing & storage at the tera level
 User-friendly access: Grid Interactive Desktop
(int.eu.grid in three slides)

A Real Challenge!
 The int.eu.grid project aims to change the way researchers use the available e-Infrastructure, exploiting its interactivity and collaboration possibilities.
 Researchers need to be convinced that they can:
 Transfer and process gigabytes of information in minutes
 Move to more complex algorithms on larger statistics, test and tune them, and use more powerful visualization techniques
 Collaborate across the network in a rewarding way, from sharing information to discussing and presenting remotely through enhanced videoconference environments

Interactive European Grid
Project acronym: int.eu.grid
Instrument: I3
Duration: 2 years, May '06 – April '08
“providing transparently the researcher’s desktop with the power of a supercomputer, using distributed resources”
Coordinator: CSIC, Jesús Marco, IFCA, Santander, SPAIN

Summary: the int.eu.grid Mission
“To deploy and operate a production-quality Grid-empowered e-Infrastructure oriented to service research communities supporting demanding interactive applications.”
 Deployment of e-Infrastructure: oriented to interactive use; site integration support; Grid operations service
 Middleware for interactivity and MPI: adapt/integrate existing middleware; guarantee interoperability with EGEE
 Provide a complete interactivity suite: desktop roaming access; scheduler with prioritization services; complex visualization
 Support for interactive applications: setup of collaborative environment and VO; consideration of performance, interactivity and visualization requirements; identification and selection of research-oriented interactive applications
 Support remote collaboration activities: research, management, integration, training
 Approach target research communities
 Provide security measures for interactivity

HEP Requirements in Brief
[Diagram: local event-processing farms at CERN (SFI, PFs, back-end network, SFOs, mass storage) linked over a packet-switched WAN (GEANT) or a switched lightpath to remote processing farms on the int.eu.grid infrastructure]
3000 events per second, 1.5 MB per event, 1 second to process one event.
When we need more power, we should be able to harness the grid.
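The throughput figures on this slide set the scale of the remote farms. A back-of-the-envelope sizing (plain arithmetic, not project code) shows what harnessing the grid here actually means:

```python
# Back-of-the-envelope sizing from the figures on this slide:
# 3000 events/s, 1.5 MB per event, 1 CPU-second per event.

EVENT_RATE_HZ = 3000     # events arriving per second
EVENT_SIZE_MB = 1.5      # megabytes per event
PROC_TIME_S = 1.0        # CPU seconds needed per event

# Little's law: concurrent work in the system = arrival rate x service time.
cpus_needed = EVENT_RATE_HZ * PROC_TIME_S

# Aggregate input bandwidth into the remote processing farms.
bandwidth_mb_s = EVENT_RATE_HZ * EVENT_SIZE_MB

print(cpus_needed)      # CPUs busy at any instant
print(bandwidth_mb_s)   # MB/s of event data in flight
```

So keeping up with the detector needs on the order of 3000 grid CPUs fed at an aggregate 4.5 GB/s, which is why per-site access bandwidth becomes a constraint in the use-case requirements.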

HEP Use Case Requirements
 Locate processing resources where processing tasks (PTs) will be started; these tasks wait to receive data and process it on arrival.
 Access to each allocated CPU should allow delivery of 1.5 MB of event data every second (the infrastructure should not propose more CPUs per site than the available access bandwidth can feed).
 Event data transfer time should be minimal and should not interfere with the resources allocated for processing.
 Sites with allocated CPUs should have a recently updated database (new updates at CERN every day, or a few times per day).
 The list of sites whose resources comply with the requirements should be presented to the ATLAS operator; the final decision on site selection is left to the operator.
 The operator should be able to pass approx. events per second to the grid infrastructure!
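The bandwidth requirement above (no more CPUs per site than the access link can feed) can be sketched as a small helper. The function name and the example site's figures are illustrative, not project code:

```python
# Per-site constraint from this slide: each allocated CPU consumes
# 1.5 MB/s of event data, so a site should never be offered more CPUs
# than its access bandwidth can keep busy.

EVENT_SIZE_MB = 1.5  # MB delivered to each CPU every second

def usable_cpus(free_cpus: int, access_bw_mb_s: float) -> int:
    """CPUs the infrastructure may propose at a site: limited both by
    the free slots and by the site's access bandwidth."""
    bw_limited = int(access_bw_mb_s // EVENT_SIZE_MB)
    return min(free_cpus, bw_limited)

# A site with 200 free CPUs but only a 150 MB/s link can feed 100 PTs:
print(usable_cpus(200, 150.0))  # 100
```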

How to Use the Grid Faster?
 Job submission for each event is too slow
 We need interactive communication!
 Pilot job idea:
 One job allocates a node and starts a PT
 One PT processes many events
 Direct communication between PT and EFD, faster than job submission: the EFD provides events (1.5 MB/event); the PT responds with event analysis results (1 KB/event)
 Limited lifetime of the PT to allow dynamic resource allocation
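A minimal sketch of the pilot-job loop described above, under assumptions: `next_event`/`send_result` and the stub EFD are illustrative names for demonstration, not the actual EFD protocol or project APIs:

```python
# Pilot-job pattern from this slide: one grid job starts a processing
# task (PT) that then pulls many events directly from the EFD, avoiding
# per-event job submission.

import time

def pilot_task(efd, lifetime_s: float, process):
    """Run on a worker node: request events from the EFD, process each,
    send back the small analysis result, and stop when the lifetime
    expires so the resource can be reallocated dynamically."""
    deadline = time.monotonic() + lifetime_s
    handled = 0
    while time.monotonic() < deadline:
        event = efd.next_event()      # ~1.5 MB of event data
        if event is None:             # EFD has nothing for us
            break
        result = process(event)       # full event processing (~1 s)
        efd.send_result(result)       # ~1 KB of results per event
        handled += 1
    return handled

class StubEFD:
    """In-memory stand-in for the event filter dispatcher (demo only)."""
    def __init__(self, events):
        self.events = list(events)
        self.results = []
    def next_event(self):
        return self.events.pop(0) if self.events else None
    def send_result(self, r):
        self.results.append(r)

# One pilot job handles three events without any further job submission:
efd = StubEFD([b"evt1", b"evt2", b"evt3"])
n = pilot_task(efd, lifetime_s=5.0, process=len)
print(n)  # 3
```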

Proposed HEP Architecture
[Architecture diagram: the SFI and EFD buffers feed a local PT farm; remote PTs running on int.eu.grid worker nodes (WNs) are reached through a proxyPT; a Dispatcher distributes events; the Broker, CE and UI belong to the int.eu.grid HEP VO, supported by the HEP VO database, infrastructure monitoring and application monitoring]

Development Status
 The HEP application is developed independently of the project
 We need to develop additional tools to gridify the application:
 we have already designed the solution and started the implementation process
 a very first, very simple demo will be ready in 3-4 months
 a somewhat more mature prototype should be ready before the review in May
 the development involves QoS issues

Summary
 The interactive grid environment helps us fulfill rigorous time requirements
 Intelligent distribution of events meets the QoS requirements and exploits and optimizes the load of the system: events wait in many queues, and the intelligent dispatcher, which has a view of the current status of the computational resources occupied by previously submitted jobs, distributes events to the best resources
 The solution designed for the HEP application is ready to be used by other applications requiring access to grid computational power in a similar way
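The dispatcher behaviour summarized above (a view of current queue status, events routed to the best resource) can be modelled as least-loaded selection. A toy sketch with illustrative farm names, not the project implementation:

```python
# Toy model of the intelligent dispatcher: track the queue depth at each
# remote PT farm and send every new event to the least-loaded one.

import heapq

def dispatch(events, farms):
    """farms: dict farm_name -> current queue depth.
    Returns a dict event -> chosen farm, updating depths as it goes."""
    heap = [(depth, name) for name, depth in farms.items()]
    heapq.heapify(heap)
    assignment = {}
    for ev in events:
        depth, name = heapq.heappop(heap)        # currently least-loaded farm
        assignment[ev] = name
        heapq.heappush(heap, (depth + 1, name))  # one more event queued there
    return assignment

# With two equally loaded farms, events are balanced between them:
a = dispatch(["e1", "e2", "e3"], {"cyfronet": 0, "ifca": 0})
print(a)  # {'e1': 'cyfronet', 'e2': 'ifca', 'e3': 'cyfronet'}
```

In the real system the queue depths would come from the infrastructure and application monitoring shown in the architecture slide, rather than a static dict.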

Thank you!