Distributed Services for Grid Enabled Data Analysis



Scenario
– Liz and John are members of CMS
– Liz is from Caltech and is an expert in event reconstruction
– John is from Florida and is an expert in statistical fits
– They wish to combine their expertise and collaborate on a CMS Data Analysis Project

[Diagram: the service stack — Grid Monitoring Service (MonALISA), Grid Resource Service (VDT Server), Grid Execution Service (VDT Client), Grid Scheduling Service (Sphinx), Virtual Data Service (Chimera), Workflow Generation Service (ShahKar), Collaborative Environment Service (CAVE), Remote Data Service (Clarens), all exposed as Grid services through the Clarens web service to analysis clients (IGUANA, ROOT, web browser, PDA).]
Demo Goals
Prototype vertically integrated system
– Transparent/seamless experience
Distribute grid services using a uniform web service
– Clarens!
– Understand system latencies and failure modes
Investigate request scheduling in a resource-limited and dynamic environment
– Emphasize functionality over scalability
Investigate interactive vs. scheduled data analysis on a grid
– Hybrid example
– Understand where the difficult issues are
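Because every service in the demo is exposed through Clarens as a uniform web service (XML-RPC/SOAP over HTTP), any client that can issue an XML-RPC call can use it. The sketch below is a minimal, hypothetical C++ client built on the xmlrpc-c library; the server URL and the "file.ls" method name are assumptions for illustration, not the documented Clarens interface.

#include <iostream>
#include <string>
#include <vector>
#include <xmlrpc-c/base.hpp>
#include <xmlrpc-c/client_simple.hpp>

int main()
{
    // Assumed endpoint and method name -- placeholders, not the real Clarens catalogue.
    std::string const serverUrl("http://clarens.example.org:8080/clarens/RPC2");
    std::string const methodName("file.ls");

    xmlrpc_c::clientSimple client;
    xmlrpc_c::value result;

    // "s": one string parameter, here the remote directory to browse.
    client.call(serverUrl, methodName, "s", &result, "/store/liz");

    // Interpret the reply as an array of file names.
    xmlrpc_c::value_array const files(result);
    std::vector<xmlrpc_c::value> const items(files.vectorValueValue());
    for (size_t i = 0; i < items.size(); ++i) {
        std::string const name = xmlrpc_c::value_string(items[i]);
        std::cout << name << std::endl;
    }
    return 0;
}

The point of the uniform interface is that the same call pattern works whether the method lives on the remote data service, the virtual data service or the scheduler.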

Data Discovery
Virtual data products are pre-registered with the Chimera Virtual Data Service. Using Clarens, data products are discovered by Liz and John by remotely browsing the Chimera Virtual Data Service.
[Diagram: the Chimera derivation chains x.cards → pythia → x.ntpl → h2root → x.root, and likewise y.cards → pythia → y.ntpl → h2root → y.root, which can be browsed and requested through the service.]
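As a toy illustration of what "virtual" means here (this is not the Chimera VDL or its API, just a caricature of the idea): each logical file name is catalogued together with the transformation and inputs that can produce it, so Liz and John can browse and request products whether or not they have been materialised yet.

#include <iostream>
#include <map>
#include <string>
#include <vector>

// One catalogue entry: how a product is derived.
struct Derivation {
    std::string transformation;        // e.g. "pythia", "h2root"
    std::vector<std::string> inputs;   // the LFNs it is derived from
};

int main()
{
    std::map<std::string, Derivation> catalogue;   // LFN -> derivation
    catalogue["x.ntpl"] = { "pythia", { "x.cards" } };
    catalogue["x.root"] = { "h2root", { "x.ntpl" } };

    // "Browsing": walk the derivation chain of x.root back to its source.
    std::string lfn = "x.root";
    while (catalogue.count(lfn)) {
        const Derivation& d = catalogue[lfn];
        std::cout << lfn << " <- " << d.transformation
                  << "( " << d.inputs.front() << " )" << std::endl;
        lfn = d.inputs.front();
    }
    return 0;
}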

Data Analysis
Liz wants to analyse x.root using her analysis code a.C:

// Analysis code: a.C
// Usage from the ROOT prompt:  .x a.C("x.root", "xa.root")
#include "TFile.h"
#include "TTree.h"
#include "TBrowser.h"
#include "TH1.h"
#include "TH2.h"
#include "TH3.h"
#include "TRandom.h"
#include "TCanvas.h"
#include "TPolyLine3D.h"
#include "TPolyMarker3D.h"
#include "TString.h"

void a( char treefile[], char newtreefile[] )
{
   Int_t   Nhep;
   Int_t   Nevhep;
   Int_t   Isthep[3000];
   Int_t   Idhep[3000], Jmohep[3000][2], Jdahep[3000][2];
   Float_t Phep[3000][5], Vhep[3000][4];
   Int_t   Irun, Ievt;
   Float_t Weight;
   Int_t   Nparam;
   Float_t Param[200];

   TFile *file = new TFile( treefile );
   TTree *tree = (TTree*) file->Get( "h10" );
   tree->SetBranchAddress( "Nhep", &Nhep );
   // ... the slide text is truncated here; the remaining branch addresses are
   // set the same way, the events are processed, and the output is written to
   // newtreefile (e.g. xa.root).  A minimal plausible completion:
   TFile *newfile = new TFile( newtreefile, "RECREATE" );
   TTree *newtree = tree->CloneTree( 0 );   // same structure, no entries yet
   for ( Long64_t i = 0; i < tree->GetEntries(); i++ ) {
      tree->GetEntry( i );
      newtree->Fill();                      // (reconstruction code would go here)
   }
   newfile->Write();
   newfile->Close();
}

[Diagram: the Chimera derivation chain x.cards → pythia → x.ntpl → h2root → x.root.]

Interactive Workflow Generation
Liz browses the local directory for her analysis code and the Chimera Virtual Data Service for input LFNs…
[Diagram: the client dialog offers "Select CINT script", "Select input LFN" and "Define output LFN", alongside the x.cards → pythia → x.ntpl → h2root → x.root chain with browse/register actions.]

Interactive Workflow Generation
She selects and registers (to the Grid) her analysis code, the appropriate input LFN, and a newly defined output LFN.
[Diagram: a.C is chosen from the local scripts (a.C, b.C, c.C, d.C), x.root as the input LFN, and xa.root as the new output LFN.]

Interactive Workflow Generation
A branch is automatically added in the Chimera Virtual Data Catalog, and a.C is uploaded into “gridspace” and registered with RLS.
[Diagram: the catalog now contains the extended chain x.cards → pythia → x.ntpl → h2root → x.root → root a.C → xa.root.]
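In caricature, what "registered with RLS" provides is a mapping from the logical name a.C to one or more physical locations that other services can resolve and transfer from. The snippet below is a toy stand-in, not the RLS API, and the gsiftp URL is invented.

#include <iostream>
#include <map>
#include <string>
#include <vector>

int main()
{
    // LFN -> list of PFNs (replica locations)
    std::map<std::string, std::vector<std::string>> replicas;

    // Registering the uploaded script: one logical name, one physical copy.
    replicas["a.C"].push_back("gsiftp://se.ufl.example.org/gridspace/liz/a.C");

    // Any downstream service (e.g. the chosen execution site) resolves it like this:
    for (const std::string& pfn : replicas["a.C"])
        std::cout << "a.C -> " << pfn << std::endl;
    return 0;
}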

Interactive Workflow Generation
Querying the Virtual Data Service, Liz sees that xa.root is now available to her as a new virtual data product.
[Diagram: xa.root appears among the browsable products, derived via root a.C from x.root.]

Request Submission
She requests it…
[Diagram: the request for xa.root is submitted against the chain x.cards → pythia → x.ntpl → h2root → x.root → root a.C → xa.root.]

Brief Interlude: The Grid is Busy and Resources are Limited!
Busy:
– Production is taking place
– Other physicists are using the system
– Use MonALISA to avoid congestion in the grid
Limited:
– As grid computing becomes standard fare, oversubscription to resources will be common!
CMS gives Liz a global high priority
Based upon local and global policies, and current Grid weather, a grid scheduler:
– must schedule her requests for optimal resource use
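To make the scheduling problem concrete, here is a deliberately simplified C++ sketch of priority-plus-quota dispatch: requests are ordered by priority (Liz's global CMS priority first) but only dispatched while the policy-granted quota holds out. The fields, numbers and policy are invented; Sphinx's real policy engine is far richer.

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct Request {
    std::string user;
    int priority;    // higher = more urgent
    int cpuHours;    // estimated cost of the request
};

int main()
{
    std::vector<Request> pending = { { "liz", 10, 4 },
                                     { "production", 8, 50 },
                                     { "john", 5, 2 } };
    int quotaLeft = 40;   // CPU hours the local policy currently grants this VO

    // Highest priority first.
    std::sort(pending.begin(), pending.end(),
              [](const Request& a, const Request& b) { return a.priority > b.priority; });

    for (const Request& r : pending) {
        if (r.cpuHours <= quotaLeft) {
            std::cout << "dispatch " << r.user << " (" << r.cpuHours << "h)" << std::endl;
            quotaLeft -= r.cpuHours;
        } else {
            std::cout << "defer    " << r.user << " (over quota)" << std::endl;
        }
    }
    return 0;
}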

Sphinx Scheduling Server
Nerve Centre
– Global view of the system
Data Warehouse
– Information driven
– Repository of the current state of the grid
Control Process
– Finite State Machine: different modules modify jobs, graphs, workflows, etc. and change their state
– Flexible
– Extensible
[Diagram: Sphinx Server internals — Control Process modules (Message Interface, Job Admission Control, Graph Admission Control, Graph Reducer, Job Predictor, Graph Predictor, Graph Data Planner, Job Execution Planner, Graph Tracker, Data Management, Information Gatherer) around a Data Warehouse holding Policies, Accounting Info, Grid Weather, Resource Properties and status, Request Tracking, Workflows, etc.]
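The control process is organised as a finite state machine: each module picks up requests in one state, does its work, and moves them to the next. The sketch below is a guessed, heavily simplified version of such a lifecycle, just to show the pattern; the real Sphinx states and module assignments differ.

#include <iostream>

// Simplified request lifecycle (invented state names).
enum State { SUBMITTED, ADMITTED, REDUCED, PLANNED, EXECUTING, DONE };

const char* const stateName[] = { "SUBMITTED", "ADMITTED", "REDUCED",
                                  "PLANNED", "EXECUTING", "DONE" };

// Each "module" advances the request by exactly one state.
State advance(State s)
{
    switch (s) {
        case SUBMITTED: return ADMITTED;   // admission control accepts it
        case ADMITTED:  return REDUCED;    // graph reducer prunes already-existing products
        case REDUCED:   return PLANNED;    // predictor + planner choose resources
        case PLANNED:   return EXECUTING;  // execution planner hands off the job
        case EXECUTING: return DONE;       // tracker records completion
        default:        return DONE;
    }
}

int main()
{
    for (State s = SUBMITTED; s != DONE; s = advance(s))
        std::cout << stateName[s] << " -> " << stateName[advance(s)] << std::endl;
    return 0;
}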

Distributed Services for Grid Enabled Data Analysis
[Diagram: the deployed demo system — a ROOT data analysis client talking through Clarens to the Sphinx scheduling service, the Chimera virtual data service, the RLS replica location service, the Sphinx/VDT execution service and the MonALISA monitoring service, with file and VDT resource services at Fermilab, Caltech, Iowa and Florida connected via Clarens, Globus GridFTP and MonALISA.]

Collaborative Analysis
Meanwhile, John has been developing his statistical fits in b.C by analysing the data product x.root.
[Diagram: a second branch, root b.C, derives xb.root from x.root alongside Liz's xa.root branch.]

Collaborative Analysis
After Liz has finished optimising the event reconstruction, John uses his analysis code b.C on her data product xa.root to produce the final statistical fits and results!
[Diagram: the combined chain now derives xab.root by running root b.C on xa.root, alongside the earlier xa.root and xb.root branches.]
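The slides do not show b.C itself. As a hedged illustration of what a statistical-fit step on xa.root might look like in ROOT, assuming xa.root contains a histogram named "hmass" (an invented name), a minimal macro could be:

// b.C -- hypothetical sketch of a fitting macro (not John's actual code)
#include "TFile.h"
#include "TH1.h"

void b( const char* infile, const char* outfile )
{
   TFile* in = TFile::Open( infile );                // e.g. "xa.root"
   TH1*   h  = (TH1*) in->Get( "hmass" );            // assumed histogram name
   h->Fit( "gaus" );                                 // simple Gaussian fit

   TFile* out = new TFile( outfile, "RECREATE" );    // e.g. "xab.root"
   h->Write();                                       // store the fitted result
   out->Close();
   in->Close();
}

Because xab.root is itself registered as a virtual data product, the fit can be re-run transparently wherever xa.root, or its own inputs, happen to live.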

Key Features
Distributed Services Prototype in Data Analysis
– Remote Data Service
– Replica Location Service
– Virtual Data Service
– Scheduling Service
– Grid-Execution Service
– Monitoring Service
Smart Replication Strategies for “Hot Data”
– Virtual Data w.r.t. Location
Execution Priority Management on a Resource-Limited Grid
– Policy-Based Scheduling & QoS
– Virtual Data w.r.t. Existence
Collaborative Environment
– Sharing of Datasets
– Use of Provenance

Credits
California Institute of Technology
– Julian Bunn, Iosif Legrand, Harvey Newman, Suresh Singh, Conrad Steenberg, Michael Thomas, Frank Van Lingen, Yang Xia
University of Florida
– Paul Avery, Dimitri Bourilkov, Richard Cavanaugh, Laukik Chitnis, Jang-uk In, Mandar Kulkarni, Pradeep Padala, Craig Prescott, Sanjay Ranka
Fermi National Accelerator Laboratory
– Anzar Afaq, Greg Graham

DMC (Data Management Component)
Scheduling the data transfers to achieve optimal workflow execution
The problem: combining data and execution scheduling
Various kinds of data transfers
Smart replication
– User initiated
– Workflow-based replication
– Automatic replication
Hot data management
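A caricature of the replica-selection step inside such a component: among the sites that already hold a copy of the input, pick the least loaded one before scheduling the transfer. Site names and load figures are invented; the real DMC also weighs policies, network state and workflow placement.

#include <iostream>
#include <map>
#include <string>

int main()
{
    // Monitored load of the sites currently holding a replica of x.root (made-up values)
    std::map<std::string, double> siteLoad;
    siteLoad["caltech"]  = 0.80;
    siteLoad["florida"]  = 0.35;
    siteLoad["fermilab"] = 0.60;

    std::string best;
    double bestLoad = 1e30;
    for (const auto& site : siteLoad)
        if (site.second < bestLoad) { best = site.first; bestLoad = site.second; }

    std::cout << "stage x.root from " << best
              << " (load " << bestLoad << ")" << std::endl;
    return 0;
}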

Monitoring in SPHINX
The scheduler needs information to make decisions
– The information needs to be as “current” as possible
That brings monitoring into the picture
– Load Average
– Free Memory
– Disk Space
Virtual Organization (VO) Quota System
– Different policies for resources
– Needs monitoring and accounting/tracking of resource quotas
MonALISA
– Dynamic discovery of sites
– Configurable monitoring service and parameters
– View generation using filters
– Displays SPHINX job information
Future Directions
– As the grid grows, the problem of latency becomes more potent
– Solution: data fusion/aggregation
– In line with the hierarchical views of the grid (VO) and the hierarchical scheduler!
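The "data fusion/aggregation" direction can be pictured very simply: instead of shipping every per-node sample to the scheduler, collapse them into one value per site, matching the hierarchical VO view. The toy aggregation below, with invented numbers, shows the pattern; MonALISA's filters perform this kind of reduction, and much more, in a configurable way.

#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

int main()
{
    // (site, load) samples as they might arrive from individual worker nodes
    std::vector<std::pair<std::string, double>> samples = {
        { "florida", 0.20 }, { "florida", 0.90 },
        { "caltech", 0.50 }, { "caltech", 0.30 }
    };

    // Fuse per-node samples into one mean value per site.
    std::map<std::string, std::pair<double, int>> acc;   // site -> (sum, count)
    for (const auto& s : samples) {
        acc[s.first].first  += s.second;
        acc[s.first].second += 1;
    }

    for (const auto& site : acc)
        std::cout << site.first << ": mean load "
                  << site.second.first / site.second.second << std::endl;
    return 0;
}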
