Slide 1: DataGrid Status. J.J. Blaising, IN2P3. Outline: Grid status, demo introduction, demo.


Slide 2: Grid Technology: Introduction & Overview. Ian Foster, Argonne National Laboratory and University of Chicago.

Slide 3: LHC Computing Model Perspective. Harvey B. Newman, Caltech: Data Analysis for Global HEP Collaborations, LCG Launch Workshop, CERN. Slides: l3www.cern.ch/~newman/LHCCMPerspective_hbn.ppt

Slide 4: LHC computing R&D topics (from H. Newman):
- Query (task completion time) estimation
- Queueing and co-scheduling strategies
- Load balancing (e.g. Self-Organizing Neural Network)
- Error recovery: fallback and redirection strategies
- Strategy for use of tapes
- Extraction, transport and caching of physicists' object collections; Grid/database integration
- Policy-driven strategies for resource sharing among sites and activities; policy/capability trade-offs
- Network performance and problem handling: monitoring and response to bottlenecks; configuration and use of new-technology networks, e.g. dynamic wavelength scheduling or switching
- Fault tolerance and performance of the Grid services architecture
- Consistent transaction management, ...

Slide 5: [Network map, DataTAG project: major 2.5 Gbps circuits between Europe and the USA, linking NL (SURFnet), UK (SuperJANET4), IT (GARR-B) and CERN via GEANT to New York (STAR-TAP, STAR-LIGHT) and the US networks Abilene, ESNET and MREN.]

Slide 6: DataGrid Goal
- Develop middleware to allow WAN-distributed computing and data management.
- Build a distributed batch system that allows jobs to be submitted to different sites, with automatic site selection according to resource matching.
- Next: interactive use and parallel processing; other OS (Solaris).
- Requirements come from HEP, Earth Observation and Biomedical applications.

Slide 7: [Grid services diagram. A user submits a job from a User Interface node to the Workload Manager, which consults the Information System and the File Catalog server. Resource providers run a Computing Element (a gatekeeper with a jobmanager for PBS/LSF/BQS, plus Worker Nodes) that publishes CPU resources, and a Storage Element gatekeeper that publishes storage resources.]

Slide 8: Middleware status v1.1.2
- Workload manager (UI + RB + JSS + LB, i.e. User Interface, Resource Broker, Job Submission Service, Logging & Bookkeeping), WP1: still bug fixing, plus improvements for year 2.
- Data management (file catalog, replica manager), WP2: good collaboration with Globus.
- Information system, WP3: deployment of uniform FTREE/MDS/R-GMA.
- Fabric management, WP4: LCFG, light LCFG for preinstalled systems.
- Mass storage management, WP5: Castor, HPSS, ...
- Successful EU review on 1 March.

Slide 9: VO Services
- Computing and Storage Element services deployed at CERN, CC-IN2P3, CNAF, NIKHEF, RAL and more; US sites will join soon to test Grid interoperability.
- For ALICE, ATLAS, CMS, LHCb, Earth Observation and Biomedical, deployment of dedicated services:
  - LDAP server (certificates)
  - File catalog (LFN/PFN mapping)
  - GDMP server (automatic data replication)
- More to come: metadata catalog, ...
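As an illustration of the LFN/PFN mapping (hypothetical names, not taken from the demo): the file catalog maps one site-independent logical file name to the physical replicas registered at each Storage Element, e.g.

  lfn:raw_000123_dat  ->  pfn:se01.cern.ch/datagrid/raw_000123_dat
                          pfn:ccse.in2p3.fr/datagrid/raw_000123_dat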

Slide 10: Application activities (WP8)
- Middleware evaluation using the ALICE, ATLAS, CMS, LHCb and Gen-HEP toolkits.
- User requirements collection with ALICE, ATLAS, CMS, LHCb.
- Common HEP use cases; common application use case.

Slide 11: [Layer diagrams. Today: OS & network services, a bag of services from the Globus team, middleware components (MW1...MW5), and a specific application layer per experiment (ALICE, ATLAS, CMS, LHCb, other apps). If the LHC experiments manage to define common VO use cases and requirements, or better still a common core use case shared with the other applications, it will be easier to arrive at common use cases for the middleware.]

Slide 12: What we want from a GRID. This is the result of our experience on TB0 & TB1.
[Layer diagram of the proposed GRID architecture: OS & network services; basic services (Globus team); high-level GRID middleware; an LHC VO common application layer built on common use cases; and specific application layers for ALICE, ATLAS, CMS, LHCb and other apps.]

Slide 13: Demo introduction
- Sites involved: CERN, CNAF, LYON, NIKHEF, RAL.
- From a User Interface node, dg-job-submit demo.jdl sends the job to the Workload Management System at CERN.
- The WMS selects a site according to the resource attributes given in the JDL file and to the resources published via the Information System.
- The job is sent to one of the sites; it writes a data file, which is copied to the nearest Mass Storage and replicated to all other sites.
- dg-job-get-output is used to retrieve the output files.
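A minimal sketch of the corresponding terminal session on the User Interface node (dg-job-status belongs to the same EDG 1.x command suite as dg-job-submit; <job-id> stands for the identifier printed by dg-job-submit, and the state names given are typical EDG job states, not taken from the slides):

  dg-job-submit demo.jdl       # submit the job; prints the assigned <job-id>
  dg-job-status <job-id>       # follow the job, e.g. Ready, Scheduled, Running, Done
  dg-job-get-output <job-id>   # retrieve demo.out, demo.err and demo.log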

Slide 14: Generic HEP application flowchart
Job arguments: data type (raw/dst), run number xxxxxx, number of events yyyyyy, number of words per event zzzzzz, replica catalog flag (0/1), mass storage flag (0/1).
[Flowchart: for raw, the job generates raw events on local disk, optionally moves them to the SE and/or MS, adds the lfn/pfn pair to the replica catalog, and writes the logbook raw_xxxxxx_dat.log. For dst, it gets the pfn from the replica catalog; if the pfn is not local, it copies the raw data from the SE to local disk, then reads the raw events, writes the dst events, optionally moves them to the SE and/or MS, adds the lfn/pfn pair to the replica catalog, and writes the logbook dst_xxxxxx_dat.log.]
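The flowchart can be read as the control flow of a small job script. Below is an illustrative csh sketch (the script body and argument layout are assumptions, not the actual demo.csh; the SE/MS copy and replica catalog steps are left as comments because the slide does not show the exact commands):

  #!/bin/csh
  # $1 = data type (raw or dst), $2 = run number
  set dtype = $1
  set run   = $2
  if ( $dtype == "raw" ) then
      # Generate raw events on local disk; then, depending on the
      # SE/MS flags, move the file to the Storage Element and/or
      # Mass Storage and add its lfn/pfn pair to the replica catalog.
      ./main.exe raw $run
      echo "raw run $run completed" >> raw_${run}_dat.log
  else
      # Get the pfn from the replica catalog; if the file is not
      # local, copy the raw data from the SE to local disk first,
      # then read the raw events and write the dst events.
      ./main.exe dst $run
      echo "dst run $run completed" >> dst_${run}_dat.log
  endif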

Slide 15: demo.jdl

  Executable    = "demo.csh";
  Arguments     = "raw";
  StdInput      = "none";
  StdOutput     = "demo.out";
  StdError      = "demo.err";
  InputSandbox  = {"demo.csh", "main.exe"};
  OutputSandbox = {"demo.out", "demo.err", "demo.log"};
  Requirements  = other.OpSys == "RH 6.2";

The job is submitted with dg-job-submit demo.jdl and the results are retrieved with dg-job-get-output <job-id>.
[Diagram: the User Interface node sends the input sandbox to the Workload Manager, which uses the Information System and the File Catalog server to dispatch the job to a COMPUTING/STORAGE site; the data file is written to storage, and the output sandbox is returned to the user.]
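The Requirements line is what drives the automatic site selection described on slide 13: only Computing Elements whose Information System entry publishes OpSys "RH 6.2" are eligible. Assuming the EDG 1.x user interface, the match can be previewed before submission with:

  dg-job-list-match demo.jdl   # list the Computing Elements that satisfy the Requirements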