Real Time Fake Analysis at PIC


Real Time Fake Analysis at PIC
José Hernández, CIEMAT
DC04 Review, May 11th, 2004

Real Time Fake Analysis

Goals
- Demonstrate that data can be analyzed in real time at the T1
- Fast feedback to reconstruction (e.g. calibration, alignment, checks of the reconstruction code)
- Establish automatic data replication to T2s
- Make data available for offline analysis
- Measure the time elapsed between reconstruction at the T0 and analysis at the T1

Architecture
- Set of software agents communicating via a local MySQL DB, covering replication, data set completeness checks, and job preparation & submission (a sketch of the agent pattern follows)
- LCG used to run the jobs
- Private Grid Information System for CMS DC04
- Private Resource Broker
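As a rough illustration of the agent pattern above, here is a minimal sketch of one agent polling the local MySQL DB for work. The table and column names ("transfer_queue", "status") and the credentials are made-up stand-ins, not the actual DC04 (TMDB) schema.

```python
# Minimal sketch of a DC04-style agent: poll a local MySQL DB for new
# work, act on it, and mark it done. Schema names are hypothetical.
import time

import MySQLdb  # MySQL-python bindings, contemporary with LCG-2 tooling


def replicate(guid, pfn):
    # Placeholder: in DC04 this step would wrap a grid copy command
    # such as globus-url-copy or lcg-rep.
    pass


def run_agent(poll_interval=30):
    db = MySQLdb.connect(host="localhost", user="agent",
                         passwd="secret", db="dc04")
    while True:
        cur = db.cursor()
        # Pick up files that have been registered but not yet replicated.
        cur.execute("SELECT guid, pfn FROM transfer_queue "
                    "WHERE status = 'new'")
        for guid, pfn in cur.fetchall():
            replicate(guid, pfn)
            cur.execute("UPDATE transfer_queue SET status = 'done' "
                        "WHERE guid = %s", (guid,))
        db.commit()
        cur.close()
        time.sleep(poll_interval)
```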

Hardware Setup
- 3 LCG Storage Elements with GridFTP server:
  - 1 SE attached to CASTOR with a 3 TB stage pool
  - 1 disk SE with 1 TB at PIC
  - 1 disk SE with 80 GB at CIEMAT
- 2 LCG User Interface machines (replication agent, fake analysis agents)
- 1 LCG Resource Broker (fake analysis job submission)
- 154 LCG Worker Node CPUs
- 1 Gbps LAN

Fake Analysis Architecture

[Architecture diagram: data transfer and fake analysis components — TMDB (MySQL), POOL RLS catalogue, transfer agent, replication agent, drop agent and Fake Analysis agent; storage: PIC CASTOR Storage Element (MSS, export buffer), PIC disk SE, CIEMAT disk SE; jobs go through the LCG Resource Broker to LCG Worker Nodes.]

- Drop agent triggers job preparation/submission when all files are available (see the trigger sketch below)
- Fake Analysis agent prepares the XML catalog, .orcarc and JDL script, and submits the job
- Jobs record start/end timestamps in the MySQL DB
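To make the drop-agent trigger concrete, here is an illustrative sketch: a drop is considered ready once every file it lists is present, and complete drops are handed to the Fake Analysis agent. The "filelist" name and its two-column (GUID, PFN) format are assumptions, not the actual DC04 drop layout.

```python
# Sketch of the drop-agent trigger: scan the drop area and hand off any
# drop whose data set is complete. File names/format are hypothetical.
import os


def drop_is_complete(drop_dir):
    # A drop is ready when every PFN it lists exists on disk.
    listing = os.path.join(drop_dir, "filelist")
    with open(listing) as f:
        pfns = [line.split()[1] for line in f if line.strip()]
    return bool(pfns) and all(os.path.exists(pfn) for pfn in pfns)


def prepare_and_submit(drop_dir):
    # Hand-off to the Fake Analysis agent, which builds the XML catalog,
    # .orcarc and JDL (see the sketch under the next slide).
    pass


def scan_drops(drop_area):
    for name in os.listdir(drop_area):
        drop = os.path.join(drop_area, name)
        if os.path.isdir(drop) and drop_is_complete(drop):
            prepare_and_submit(drop)
```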

Fake Analysis Strategy
- Transfer agent registers files arriving at the T1 in the MySQL DB
- Replication agent replicates them to disk at the T1/T2s
- Drop agent checks the completeness of the job data set at the T1/T2s
  - The drop file contains the GUIDs and PFNs of the event data files and the ZippedMETA file
- Fake Analysis agent prepares and submits jobs to LCG
  - Gets DATASET, OWNER, RUNID, JOBID, GUIDs and PFNs from the drop file
  - Prepares the local XML catalog, .orcarc file, environment-variables file and JDL script
  - Scripts from the INFN fake analysis team were adapted to PIC
- Job submission & monitoring
  - BOSS was not used, due to lack of time and the big executables in the InputSandbox; instead, the job submission, start and end timestamps are recorded in the MySQL DB
- Jobs can run on any data set: the Fake Analysis executable is uploaded to the SE
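As a rough illustration of the preparation/submission step, here is a sketch that writes a JDL, submits it via edg-job-submit (the LCG-2 command of the era), and records the submission timestamp in MySQL. The JDL keywords are standard EDG/LCG JDL; everything else (file names, the run_fake.sh wrapper, the job_times table) is hypothetical, not taken from the slides.

```python
# Sketch of job preparation/submission with timestamp bookkeeping.
# run_fake.sh (hypothetical) would fetch the Fake Analysis executable
# from the SE, as the slide notes, rather than shipping it in the sandbox.
import subprocess
import time

import MySQLdb

JDL_TEMPLATE = """\
Executable    = "run_fake.sh";
Arguments     = "%(dataset)s %(runid)s";
InputSandbox  = {"run_fake.sh", "%(catalog)s", "%(orcarc)s"};
OutputSandbox = {"stdout.log", "stderr.log"};
"""


def submit_job(db, dataset, runid, catalog, orcarc):
    jdl_path = "job_%s_%s.jdl" % (dataset, runid)
    with open(jdl_path, "w") as f:
        f.write(JDL_TEMPLATE % {"dataset": dataset, "runid": runid,
                                "catalog": catalog, "orcarc": orcarc})
    # Submit to the private Resource Broker via the LCG-2 CLI.
    subprocess.call(["edg-job-submit", "--vo", "cms", jdl_path])
    # Record the submission timestamp; start/end are written by the job.
    cur = db.cursor()
    cur.execute("INSERT INTO job_times (dataset, runid, submitted) "
                "VALUES (%s, %s, %s)",
                (dataset, runid, int(time.time())))
    db.commit()
    cur.close()
```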

From GDB to analysis at T1

[Data-flow diagram: Reconstruction → GDB → EB → T1 → T2 → Analysis, with the publisher and configuration agents, the EB agent, the transfer and replication agents, and the drop and Fake Analysis agents handling the successive steps.]

From GDB to analysis at T1

[Timing plot: breakdown of the elapsed time from GDB to analysis into transfer, replication, job preparation and job submission stages.]

Summary
- Real-time fake analysis was established at PIC during DC04
- Median time of 20 minutes between production and analysis
- Automatic data replication to the CIEMAT T2 was established, making data available for offline analysis
- LCG-2 was successfully exercised for data replication and job submission
- Thanks to the INFN colleagues