ALICE Computing Data Challenge VI

ALICE Computing Data Challenge VI
Fons Rademakers
September 2004

Good afternoon. Today we present the ALICE Data Challenge. The presentation is divided into five parts: I will present the data throughput needed in the ALICE experiment and the motivations and history of the Data Challenge; Roberto Divià will present the DAQ part of the Data Challenge; Fons Rademakers will present the input/output done with the ROOT package; and Bernd Panzer will cover the setup done in the computing centre for this test. At the end I will draw conclusions and present the future plans.

Computing Data Challenge Motivations
- Integrate prototypes of components of the DAQ/HLT/Offline system into a complete system covering the whole chain
- Measure the performance with existing software prototypes
- Step-wise performance and reliability progress towards the final system
- Establish a common ALICE test-bed dataflow chain:
  - use the common LHC test-bed to minimize investment
  - test new architectural ideas and new components
  - start the integration now and verify the scalability
  - confront the implementation with realistic test conditions
- Started in 1999, to be repeated annually until LHC startup

The motivations for the Data Challenge are the following: establish a common ALICE test-bed. The first Data Challenge was started in March 1999.

Data Challenge Plans
[Chart of the planned data rates in MBytes/s.]

Results of Previous CDCs
Five Data Challenges performed so far:
- March 1999, CDC I: 30 MB/s peak, 7 TB to HPSS
- March/April 2000, CDC II: 100 MB/s peak, 23 TB to HPSS/CASTOR
- February/March 2001, CDC III: 120 MB/s peak, 110 TB to CASTOR
- October/November 2002, CDC IV: 300 MB/s peak, 200 MB/s to CASTOR (dummy writer, no objectifier)
- December 2003, CDC V: 300 MB/s to CASTOR, but not sustained

CDC VI Performance Goals
- 1.5 GB/s through the event-building network
- 450 MB/s sustained to tape
- 300 TB to tape (one week at 450 MB/s)
- Maximum 150 TB of tape storage
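As a quick arithmetic check (not on the original slide), one week of sustained 450 MB/s indeed lands in the region of the quoted 300 TB:

```latex
450~\mathrm{MB/s} \times 7 \times 86\,400~\mathrm{s} \approx 2.7 \times 10^{8}~\mathrm{MB} \approx 270~\mathrm{TB}
```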

CDC VI Technology Goals
- Testing and validation of the DATE modifications (see Pierre's talk)
- Test of the new CASTOR (see Olof's talk)
- Online physics monitoring using HLT algorithms (see Cvetan's talk)
- Near real-time off-site data distribution via gLite
- Stress test of the new Enterasys 10 GbE switches
- Stress test of SLC3 and the Linux 2.6 kernel
- Stress test of xrootd (a minimal read sketch follows below)
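As a rough illustration of the xrootd stress test from the client side, reading a file through a root:// URL with ROOT is enough to drive read traffic; the server name, path and tree name below are made up (a minimal sketch, not the actual test harness):

```cpp
// Minimal xrootd read test from ROOT (illustrative; URL and object names are hypothetical).
// A root:// URL makes TFile::Open go through ROOT's xrootd client plugin.
#include "TFile.h"
#include "TTree.h"

void readViaXrootd()
{
   TFile *f = TFile::Open("root://xrdserver.cern.ch//data/cdc6/raw_run001.root");
   if (!f || f->IsZombie()) return;        // connection or open failure

   TTree *t = (TTree *) f->Get("RAW");     // hypothetical tree name
   if (t) t->Draw("size");                 // touch the data to generate network reads

   f->Close();
   delete f;
}
```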

CDC Schematics
[Diagram: raw data from the DAQ and simulated raw data produced with AliRoot (GEANT3 / GEANT4 / FLUKA) are written through AliMDC using ROOT I/O into CASTOR at the CERN Tier 0 / Tier 1; the file catalogue and performance monitoring are handled by gLite, and regional Tier 1 / Tier 2 centres access the data via gLite.]

High Throughput Prototype (openlab + LCG prototype), specific layout, September 2004
[Network diagram; main elements:]
- 10 GE WAN connection; 4 * GE connections to the backbone
- Enterasys N7 10 GE switches
- 20 disk servers, lxsharexxxd (dual P4, IDE disks, ~1 TB disk space each), 10 GE per node
- 80 + 40 IA32 CPU servers, tbed00xx (dual 2.4 GHz P4, 1 GB memory), 1 GE per node
- 2 * 50 Itanium servers, oplapro0xx / lxs50xx (dual 1.3/1.5 GHz Itanium2, 2 GB memory), 10 GE per node
- 20 TB IBM StorageTank

High Throughput Prototype (openlab + LCG prototype), specific layout, October 2004
[Network diagram; main elements:]
- 10 GE WAN connection; 4 * GE connections to the backbone
- 4 * Enterasys N7 10 GE switches; 2 * Enterasys X-Series
- 36 disk servers, lxsharexxxd (dual P4, IDE disks, ~1 TB disk space each), 10 GE per node
- 2 * 100 IA32 CPU servers, tbed00xx (dual 2.4 GHz P4, 1 GB memory), 1 GE per node
- 2 * 50 Itanium servers, oplapro0xx / lxs50xx (dual 1.3/1.5 GHz Itanium2, 2 GB memory), 10 GE per node
- 20 TB IBM StorageTank
- 12 tape servers, STK 9940B drives

ALICE MDC Milestone: planned prototype resources, September - December 2004

                     September        October           November             December
CPU nodes   IA32     25               120               200
            IA64     30               60                60
Disk servers         ATA 20 (25 TB)   SATA 10 (25 TB)   FC SATA 10 (30 TB)   iSCSI ST 14 (28 TB)
Tape drives          STK 9940B: 2     STK 9940B: 4      STK 9940B: 8         STK 9940B: 25, IBM 3592: 8, LTO2: 6 (?)

CDC Data Flow
[Diagram: data flow from the LDCs (Local Data Concentrators, running readout and recorder) through a Gigabit / 10 GE Ethernet switch to the GDCs (Global Data Collectors, tbedXXXX nodes running gdcServer, eventBuilder and alimdc) and from there into CASTOR. Hardware note on the diagram: half-size form factor, 6U, dual-port RAM.]
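To make the LDC-to-GDC step concrete, here is a toy sketch of the event-building idea. It is purely illustrative: the type and function names are made up and this is not the DATE API; it only shows sub-event fragments collected from the LDCs being merged into one full event on the GDC before being handed to the recorder/alimdc stage.

```cpp
// Toy event builder (illustrative only, not DATE code): merge the per-LDC
// sub-event fragments of one event into a single full-event buffer.
#include <cstdint>
#include <vector>

struct SubEvent {                       // fragment shipped by one LDC
  uint32_t eventId;                     // event this fragment belongs to
  std::vector<uint8_t> payload;         // detector data read out by that LDC
};

// Concatenate all fragments of one event, in LDC order, into a full event.
std::vector<uint8_t> buildEvent(const std::vector<SubEvent>& fragments) {
  std::vector<uint8_t> fullEvent;
  for (const SubEvent& f : fragments)
    fullEvent.insert(fullEvent.end(), f.payload.begin(), f.payload.end());
  return fullEvent;                     // next stop: recorder / alimdc
}
```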

AliMDC - the Objectifier
- The ROOT objectifier reads raw data via shared memory from the DATE GDC
- The objectifier has two output streams:
  - raw event database (to CASTOR)
  - file catalogue (to gLite)
- 25-30 MB/s raw database output per 2 GHz PC via rootd to CASTOR
- Hook for HLT and physics monitoring:
  - the HLT algorithm will add about 20-30 s/event
  - try to multithread the algorithm
  - adds ESD data to the RAW file
- Add a hook to simulate level-1 input
- Add some level-1 info to the event header object
(A minimal sketch of the objectifier idea follows below.)
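A minimal sketch of the objectifier idea, i.e. wrapping raw event buffers in a ROOT tree and streaming the output file via rootd: the tree and branch names, the buffer handling and the server URL are invented for illustration, and the real AliMDC uses its own raw-event classes rather than a bare byte branch.

```cpp
// Sketch only: write raw event buffers into a ROOT tree on a remote server.
// Opening a root:// URL makes TFile stream the output through rootd.
#include "TFile.h"
#include "TTree.h"

void objectify(const char* url)   // e.g. "root://castorgw.cern.ch//shift/cdc6/raw_run001.root" (hypothetical)
{
   TFile *f = TFile::Open(url, "RECREATE");
   if (!f || f->IsZombie()) return;

   TTree *rawTree = new TTree("RAW", "Raw event database");

   Int_t size = 0;
   static UChar_t buffer[1024*1024];                // raw event buffer; in reality filled from DATE shared memory
   rawTree->Branch("size", &size, "size/I");        // event size in bytes
   rawTree->Branch("data", buffer, "data[size]/b"); // variable-length raw payload

   // while (another event is available in shared memory) {
   //    copy the event into buffer, set size, then:
   rawTree->Fill();
   // }

   rawTree->Write();
   f->Close();
   delete f;
}
```

Only the URL decides the transport here: the same code writes to local disk with a plain file name, via rootd with a root:// URL, or into CASTOR through the corresponding ROOT file plugin.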

Raw Data
- All detectors except RICH, ZDC and FMD can generate raw data
- Used to produce raw data as input for the DATE LDCs
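For context, producing simulated raw data with AliRoot typically amounts to a short steering macro like the following sketch (assuming the AliSimulation steering class; the event count and detector selection are illustrative, not the CDC VI settings):

```cpp
// simRaw.C -- sketch of producing simulated raw data with AliRoot
// (assumes the AliSimulation steering class; options are illustrative).
void simRaw()
{
   AliSimulation sim;
   sim.SetWriteRawData("ALL");   // also convert digits into DDL raw-data files
   sim.Run(10);                  // simulate and digitize 10 events
}
```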

Regional Center Participation
- Certain files will be copied to a disk cache
- Regional centers will be able to access these files online via gLite
- Offline, regional centers can copy/access files from CASTOR via gLite
- Candidate regional centers?

To Be Done
- AliMDC:
  - finalize and test TCastorFile using the new CASTOR API
  - interface xrootd to the new CASTOR API
  - add the gLite interface
- HLT: see Cvetan's talk
- gLite: to be deployed
- Regional centers: verify the infrastructure

Timeline
- AliMDC ready: early Oct 2004
- HLT: Oct 2004
- gLite ready: Oct 2004
- Regional centers ready: Nov 2004
- Integration run with online: end Oct 2004
- CDC VI: second half of Nov until first half of Dec 2004