ALICE Computing Data Challenge VI

Presentation transcript:

1 ALICE Computing Data Challenge VI
Fons Rademakers
Good afternoon. Today we present the ALICE Data Challenge. This presentation is divided into five parts: I will present the data throughput needed in the ALICE experiment and the motivations and history of the Data Challenge; Roberto Divia will present the DAQ part of the Data Challenge; Fons Rademakers will present the input/output done with the ROOT package; and Bernd Panzer will cover the setup done in the computing centre for this test. At the end I will draw conclusions and present the future plans.

2 Computing Data Challenge Motivations
- Integrate prototypes of components of the DAQ/HLT/Offline system into a complete system covering the whole chain
- Measure the performance of the existing software prototypes
- Make step-wise performance and reliability progress towards the final system
- Establish a common ALICE test-bed dataflow chain:
  - Use the common LHC test-bed to minimize investment
  - Test new architectural ideas and new components
  - Start the integration now and verify scalability
  - Confront the implementation with realistic test conditions
- Started in 1999 and to be repeated annually until LHC startup (the first Data Challenge ran in March 1999)

3 Data Challenge Plans
[Chart: planned Data Challenge throughput per year, in MBytes/s]

4 Results of Previous CDCs
Five Data Challenges performed so far:
- March 1999, CDC I: 30 MB/s peak, 7 TB to HPSS
- March/April 2000, CDC II: 100 MB/s peak, 23 TB to HPSS/CASTOR
- February/March 2001, CDC III: 120 MB/s peak, 110 TB to CASTOR
- October/November 2002, CDC IV: 300 MB/s peak, 200 MB/s to CASTOR (dummy writer, no objectifier)
- December 2003, CDC V: 300 MB/s to CASTOR, but not sustained

5 CDC VI Performance Goals
- 1.5 GB/s through the event-building network
- 450 MB/s sustained to tape
- 300 TB to tape (one week at 450 MB/s; see the check below)
- Maximum 150 TB of tape storage
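The 300 TB figure follows directly from the sustained rate. A quick back-of-the-envelope check (ours, not from the slide; decimal units, 1 TB = 10^6 MB assumed):

```cpp
// Back-of-the-envelope check of the CDC VI tape-volume goal:
// one week of sustained writing at 450 MB/s.
#include <cstdio>

int main()
{
   const double rateMBps = 450.0;                    // sustained rate to tape
   const double weekSec  = 7.0 * 24.0 * 3600.0;      // seconds in one week
   const double totalTB  = rateMBps * weekSec / 1e6; // MB -> TB (decimal)
   std::printf("One week at %.0f MB/s = %.0f TB\n", rateMBps, totalTB);
   // Prints ~272 TB, which the slide rounds up to the 300 TB goal.
   return 0;
}
```

Note that writing roughly 300 TB against a 150 TB tape budget presumably means tapes are recycled during the challenge.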

6 CDC VI Technology Goals
- Testing and validation of DATE modifications (see Pierre's talk)
- Test of the new CASTOR (see Olof's talk)
- Online physics monitoring using HLT algorithms (see Cvetan's talk)
- Near real-time off-site data distribution via gLite
- Stress test of the new Enterasys 10 GE switches
- Stress test of SLC3 and the Linux 2.6 kernel
- Stress test of xrootd

7 CDC Schematics
[Diagram: GEANT3/GEANT4/FLUKA simulations in AliRoot produce simulated raw data; the raw data flows through the DAQ to AliMDC, which uses ROOT I/O to write to CASTOR at the CERN Tier 0/Tier 1; a gLite file catalogue and performance monitoring track the data, which regional Tier 1 and Tier 2 centres access via gLite.]

8 High Throughput Prototype (Openlab + LCG prototype)
Specific layout, September 2004:
- 10 GE WAN connection and 4 × GE connections to the backbone
- Enterasys N7 10 GE switches
- 20 disk servers (lxsharexxxd: dual P4, IDE disks, ~1 TB disk space each), 10 GE per node
- IA32 CPU servers (tbed00xx: dual 2.4 GHz P4, 1 GB memory), 1 GE per node
- 2 × 50 Itanium servers (oplapro0xx: dual 1.3/1.5 GHz Itanium 2, 2 GB memory), 10 GE per node
- 20 TB IBM StorageTank (lxs50xx)

9 High Throughput Prototype (Openlab + LCG prototype)
Specific layout, October 2004:
- 10 GE WAN connection and 4 × GE connections to the backbone
- 4 × Enterasys N7 10 GE switches and 2 × Enterasys X-Series
- 36 disk servers (lxsharexxxd: dual P4, IDE disks, ~1 TB disk space each), 10 GE per node
- 2 × 100 IA32 CPU servers (tbed00xx: dual 2.4 GHz P4, 1 GB memory), 1 GE per node
- 2 × 50 Itanium servers (oplapro0xx: dual 1.3/1.5 GHz Itanium 2, 2 GB memory), 10 GE per node
- 20 TB IBM StorageTank (lxs50xx)
- 12 tape servers with STK 9940B drives

10 ALICE MDC Milestone
Test-bed resources by month:

                September            October              November/December
CPU nodes       25 IA32 + 30 IA64    120 IA32 + 60 IA64   200 IA32 + 60 IA64
Disk servers    ATA server (25 TB)   SATA server (25 TB)  FC SATA (30 TB), iSCSI ST (28 TB)
Tape drives     STK 9940B            STK 9940B            STK 9940B, IBM LTO2 (?)

11 CDC Data Flow
LDCs (Local Data Concentrators, tbedXXXX nodes running the readout process) send events through a Gigabit/10 GE Ethernet switch to GDCs (Global Data Collectors, tbedXXXX nodes running the recorder, gdcServer, eventBuilder and alimdc processes), which write to CASTOR.
[Diagram also shows half-size form factor 6U nodes with dual-port RAM.]

12 AliMDC - the Objectifier
- The ROOT objectifier reads raw data via shared memory from the DATE GDC
- The objectifier has two output streams:
  - Raw event database (to CASTOR)
  - File catalog (to gLite)
- 25-30 MB/s raw database output per 2 GHz PC via rootd to CASTOR (sketched below)
- Hook for HLT and physics monitoring:
  - The HLT algorithm will add about … s/event; try to multithread the algorithm
  - Adds ESD data to the RAW file
- Add a hook to simulate Level-1 input; add some Level-1 info to the event header object
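A minimal sketch of the objectifier's CASTOR output stream, for illustration only: the URL, tree name and event layout below are hypothetical, and the real AliMDC writes full raw-event objects rather than dummy entries.

```cpp
// Sketch: raw events written into a ROOT tree in a remote file opened
// via the rootd protocol.  Path, tree name and layout are illustrative.
#include "TFile.h"
#include "TTree.h"

void writeRawDB()
{
   // rootd URL pointing into the CASTOR namespace (hypothetical path)
   TFile *f = TFile::Open(
      "root://castorgrid.cern.ch//castor/cern.ch/alice/cdc6/raw.root",
      "RECREATE");
   if (!f || f->IsZombie()) return;

   TTree *tree = new TTree("RAW", "CDC VI raw events");
   Int_t  size = 0;
   tree->Branch("size", &size, "size/I");   // simplified: one leaf per event

   // In the real chain the events arrive from the GDC via shared memory;
   // here we just fill dummy entries.
   for (Int_t i = 0; i < 1000; ++i) {
      size = 1024;
      tree->Fill();
   }
   tree->Write();
   f->Close();
}
```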

13 Raw Data
All detectors except RICH, ZDC and FMD can generate raw data in AliRoot. This is used to produce the raw data fed as input to the DATE LDCs (a macro sketch follows).
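A raw-data production pass could look like the following AliRoot macro. This is a sketch: we assume AliSimulation and its SetWriteRawData switch are the relevant entry points, and that Config.C is the usual simulation configuration; the detector list and event count are illustrative.

```cpp
// rawgen.C -- sketch of producing simulated raw data with AliRoot
// as input for the DATE LDCs.
void rawgen()
{
   AliSimulation sim("Config.C");   // standard simulation configuration
   sim.SetWriteRawData("ALL");      // raw data for all capable detectors
   sim.Run(10);                     // simulate 10 events and write raw data
}
```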

14 Regional Center Participation
- Certain files will be copied to a disk cache
- Regional centers will be able to access these files online via gLite
- Offline, regional centers can copy/access files from CASTOR via gLite
- Candidate regional centers?

15 To Be Done
AliMDC:
- Finalize and test TCastorFile using the new CASTOR API (see the access sketch below)
- Interface xrootd to the new CASTOR API
- Add the gLite interface
HLT: see Cvetan's talk
gLite: to be deployed
Regional centers: verify the infrastructure
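Once TCastorFile is finalized, user code should not need CASTOR-specific calls: ROOT can dispatch on the "castor:" URL scheme through its plugin mechanism. A minimal sketch under that assumption (the CASTOR path is hypothetical):

```cpp
// Sketch of the intended user-level access path: ROOT selects the
// TCastorFile plugin from the "castor:" URL scheme, so reading code
// stays storage-agnostic.
#include "TFile.h"
#include "TTree.h"

void readFromCastor()
{
   TFile *f = TFile::Open("castor:///castor/cern.ch/alice/cdc6/raw.root");
   if (!f || f->IsZombie()) return;     // open (and stage-in) failed

   TTree *tree = (TTree*) f->Get("RAW");
   if (tree) tree->Print();             // inspect the raw-event tree
   f->Close();
}
```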

16 Timeline
- AliMDC ready: early Oct 2004
- HLT: Oct 2004
- gLite ready: Oct 2004
- Regional centers ready: Nov 2004
- Integration run with online: end of Oct 2004
- CDC VI: second half of Nov till first half of Dec 2004

