The ALICE Online Data Storage System

Roberto Divià (CERN), Ulrich Fuchs (CERN), Irina Makhlyueva (CERN), Pierre Vande Vyvre (CERN), Valerio Altini (CERN), Franco Carena (CERN), Wisla Carena (CERN), Sylvain Chapeland (CERN), Vasco Chibante Barroso (CERN), Filippo Costa (CERN), Filimon Roukoutakis (CERN), Klaus Schossmaier (CERN), Csaba Soòs (CERN), Barthelemy Von Haller (CERN)
For the ALICE collaboration

CHEP 2009, Prague, 21-27 March 2009
ALICE trigger, DAQ & HLT
[Architecture diagram: detector front-end electronics (FERO) are read out through D-RORCs into LDCs; the CTP/LTU/TTC chain distributes the trigger; the HLT farm processes data in parallel; event fragments flow over the DAQ network (up to 25 GB/s) to the GDCs; built events go over the storage network to the Transient Data Storage (TDS), from which Movers migrate them at 1.25 GB/s to Permanent Data Storage (PDS, CASTOR) and register them in AliEn, the ALICE environment for the GRID.]
[Zoom on the storage part of the same diagram: GDCs → storage network → Transient Data Storage (TDS) → Movers → CASTOR/PDS at 1.25 GB/s, with registration in AliEn.]
Our objectives
- Ensure a steady and reliable data flow up to the design specifications
- Avoid stalling the detectors with data-flow slowdowns
- Provide sufficient resources for online objectification into ROOT format via AliROOT, a very CPU-intensive procedure
- Satisfy the needs of ALICE parallel runs and of multiple detectors being commissioned concurrently
- Allow a staged deployment of the DAQ/TDS hardware
- Provide sufficient storage for a complete LHC spill in case the transfer between the experiment and the CERN Computer Center does not progress
Current TDS architecture
- CVFS over IP
- 5 switches QLogic SANbox 5602:
  - FC 4 Gb: equipment, PCs, storage
  - FC 10 Gb: inter-switch connections
- 5 × 5 disk arrays Infortrend A16F, models G2422 & G2430
- 5 × 15 disk volumes
- Total maximum space: 59 TB
- CVFS: StorNext 3.1.2
- Handled by the Transient Data Storage Manager (TDSM)
- 5 × (6 GDCs + 2 Movers)
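The 59 TB total space and the 1.25 GB/s migration rate from the slides let us estimate how long the TDS can buffer data if the link to the Computer Center stalls. This is a back-of-the-envelope sketch: the figures are from the slides, but the assumptions that the full space is usable as a buffer and that binary units apply are ours.

```python
# Back-of-the-envelope check of TDS buffering headroom.
# 59 TB total and 1.25 GB/s are from the slides; treating the whole
# space as usable buffer (and 1 TB = 1024 GB) is an assumption.
TOTAL_SPACE_TB = 59
RATE_GB_S = 1.25

total_gb = TOTAL_SPACE_TB * 1024          # TB -> GB
seconds = total_gb / RATE_GB_S            # time to fill at design rate
hours = seconds / 3600
print(f"TDS absorbs ~{hours:.1f} h of data at {RATE_GB_S} GB/s")
```

At the full design rate this works out to roughly half a day of autonomy, consistent with the stated objective of riding out a complete LHC spill.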
Future upgrades
- Two SANbox 9000 switches, maximum 8 blades each:
  - 9 blades with 16 FC 4 Gb ports: equipment, hosts, storage
  - 2 blades with 4 FC 10 Gb ports: inter-switch connections
TDSM architecture
[Diagram: GDCs write over the DAQ network to TDS disk volumes, whose states are shown as free, filling, full, emptying, and disabled; the TDSM Manager coordinates TDSM File Movers that migrate full volumes to CASTOR over the MSS network; an AliEn spooler registers the files on the Grid; feedback goes to the CTP; a TDSM configuration & control DB and the DAQ logbook are reachable over the CERN GPN.]
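The volume states in the diagram suggest a simple life cycle for each TDS disk volume. The sketch below is a hypothetical illustration only: the state names come from the slide, but the transition rules (free → filling → full → emptying → free, with "disabled" reachable from any active state) and the class layout are our assumption, not the actual TDSM code.

```python
# Hypothetical sketch of a TDS disk-volume life cycle. State names are
# from the slide; the transitions are assumed, not the real TDSM logic.
ALLOWED = {
    "free": {"filling", "disabled"},      # volume picked up by a GDC writer
    "filling": {"full", "disabled"},      # volume reaches capacity
    "full": {"emptying", "disabled"},     # a TDSM File Mover claims it
    "emptying": {"free", "disabled"},     # migration to CASTOR finished
    "disabled": set(),                    # operator intervention required
}

class Volume:
    def __init__(self, name):
        self.name = name
        self.state = "free"

    def advance(self, new_state):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(
                f"{self.name}: {self.state} -> {new_state} not allowed")
        self.state = new_state

v = Volume("vol01")
for s in ("filling", "full", "emptying", "free"):
    v.advance(s)
print(v.state)  # back to "free" after one full cycle
```

A table of allowed transitions like this is a common way to keep such a cycle auditable: any illegal move (e.g. writing to a volume that is still being emptied) fails loudly instead of corrupting the pool state.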
Monitoring
- TDSM DB: status and history of the TDS/TDSM
- Logbook: status of the migration
- Lemon: system setup and metrics
Validation & testing
- Small "lab style" test setups
  - Special running mode where a single LDC can inject several real events and the event builder unpacks them as for the original events
  - Dedicated "write and forget" CASTOR pool
  - Ad-hoc "black hole" AliEn registration service
- Profiling during detector commissioning and cosmic runs
- ALICE Data Challenges
  - Run between 1999 and 2006
  - Periodic full-chain tests (ALICE DAQ/Offline + IT department)
ALICE Data Challenges
- Define an architecture (HW and SW)
  - Re-use existing idling components (IT and ALICE)
  - If needed, add some glue here and there (earlier ADCs: lots of glue!)
- Evaluate and profile the individual components
- Put them together and check the result
- Do short sustained tests (hours)
- Run the final Challenge (7 days) with two targets:
  - Sustained overall data rate
  - Amount of data to PDS
- Repeat the exercise year after year with more challenging objectives
- Achieve quasi-ALICE results with minimum glue right before ALICE commissioning
The TDS in 2008
- 25 February to 9 March 2008: ALICE cosmic runs
  - 1500 runs, 340 hours, 70 TB
- Q3+Q4 2008:
  - 6800 runs, 3300 hours, 108 TB
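The run counts, hours, and volumes above imply average sustained rates that are far below the 1.25 GB/s design peak, as expected for averages taken over whole running periods including dead time. The arithmetic below uses only the figures from this slide; the binary unit convention (1 TB = 1024 × 1024 MB) is an assumption.

```python
# Average rates implied by the 2008 figures on this slide.
# TB and hour values are from the slide; units are assumed binary.
def avg_rate_mb_s(tb, hours):
    """Average rate in MB/s, assuming 1 TB = 1024 * 1024 MB."""
    return tb * 1024 * 1024 / (hours * 3600)

cosmic = avg_rate_mb_s(70, 340)    # Feb-Mar 2008 cosmic runs
q34 = avg_rate_mb_s(108, 3300)     # Q3+Q4 2008
print(f"cosmic runs: ~{cosmic:.0f} MB/s average")
print(f"Q3+Q4 2008: ~{q34:.0f} MB/s average")
```

Roughly 60 MB/s averaged over the cosmic-run period versus a ~10 MB/s average over the second half of 2008, i.e. the system ran well inside its validated envelope.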
In conclusion…
- Continuous evaluation of HW & SW components proved the feasibility of the TDS/TDSM architecture
- All components validated and profiled
- ADCs gave highly valuable information for the R&D process
  - Additional ADCs added to the ALICE DAQ planning for 2009
- Detector commissioning went smoothly & all objectives were met
- No problems during cosmic and preparation runs
- Staged commissioning on its way
- Global tuning in progress

We are ready for LHC startup