1
LHCb GRID Plans, Glenn Patrick, 18.07.00 (CLRC Grid Team)
LHCb has formed a Grid technical working group to co-ordinate practical Grid developments, with representatives from the regional facilities: Liverpool, RAL, CERN, IN2P3, INFN, Nikhef…
First meetings: 14th June (RAL), 5th July (CERN). Next meeting: August (Liverpool)?
A number of realistic, short-term goals have been identified which will:
Initiate activity in this area.
Map on to longer-term LHCb applications in WP8.
Provide us with practical experience in Grid tools.
Globus 1.1.3 to be installed and tested at CERN, RAL and Liverpool (version 1.1.1 already at RAL and CERN); see the sketch after this slide.
Regional centres (e.g. CLRC) are production centres for simulated data and archive the produced MC data.
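As an illustration of what the first installation cross-check could look like, here is a minimal sketch (not from the slides) that runs a trivial remote job at each testbed site via the Globus 1.x command-line tools; the gatekeeper host names are placeholders, and a valid proxy (grid-proxy-init) is assumed to exist already.

```python
# Hedged sketch: verify that a Globus 1.x installation at each testbed site
# accepts a trivial remote job. Host names are placeholders, not the actual
# LHCb testbed gatekeepers.
import subprocess

TESTBED_GATEKEEPERS = [
    "gridgate.cern.ch",      # placeholder for the CERN testbed node
    "gridgate.rl.ac.uk",     # placeholder for the RAL testbed node
    "gridgate.liv.ac.uk",    # placeholder for the Liverpool (MAP) node
]

def check_site(gatekeeper: str) -> bool:
    """Run /bin/hostname remotely via GRAM and report success."""
    result = subprocess.run(
        ["globus-job-run", gatekeeper, "/bin/hostname"],
        capture_output=True, text=True,
    )
    ok = result.returncode == 0
    print(f"{gatekeeper}: {'OK - ' + result.stdout.strip() if ok else 'FAILED'}")
    return ok

if __name__ == "__main__":
    # A valid proxy from grid-proxy-init is assumed before this is run.
    results = [check_site(g) for g in TESTBED_GATEKEEPERS]
    print("All testbed sites reachable" if all(results) else "Some sites failed")
```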
2
Reminder: LHCb WP8 Application
Target application: MAP farm (300 CPUs) at Liverpool to generate 10^7 events over 4 months.
"Initial" data volumes transferred between facilities (rough rate estimate below):
Liverpool to RAL: 3 TB (RAW, ESD, AOD, TAG)
RAL to Lyon/CERN: 0.3 TB (AOD and TAG)
Lyon to LHCb institutes: 0.3 TB (AOD and TAG)
RAL to LHCb institutes: 100 GB (ESD for systematic studies)
Physicists run jobs at the regional centre, or move AOD and TAG data to their local institute and run jobs there. Also, copy ESD for 10% of events for systematic studies.
Formal EU production scheduled for the start of 2002 to mid-2002, but we are already doing distributed MC production for the TDRs.
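To put the "initial" volumes in perspective, a back-of-envelope estimate (mine, not from the slide) of the sustained network rates implied, assuming each data set is spread evenly over the same four-month production window:

```python
# Rough sustained-rate estimate for the WP8 transfers. The volumes come
# from the slide; spreading each transfer over the 4-month production
# period is an assumption for illustration only.
SECONDS_PER_MONTH = 30 * 24 * 3600
PERIOD_S = 4 * SECONDS_PER_MONTH

transfers_gb = {
    "Liverpool -> RAL (RAW/ESD/AOD/TAG)": 3000,   # 3 TB
    "RAL -> Lyon/CERN (AOD+TAG)":          300,   # 0.3 TB
    "Lyon -> LHCb institutes (AOD+TAG)":   300,   # 0.3 TB
    "RAL -> LHCb institutes (ESD, 10%)":   100,   # 100 GB
}

# Average event size implied by 3 TB for 10^7 events: about 0.3 MB/event.
print(f"Average event size: {3e12 / 1e7 / 1e6:.2f} MB/event")

for route, gigabytes in transfers_gb.items():
    mbit_per_s = gigabytes * 8 * 1000 / PERIOD_S  # GB -> Mbit, over the period
    print(f"{route}: ~{mbit_per_s:.2f} Mbit/s sustained")
```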
3
Data Challenges
Any Grid work has to fit in with ongoing production and focus on the existing data challenges (nothing "mock" about them).
TDR schedule:
Calorimeters: Sept 2000
RICH: Sept 2000
Muon: Jan 2001
Outer Tracker: March 2001
Vertex Detector: April 2001
Inner Tracker: Sept 2001
Trigger: Jan 2002
Computing: July 2002
Physics: signals, backgrounds, analysis
4
Short-term plans
Globus 1.1.3 installed and tested at CERN, RAL and Liverpool. Members of the Grid group given access to the respective testbeds.
Cross-check that jobs can be run on each other's machines. Extend to other centres once we understand the process.
Ensure that SICBMC can be run at CERN, RAL and Liverpool using the same executable.
Verify that data produced by SICBMC can be shipped back to CERN and written to tape (VTP, globus-copy?). Only small event samples of 500 events.
Benchmarking tests between sites to identify bottlenecks (sketch below).
Aim to complete the basic tests by the end of September. Mainly a learning exercise using production software/systems.
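For the benchmarking step, a hedged sketch of a harness that times a site-to-site copy and reports the effective throughput; the actual transfer tool is deliberately left open (the slide itself leaves VTP vs. globus-copy undecided), and the file name, size and destination in the commented example are hypothetical.

```python
# Hedged sketch of a site-to-site transfer benchmark: time an arbitrary
# copy command and report the achieved throughput. The copy tool is left
# to the caller, since the choice (VTP, globus-copy, ...) is still open.
import subprocess
import time

def benchmark_transfer(command: list[str], nbytes: int) -> float:
    """Run a copy command and return the achieved throughput in MB/s."""
    start = time.time()
    subprocess.run(command, check=True)
    elapsed = time.time() - start
    rate = nbytes / 1e6 / elapsed
    print(f"{' '.join(command)}\n  {nbytes / 1e6:.1f} MB in {elapsed:.1f} s"
          f" -> {rate:.2f} MB/s")
    return rate

# Hypothetical example: a 500-event SICBMC output file (name, size and
# destination are placeholders) copied from Liverpool to a CERN staging area.
# benchmark_transfer(
#     ["some-copy-tool", "mc500ev.dst", "cern-stage:/lhcb/incoming/"],
#     nbytes=150_000_000,
# )
```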
5
Issues along the way
Interfacing to the PBS, LSF and MAP batch scheduling systems. Role of "meta" batch systems?
Extend the existing LHCb Java tools that manage job submission, tape management and bookkeeping to use Grid technology.
Where to publish the data: MDS/LDAP?
To what extent can we standardise on common architectures (Red Hat 6.1 at the moment) to enable other institutes to join the Grid easily? Need more than one operating system to develop/debug programs (e.g. NT).
Requirement for filesystems like AFS? Token passing?
How to fetch and access remote files: GASS server?
RSL scripting, recovering log files, sending job parameters? (sketch below)
Aim for a "production" run using the Grid in December.
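On the RSL-scripting and log-recovery point, a minimal sketch of what a parameterised job description might look like; the executable path, job parameters, queue name and gatekeeper contact are all placeholders, and the exact globusrun options should be checked against the installed Globus release.

```python
# Hedged sketch: build an RSL string for a parameterised SICBMC-style job
# and submit it through a jobmanager that fronts the local batch system.
# Executable, arguments, queue and gatekeeper are placeholders.
import subprocess

def make_rsl(executable: str, args: list[str], queue: str,
             stdout: str, stderr: str) -> str:
    """Assemble a simple Globus RSL job description."""
    arg_list = " ".join(f'"{a}"' for a in args)
    return (f"&(executable={executable})"
            f"(arguments={arg_list})"
            f"(queue={queue})"
            f"(stdout={stdout})"
            f"(stderr={stderr})")

rsl = make_rsl(
    executable="/lhcb/bin/sicbmc",       # placeholder path
    args=["-n", "500", "-run", "1234"],  # placeholder job parameters
    queue="medium",                      # placeholder PBS/LSF queue name
    stdout="sicbmc_1234.log",            # log files written at the remote site,
    stderr="sicbmc_1234.err",            # to be recovered afterwards (GASS?)
)

# Placeholder gatekeeper contact; "/jobmanager-pbs" routes the job into PBS.
subprocess.run(["globusrun", "-r", "gridgate.rl.ac.uk/jobmanager-pbs", rsl])
```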
6
Which Grid Topology for LHCb(UK)? Flexibility is important.
[Diagram: candidate topology connecting CERN, INFN, RAL, IN2P3, Liverpool, Glasgow, Edinburgh, department clusters and desktop users, etc.]