Status Report on LHC_2: ATLAS computing
Hiroyuki Matsunaga, International Center for Elementary Particle Physics (ICEPP), the University of Tokyo
FJPPL'09 Workshop, May 20, 2009
ATLAS distributed computing
Cloud model: 10 Tier-1 sites (and clouds), including LYON, BNL, FZK, TRIUMF, ASGC, PIC, SARA, RAL and CNAF.
Each cloud consists of one Tier-1 + n Tier-2s, and each Tier-2 is associated with only one Tier-1.
The ICEPP (Tokyo) site is a large Tier-2 and the farthest one in the FR cloud.
Tier-0: CERN. FR cloud: Tier-1 at Lyon, with Tier-2s at Clermont, GRIF, LAPP, Tokyo, Beijing and Romania.
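As an illustration of the cloud model, here is a minimal Python sketch of how the Tier-1/Tier-2 association could be represented; the site names are taken from the slide above, while the dictionary layout and the helper function are hypothetical.

```python
# Minimal sketch of the ATLAS cloud model: one Tier-1 per cloud,
# plus the list of Tier-2s attached to it (names from the slide).
FR_CLOUD = {
    "tier1": "LYON",
    "tier2s": ["Clermont", "GRIF", "LAPP", "Tokyo", "Beijing", "Romania"],
}

def tier1_of(site: str, clouds: dict) -> str:
    """Return the Tier-1 a given Tier-2 site is attached to.

    Hypothetical helper: every Tier-2 belongs to exactly one cloud,
    so the lookup is unambiguous."""
    for cloud in clouds.values():
        if site in cloud["tier2s"]:
            return cloud["tier1"]
    raise KeyError(f"unknown Tier-2 site: {site}")

if __name__ == "__main__":
    clouds = {"FR": FR_CLOUD}
    print(tier1_of("Tokyo", clouds))  # -> LYON
```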
LCG-France sites (serving ALICE, ATLAS, CMS and LHCb):
Tier-1: IN2P3-CC
Tier-2: IN2P3-CC-T2, GRIF, IN2P3-LPC, IN2P3-IRES, IN2P3-LAPP, IN2P3-SUBATECH
Tier-3: IN2P3-CPPM, IN2P3-IPNL, IN2P3-LPSC
Foreign sites in the ATLAS French cloud: Tokyo, Beijing, Romania
Members in 2008 (* leader)
French group: E. Lançon* (CEA), G. Rahal (IN2P3), F. Hernandez, D. Boutigny, J. Schwindling, S. Jézéquel
Japanese group: T. Mashimo* (ICEPP), I. Ueda, H. Matsunaga, T. Isobe, J. Tanaka, T. Kawamoto
Budget Plan in 2008
CNRS: 3 travels × 1,000 Euro = 3,000 Euro; 15 per-diem days × 237 Euro = 3,555 Euro
CEA: 1 travel × 1,000 Euro = 1,000 Euro; 5 per-diem days × 237 Euro = 1,185 Euro
ICEPP: 3 travels × 160 kYen = 480 kYen; 12 per-diem days × 22.7 kYen = 272 kYen
Total: 8,740 Euro and 752 kYen
Visits in 2008
Visit to ICEPP (Feb. 2008): E. Lançon, G. Rahal
Visit to LAPP Tier-2 (Annecy) (Apr. 2008): I. Ueda, H. Matsunaga
Visit to IRFU and LAL Tier-2s (Paris) and participation in the FJPPL workshop (May 2008): T. Mashimo, I. Ueda, T. Kawamoto
Visit to ICEPP (Dec. 2008): E. Lançon, S. Jézéquel, F. Chollet, E. Fede
I. Ueda has been a visiting researcher at LAPP since July 2008, which strengthens the communication between the two groups.
Activities in 2008
Data transfer in the ATLAS framework: MC and cosmic data
Participation in ATLAS and WLCG events: Milestone runs, Full Dress Rehearsal (FDR), Common Computing Readiness Challenge (CCRC08)
User analysis tests: stress tests on file servers, data access from many clients in the LAN
Network path between Lyon and Tokyo
Academic networks (SINET + GEANT + RENATER), routed via New York
10 Gbps bandwidth for the entire path
RTT (round trip time) ~290 ms
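To make the impact of this long-latency, high-bandwidth path concrete, the following Python sketch computes the bandwidth-delay product from the figures on this slide, and the throughput a single TCP stream would reach with a 64 KB window; the window size is an illustrative assumption, not a value from the slides.

```python
# Bandwidth-delay product for the Lyon-Tokyo path (figures from the slide).
bandwidth_bps = 10e9        # 10 Gbps end-to-end bandwidth
rtt_s = 0.290               # ~290 ms round trip time

bdp_bytes = bandwidth_bps / 8 * rtt_s
print(f"bandwidth-delay product: {bdp_bytes / 1e6:.0f} MB in flight to fill the pipe")

# Throughput of one TCP stream with an assumed 64 KB window (illustrative only):
window_bytes = 64 * 1024
single_stream_MBps = window_bytes / rtt_s / 1e6
print(f"one stream with a 64 KB window: ~{single_stream_MBps:.2f} MB/s")
```

With these figures, roughly 360 MB must be in flight to fill the path, while a single small-window stream stays well below 1 MB/s, which is why the transfers described on the following slides rely on many parallel files and streams.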
Traceroute from Lyon to Tokyo:
 1  Lyon-OPN (193.48.99.100)  0.288 ms  0.180 ms  1.036 ms
 2  Lyon-INTER
 3  vl3114-paris1-rtr-021.noc.renater.fr
 4  vl89-te paris1-rtr-001.noc.renater.fr
 5  renater.rt1.par.fr.geant2.net
 6  so rt1.lon.uk.geant2.net
 7  so rt1.ams.nl.geant2.net
 8  nyc-gate1-RM-GE sinet.ad.jp
 9  tokyo1-dc-RM-P sinet.ad.jp
10  UTnet-1.gw.sinet.ad.jp
11  bwtest1.icepp.jp
Upgrades in 2008
SINET-GEANT link in New York upgraded from 2.4 to 10 Gbps (Feb. 2008)
Grid middleware upgraded at ICEPP (May 2008): improved GridFTP performance
File servers added at ICEPP (May 2008)
Number of parallel files (20) and streams (10) increased, configured in FTS (File Transfer Service), the scheduler of GridFTP file transfers; see the sketch below
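The following Python sketch shows how the channel settings quoted above (20 concurrent files, 10 GridFTP streams per file) could translate into an aggregate throughput ceiling over the ~290 ms path; the per-stream TCP window is an assumed value for illustration, not a parameter taken from the slides.

```python
# Rough window-limited throughput estimate for the FTS channel settings above.
rtt_s = 0.290                     # Lyon-Tokyo round trip time (from the earlier slide)
files = 20                        # concurrent files on the FTS channel
streams_per_file = 10             # GridFTP parallel streams per file
window_bytes = 1 * 1024 * 1024    # assumed 1 MB TCP window per stream (illustrative)

per_stream = window_bytes / rtt_s                    # bytes/s one stream can sustain
aggregate = per_stream * streams_per_file * files    # all files and streams together
print(f"per stream: ~{per_stream / 1e6:.1f} MB/s, aggregate ceiling: ~{aggregate / 1e6:.0f} MB/s")
```

Under these assumptions the window-limited ceiling is around 700 MB/s, consistent in order of magnitude with the >500 MB/s reported later; real rates also depend on disk and SRM/GridFTP overheads.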
FTS monitor for IN2P3-TOKYO
Channel managers can change the channel parameters and status.
Recent transfer details and statistics are shown on the monitor page.
Data transfer from Lyon to Tokyo
"Quasi-online" data export from Tier-0 (CERN) -> Tier-1 (Lyon) -> Tier-2 (Tokyo)
May 2008: as part of the CCRC08 activity, >500 MB/s achieved
May 2009: normal MC data export activity, ~400 MB/s sustained for many hours
Data access in LAN
User analysis is becoming more important towards the LHC start-up and is mostly performed at the Tier-2 sites.
Tests have been performed in the French cloud for the direct-I/O and file-copy access modes; a sketch of the two modes follows this list.
Direct I/O (rfio): high load on the data storage system, and troublesome to use in ATLAS software.
File copy (GridFTP): the whole file is retrieved before processing, which needs scratch space on the worker node.
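A minimal Python sketch of the two access modes, assuming PyROOT and ROOT's RFIO plugin are available; the storage host, file paths and tree name are hypothetical placeholders, and the copy step only uses the basic globus-url-copy source/destination form.

```python
import subprocess
import ROOT  # PyROOT; rfio:// URLs require ROOT's RFIO plugin to be available

# Hypothetical file locations used only for illustration.
RFIO_URL = "rfio://se01.example.fr//dpm/example.fr/home/atlas/user/data.root"
GSIFTP_URL = "gsiftp://se01.example.fr/dpm/example.fr/home/atlas/user/data.root"
LOCAL_COPY = "/tmp/scratch/data.root"

def read_direct_io():
    """Direct I/O mode: the job reads the file over rfio from the storage system."""
    f = ROOT.TFile.Open(RFIO_URL)
    tree = f.Get("CollectionTree")  # hypothetical tree name
    print("direct I/O, entries:", tree.GetEntries())

def read_file_copy():
    """File-copy mode: fetch the whole file to local scratch first, then read it."""
    subprocess.check_call(["globus-url-copy", GSIFTP_URL, "file://" + LOCAL_COPY])
    f = ROOT.TFile.Open(LOCAL_COPY)
    tree = f.Get("CollectionTree")
    print("file copy, entries:", tree.GetEntries())
```

The trade-off on the slide is visible in the sketch: direct I/O keeps every read on the storage system, while file copy shifts the load to one bulk transfer plus local scratch space on the worker node.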
Stress test at ICEPP (rfio mode)
Submit user analysis jobs reading many input data files.
High load on the name services (BDII, SE) at job start-up, because the physical file address has to be obtained from the logical address; a hedged lookup sketch follows.
~2.8 GB/s peak transfer rate (with 13 file servers)
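As context for that start-up load, here is a minimal Python sketch of the logical-to-physical lookup a job performs, using the lcg-lr (list replicas) and lcg-gt (get transport URL) command-line tools; the LFN shown is a placeholder and the output parsing is simplified.

```python
import subprocess

def lfn_to_rfio_turl(lfn: str, vo: str = "atlas") -> str:
    """Resolve a logical file name to an rfio transport URL.

    Each call queries the catalogue and storage services, which is what
    drives the load on the name services at job start-up."""
    # List the physical replicas (SURLs) registered for this LFN.
    surls = subprocess.check_output(
        ["lcg-lr", "--vo", vo, lfn], text=True
    ).split()
    # Ask the storage element for an rfio transport URL for the first replica.
    turl = subprocess.check_output(
        ["lcg-gt", surls[0], "rfio"], text=True
    ).split()[0]
    return turl

if __name__ == "__main__":
    # Placeholder LFN for illustration only.
    print(lfn_to_rfio_turl("lfn:/grid/atlas/user/example/data.root"))
```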
rfio vs. GridFTP
In the case of GridFTP, performance can be improved by a local cache on the worker node.
There may be room for improvement in rfio performance with further optimization.
ICEPP is the best-performing site, probably due to its good hardware configuration.
Event processing rate: 15 Hz vs. 20 Hz; CPU/walltime: 70% vs. >90% (rfio vs. GridFTP).
More tests on rfio reading
Data access to one file server; ~750 MB/s is the system limit (network, disk).
Number of parallel clients (rfcp) increased: 1, 2, 4, 8, 16, 32, 64, and 128; a driver sketch for such a scan follows.
Performance degradation is seen for more than 32 clients.
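A minimal Python driver for this kind of scaling scan, launching N parallel rfcp copies against one file server and timing them; the source path and file size are placeholders, and only the basic rfcp source/destination form is assumed.

```python
import subprocess
import time

# Placeholder test file on the file server and its size (for rate calculation).
SOURCE = "se01.example.jp:/data/atlas/testfile.root"
FILE_SIZE_MB = 1024  # assumed 1 GB test file

def run_scan(client_counts=(1, 2, 4, 8, 16, 32, 64, 128)):
    for n in client_counts:
        start = time.time()
        # Launch n rfcp clients in parallel, each copying to its own local file.
        procs = [
            subprocess.Popen(["rfcp", SOURCE, f"/tmp/copy_{i}.root"])
            for i in range(n)
        ]
        for p in procs:
            p.wait()
        elapsed = time.time() - start
        rate = n * FILE_SIZE_MB / elapsed
        print(f"{n:3d} clients: {elapsed:6.1f} s, aggregate ~{rate:.0f} MB/s")

if __name__ == "__main__":
    run_scan()
```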
Activities in 2009
LHC will start collecting data this year, and more users will come onto the Grid.
Consolidate the system: stability, reliability, and monitoring.
Major upgrade of the computer system at ICEPP in the coming winter.
More realistic R&D from the point of view of physics analysis.
Plan in 2009
New member: C. Biscarat (IN2P3)
Budget plan:
CNRS: 4 travels × 1,000 Euro = 4,000 Euro; 20 per-diem days × 207 Euro = 4,140 Euro
IRFU: 1 travel × 1,000 Euro = 1,000 Euro; 5 per-diem days × 207 Euro = 1,035 Euro
ICEPP: 3 travels × 160 kYen = 480 kYen; 12 per-diem days × 22.7 kYen = 272 kYen
Total: 10,175 Euro and 752 kYen