Site Report BEIJING-LCG2 Wenjing Wu (IHEP) 2010/11/21.



2 Outline
- Site overview
- ATLAS Tier2 (data transfers & jobs)
- CMS Tier2 (data transfers & jobs)

3 Site Resources-Infrastructure
BEIJING-LCG2: ATLAS Tier2/Tier3, CMS Tier2/Tier3; also supports other VOs: BES, Biomed, ESR, ...
Hardware:
- 896 CPU cores (1,400 cores by the end of 2010)
- 640 TB storage space
- Resources shared equally by CMS and ATLAS
- 142 nodes managed by Quattor

4 Site Resources-Server Room
(Photos: server room, computer nodes, shift room, storage nodes)

5 Site Resources-Grid Services
(Service diagram) Grid services: UI, gLite APEL, gLite PX, Top BDII, Squid, VOBOX. CE side: LCG CE and CREAM CE in front of the PBS server and worker nodes (CMS 498 cores, ATLAS 498 cores). SE side: DPM (ATLAS, 320 TB) and dCache (CMS, 320 TB), each with disk pools.

6 CE & SE
SE:
- DPM: 2 head nodes (1 with the main services, 1 for the database), 8 pool nodes (GridFTP door + RFIO door), 320 TB (ATLAS)
- dCache: 3 head nodes (SRM, PNFS, other services), 8 pool nodes (GridFTP + dCap), 320 TB (CMS)
CE:
- LCG CE (ATLAS + CMS)
- CREAM CE (ATLAS + CMS)
- 1 PBS server
- 112 worker nodes (896 cores)
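The CE numbers above are internally consistent: the per-node core count is not stated on the slide, but it is implied by the totals. A quick sanity check (the 8-cores-per-node figure is an inference, not a slide value):

```python
# Sanity check of the CE slide's totals; cores_per_node is inferred,
# not stated in the original slide.
worker_nodes = 112
total_cores = 896
cores_per_node = total_cores // worker_nodes
print(cores_per_node)  # 8
```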

7 SE Usage

8 Site Resources-Network
(Network topology diagram) IHEP connects to CSTNet Beijing (10G). Shown sites and networks: Daya Bay and Shen Zhen, Beijing, Hong Kong, USA via GLORIAD (2.5G), ASGC and Europe via TEIN3 (2.5G), Tsinghua, YBJ, EDU.CN, and others; further link speeds shown: 2x1G, 155M, 45M.

9 Local Integrated Grid Services Monitoring

10 More Monitoring
(Screenshots: Ganglia monitor, grid job/CPU-hour monitor, dCache monitor, DPM monitor)

11 Site Reliability and Availability (provided by GridMap)

12 ATLAS Tier2: Data Transfer
Downloaded 76 TB of data (2010/06-2010/11). Statistics from DDM:

Disk         Efficiency  Average Rate
MCDISK       97%         3 MB/s
DATADISK     30%         3 MB/s
PRODDISK     86%         361 KB/s
SCRATCHDISK  98%         144 KB/s

13 ATLAS Tier2: Finished production jobs (2010/6-2010/11), ranked by efficiency
(Table from PanDA production statistics: successful/failed job counts and efficiency per site, for TOKYO-LCG2, BEIJING-LCG2, IN2P3-LAPP, IN2P3-LPC, IN2P3-LPSC, IN2P3-CPPM, RO, GRIF, IN2P3-CC, and the total.)

14 ATLAS Tier2: Finished Jobs
786,781 jobs finished (2010/06/10-2010/11/10). Statistics from EGEE accounting.

15 ATLAS Tier2: Used CPU Hours
- 1,039,048 CPU hours used (2010/06/10-2010/11/10)
- Available CPU hours for ATLAS: hours/month
- CPU-hours efficiency: 48.4% (ATLAS), 33.3% (CMS)

16 ATLAS Tier2: CPU Efficiency
Average CPU efficiency is 91.2% (2010/06/10-2010/11/10).
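The two ratios on the last two slides measure different things: CPU-hours efficiency compares consumed core-hours against what the site makes available, while CPU efficiency compares a job's CPU time against its wall-clock time. A minimal sketch; the available-hours and job-time values below are hypothetical illustrations (the slide's monthly availability figure is missing), chosen only so the ratios land near the quoted 48.4% and 91.2%:

```python
def cpu_hours_utilization(used_hours, available_hours):
    # Fraction of the site's available core-hours actually consumed.
    return used_hours / available_hours

def cpu_efficiency(cpu_seconds, wall_seconds):
    # Fraction of a job's wall-clock time actually spent on CPU.
    return cpu_seconds / wall_seconds

# Hypothetical illustration values (not from the slides):
print(round(cpu_hours_utilization(1_039_048, 2_146_000), 3))  # 0.484
print(round(cpu_efficiency(3_283, 3_600), 3))                 # 0.912
```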

17 CMS Tier2: Downloaded 180 TB

18 CMS Tier2: Uploaded 77 TB

19 CMS Tier2: Download Quality

20 CMS Tier2: Upload Quality

21 CMS Tier2: Max download rate 91 MB/s

22 CMS Tier2: Max upload rate 55 MB/s; a network problem from Beijing to IN2P3 lasted from July to September.

23 CMS Tier2: 301,994 jobs finished

24 Site Future Plans
- Software distribution by CVMFS (CernVM-FS): athena, cmssw
- Virtualization of compute nodes via CernVM technology
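CVMFS is typically enabled on each worker node through a small client configuration file. A sketch of what the plan above might look like, assuming the site's existing Squid is reused as the CVMFS proxy; the proxy hostname and cache size are placeholders, not values from the talk:

```shell
# /etc/cvmfs/default.local -- sketch; proxy hostname is a placeholder
CVMFS_REPOSITORIES=atlas.cern.ch,cms.cern.ch
CVMFS_HTTP_PROXY="http://squid.example.org:3128"
CVMFS_QUOTA_LIMIT=20000   # local cache limit in MB
```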

25 Production status is good; the network problem from Beijing to IN2P3 lasted from July to September.

26 The End! Thanks