Grid Computing 6th FCPPL Workshop

Grid Computing, 6th FCPPL Workshop
Gang Chen & Eric Lançon
March 29, 2013, NJU

Main achievements in 2012-13
- Network performance monitoring and debugging
- Multi-core simulation tests on the IHEP farm
- Workshop in Paris (June 2012) on site operational issues, with 2 Chinese participants: SUN Gongxing/孙功星, YAN Xiaofei/闫晓飞
- Presentation of the work of ZANG Dongsong/臧冬松 (PhD in 2013) at the CHEP 2012 conference
- Student (LI Sha/李莎) spent 3 months in Europe working on the ATLAS data distribution system
- Visit of YAN Xiaofei to the GRIF T2 (Paris) in December 2012 to discuss site configuration
- Plus frequent meetings (ATLAS & LCG) by remote connection

Beijing T2 performance
- Thanks to its high availability/reliability, Beijing T2 is classified as a 'Direct' T2 (T2D)
  - Can get/send data from/to every T1 and T2D site in the world
  - Can host primary data
- Network connection performance and stability are therefore of primary concern: T2D status may be lost if the network deteriorates

Beijing site availability for ATLAS services
- Well above 90%
[Plots: availability of the computing element and the storage, with maintenance periods marked]

Beijing site performance: data transferred
- Data volume transferred since March 2012: 1 PB (>1M files)
- Import: as a T2D, Beijing is a repository for the ATLAS JET physics group
- Export: as a T2D, Beijing exports data to sites everywhere

Processing at Beijing
- Over 90% job efficiency for centralized activities
- Only 50% of CPU consumption goes to simulation!
- The site is now also heavily used for user analysis, group analysis, reconstruction...

ATLAS jobs through PanDA
- Production jobs: 3,115,000 (job success rate: 94%)
- Analysis jobs: 6,916,000 (job success rate: 81%)

Network performance monitoring
- ATLAS 'sonar': calibrated file transfers performed by the ATLAS data distribution system, from storage to storage
- perfSONAR (PS): network performance tool (throughput, latency), from memory to memory
  - Has to be located as close as possible to the site storage, and with similar hardware connectivity
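
The 'sonar' measurements boil down to timing a calibrated file transfer between two storage endpoints and deriving an effective rate. The following is a minimal sketch of that idea, not the actual ATLAS tooling: the copy command (gfal-copy here) and both endpoints are assumptions for illustration.

```python
#!/usr/bin/env python
# Minimal sketch of the storage-to-storage 'sonar' idea: time a calibrated
# transfer and derive an effective throughput. The copy tool and the two
# endpoints below are placeholders, not the actual ATLAS machinery.
import subprocess
import time

def timed_transfer(cmd, payload_bytes):
    """Run a transfer command and return the achieved throughput in MB/s."""
    start = time.time()
    subprocess.run(cmd, check=True)   # e.g. a gfal-copy or xrdcp invocation
    elapsed = time.time() - start
    return payload_bytes / elapsed / 1e6

if __name__ == "__main__":
    # Hypothetical 1 GB test file copied between two storage elements.
    src = "srm://grid-se.example.fr/atlas/sonar/testfile_1GB"
    dst = "srm://beijing-se.example.cn/atlas/sonar/testfile_1GB"
    rate = timed_transfer(["gfal-copy", "-f", src, dst],
                          payload_bytes=1_000_000_000)
    print(f"storage-to-storage throughput: {rate:.1f} MB/s")
```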

perfSONAR monitoring
- Deployment of perfSONAR machines (Fazhi Qi/齐法制)
- Work done in cooperation with the GRIF T2, within the WLCG working group
- Identical configuration files for the French and Chinese machines
- Monitoring hosted at BNL

ORIENT-plus: improved EU-China connectivity

Transfer rates from European T1s to Beijing
- With the new line, the impact is not the same for all T1s
- To be understood

Network monitoring and debugging
- Beijing is connected to Europe via GEANT/ORIENT
- But performance is not identical for all T1s; each site has its own specific issues
- Asymmetries observed, not yet understood
- CERN → Beijing was using GLORIAD/KREONET; changed at the end of 2012 at our request, ORIENT is now used
- Firewalls removed at our request at various sites
[Plots: transfer rates for Lyon → Beijing, Beijing → Lyon, KIT → Beijing, Beijing → KIT]
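
Routing asymmetries of this kind are usually chased down by comparing the forward and reverse paths between the sites. The sketch below (hypothetical hostnames, assuming the system traceroute is installed) only inspects the forward path from the local host; the same check has to be run from the remote end to see the return route.

```python
#!/usr/bin/env python
# Minimal sketch for inspecting the forward network path towards remote
# storage elements. Hostnames are hypothetical; run the same check from the
# far end to compare the reverse path and spot routing asymmetries.
import subprocess

def forward_path(host):
    """Return the traceroute output towards 'host' as a string."""
    result = subprocess.run(["traceroute", "-n", host],
                            capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    for remote in ["se.example.in2p3.fr", "se.example.kit.edu"]:
        print(f"=== path towards {remote} ===")
        print(forward_path(remote))
```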

Multi-core processing
- Special multi-core (8-core) queue set up at Beijing in spring 2012 (YAN Xiaofei/闫晓飞): pioneers!
- Used to validate AthenaMP, the ATLAS parallel event-processing framework (to save memory)
- AthenaMP will become the standard software for ATLAS simulation at the end of 2013
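
AthenaMP saves memory by initializing once and then forking worker processes, so large read-only structures (geometry, conditions data) are shared between workers through copy-on-write pages while each worker processes its own slice of events. The toy below is not AthenaMP itself, only a minimal Python illustration of that fork-and-share pattern; the "detector description" and the event loop are placeholders.

```python
#!/usr/bin/env python
# Toy illustration of the AthenaMP idea (not the real framework): initialise
# a large read-only structure once in the parent, then fork workers that
# share it via copy-on-write and each process their own slice of events.
import multiprocessing as mp

DETECTOR = None  # placeholder for a large, read-only detector description

def init_detector():
    global DETECTOR
    DETECTOR = list(range(10_000_000))  # stands in for geometry/conditions data

def process_events(event_range):
    # Each forked worker reads DETECTOR from the pages shared with the parent.
    return sum(DETECTOR[e % len(DETECTOR)] for e in event_range)

if __name__ == "__main__":
    init_detector()                                # done once, before forking
    n_workers, n_events = 8, 8000
    slices = [range(i, n_events, n_workers) for i in range(n_workers)]
    with mp.Pool(processes=n_workers) as pool:     # fork() on Linux
        results = pool.map(process_events, slices)
    print(f"processed {n_events} events in {n_workers} workers, checksum {sum(results)}")
```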

CMS jobs
- Total jobs: 829k (production: 436k, analysis: 231k)

CMS data transfers
- Production data: 158 TB imported, 58 TB exported

Prospects for 2013-2014
- Continue network monitoring and debugging activities
- Deployment of a large multi-core setup for production; scaling issues to be addressed, common solutions with the French T2s
- Deployment of a WebDAV interface to storage (HTTP access), in cooperation with the French T2s (see the sketch below)
- Cloud computing: application for a CSC-FCPPL grant for a student (LI Sha/李莎) to stay in Grenoble for 18 months
- Chinese-French-Japanese workshop in Beijing, May 2013
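
As a rough idea of what the planned WebDAV (HTTP) access to storage would look like from the user side, the sketch below uploads and downloads a file with the Python requests library. The endpoint URL, remote paths, and grid-certificate locations are assumptions for illustration, not the actual Beijing configuration.

```python
#!/usr/bin/env python
# Minimal sketch of WebDAV/HTTP access to a storage element with grid
# credentials. The endpoint URL and certificate paths are hypothetical.
import requests

ENDPOINT = "https://beijing-se.example.cn:8443/dpm/example.cn/home/atlas"
CERT = ("/tmp/x509up_u1000", "/tmp/x509up_u1000")   # (cert, key), e.g. a VOMS proxy
CA_PATH = "/etc/grid-security/certificates"          # trusted CA bundle or directory

def upload(local_path, remote_name):
    with open(local_path, "rb") as payload:
        r = requests.put(f"{ENDPOINT}/{remote_name}", data=payload,
                         cert=CERT, verify=CA_PATH)
    r.raise_for_status()

def download(remote_name, local_path):
    r = requests.get(f"{ENDPOINT}/{remote_name}", cert=CERT,
                     verify=CA_PATH, stream=True)
    r.raise_for_status()
    with open(local_path, "wb") as out:
        for chunk in r.iter_content(chunk_size=1 << 20):
            out.write(chunk)

if __name__ == "__main__":
    upload("hits.root", "user/test/hits.root")
    download("user/test/hits.root", "hits_copy.root")
```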

THANK YOU
Gang Chen, CC/IHEP