HEP Grid Computing in China, Gang Chen, 06-16-2006, Workshop on Future PRC-U.S. Cooperation in High Energy Physics.


Gang Chen/IHEP-CC

Agenda
Computing Requirements
Grid Deployment
Applications
Networking
Prospects

BEPCII/BESIII
BEPC: Beijing Electron Positron Collider
Started in 1989, beam energy 2~5 GeV/c
Being upgraded to a dual-ring collider with luminosity (3~10)×10^32 cm^-2 s^-1
27 universities and institutes from Korea, US, UK, Japan, and China.

BESIII Computing Requirements
CPU: ~2000 P-IV/3GHz CPUs for data production/reconstruction and physics analysis
Storage: ~5 PB in 5 years
Network: for international/domestic communications
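As a quick sanity check on these figures, a back-of-the-envelope sketch of the average data rates they imply (the 5 PB over 5 years for BESIII and the 200 TB/year for YBJ-ARGO are from the slides; the uniform-arrival assumption is ours):

```python
# Back-of-the-envelope average ingest rates implied by the requirements above.

SECONDS_PER_YEAR = 365.25 * 86400

def avg_rate_mb_s(total_bytes, years):
    """Average ingest rate in MB/s, assuming data arrives uniformly."""
    return total_bytes / (years * SECONDS_PER_YEAR) / 1e6

bes_rate = avg_rate_mb_s(5e15, 5)    # BESIII: ~5 PB over 5 years
ybj_rate = avg_rate_mb_s(200e12, 1)  # YBJ-ARGO: ~200 TB per year

print(f"BESIII average ingest: {bes_rate:.1f} MB/s")    # ~31.7 MB/s
print(f"YBJ-ARGO average ingest: {ybj_rate:.1f} MB/s")  # ~6.3 MB/s
```

Peak rates during data taking would of course be higher, but even the averages show why dedicated network capacity matters.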

YBJ-ARGO/ASγ
International cosmic ray observatories in Tibet.
200 TB of raw data per year.
Data transferred to IHEP and processed with 400 CPUs.
Reconstructed data accessible by collaborators.

LHC
China is involved in all four LHC experiments, IHEP in CMS and ATLAS.

More Collaborations
Members of international collaborations: huge computing demands.

Grid Vision
The Grid: networked data processing centres, with "middleware" software as the "glue" of resources.
Scientific instruments and experiments (and simulations) provide huge amounts of data.
Researchers perform their activities regardless of geographical location, interact with colleagues, and share and access data.

HEP Grid in China
Coordinated by IHEP
Based on EGEE and related middleware
Building a Tier-2 centre at IHEP for the LHC; the WLCG MoU has been signed
Cooperation among institutes and universities

LCG Deployment in China
[Deployment diagram: IHEP hosts the central services (UI, Torque CE, dCache SE, BDII, MyProxy, RB, MySQL LFC, MON, worker-node farm); SDU (ATLAS VO) and PKU (CMS VO) run site services (UI, Torque CE, classic SE, worker-node farms); sites are interconnected via CSTNET/CERNET and CNIC.]

LCG Beijing Site Operation
UI (User Interface): gLite
RB (Resource Broker): LCG
MyProxy: gLite
SE (SRM/dCache): gLite
CE (OpenPBS -> Torque): LCG
MON (R-GMA): gLite
BDII: gLite
WNs: gLite
LFC: LHC File Catalogue
CA: IHEP CA
VOMS: for BES/YBJ-ARGO VOs
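These services form a complete gLite/LCG job path: a user on the UI describes a job in JDL, the RB matches it to a CE, and output returns through the sandbox. A minimal JDL sketch of such a job (the executable, file names, and VO shown here are illustrative, not from the slides):

```
# Illustrative gLite JDL (hypothetical executable and file names).
Executable          = "run_bes_sim.sh";
Arguments           = "run0001.conf";
StdOutput           = "sim.out";
StdError            = "sim.err";
InputSandbox        = {"run_bes_sim.sh", "run0001.conf"};
OutputSandbox       = {"sim.out", "sim.err"};
VirtualOrganisation = "bes";
```

Submission from the UI would then go through the RB with the usual edg-job-submit-style tooling.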

CA and BES VOMS Operation
IHEP CA is the unique CA in China accredited by EUGridPMA and APGridPMA, covering HEP and other grid applications.

Grid Applications in HEP
LHC
BES-III: Beijing Electron Positron Collider / Beijing Electron Spectrometer: tau/charm physics
ARGO-YBJ: China-Italy cosmic ray observatories in Tibet
D0
…

HEP Grid with BES Support

CMS PhEDEx Data Transfer
Set up the PhEDEx system for dataset transfer.
Successful tests of dataset subscription, data transfer, error recovery, etc.

YBJ-ARGO Computing on LCG
MC, Medea+ and reconstruction software in the /opt/exp_soft/bes area.
Jobs submitted via the BES VO to the Beijing site.
Results show everything is working.

Data Processing Model
Based on sharing of resources and the use of synchronized data catalogues
A common ARGO Virtual Organization
A top-level ARGO-VO BDII and RB
Output files registered in the "local" data catalogue
A synchronization procedure for the data catalogues, so that each site holds a copy of the files
All information kept in the experiment database
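The synchronization step can be pictured as a two-way diff of the site catalogues. A toy Python sketch of the idea (file and site names are invented; real ARGO sites would use grid data-management tools, not dicts):

```python
# Toy model of two-site catalogue synchronization: each site keeps a
# local catalogue mapping logical file names to replica locations, and
# a sync pass replicates whatever is missing at the other site.

def sync_catalogues(cat_a, cat_b):
    """Make both catalogues list every file, replicating missing entries."""
    for lfn, replicas in cat_a.items():
        if lfn not in cat_b:
            cat_b[lfn] = replicas + ["replica-at-B"]  # copy A -> B
    for lfn, replicas in cat_b.items():
        if lfn not in cat_a:
            cat_a[lfn] = replicas + ["replica-at-A"]  # copy B -> A

# Hypothetical catalogue contents at two sites:
ihep = {"run0001.raw": ["se.ihep"]}
roma = {"run0002.rec": ["se.roma3"]}
sync_catalogues(ihep, roma)
# After the pass, both catalogues list both files.
```

This is only the bookkeeping half of the model; the actual file movement and registration would go through the grid storage elements.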

ARGO Achievements
Use of the GILDA infrastructure to test the computing model: GILDA elements installed at IHEP and INFN Roma Tre
Most of the scripts implementing the data transfer model already designed and implemented
Extensive tests coming soon
Many requirements specified

Grid VPN between IHEP and USTC
Motivation:
International bandwidth bottleneck for Chinese universities.
Academic users in universities have to use a commercial service shared with millions of public users.
The USTC d0ustc MC-production remote farm connects to the FNAL SAMGrid at an unstable 50 kBps, the best China Telecom can provide.
The Grid won't function this way!

Grid VPN between IHEP and USTC
Solution:
A Virtual Private Network (VPN) channel between CSTNET (USTC) and CSTNET (IHEP); d0ustc gains a stable kBps network connection.
Problem:
The Grid Condor/Globus stack does not recognize a VPN-IP farm, and there is no standard solution from the Wisconsin group yet.
IHEP, USTC and FNAL computing experts are working on it now. If we succeed, we can distribute university LCG tiers.
kBps-scale rates are far from acceptable for the Grid; a point-to-point link between IHEP and FNAL etc. is highly desired.

Secure VPN

SAMGrid Summary
d0ustc is on via the USTC-IHEP VPN, much faster and more stable; still some problems with GridFTP.

EUChinaGRID
An EU-funded project on the interconnection and interoperability of grids between Europe and China:
To foster the creation of an intercontinental e-Science community
- Training people
- Supporting existing and new applications: LHC, ARGO, biology…
To support an interoperable infrastructure for grid operations between Europe and China

Partners
1. Istituto Nazionale di Fisica Nucleare (IT) (coordinator)
2. European Organisation for Nuclear Research CERN (CH)
3. Dipartimento di Biologia, Università di Roma Tre (IT)
4. Consortium GARR (IT)
5. Greek Research & Technology Network (GR)
6. Jagiellonian University, Medical College, Cracow (PL)
7. School of Computer Science and Engineering, Beihang University, Beijing (CN)
8. Computer Network Information Center, Chinese Academy of Sciences, Beijing (CN)
9. Institute of High Energy Physics, Beijing (CN)
10. Peking University, Beijing (CN)

GILDA Sites


Job Statistics

EUChinaGrid Workshop: June, 2006

EUChinaGrid Tutorial: June, 2006
Organized by IHEP
Tutors from IHEP and INFN

Networking
[Network diagram: links of 10 Gbps and 155 Mbps]

Networking
[Network diagram: 1 Gbps link]

GLORIAD
GLORIAD is an initiative of China, the USA and Russia, now extended with Korea, the Netherlands, and Canada.
GLORIAD provides expanded capacity for science and education collaboration (10 Gbps).
GLORIAD is open to more members, especially in the Asia-Pacific region!
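To see what 10 Gbps means for the data volumes discussed earlier, a small arithmetic sketch (assuming the link's nominal rate is fully available, which real transfers never achieve):

```python
# Time to move one year of YBJ-ARGO raw data (200 TB) over different links.

def transfer_days(volume_bytes, link_bps):
    """Days needed at the link's nominal rate (no protocol overhead assumed)."""
    return volume_bytes * 8 / link_bps / 86400

VOLUME = 200e12  # 200 TB of raw data per year

print(f"155 Mbps: {transfer_days(VOLUME, 155e6):.0f} days")  # ~119 days
print(f"10 Gbps:  {transfer_days(VOLUME, 10e9):.1f} days")   # ~1.9 days
```

Even as an idealized upper bound, the contrast shows why the 10 Gbps class of links is what makes intercontinental data grids practical.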

Users of GLORIAD: IHEP

Network Performance Test
[Throughput plots: IHEP-Taipei and IHEP-CERN]

In the Near Future
Expand computing power and storage capacity.
Participate in LCG SC4 soon.
Promote HEP applications on the Grid platform.
Establish more collaborations.
Main issues:
- Interoperability with other grid projects: CNGrid, ChinaGrid…
- Network bandwidth upgrading.

Resource Plan at IHEP
Planned resources (table): CPU (kSI2K), Disk (TB), Tape (TB), Tape throughput (MB/s), WAN (Mb/s).
~50% for BES, ~50% for LHC and others.

Thank You!