Progress of Work on SE and DMS
YAN Tian, April 16, 2014


Outline
- Deployment of StoRM SE
- Data transfer between SEs
- Squid + CVMFS new version test
- Status of new sites (INFN, BUAA, SDU)
- Status of DMS work plan

SE Installation & Test
- The USTC SE was installed as a dCache instance for emergency data transfers (Feb. 24).
- A stable test instance of a StoRM SE was set up at IHEP (Mar. 17):
  - Pressure test: with about 20,000 test files (137 GiB), the SE server worked fine and no longer crashed.
  - The https/webDAV component was added and tested; users can browse the file list in a web browser and download data with wget using a grid certificate.
- The WHU SE (36 TB) was installed with StoRM (Apr. 02). Pressure tests:
  - 28.9 TB of data transferred to the USTC SE, then downloaded with wget
  - 4.4 TB of data transferred to the WHU SE
  - 3.9 TB of data downloaded from the USTC SE to the WHU SE

Current Status of SEs

| # | SE Name    | Capacity | Type         | Speed to IHEP | webDAV    | Launch Date |
|---|------------|----------|--------------|---------------|-----------|-------------|
| 1 | IHEPD-USER | 126 TB   | dCache       | in IHEP       |           | 2012        |
| 2 | JINR-USER  | 7.3 TB   | dCache       | ~25 MB/s      | testing   | 2012        |
| 3 | USTC-USER  | 24 TB    | dCache       | ~37 MB/s      | anonymous |             |
| 4 | WHU-USER   | 39 TB    | StoRM        | ~21 MB/s      | https     |             |
| 5 | YANT-USER  | 167 GB   | StoRM (test) | in IHEP       |           | 2014.3      |

24.5 TB XYZ DST Data Transferred from IHEP SE to USTC SE
- Motivation: USTC physicists' request; testing the SE; testing the transfer system.
- Time: Mar. 9 to Mar. 25, 2014.
- Average speed: 37.0 MB/s (peak 60 MB/s at night), about 3.2 TB per day.
- USTC physicists download data from their SE by wget at about 75 MB/s.

| Batch | Data Type    | Files | Data Size | Average Speed | Success Rate |
|-------|--------------|-------|-----------|---------------|--------------|
| 1     | xyz_4230     | 12,…  | … TB      | 36.7 MB/s     | 99.31%       |
| 2     | xyz_4260     | 10,…  | … TB      | 38.7 MB/s     | 99.19%       |
| 3     | xyz_4360     | 5,…   | … TB      | 37.4 MB/s     | 98.80%       |
| 4     | xyz_4260scan | 5,…   | … TB      | 36.9 MB/s     | 99.65%       |
| 5     | xyz_4360scan | …     | … TB      | 30.4 MB/s     | 98.92%       |
| Total |              | 35,…  | 24.5 TB   | 37.0 MB/s     | 99.23%       |
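As a quick sanity check on the figures above, the sustained rate can be converted to a daily volume; this sketch assumes decimal units (1 TB = 10^6 MB) and uses the table's 37.0 MB/s average:

```python
# Convert a sustained transfer rate (MB/s) into a daily volume (TB/day),
# assuming decimal units: 1 TB = 10^6 MB.
def daily_volume_tb(rate_mb_s: float) -> float:
    seconds_per_day = 86400
    return rate_mb_s * seconds_per_day / 1e6

# 37.0 MB/s sustained is roughly 3.2 TB per day, matching the slide.
print(round(daily_volume_tb(37.0), 1))  # → 3.2
```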

4.4 TB randomtrg Data Transferred from IHEP SE to USTC, JINR, WHU SEs
- Motivation: simulation + reconstruction test.
- Target data: round 06 random trigger data, 12,468 files, 4.04 TB.
- Time: Mar. 13 to Apr. 5, 2014, intermittently.
- The lower speed at USTC is because half of the files are small .idx files.
- The JINR and WHU speeds are much higher than in previous tests (download from JINR at 220 KB/s; WHU iperf test at 2 MB/s).

| Batch | Destination SE | Files  | Data Size | Average Speed | Success Rate |
|-------|----------------|--------|-----------|---------------|--------------|
| 1     | USTC-USER      | 12,468 | … TB      | 23.4 MB/s     | 99.37%       |
| 2     | JINR-USER      | 12,468 | … TB      | 24.9 MB/s     | 99.76%       |
| 3     | WHU-USER       | 12,468 | … TB      | 21.4 MB/s     | 99.76%       |
| Total |                | 37,…   | … TB      |               |              |
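The same arithmetic gives a lower bound on the wall-clock time per batch (a sketch assuming decimal TB; the real Mar. 13 to Apr. 5 window was much longer because the transfers ran intermittently):

```python
# Lower bound on wall-clock time for moving a dataset at a sustained rate,
# assuming decimal units: 1 TB = 10^6 MB.
def transfer_days(size_tb: float, rate_mb_s: float) -> float:
    seconds = size_tb * 1e6 / rate_mb_s
    return seconds / 86400

# 4.04 TB at ~23.4 MB/s needs about 2 days of continuous transfer.
print(round(transfer_days(4.04, 23.4), 1))  # → 2.0
```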


SE Transfer Speed: IHEP to USTC

SE Transfer Speed: IHEP to JINR/WHU

SE Transfer Quality: IHEP to JINR/WHU

Problems in Data Transfer
- permission control (randomtrg r06 had to be changed to mode 777 before transfer)
- preserving timestamp info (requested by USTC physicists)
- stalled transfers
- dataset compatibility
- multi-request transfers
- UI refinement

webDAV HTTP Interface of USTC SE

webDAV HTTP Interface of WHU SE

HTTP Access to SE
- Motivation: sites without an SE can use an HTTP client to access dst or randomtrg data.
- The webDAV module provides HTTP access to the SE.
- wget download test:
  - 3.87 TB of data from the USTC webDAV interface to WHU
  - typical speed: 40~60 MB/s
  - bandwidth limitation was experienced, dropping even to 0 for some time; wget simply waited and then continued
  - final average speed 12.9 MB/s; total time 3d 7h 23m 27s
- Remaining problem: an md5sum check is still needed.

New Versions of squid and CVMFS
- squid was upgraded to a new version.
- CVMFS was upgraded to a new version.
- Deployment:
  - BUAA (SL5.8): squid + CVMFS
  - WHU (SL6.4): squid + CVMFS
- The install guide in Twiki has been updated.
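For reference, a typical client-side pairing of the two looks like the fragment below. The repository name, proxy host, cache sizes, and network range are placeholders, not the values actually deployed at BUAA or WHU:

```
# /etc/cvmfs/default.local (illustrative values)
CVMFS_REPOSITORIES=sw.example.org
CVMFS_HTTP_PROXY="http://squid.example.cn:3128"
CVMFS_QUOTA_LIMIT=20000        # local cache limit in MB

# squid.conf (illustrative): let worker nodes fetch CVMFS objects
acl wns src 192.168.0.0/16
http_access allow wns
cache_mem 128 MB
maximum_object_size 1024 MB
```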

Status of Sites

| #     | Site Name      | CPU Cores | LRMS         | SE             | Status                    |
|-------|----------------|-----------|--------------|----------------|---------------------------|
| 1     | IHEP-PBS.cn    | 96        | PBS          | dCache 126 TB  | Running                   |
| 2     | GUCAS.cn       | 152       | PBS          | shares IHEP SE | Running                   |
| 3     | USTC.cn        | ~768      | PBS + condor | dCache 24 TB   | Running                   |
| 4     | PKU.cn         | 88        | PBS          |                | Running                   |
| 5     | JINR.ru        | 40~128    | gLite        | dCache 7.3 TB  | Running                   |
| 6     | UMN.us         | 768       | PBS/condor   |                | Running                   |
| 7     | WHU.cn         | ~300      | PBS          | StoRM 39 TB    | WNs' CVMFS being deployed |
| 8     | INFN-Torino.it | ~200      | gLite        |                | Started running on Apr. 2 |
| 9     | SDU.cn         | ~102      | PBS          |                | Simple sh job is working  |
| 10    | BUAA.cn        | ~256      | commercial?  |                | Squid+CVMFS installed     |
| Total |                | 2032~     |              | … TB           |                           |

Status of DMS Work Plan

| Sub-task                                 | Status                          |
|------------------------------------------|---------------------------------|
| Clean up DFC and SE                      | Done by YAN Tian                |
| Upgrade gangaBOSS                        | Done by ZHAO Xianghu            |
| Dataset and dst upload toolkit and API   | Done by ZHANG Gang              |
| SE transfer toolkit test and refinement  | Done by YAN Tian and LIN Tao    |
| WMS-log and randomtrg upload tool        | Done by ZHAO Xianghu            |
| New version DFC test                     | Being handled by ZHAO Xianghu   |
| Upgrade DIRAC to v6r10p20 and test       | To do                           |
| Tag, integrated upgrade & test, release  | To do                           |

Thanks
Thank you for your attention! Q & A