ASCC Site Report Eric Yen & Simon C. Lin Academia Sinica 20 July 2005

Plan for Taiwan Tier-1 Network (diagram): Backup Path to T0; Primary Path to T0 (10GE will be installed in 2006)

TW LCG Domestic Network (current): every link is already 10GE. Type-2 links are direct-connect; Type-1 links pass through a third-party facility or third-party network.

AP Regional LCG Network (proposed): solid lines between routers (circles), switches (boxes) and networks already exist; solid lines between T2s and routers/switches/networks already exist or are proposed; dashed lines are currently planned by ASnet for later installation. Type-2 links are direct-connect; Type-1 links pass through a third-party facility or third-party network.

SC Test at ASCC

Tier-1 ASCC Taipei
- User Interface (for CMS PhEDEx)
- File Transfer Service
- CASTOR software installation (lcg00115, lcg00116, lcg00117, lcg00118): gridftp ready, rfio ready, srm installed (see the sketch below)
- Hardware installation: 4 TB storage resource available; no tape library available at this moment
- Namespace: /castor/grid.sinica.edu.tw/sc3/
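A minimal sketch of how these services might be exercised from the UI, assuming the standard CASTOR client tools (nsls, rfcp) and the Globus client (globus-url-copy) are on the PATH and a valid grid proxy exists; the hostnames and the CASTOR namespace come from the slides, while the test file path is a made-up placeholder.

```python
#!/usr/bin/env python
"""Sketch: exercise the ASCC SC3 storage services from the UI.

Assumes the CASTOR client tools (nsls, rfcp) and the Globus client
(globus-url-copy) are installed and a grid proxy is available.
Hostnames and the CASTOR namespace come from the slides above; the
test file path is a placeholder.
"""
import subprocess

CASTOR_NS = "/castor/grid.sinica.edu.tw/sc3/"
DISK_SERVER = "lcg00115.grid.sinica.edu.tw"

def run(cmd):
    """Run one command and report its exit code."""
    print("$ " + " ".join(cmd))
    rc = subprocess.call(cmd)
    print("  -> exit %d" % rc)
    return rc

# 1. List the SC3 area in the CASTOR name server.
run(["nsls", "-l", CASTOR_NS])

# 2. Copy a local test file into CASTOR over RFIO.
run(["rfcp", "/tmp/sc3-testfile", CASTOR_NS + "sc3-testfile"])

# 3. Read it back over GridFTP from one of the disk servers.
run(["globus-url-copy",
     "gsiftp://" + DISK_SERVER + CASTOR_NS + "sc3-testfile",
     "file:///tmp/sc3-testfile.back"])
```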

SC3 CASTOR setup (diagram):
- Disk servers lcg00115, lcg00116, lcg00117 and lcg00118, each running rfiod, srm and gridftp; name server and stager
- Fibre Channel attached disk arrays mounted at lcg00115:/flatfile/SE00/ and lcg00116:/flatfile/SE00/
- UI, FTS (fts.grid.sinica.edu.tw), LFC (lfc.grid.sinica.edu.tw), lcg00119.grid.sinica.edu.tw
- Note: the stager will manage the following disk pool (see the capacity sketch below):
  lcg00115:/flatfile/SE00 (2 TB disk array)
  lcg00116:/flatfile/SE00 (2 TB disk array)
  lcg00117:/flatfile/SE00 (n x 250 GB local disk)
  lcg00118:/flatfile/SE00 (n x 250 GB local disk)
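The same disk pool expressed as a small data structure so the capacities add up explicitly; the slide lists the local-disk nodes as "n x 250 GB" without giving n, so the value used below is only a placeholder assumption.

```python
# Disk pool managed by the CASTOR stager, from the diagram above.
# The slide gives "n x 250 GB" for the local-disk nodes without a
# value for n, so n = 2 per node is a placeholder assumption here.
POOL_GB = {
    "lcg00115:/flatfile/SE00": 2000,      # 2 TB disk array
    "lcg00116:/flatfile/SE00": 2000,      # 2 TB disk array
    "lcg00117:/flatfile/SE00": 2 * 250,   # n x 250 GB local disk (assumed n = 2)
    "lcg00118:/flatfile/SE00": 2 * 250,   # n x 250 GB local disk (assumed n = 2)
}

total_tb = sum(POOL_GB.values()) / 1000.0
print("stager-managed pool: %.1f TB across %d filesystems" % (total_tb, len(POOL_GB)))
```

The two Fibre Channel arrays alone account for the "4 TB storage resource available" quoted on the previous slide; the local disks add somewhat more, depending on the actual n.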

Data Transfer Test
- Monthly traffic graph
- Iperf test: reached 1600 Mbit/s (a rough conversion is sketched below)
- FTS / CMS data transfer: functionality test
- GridFTP works, but there were some problems with SRM (solved early this week)
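A back-of-envelope conversion of the iperf figure into sustained-transfer terms; only the 1600 Mbit/s number comes from the slide, the rest is simple arithmetic.

```python
# What a 1600 Mbit/s path would sustain if it were fully used all day.
iperf_mbit_per_s = 1600                      # iperf result from the slide
mb_per_s = iperf_mbit_per_s / 8.0            # ~200 MB/s
tb_per_day = mb_per_s * 1e6 * 86400 / 1e12   # ~17 TB/day

print("%.0f MB/s  ->  about %.1f TB/day" % (mb_per_s, tb_per_day))
```

At that rate the few TB of SC3 disk currently behind SRM would fill within hours, which is consistent with the storage concerns listed in the summary.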

PhEDEx (successful) transfer rate (charts): castorgridsc → all sites; castorgrid → all sites

PhEDEx (successful) transfer rate (chart)

Total amount of data successfully transferred

PhEDEx transfer quality (chart)

Summary
- Functionality testing is good; the baseline parameters still need to be established.
- Not enough storage space with SRM: only one box is used for the SC between CERN and Taiwan.
- How much storage space is needed for SC? We will have 6 TB this week and 10 TB by September (see the rough sizing sketch below).
- Should the production resources be integrated for SC? The tape library will be used; resource contention between the experiments and the general SC is a concern.
- Improve the utilization of the bandwidth.
- Procurement of more storage is underway.
- Preparation for the ATLAS and CMS SCs, and for the other AP T2 sites.
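A rough illustration of the sizing question above; the sustained transfer rate used here is an assumed example value, not a number from the slides.

```python
# How long the planned SC disk could buffer transfers at a sustained rate.
# The 100 MB/s rate is an assumption for illustration only.
SUSTAINED_MB_PER_S = 100.0

for disk_tb in (6, 10):                     # 6 TB this week, 10 TB by September
    seconds = disk_tb * 1e12 / (SUSTAINED_MB_PER_S * 1e6)
    print("%2d TB buffers about %4.1f hours at %.0f MB/s"
          % (disk_tb, seconds / 3600.0, SUSTAINED_MB_PER_S))
```

In other words, without the tape library the disk alone covers well under two days of sustained transfers at such a rate, which supports the point about integrating the production resources for the SC.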