LCG-2 Plan in Taiwan
Simon C. Lin and Eric Yen
Academia Sinica, Taipei, Taiwan
13 January 2004

Outline
– Network Infrastructure in Taiwan
– Grid Applications in Taiwan
– LCG-1 Status
– Schedule for LCG-2

Taiwan International Network
[Diagram of Taiwan's international research-network links: connections to Japan, US-West (Palo Alto and Seattle), US-East, and Europe at capacities from 155 Mbps to 2.4 Gbps, with upgrades to 622 Mbps and 1.24 Gbps scheduled between January and April 2004; a 155 Mbps link to Singapore (SingAREN); and a Hong Kong (HARNet) link, possibly 155 Mbps from February 2004.]

Topology of Taiwan Domestic Network
– Backbone: 80 G; GigaPoPs: 145 G; dark fiber: 6
[Diagram of the domestic backbone: GigaPoPs in Taipei, Hsinchu, Taichung, and Tainan interconnected at 10G and 20G, with member institutions (Academia Sinica, NTU, NCU, NCTU, NTHU, NCNU, CCU, NCKU, NSYSU, NDHU, and others) attached via 5 GE, 10 GE, or dark fiber.]

Grid Applications in Taiwan
– High Energy Physics: ATLAS, CMS, CDF
– Bioinformatics: BioPortal, HealthGrid
– Digital Archive Grid
– Biodiversity: GBIF member
– Tele-Science
– eLearning
– Access Grid
– Parallel Computing Environment

LCG System Deployment
LCG-1 Status:
– LCG-0 deployed on March 19, 2003, just after RAL and CNAF
– ASGCCA approved: June 12, 2003
– 2~6 staff stationed at CERN from July 2003, joining the GTA, DB, Application, and C&T teams for LCG development
– LCG-1 testbed ready: July 30, 2003
  – Sep. 2: LCG1-1_0_0 ready
  – Oct. 28: LCG1-1_1_0 ready
  – Nov. 7: LCG1-1_1_1.I ready
  – Dec.: LCG1-1_1_3 ready

Current Status
– Acting as the East2 Region GIIS
– 1 Top MDS, 1 RB, 1 BDII, 1 UI, 1 Proxy, 1 LCFGng, 1 SE, 1 CE and 8 WNs
– Tier-2 sites will be ready in 1Q2004 (NTU, NCU & ASIP)
– Keep system availability at 99.8%+
– Manpower in ASCC for Grid (FTE): LCG / Grid / Total
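
The Top MDS, GIIS, and BDII above are LDAP-based information services, so the resources a site publishes can be inspected with an ordinary LDAP query. The sketch below is only an illustration and is not taken from the slides: the host name is hypothetical, and the port (2170) and base DN ("o=grid") follow the usual BDII conventions rather than anything stated here.

    # A minimal sketch (not from the original slides) of querying an LDAP-based
    # grid information index such as the Top MDS / BDII mentioned above.
    # Requires the OpenLDAP "ldapsearch" client on the path.
    # BDII_HOST is hypothetical; port 2170 and base DN "o=grid" are the usual
    # BDII conventions and should be checked against the actual site setup.
    import subprocess

    BDII_HOST = "bdii.example.org"   # hypothetical host name
    BDII_PORT = 2170                 # conventional BDII port (assumption)
    BASE_DN = "o=grid"               # conventional BDII base DN (assumption)

    def list_compute_elements(host=BDII_HOST, port=BDII_PORT):
        """Return the GlueCEUniqueID values published by the information index."""
        cmd = [
            "ldapsearch", "-x", "-LLL",
            "-H", "ldap://%s:%d" % (host, port),
            "-b", BASE_DN,
            "(objectClass=GlueCE)",   # GLUE schema class describing a compute element
            "GlueCEUniqueID",
        ]
        out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
        return [line.split(":", 1)[1].strip()
                for line in out.splitlines()
                if line.startswith("GlueCEUniqueID:")]

    if __name__ == "__main__":
        for ce in list_compute_elements():
            print(ce)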

New Platform for LCG-2
IBM BladeCenter
– Processor (around 1 KSI2K/CPU)
  – 2x SMP processors; BIOS type: IBM BIOS
  – Processor (CPU): Intel Xeon 2.80 GHz
  – Front-side bus (FSB): 533 MHz
  – Internal L2 cache size: 512 KB
– System memory
  – Memory (RAM): 2 GB DDR SDRAM (Chipkill)
  – RAM slots: 4 DIMM; memory speed: 266 MHz
– Storage
  – 2x 40 GB IDE hard drives in each blade
  – Connected to external storage through a SAN switch (2 Gb)
– Networking: integrated Gigabit Ethernet
– System architecture
  – 14 blades per chassis; 5 chassis (7U) per 19" rack
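
Taking the slide's figure of roughly 1 kSI2k per CPU at face value, a quick back-of-the-envelope estimate of the compute capacity this platform implies (my own arithmetic, not from the slides):

    # Rough capacity estimate based only on the figures quoted above;
    # ~1 kSI2k per CPU is the slide's own approximation.
    KSI2K_PER_CPU = 1.0        # "around 1 KSI2K/cpu"
    CPUS_PER_BLADE = 2         # dual-socket Xeon blades
    BLADES_PER_CHASSIS = 14
    CHASSIS_PER_RACK = 5
    BLADES_PLANNED = 52        # due by 31 Jan. 2004 (see next slide)

    per_rack = KSI2K_PER_CPU * CPUS_PER_BLADE * BLADES_PER_CHASSIS * CHASSIS_PER_RACK
    planned = KSI2K_PER_CPU * CPUS_PER_BLADE * BLADES_PLANNED

    print("One full rack (70 blades): ~%.0f kSI2k" % per_rack)   # ~140 kSI2k
    print("52 blades planned        : ~%.0f kSI2k" % planned)    # ~104 kSI2k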

Schedule for LCG-2
– Will try to meet the 12, 19, 26 Jan. schedule
  – But the Chinese New Year is on the way!
– 52 blades will be ready by 31 Jan. 2004
  – Installation and testing under LCG-1 is in progress
  – To fulfill the commitment of 60 nodes (16 nodes originally)
– Mass Storage System
  – Now using SAN and IBM TSM to manage 30 TB+ of disk arrays and a 130 TB+ tape library
  – Will try CASTOR/Enstore from Feb.
  – A dedicated 5 TB+ disk array will be online in Mar.
– Willing to be a GOC in Asia
– Plan to collaborate with Karlsruhe on GGUS