
APAN-38 LHCONE Meeting
Brij Kishor Jashal, Tata Institute of Fundamental Research, Mumbai, India

Content:
- India's e-infrastructure: an overview
- The regional component of the Worldwide LHC Computing Grid (WLCG): India-CMS and India-ALICE Tier-2 sites
- Infrastructure
- LHCONE - India: current status
- Network at T2_IN_TIFR
- Route summarization
  1. Direct P2P from TIFR to LHCONE
  2. TIFR-NKN-TEIN4-GEANT-CERN
- Future developments

India's e-infrastructure
Two main collaborative computing grids exist in India:
1. GARUDA, the Indian National Grid Initiative
2. The regional component of the Worldwide LHC Computing Grid (WLCG)

GARUDA, the National Grid Initiative of India, is a collaboration of scientific and technological researchers on a nationwide grid comprising computational nodes, storage devices and scientific instruments. The Department of Information Technology (DIT) has funded the Centre for Development of Advanced Computing (C-DAC) to deploy the nationwide computational grid GARUDA, which today connects 45 institutions across 17 cities in its Proof of Concept (PoC) phase, with the aim of bringing grid-networked computing to research labs and industry. In pursuit of scientific and technological excellence, the GARUDA PoC has also brought together a critical mass of well-established researchers.

The regional component of the WLCG consists of two Tier-2 sites:
1. IN-INDIACMS-TIFR (T2_IN_TIFR)
2. IN-DAE-KOLKATA-TIER2 (IN-DAE-VECC-02)

National Knowledge Network (NKN)
The NKN is a state-of-the-art multi-gigabit pan-India network providing a unified high-speed network backbone for all knowledge and research institutions in the country:
- Connectivity to institutions across the country over a high-bandwidth / low-latency network
- The network architecture and governance structure also allows users the option to connect at the distribution layer
- NKN enables the creation of Virtual Private Networks (VPNs) for special interest groups
- NKN provides international connectivity to its users for global collaborative research via APAN; presently NKN is connected to the Trans-Eurasia Information Network (TEIN4), and similar connectivity to the Internet2 network is in the pipeline

Both of the above initiatives rely strongly on the development of national and international connectivity:
- National Knowledge Network
- NKN-TEIN connection
- Direct P2P connection with CERN: LHCONE, GEANT, FNAL

The network consists of an ultra-high-speed core, starting with multiple 2.5/10 Gbps links and progressively moving towards 40/100 Gbps, between 7 fully meshed Supercore locations across India. The network is further spread out through 26 Core locations, connected to the Supercore locations by multiple 2.5/10 Gbps links in a partial mesh. The distribution layer connects the rest of the country to the core of the network using multiple links at speeds of 2.5/10 Gbps, and the participating institutions at the edge connect seamlessly to NKN at gigabit speed.

Two 2.5 Gbps international links are in place: one to Europe and the other to Singapore, through TEIN4.

T2_IN_TIFR has been part of the LHCONE pilot project.
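For reference, "fully meshed" means every pair of Supercore locations has a direct link, so the number of core links grows as N(N-1)/2. A minimal sketch of that count in Python for the 7 Supercore sites quoted above:

# Number of point-to-point links in a full mesh of N sites: every pair is connected.
def full_mesh_links(n_sites: int) -> int:
    return n_sites * (n_sites - 1) // 2

print(full_mesh_links(7))   # 21 links among the 7 fully meshed Supercore locations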

EU-IndiaGrid: Sustainable e-Infrastructures Across Europe and India
FP7 INFSO Research Infrastructures Programme of the European Commission

EU-IndiaGrid played a very important role in enhancing, increasing and consolidating Euro-India cooperation on e-Infrastructures through collaboration with key policy players from both the Government of India and the European Commission.

EU-IndiaGrid2 focused on four application areas strategic for Euro-India research cooperation:
- Climate change
- High Energy Physics
- Biology
- Material Science

Over the course of the project, further areas of interest were identified:
- Seismic hazard assessment, which produced interesting results
- Neuroscience applications

T2_IN_TIFR Resources

Computing
- Existing computational nodes: 48, with a total of 384 cores and 864 GB of memory
- New computational nodes: 40, with a total of 640 cores and 960 GB of memory
- HEP-SPEC06: the total number of physical cores is 1,024; the site benchmark figure (HEP-SPEC06) is the average of benchmark runs executed on each machine

Storage
- The existing storage capacity of 23 DPM disk nodes aggregates to 570 TB
- Five new DPM disk nodes, each with 80 TB, have been installed, adding 400 TB to the total storage capacity
- Total capacity: 970 TB

WLCG site availability and reliability report, India-CMS TIFR, 2014 (Jan-14 to Jul-14)
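The combined capacity after the expansion follows directly from the per-batch figures above. A minimal back-of-the-envelope sketch in Python (all inputs are the numbers quoted on the slide; variable names are illustrative):

# Back-of-the-envelope totals for the T2_IN_TIFR resources quoted above.
existing_nodes, existing_cores, existing_mem_gb = 48, 384, 864
new_nodes, new_cores, new_mem_gb = 40, 640, 960

total_cores = existing_cores + new_cores        # 1024 cores
total_mem_gb = existing_mem_gb + new_mem_gb     # 1824 GB

existing_storage_tb = 570                       # 23 DPM disk nodes
new_storage_tb = 5 * 80                         # 5 new DPM nodes, 80 TB each
total_storage_tb = existing_storage_tb + new_storage_tb

print(f"cores:   {total_cores}")
print(f"memory:  {total_mem_gb} GB")
print(f"storage: {total_storage_tb} TB")        # 970 TB, matching the slide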

IN-DAE-VECC-02 Resources

Month          | Pledged CPU | Pledged (HS06-hrs) | Used (HS06-hrs) | Used as % of Pledge | Availability | Reliability
July 2014      | 6,000       | 3,124,800          | 5,904,…         |                     | 86%          | 86%
June 2014      | 6,000       | 3,024,000          | 3,890,…         |                     | 99%          | 99%
May 2014       | 6,000       | 3,124,800          | 5,205,432       | 167%                | 100%         |
April 2014     | 6,000       | 3,024,000          | 3,847,468       | 127%                | 92%          |
March 2014     | 6,000       | 3,124,800          | 4,503,244       | 144%                | 98%          |
February 2014  | 6,000       | 2,822,400          | 2,357,172       | 84%                 | 89%          |
January 2014   | 6,000       | 3,124,800          | 775,156         | 25%                 | 96%          |

India is the 6th largest group in the ALICE Collaboration.
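The "Used as % of Pledge" column is simply the ratio of used to pledged HS06-hours. A minimal sketch of that arithmetic in Python, using the May 2014 row as a worked example (variable names are illustrative):

# "Used as % of Pledge" = used HS06-hours / pledged HS06-hours, as in the table above.
pledged_hs06_hrs = 3_124_800   # May 2014 pledge
used_hs06_hrs = 5_205_432      # May 2014 usage

used_pct_of_pledge = 100 * used_hs06_hrs / pledged_hs06_hrs
print(f"Used as % of pledge: {used_pct_of_pledge:.0f}%")   # ~167%, matching the table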

Network at T2_IN_TIFR
T2_IN_TIFR has been included in the LHCONE pilot project.

TIFR - ASGC / VECC to CERN

VECC to CERN: traceroute from VECC (Kolkata) to alien.cern.ch, routed through Mumbai and Europe:

~]$ traceroute alien.cern.ch
 1    vecc-direct.tier2-kol.res.in
 2-4  (intermediate hops)
 5    mb-xe-01-v4.bb.tein3.net              Mumbai
 6    eu-mad-pr-v4.bb.tein3.net             via EU
 7    ae3.mx1.par.fr.geant.net
 8    switch-bckp-gw.mx1.par.fr.geant.net
 9    e513-e-rbrxl-2-te20.cern.ch
      alien.cern.ch

TIFR - ASGC: traceroute towards ASGC (Taipei), routed through Singapore and Hong Kong, ending at rocnagios.grid.sinica.edu.tw:

 1-3  (intermediate hops)
 4    mb-xe-01-v4.bb.tein3.net              Mumbai
 5    sg-so-04-v4.bb.tein3.net              via Singapore
 6    hk-xe-03-v4.bb.tein3.net
 7    asgc-pr-v4.bb.tein3.net
 8    so r1.tpe.asgc.net
 9    (intermediate hop)
10    coresw.tpe.asgc.net
11    rocnagios.grid.sinica.edu.tw

TIFR - LHCONE route: why different routes?

TIFR to voms.cern.ch:

~]# nmap -sn --traceroute voms.cern.ch
Nmap scan report for voms.cern.ch
Host is up (0.17s latency).
TRACEROUTE (using proto 1/icmp)
  tifr.mx1.gen.ch.geant.net
  swiCE2-10GE-1-1.switch.ch
  e513-e-rbrxl-1-te1.cern.ch
  l513-b-rbrmx-3-ml3.cern.ch
  voms3.cern.ch
Nmap done: 1 IP address (1 host up) scanned in 4.54 seconds

TIFR to lxplus.cern.ch:

~]# nmap -sn --traceroute lxplus.cern.ch
Nmap scan report for lxplus.cern.ch
Host is up (0.17s latency).
TRACEROUTE (using proto 1/icmp)
  e513-e-rbrrt-3-ee5.cern.ch
  e513-e-rbrxl-2-ne2.cern.ch
  r513-b-rbrml-2-sc2.cern.ch
  lxplus0062.cern.ch
Nmap done: 1 IP address (1 host up) scanned in 4.51 seconds
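One way to look at the "why different routes?" question is to capture the hop list to each destination and compare them side by side. A minimal sketch in Python that wraps the same traceroute utility used in the captures above (the destination names are from the slide; everything else is illustrative and not the tooling actually used by the site):

import subprocess

def hop_addresses(host):
    """Run traceroute and return the per-hop address field (second column)."""
    out = subprocess.run(["traceroute", "-n", host],
                         capture_output=True, text=True, check=False).stdout
    hops = []
    for line in out.splitlines()[1:]:        # skip the "traceroute to ..." header line
        fields = line.split()
        if len(fields) >= 2:
            hops.append(fields[1])           # address of this hop, or "*" on timeout
    return hops

# Destinations taken from the slide; the same comparison works for any pair of hosts.
for host in ("voms.cern.ch", "lxplus.cern.ch"):
    print(host, "->", " / ".join(hop_addresses(host)))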

Route from TIFR to the US (U. Chicago)
Using Globus Online and PhEDEx transfers, peak data transfer rates of up to 1.2 Gbps have been achieved between TIFR and FNAL / U. Chicago.


The total cumulative data transfer over the last four months is 450 TB.
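For context, 450 TB over four months corresponds to an average sustained rate well below the 1.2 Gbps peak quoted on the earlier slide. A minimal sketch of that conversion in Python (the four-month window is approximated as 120 days, and decimal terabytes are assumed):

# Convert the cumulative transfer volume into an average sustained rate,
# for comparison with the 1.2 Gbps peak transfer rate quoted earlier.
volume_tb = 450
seconds = 120 * 24 * 3600                     # ~four months, approximated as 120 days

bits = volume_tb * 1e12 * 8                   # TB -> bits (decimal terabytes assumed)
avg_gbps = bits / seconds / 1e9
print(f"average rate over the period: {avg_gbps:.2f} Gbps")   # ~0.35 Gbps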

Future developments
- NKN is planning to locate an international PoP of NKN at CERN; CERN has agreed to provide the necessary space, power, etc. for NKN.
- Direct NKN connectivity to Internet2 is in the pipeline.
- Operation of the existing dedicated P2P link from TIFR, Mumbai to Europe will continue.
- A P2P virtual circuit for the ALICE T2 at VECC with LHCONE?

Your suggestions or questions?
Thank you