
1 India - LHCONE
APAN-38 LHCONE Meeting
Brij Kishor Jashal
Tata Institute of Fundamental Research, Mumbai
Email: brij.jashal@tifr.res.in, brij.kishor.jashal@cern.ch

Content:
- India's e-infrastructure: an overview
- The regional component of the Worldwide LHC Computing Grid (WLCG)
- India-CMS and India-ALICE Tier-2 site infrastructure
- LHCONE - India: current status
- Network at T2_IN_TIFR
- Route summarization:
  1. Direct P2P from TIFR to LHCONE
  2. TIFR-NKN-TEIN4-GEANT-CERN
- Future developments

2 India's e-infrastructure
Two main collaborative computing grids exist in India:
1. GARUDA, the Indian National Grid Initiative
2. The regional component of the Worldwide LHC Computing Grid (WLCG)
GARUDA, the National Grid Initiative of India, is a collaboration of scientific and technological researchers on a nationwide grid comprising computational nodes, storage devices and scientific instruments. The Department of Information Technology (DIT) has funded the Centre for Development of Advanced Computing (C-DAC) to deploy the nationwide computational grid GARUDA, which today connects 45 institutions across 17 cities in its Proof of Concept (PoC) phase, with the aim of bringing grid-networked computing to research labs and industry. In pursuit of scientific and technological excellence, the GARUDA PoC has also brought together a critical mass of well-established researchers.
The regional component of the WLCG comprises two Tier-2 sites:
1. IN-INDIACMS-TIFR (T2_IN_TIFR)
2. IN-DAE-KOLKATA-TIER2 (IN-DAE-VECC-02)
Source: http://www.euindiagrid.eu

3 National Knowledge Network (NKN)
The NKN is a state-of-the-art multi-gigabit pan-India network providing a unified high-speed network backbone for all knowledge and research institutions in the country.
Connectivity to 1500+ institutions in the country over a high-bandwidth / low-latency network.
The network architecture and governance structure also allow users to connect at the distribution layer. NKN also enables the creation of Virtual Private Networks (VPNs) for special interest groups.
NKN provides international connectivity to its users for global collaborative research via APAN. Presently, NKN is connected to the Trans-Eurasia Information Network (TEIN4). Similar connectivity to the Internet2 network is in the pipeline.
Both of the above initiatives rely strongly on the development of national and international connectivity:
- National Knowledge Network
- NKN-TEIN connection
- Direct P2P connection with CERN-LHCONE, GEANT, FNAL

4 The network consists of an ultra-high-speed core, starting with multiple 2.5/10 Gbps links and progressively moving towards 40/100 Gbps, between 7 fully meshed Supercore locations across India. The network is further spread out through 26 Core locations with multiple 2.5/10 Gbps links, partially meshed with the Supercore locations. The distribution layer connects the entire country to the core of the network using multiple links at speeds of 2.5/10 Gbps. Participating institutions at the edge seamlessly connect to NKN at gigabit speed.
Two 2.5 Gbps international links: one to Europe and the other to Singapore, through TEIN4.
T2_IN_TIFR has been part of the LHCONE pilot project since 2008-09.

5 http://www.euindiagrid.eu/
EU-IndiaGrid played a very important role in enhancing, increasing and consolidating Euro-India cooperation on e-Infrastructures through collaboration with key policy players from both the Government of India and the European Commission.
Sustainable e-Infrastructures Across Europe and India (RI-246698), FP7 INFSO Research Infrastructures Programme of the European Commission.
EU-IndiaGrid2 focused on four application areas strategic for Euro-India research cooperation:
- Climate change
- High Energy Physics
- Biology
- Material Science
Over the course of the project, further areas of interest were identified:
- Seismic hazard assessment, which produced interesting results
- Neuroscience applications

6 T2_IN_TIFR Resources
Computing:
- Existing computational nodes: 48, with a total of 384 cores and 864 GB of memory
- New computational nodes: 40, with a total of 640 cores and 960 GB of memory
- Total number of physical cores: 1024
- Total HEP-SPEC06 benchmark score: 7218.12
Storage:
- Existing storage: 23 DPM disk nodes aggregated to 570 TB
- Five new DPM disk nodes, each with 80 TB, adding 400 TB to the total storage capacity
- Total capacity: 970 TB
WLCG site availability and reliability report, India-CMS TIFR, 2014 (Jan-14 to Jul-14)
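As a quick sanity check on the totals above, the core and storage figures add up as follows (a minimal sketch of the arithmetic; the per-node breakdown of 8 and 16 cores is inferred from the node and core counts, not stated on the slide):

$ # 48 existing nodes (8 cores each) + 40 new nodes (16 cores each)
$ echo $(( 48 * 8 + 40 * 16 ))
1024
$ # 570 TB existing + 5 new DPM disk nodes of 80 TB each
$ echo $(( 570 + 5 * 80 ))
970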

7 IN-DAE-VECC-02 Resources

Month           Pledged CPU   Pledged Efficiency (HS06-Hrs)   Used Hours   Used as % of Pledge   Availability   Reliability
July 2014       6,000         3,124,800                       5,904,876    189%                  86%            86%
June 2014       6,000         3,024,000                       3,890,104    129%                  99%            99%
May 2014        6,000         3,124,800                       5,205,432    167%                  100%
April 2014      6,000         3,024,000                       3,847,468    127%                  92%
March 2014      6,000         3,124,800                       4,503,244    144%                  98%
February 2014   6,000         2,822,400                       2,357,172    84%                   89%
January 2014    6,000         3,124,800                       775,156      25%                   96%

India is the 6th largest group in the ALICE Collaboration.
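The "Used as % of Pledge" column above is just the used HS06-hours divided by the pledged HS06-hours for that month; July 2014, for example, works out as below. The pledged figures also appear consistent with 6,000 HS06 scaled by a 70% efficiency factor and the number of hours in the month (an inference from the numbers, not stated on the slide):

$ # July 2014: used hours as a percentage of the pledge
$ awk 'BEGIN { printf "%.0f%%\n", 5904876 / 3124800 * 100 }'
189%
$ # Pledge for July: 6,000 HS06 x 70% x 744 hours (31 days)
$ echo $(( 6000 * 70 / 100 * 744 ))
3124800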

8 Network at T2_IN_TIFR
T2_IN_TIFR has been included in the LHCONE pilot project since 2008-09.

9 Traceroutes: VECC to CERN and TIFR to ASGC

VECC to CERN:
[vsinghal@gpu ~]$ traceroute alien.cern.ch
traceroute to alien.cern.ch (137.138.99.142), 30 hops max, 60 byte packets
 1  vecc-direct.tier2-kol.res.in (144.16.112.28)  0.194 ms  0.190 ms  0.183 ms
 2  10.173.35.237 (10.173.35.237)  44.383 ms  44.386 ms  44.494 ms
 3  10.255.237.205 (10.255.237.205)  127.543 ms  127.670 ms  *
 4  10.255.232.21 (10.255.232.21)  42.718 ms  42.767 ms  42.764 ms
 5  mb-xe-01-v4.bb.tein3.net (202.179.249.41)  37.256 ms  37.263 ms  37.258 ms    (Mumbai)
 6  eu-mad-pr-v4.bb.tein3.net (202.179.249.118)  154.480 ms  154.398 ms  154.396 ms    (via EU)
 7  ae3.mx1.par.fr.geant.net (62.40.98.65)  175.897 ms  175.885 ms  175.855 ms
 8  switch-bckp-gw.mx1.par.fr.geant.net (62.40.124.82)  178.095 ms  178.119 ms  178.087 ms
 9  e513-e-rbrxl-2-te20.cern.ch (192.65.184.70)  178.075 ms  178.234 ms  178.336 ms

TIFR to ASGC:
[brij@ui2 ~]$ traceroute 117.103.109.130
traceroute to 117.103.109.130 (117.103.109.130), 30 hops max, 40 byte packets
 1  144.16.111.1 (144.16.111.1)  0.546 ms  0.550 ms  0.558 ms
 2  202.141.153.30 (202.141.153.30)  0.170 ms  0.178 ms  0.187 ms
 3  10.152.12.5 (10.152.12.5)  2.774 ms  2.738 ms  2.819 ms
 4  mb-xe-01-v4.bb.tein3.net (202.179.249.41)  1.221 ms  1.125 ms  1.179 ms
 5  sg-so-04-v4.bb.tein3.net (202.179.249.53)  59.434 ms  59.401 ms  59.419 ms    (via Singapore)
 6  hk-xe-03-v4.bb.tein3.net (202.179.241.101)  91.568 ms  91.567 ms  91.530 ms
 7  asgc-pr-v4.bb.tein3.net (202.179.241.98)  91.646 ms  91.727 ms  91.643 ms
 8  so-4-1-0.r1.tpe.asgc.net (117.103.111.222)  174.172 ms  174.335 ms  174.232 ms
 9  117.103.111.226 (117.103.111.226)  175.008 ms  174.862 ms  174.993 ms
10  coresw.tpe.asgc.net (117.103.111.233)  175.081 ms  175.093 ms  175.101 ms
11  rocnagios.grid.sinica.edu.tw (117.103.109.130)  174.479 ms !X  174.570 ms !X  174.491 ms !X

10 TIFR - LHCONE route: why different routes?
lxplus.cern.ch, which sits in an LHCONE-announced prefix, is reached with the first hop beyond TIFR already on a CERN router, while voms.cern.ch on CERN's general-purpose network is reached via GEANT and SWITCH; the two destinations fall under the different route summaries listed on slide 1.

TIFR to voms.cern.ch:
[root@localhost ~]# nmap -sn --traceroute voms.cern.ch
Starting Nmap 5.51 ( http://nmap.org ) at 2014-08-10 21:59 IST
Nmap scan report for voms.cern.ch (128.142.153.115)
Host is up (0.17s latency).
TRACEROUTE (using proto 1/icmp)
HOP  RTT        ADDRESS
1    0.97 ms    144.16.111.1
2    167.46 ms  tifr.mx1.gen.ch.geant.net (62.40.125.49)
3    334.13 ms  swiCE2-10GE-1-1.switch.ch (62.40.124.22)
4    182.46 ms  e513-e-rbrxl-1-te1.cern.ch (192.65.184.222)
5 ... 7
8    163.76 ms  l513-b-rbrmx-3-ml3.cern.ch (194.12.148.18)
9 ...
10   169.49 ms  voms3.cern.ch (128.142.153.115)
Nmap done: 1 IP address (1 host up) scanned in 4.54 seconds

TIFR to lxplus.cern.ch:
[root@localhost ~]# nmap -sn --traceroute lxplus.cern.ch
Starting Nmap 5.51 ( http://nmap.org ) at 2014-08-10 21:56 IST
Nmap scan report for lxplus.cern.ch (188.184.28.151)
Host is up (0.17s latency).
TRACEROUTE (using proto 1/icmp)
HOP  RTT        ADDRESS
1    0.99 ms    144.16.111.1
2    171.30 ms  e513-e-rbrrt-3-ee5.cern.ch (192.16.155.17)
3    170.26 ms  e513-e-rbrxl-2-ne2.cern.ch (192.65.184.57)
4 ... 6
7    166.23 ms  r513-b-rbrml-2-sc2.cern.ch (194.12.149.6)
8 ...
9    166.56 ms  lxplus0062.cern.ch (188.184.28.151)
Nmap done: 1 IP address (1 host up) scanned in 4.51 seconds
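To see which exit path a TIFR host selects for a given CERN destination, the local routing table can be queried as below (a minimal sketch using standard Linux tooling; if the path split is done by BGP on the site border router rather than on the host, both commands may report the same local gateway and the divergence only shows up in the traceroutes above):

$ # Which next hop serves the LHCONE-announced lxplus prefix?
$ ip route get 188.184.28.151
$ # ...and which serves voms.cern.ch on CERN's general-purpose network?
$ ip route get 128.142.153.115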

11 Route from TIFR to the US (U. Chicago)
Using Globus Online and PhEDEx transfers, peak data transfer rates of up to 1.2 Gbps have been achieved between TIFR and FNAL / U. Chicago.
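For reference, a GridFTP transfer of the kind driven by Globus could be started from the command line roughly as below. This is a minimal sketch, not the exact command used for these measurements: the endpoints and paths are hypothetical, and PhEDEx schedules its transfers through its own agents rather than by hand.

$ # Hypothetical third-party copy from TIFR storage to FNAL with 8 parallel
$ # streams (-p 8) and performance markers printed while transferring (-vb)
$ globus-url-copy -vb -p 8 \
      gsiftp://se.example.tifr.res.in/dpm/tifr.res.in/home/cms/store/test/file.root \
      gsiftp://gridftp.example.fnal.gov/pnfs/fnal.gov/data/test/file.root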

12

13 Total cumulative data transfers over the last four months: 450 TB
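For context, 450 TB over four months corresponds to a sustained average of roughly 350 Mbit/s (a back-of-the-envelope figure assuming 1 TB = 10^12 bytes and four 30-day months):

$ # Average rate: 450 TB over ~120 days, expressed in Mbit/s
$ awk 'BEGIN { printf "%.0f Mbit/s\n", 450e12 * 8 / (120 * 86400) / 1e6 }'
347 Mbit/s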

14 Future developments
NKN is planning to locate an international PoP at CERN; CERN has agreed to provide the necessary space, power, etc.
Direct NKN connectivity to Internet2 is in the pipeline.
Operations of the existing dedicated P2P link from TIFR, Mumbai to Europe will continue.
P2P virtual circuit for the ALICE Tier-2 at VECC with LHCONE?
Your suggestions or questions?
Thank you

