Grid Computing at DAE, India. B.S. Jagadeesh, Computer Division, BARC. 18 February 2011

Our approach to Grids has been evolutionary, building on the ANUPAM supercomputers:
- To achieve supercomputing speeds: at least 10 times faster than the sequential machines available in BARC
- To build a general-purpose parallel computer: catering to a wide variety of problems, with general-purpose compute nodes and interconnection network
- To keep the development cycle short: use readily available, off-the-shelf components

ANUPAM Performance over the years

‘ANUPAM-ADHYA’: 47 teraflops

ANUPAM applications outside DAE
- Medium Range Weather Forecasting, Delhi: 2+8-node ANUPAM-Alpha, fully operational since December 1999
- Aeronautical Development Agency (ADA): Computational Fluid Dynamics calculations related to the Light Combat Aircraft (LCA); the LCA design was computed on a 38-node ANUPAM-860
- VSSC, Trivandrum: 8-node Alpha and 16-node Pentium systems for aerospace-related CFD applications

Complete solution to scientific problems by exploiting parallelism for:
- Processing (parallelization of computation)
- I/O (parallel file system)
- Visualization (parallelized graphics pipeline / Tiled Display Unit)

Visualization rendering pipeline: Geometry Database → Geometry Transformation (per vertex: transformation, clipping, lighting) → Rasterization (per pixel: scan conversion, shading, visibility) → Image
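To make the per-vertex stage concrete, here is a minimal sketch (an illustration added for this write-up, not code from the talk): it applies a 4x4 model-view-projection matrix to vertices with NumPy; the rasterization stage would then scan-convert the projected primitives into pixels.

```python
# Minimal sketch of the per-vertex "Geometry Transformation" stage.
# Hypothetical illustration, not code from the presentation.
import numpy as np

def transform_vertices(vertices, mvp):
    """Apply a 4x4 model-view-projection matrix to an Nx3 array of vertices."""
    n = vertices.shape[0]
    homogeneous = np.hstack([vertices, np.ones((n, 1))])  # N x 4 homogeneous coords
    clip_space = homogeneous @ mvp.T                      # per-vertex transform
    ndc = clip_space[:, :3] / clip_space[:, 3:4]          # perspective divide
    return ndc  # normalized device coordinates, ready for rasterization

if __name__ == "__main__":
    triangle = np.array([[0.0, 0.0, -2.0],
                         [1.0, 0.0, -2.0],
                         [0.0, 1.0, -2.0]])
    mvp = np.eye(4)  # identity stands in for a real projection matrix here
    print(transform_vertices(triangle, mvp))
```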

Parallel Visualization Taxonomy

Snapshots of Tiled Image Viewer

We now have:
- A large tiled display
- Rendering power through distributed rendering
- Scalability to very large numbers of pixels and polygons
- An attractive alternative to high-end graphics systems
- A deep and rich scientific visualization system
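As a rough sketch of how distributed rendering on a tiled display divides work (an assumed illustration using a hypothetical 3x2 wall of 1920x1080 panels, not the actual BARC setup), the following splits a large virtual framebuffer into tiles and assigns each tile to a render node:

```python
# Hypothetical sketch: partition a large virtual framebuffer into tiles
# and assign each tile to a render node of a display wall.
from itertools import cycle

def assign_tiles(width, height, tile_w, tile_h, nodes):
    """Return a mapping {node: [tile rectangles]} covering the framebuffer."""
    assignment = {node: [] for node in nodes}
    node_cycle = cycle(nodes)
    for y in range(0, height, tile_h):
        for x in range(0, width, tile_w):
            rect = (x, y, min(tile_w, width - x), min(tile_h, height - y))
            assignment[next(node_cycle)].append(rect)
    return assignment

if __name__ == "__main__":
    # A 3x2 wall of 1920x1080 panels, one render node per panel (assumed layout).
    wall = assign_tiles(3 * 1920, 2 * 1080, 1920, 1080,
                        [f"node{i}" for i in range(6)])
    for node, tiles in wall.items():
        print(node, tiles)
```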

Post-tsunami: Nagappattinam, India (Lat: ° N, Lon: ° E). This one-meter resolution image was taken by Space Imaging's IKONOS satellite on 29 December 2004, just three days after the devastating tsunami hit. (Credit: Space Imaging)

So, we need lots of resources: high performance computers, visualization tools, data collection tools, sophisticated laboratory equipment, etc. "Science has become mega-science"; "the laboratory has to be a collaboratory". The key concept is "sharing by respecting administrative policies".

Grid concept?
- Many jobs per system: early days of computation
- One job per system: RISC / workstation era
- Two systems per job: client-server model
- Many systems per job: parallel / distributed computing
- View all of the above as a single unified resource: Grid computing

GRID CONCEPT: the user submits work through a User Access Point; a Resource Broker matches it to the available Grid Resources; the Result is returned to the user.
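A toy version of that flow, purely for illustration (the resource and job names below are made up, and real grids use middleware such as gLite's Workload Management System rather than anything this simple), could look like this:

```python
# Toy resource broker: match a job's requirements against advertised resources.
# Purely illustrative; not any actual grid middleware.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    free_cpus: int
    memory_gb: int

@dataclass
class Job:
    name: str
    cpus: int
    memory_gb: int

def broker(job, resources):
    """Return the matching resource with the most free CPUs, or None."""
    candidates = [r for r in resources
                  if r.free_cpus >= job.cpus and r.memory_gb >= job.memory_gb]
    return max(candidates, key=lambda r: r.free_cpus, default=None)

if __name__ == "__main__":
    grid = [Resource("anupam-cluster", 128, 256),   # hypothetical site names
            Resource("tier2-ce", 512, 1024)]
    print(broker(Job("cfd-run", cpus=64, memory_gb=128), grid))
```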

LHC Computing
- The LHC (Large Hadron Collider) has become operational and is churning out data.
- Data rates per experiment of >100 MBytes/sec.
- >1 PByte/year of storage for raw data per experiment.
- Computationally, the problem is so large that it cannot be solved by a single computer centre.
- World-wide collaboration and analysis: it is desirable to share computing and analysis throughout the world.

A collision at LHC
- Bunches, each containing 100 billion protons, cross 40 million times a second in the centre of each experiment.
- 1 billion proton-proton interactions per second in ATLAS & CMS!
- Large numbers of collisions per event: ~1000 tracks stream into the detector every 25 ns.
- A large number of channels (~100 M) yields ~1 MB per 25 ns, i.e. 40 TB/s!
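The 40 TB/s figure is simple arithmetic, which the short check below reproduces: ~1 MB of channel data every 25 ns is about 4e13 bytes per second.

```python
# Back-of-the-envelope check of the raw detector data rate quoted above.
event_size_bytes = 1e6        # ~1 MB of channel data per bunch crossing
crossing_interval_s = 25e-9   # one bunch crossing every 25 ns

raw_rate = event_size_bytes / crossing_interval_s
print(f"{raw_rate:.1e} bytes/s  (~{raw_rate / 1e12:.0f} TB/s)")
# -> 4.0e+13 bytes/s (~40 TB/s), matching the figure on the slide
```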

Data Grids for HEP (tier diagram; image courtesy Harvey Newman, Caltech)
- There is a "bunch crossing" every 25 nsecs; there are 100 "triggers" per second; each triggered event is ~1 MByte in size.
- Online System → CERN Computer Centre (Tier 0) at ~100 MBytes/sec; Offline Processor Farm of ~20 TIPS; ~PBytes/sec of raw data upstream of the online system.
- Tier 1 (~622 Mbits/sec links, or air freight, deprecated): regional centres such as FermiLab (~4 TIPS) and the France, Italy and Germany regional centres.
- Tier 2 (~622 Mbits/sec links): centres of ~1 TIPS each, e.g. Caltech.
- Tier 3: institutes (~0.25 TIPS) with physics data caches; Tier 4: physicist workstations (~1 MBytes/sec).
- Physicists work on analysis "channels"; each institute will have ~10 physicists working on one or more channels, and data for these channels should be cached by the institute server.
- 1 TIPS is approximately 25,000 SpecInt95 equivalents.

… based on advanced technology: 23 km of superconducting magnets cooled in superfluid helium at 1.9 K. A big instrument!

LHC is a very large scientific instrument: a ring of 27 km circumference near Lake Geneva, hosting the ATLAS, CMS, LHCb and ALICE experiments.

Tier 0 at CERN: acquisition, first-pass processing, storage & distribution

LEMON architecture

QUATTOR
- Quattor is a tool suite providing automated installation, configuration and management of clusters and farms.
- It is highly suitable for installing, configuring and managing Grid computing clusters correctly and automatically.
- At CERN it is currently used to auto-manage >2000 nodes with heterogeneous hardware and software applications.
- It provides centrally configurable and reproducible installations, with run-time management for functional and security updates to maximize availability.

NKN TOPOLOGY: all PoPs are covered by at least two NLDs.

ANUNET leased links across India (existing and proposed): Mumbai, Delhi, Indore, Kolkata, Hyderabad, Chennai, Kalpakkam, Mysore, Gauribidanur, Vizag, Manuguru, Bhubaneswar, Gandhinagar, Mount Abu, Jaduguda, Tarapur, Allahabad, Shillong and Kota.

ANUNET wide area network over INSAT-3C: 8 carriers of 768 Kbps each on a quarter transponder (9 MHz). Connected units include CAT, Indore; IOP, Bhubaneswar; IPR, Ahmedabad; HRI, Allahabad; BARC Mysore, Gauribidanur, Tarapur and Mt. Abu; AERB, NPCIL, HWB, BRIT and CTCRS at BARC Anushaktinagar, Mumbai; BARC, NFC, CCCM and ECIL, Hyderabad; TIFR, IRE, TMC and DAE, Mumbai; Saha Institute and VECC, Kolkata; BARC FACL and IGCAR, Kalpakkam; IMS, Chennai; MRPU; HWB Manuguru and Kota; AMD Secunderabad and Shillong; UCIL Jaduguda (units I, II and III); TMH, Navi Mumbai; and BARC, Trombay. Sites shown in yellow in the original diagram are connected over dedicated landlines.

Multiple zones in NKN: the NKN router at a site connects separate segments for WLCG, Internet, ANUNET (via the ANUNET router), GARUDA, NKN-GEN and additional CUGs towards NKN.

Logical communication domains through NKN: via the NKN router, the intranet and internet segments of BARC reach National Grid Computing (CDAC, Pune), the WLCG collaboration, Common Users Groups (CUG) such as ANUNET (DAE units) and BARC-IGCAR, NKN-Internet (Grenoble, France) and NKN-General (national collaborations).

Layout of virtual classroom: front and back elevations with 55-inch LED panels, a projection screen, an HD camera and the teacher's position.

An example of a high-bandwidth application

Collaboratory?

Depicts a one-degree oscillation photograph recorded on crystals of the HIV-1 PR M36I mutant by remotely operating the FIP beamline at ESRF, and the OMIT density for the mutation residue I. (Credits: Dr. Jean-Luc Ferrer, Dr. Michel Pirochi & Dr. Jacques Joley, IBS/ESRF, France; Dr. M.V. Hosur & colleagues, Solid State Physics Division & Computer Division, BARC)

E-Governance?

COLLABORATIVE DESIGN: collaborative design of reactor components. (Credits: IGCAR Kalpakkam, NIC Delhi, Computer Division BARC)

DAEGrid: 4 sites, 6 clusters, 800 cores; connectivity is through NKN at 1 Gbps
- Utkarsh: dual-processor quad-core, 80 nodes (BARC)
- Aksha: Itanium, dual-processor, 10 nodes (BARC)
- Ramanujam: dual-core dual-processor, 14 nodes (RRCAT)
- Daksha: dual-processor, 8 nodes (RRCAT)
- Igcgrid: Xeon, dual-processor, 8 nodes (IGCAR)
- Igcgrid2: Xeon, quad-core, 16 nodes (IGCAR)

DAEGrid (continued)
BARC: three clusters connected
- Aksha: dual-processor, 10-node Itanium 2
- Surya: dual-processor, 32-node Xeon (EM64T)
- Utkarsh: cores (dual quad-core, GHz)
Services: Certification Authority server (CA), Resource Broker (RB), GridICE server / Monitoring and Accounting Server (MAS), Virtual Organization Membership Server (VOMS), repository server, Grid Portal (UI), Storage Element (DPM SE, 13 GB), file catalogue (LFC), DNS & NTP, three Computing Elements (one for each of the three clusters), and Worker Nodes (WNs).
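For context, a typical user session against such gLite services might look like the sketch below. It is illustrative only: the VO name "daegrid" and the file names are assumptions, and the Python wrapper simply drives the standard gLite command-line tools (voms-proxy-init, glite-wms-job-submit).

```python
# Illustrative sketch of a user workflow against gLite services (UI, VOMS, WMS/RB).
# The VO name "daegrid" and the file names are assumptions for this example.
import subprocess

JDL = """\
Executable = "/bin/hostname";
StdOutput  = "std.out";
StdError   = "std.err";
OutputSandbox = {"std.out", "std.err"};
"""

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    with open("hello.jdl", "w") as f:
        f.write(JDL)
    run(["voms-proxy-init", "--voms", "daegrid"])     # obtain a VOMS proxy
    run(["glite-wms-job-submit", "-a", "hello.jdl"])  # submit via the WMS / Resource Broker
```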

DAE Grid (4 Mbps links): BARC, computing with shared controls; IGCAR, wide-area data dissemination; VECC, real-time data collection; CAT, archival storage. Resource sharing and coordinated problem solving across dynamic, multiple R&D units.

Interesting issues being investigated:
- Effect of process migration in distributed environments
- A novel distributed-memory file system
- Implementation of network swap for performance enhancement
- Redundancy issues to address the failure of resource brokers
- Service centre concept using gLite middleware
All of the above lead towards ensuring better quality of service and imparting simplicity in Grid usage.

THANK YOU