Design considerations for the IndIGO Data Analysis Centre
Anand Sengupta, University of Delhi

Thanks to:
- Maria Alessandra Papa (AEI)
- Stuart Anderson (LIGO Caltech)
- Sanjay Jain (Delhi University)
- Patrick Brady (UWM)
- Phil Ehrens (LIGO Caltech)
- Sarah Ponrathnam (IUCAA)

Network of gravity wave detectors

Data from gravitational wave experiments
- Data comprises:
  - the gravitational wave channel (AS_Q)
  - environmental monitors
  - internal engineering monitors
- Multiple data products beyond the raw data: reduced data sets
  - Level 1: gravitational wave and environmental channels
  - Level 3: only gravitational wave data
- Different sampling rates
- About 1 TB of raw data per day! (A small volume estimate follows this slide.)
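
To put the "1 TB of raw data per day" figure in context, here is a small sketch of how raw and reduced data volumes accumulate over a year. The reduction factors are illustrative assumptions introduced here, not official RDS specifications.

```python
# Sketch: yearly data volume from ~1 TB/day of raw data, and the much
# smaller reduced data sets (RDS) a Tier-2 centre would archive.
# The reduction factors below are illustrative assumptions only.

raw_tb_per_day = 1.0            # from the slide
days_per_year = 365

reduction = {
    "raw":     1.0,             # full frames (kept at Tier-0/1)
    "level-1": 0.2,             # assumed: GW + environmental channels
    "level-3": 0.01,            # assumed: GW channel only
}

for name, factor in reduction.items():
    print(f"{name:8s}: {raw_tb_per_day * days_per_year * factor:7.1f} TB/year")
# A Tier-2 centre holding reduced sets plus analysis products is
# consistent with the ~100 TB anticipated later in this talk.
```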

The IndIGO Data Analysis Centre
- We would like to propose a high-throughput computation and GW data archival centre.
- A Tier-2 centre with data archival and computational facilities.
- An inter-institutional proposal for the facility.
- It will provide the fundamental infrastructure for consolidating GW data analysis expertise in India.

Tier structure:
- Tier 0: LIGO sites at Hanford and Livingston (data acquisition systems)
- Tier 1: LIGO Lab at Caltech
- Tier 2: LIGO Lab at MIT, LSC institutions such as UWM, Syracuse, etc., and the IndIGO Data Analysis Centre

Main objectives of the data centre
[Diagram: the data centre provides archival, analysis, web services and collaboration tools, serving Indian researchers and students, other science groups, and the LSC through the LIGO Data Grid, and supporting community development.]
The LIGO Data Grid serves as a role model for the proposed IndIGO Data Analysis Centre.

What is the LIGO Data Grid?
- The combination of LSC computational and data storage resources with grid-computing middleware, creating a distributed gravitational-wave data analysis facility.
- Compute centres at LHO and LLO, a Tier-1 centre at LIGO Caltech, and Tier-2 centres at MIT, UWM, PSU and Syracuse.
- Other clusters in Europe: Birmingham and the AEI.
- The IndIGO Data Analysis Centre (proposed).
- Grid computing software, e.g. Globus, GridFTP and Condor, and tools built from them (a small transfer sketch follows this slide).
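
As a concrete example of the grid tooling named here, the sketch below drives a GridFTP transfer with globus-url-copy from Python. The server name and file paths are placeholders, and a valid GSI proxy certificate (e.g. from grid-proxy-init) is assumed to already be in place.

```python
# Sketch: replicate a frame file from a (hypothetical) Tier-1 GridFTP
# server to local disk using globus-url-copy, one of the Globus tools
# mentioned on this slide. Host names and paths are placeholders.

import subprocess

source = "gsiftp://ldas.example.org/data/frames/H-RDS_R_L1-940000000-256.gwf"
dest   = "file:///archive/frames/H-RDS_R_L1-940000000-256.gwf"

subprocess.run(
    ["globus-url-copy", "-p", "4", source, dest],  # -p 4: four parallel streams
    check=True,
)
```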

LIGO Data Grid Overview
Cyberinfrastructure:
- Hardware: administration, configuration, maintenance
- Grid middleware & services: support, administration, configuration, maintenance
- Core LIGO analysis software toolkits: support, enhance, release
- Users: support

Condor, Globus, VDT and all that
- The IndIGO Data Centre is envisaged as a high-throughput compute facility (data-volume driven): opportunistic scheduling with Condor (a minimal submit-description sketch follows this slide).
- It is NOT a high-performance computing facility, although one can imagine a synergy between GW users and other scientific users sharing the resources. Traditionally, the MPI community requires dedicated scheduling.
- The Globus Toolkit is a collection of grid middleware that allows users to run jobs, transfer files, track file replicas, publish information about a grid, and more. All of these facilities share a common security infrastructure called GSI that enables single sign-on.
- Users can select any subset of the Globus Toolkit to use in building their grid. The VDT includes all of Globus.
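
To illustrate what opportunistic, high-throughput scheduling looks like in practice, this sketch writes a minimal Condor submit description for a vanilla-universe job. The executable, arguments and file names are hypothetical placeholders, not part of any actual IndIGO or LSC configuration.

```python
# Minimal sketch: generate a Condor submit description for a
# vanilla-universe (opportunistically scheduled) analysis job.
# The executable and file names below are hypothetical placeholders.

submit_description = """\
universe   = vanilla
executable = run_analysis.sh
arguments  = --gps-start 940000000 --gps-end 940000256
output     = logs/job_$(Cluster).$(Process).out
error      = logs/job_$(Cluster).$(Process).err
log        = logs/analysis.log
getenv     = True
queue
"""

with open("analysis.sub", "w") as f:
    f.write(submit_description)

print("Submit with: condor_submit analysis.sub")
```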

Typical workflow in the inspiral pipeline
- GLUE: the LSC has developed an in-house toolkit to write out workflows as Condor DAGs (a structural sketch follows this slide).
- One month of data: 5 analysis DAGs containing ~45,000 jobs, plus a few tens of plotting DAGs of ~50 jobs each.
- For a year's worth of data, we run more than 500,000 nodes.
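
To make the DAG idea concrete, here is a minimal sketch of the kind of DAGMan workflow file that GLUE ultimately produces, written directly in Python rather than through GLUE's own API. The job names, submit file names and dependency pattern are illustrative only; real GLUE-generated DAGs contain thousands of nodes.

```python
# Sketch: emit a tiny Condor DAGMan file with the fan-out / fan-in
# structure typical of an inspiral workflow (datafind -> template bank
# -> matched filter -> coincidence). Names are illustrative only.

segments = ["seg0001", "seg0002", "seg0003"]

lines = ["JOB datafind datafind.sub"]
for seg in segments:
    lines.append(f"JOB tmpltbank_{seg} tmpltbank.sub")
    lines.append(f"JOB inspiral_{seg} inspiral.sub")
    lines.append(f'VARS inspiral_{seg} segment="{seg}"')
lines.append("JOB coincidence coinc.sub")

# Dependencies: datafind feeds every segment; every segment feeds coincidence.
for seg in segments:
    lines.append(f"PARENT datafind CHILD tmpltbank_{seg}")
    lines.append(f"PARENT tmpltbank_{seg} CHILD inspiral_{seg}")
    lines.append(f"PARENT inspiral_{seg} CHILD coincidence")

with open("inspiral_pipeline.dag", "w") as f:
    f.write("\n".join(lines) + "\n")

print("Submit with: condor_submit_dag inspiral_pipeline.dag")
```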

Why do we need the IndIGO Data Centre?
Scientific pay-off is bounded by the ability to perform computations on the data.
- Maximum scientific exploitation requires data analysis to proceed at the same rate as data acquisition.
- Low-latency analysis is needed if we want the opportunity to provide alerts to the astronomical community in the future.
- Computers required for LIGO flagship searches, where 1 unit = one 3 GHz workstation-day per day of data (a quick total is sketched after this list):
  - Stochastic = 1 unit
  - Bursts = 50
  - Compact binary inspiral = 600 (BNS), 300 (BBH), 6,000 (PBH)
  - All-sky pulsars = 1,000,000,000 (but can tolerate lower latency & ..... )
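
A back-of-the-envelope reading of these numbers, as a sketch: the unit definition is taken from the slide, and the all-sky pulsar search is left out because, as noted above, it tolerates lower latency and is far beyond the scale of any single Tier-2 centre.

```python
# Rough estimate of the sustained compute needed to keep the flagship
# searches running at the same rate as data acquisition.
# 1 unit = one 3 GHz workstation running for a day, per day of data.

units = {
    "stochastic": 1,
    "bursts": 50,
    "inspiral_BNS": 600,
    "inspiral_BBH": 300,
    "inspiral_PBH": 6000,
}

total_units = sum(units.values())
print(f"Sustained workstation-equivalents needed: {total_units}")
# ~6951 workstations running continuously, i.e. several thousand cores,
# which is the scale that motivates a ~1000-core IndIGO centre
# contributing alongside the existing LIGO Data Grid clusters.
```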

Users and usage
- The current LIGO Data Grid (LDG) supports ~600 LSC scientists.
- Demand for resources is growing rapidly as experience increases and more data become available.
- The IndIGO data centre is expected to be set up on a similar footing.

LSC computing resources
- LSC institutions and the LIGO Lab operate several large computing clusters, for a total of 16,900 CPU cores.
- Used for searches and large-scale simulations:
  - background estimates / assessment of significance
  - pipeline parameter tuning
  - sensitivity estimates, upper limits
- Analysis code base: millions of lines of code.
- Grid-enabled tools for data distribution.
[Figure: distribution of LSC CPU cores across sites]

National Knowledge Network
- The IndIGO data centre will need a high-bandwidth backbone connection, both for data replication from Tier-1 centres and for users to access the facility from their parent institutions.
- NKN can potentially provide this connectivity between IndIGO member institutions.
- Outstanding issues: international connections, EU-India Grid.
- The philosophy of NKN is to build a scalable network which can expand both in reach (spread across the country) and in speed.
- The aim is a common network backbone, like a national highway, on which different categories of users are supported.

NKN topology
- The objective of the National Knowledge Network is to bring together all the stakeholders in science, technology, higher education, research and development, grid computing and e-governance, with speeds up to the order of tens of gigabits per second coupled with extremely low latencies.
- The major PoPs of ERNET are already part of NKN: VECC, RRCAT, IIT (Chennai, Kanpur, Guwahati), IUCAA, University of Rajasthan.

Collective wisdom
- Site selection / bandwidth
  - IUCAA, Pune: host to several large computational facilities. Delhi University?
  - External Gigabit Ethernet is probably sufficient, although 10 GigE would be better.
- Storage, cooling, air-conditioning
  - Typical: ~1 PB on disk at a Tier-1 centre, on a high-throughput file system. The RDS will require only a fraction of this at a Tier-2 centre; anticipate ~100 TB.
  - As a rough estimate, 1000 cores = 35 kW (a worked estimate follows this slide). Design the data centre to hold 2-3 generations of equipment; project 5-10 years into the future. Power is needed to run the cooling itself, as well as for disk storage and servers.
- Hardware / cabling
  - Commodity off-the-shelf computers, standard equipment racks, high-density configurations. Co-exist with other user communities if need be.
  - Typically a top-of-rack GigE switch to the machines in each rack and 10 GigE uplinks to a central switch.
- Middleware / software / security
  - Globus, VDT, Condor
  - Job management system
  - GSI for user authentication across the LSC + IndIGO Consortium
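
A sketch of the power budget implied by these numbers. The cooling overhead (PUE) and the storage power figure are assumptions introduced here for illustration; only the 35 kW per 1000 cores and the ~100 TB figures come from the slide.

```python
# Rough power budget for the proposed centre, using the slide's rule of
# thumb of 35 kW per 1000 cores. PUE and storage power are assumptions.

cores = 1000
compute_kw_per_1000_cores = 35.0   # from the slide
storage_tb = 100                   # anticipated Tier-2 RDS (from the slide)
storage_kw = 5.0                   # assumed: ~100 TB of disk plus servers
pue = 1.6                          # assumed power usage effectiveness
                                   # (cooling + distribution overhead)

it_load_kw = cores / 1000 * compute_kw_per_1000_cores + storage_kw
facility_kw = it_load_kw * pue

print(f"IT load:        {it_load_kw:.0f} kW")
print(f"Facility total: {facility_kw:.0f} kW (including cooling overhead)")
```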

Data Centre Proposal: roadmap
[Diagram: inputs feeding the proposal (user requirements, Tier-1/2 managers, site survey and bandwidth availability, budgeting).]
Proposal readiness by 15 May 2011.

Challenges
- Working with LDR and the VDT involves a steep learning curve and many new concepts, BUT there is a large user base and expert help available.
- Training system administrators and maintenance manpower.
- Many uncertainties: bandwidth provider, site host, storage and node requirements, etc. Ideas are getting more concrete as we move along and start talking with LSC compute facility maintainers and with experts from science and industry.
- It is very useful to visit an LSC cluster site (e.g. AEI Hannover) and talk to the people involved in those centres.
- We should keep open the option of proposing this centre in conjunction with other (different kinds of, MPI-based) scientific users. This would pose a host of challenges:
  - Hardware, middleware and software requirements are different, so some common ground has to be reached between the groups.
  - Condor has an MPI environment, so MPI-based codes are not a problem (a sketch follows this slide).
  - This needs to be tested; volunteers are needed.
- Need to work out projections for the next 5 years and gear up for Advanced LIGO and LIGO-Australia.
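
For the MPI point above, a minimal sketch of a Condor parallel-universe submit description, which is where Condor's MPI support lives. The executable, wrapper script and machine count are placeholders, and the details of the wrapper depend on the MPI implementation installed on the cluster.

```python
# Sketch: a Condor parallel-universe submit description for an MPI-style
# job. Executable, node count and file names are placeholders. The
# parallel universe also requires machines configured with a dedicated
# scheduler, matching the note earlier that MPI work traditionally
# requires dedicated scheduling.

mpi_submit = """\
universe      = parallel
executable    = mpi_wrapper.sh
arguments     = my_mpi_code input.dat
machine_count = 16
output        = logs/mpi_$(Node).out
error         = logs/mpi_$(Node).err
log           = logs/mpi.log
queue
"""

with open("mpi_job.sub", "w") as f:
    f.write(mpi_submit)
```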

Conclusions
- Need for an IndIGO data centre
  - A large Tier-2 data/compute centre for archival and analysis of gravitational-wave data.
  - Brings together data analysts within the Indian gravity wave community.
  - Puts IndIGO on the global map for international collaboration.
  - An LSC-wide facility would be useful for LSC participation.
- Functions of the IndIGO data centre
  - Data archival: a Tier-2 centre for archival of LIGO data, including data from LIGO-Australia, using LIGO Data Grid tools for replication.
  - Provide computing power: pitch for about 1000 cores.
    - Compare with AEI (~5000 cores), LIGO-Caltech (~1400 cores), the Syracuse cluster (~2500 cores).
- Main considerations for data centre design
  - Network: gigabit backbone, National Knowledge Network, Indian grid!
  - Dedicated storage network: SAN, disk space.
  - Electrical power, cooling, air-conditioning: requirements and design.
  - Layout of racks, cabling.
  - Hardware (blades, GPUs, etc.), middleware (Condor, Globus), software (Data Monitoring Tools, LALApps, Matlab).
  - Consultations with industry and experienced colleagues from the Indian scientific community.