Grid Deployments and Cyberinfrastructure
Andrew J. Younge
102 Lomb Memorial Drive, Rochester, NY 14623


How Do We Make Use of These Tools?

Grid Hierarchy
- Clustering Systems: Condor, PBS, SGE, LSF
- Middleware: TeraGrid, OSG, EGEE, BOINC
- Upperware: CoG Kit, Cyberaide, Web Portals, Science Gateways

Cluster Systems
- Condor*
- PBS*
- SGE
- LSF
All are batch queuing systems!

PBS – Portable Batch System
- Used for dedicated cluster resources
  – Homogeneous clusters with MPI
- Manages thousands of CPUs in near real time
- Schedules large numbers of jobs quickly and efficiently
- Many different implementations
  – PBS Pro (not free, but advanced)
  – OpenPBS (free, but old)
  – Torque & Maui (free, stable, advanced)
- Deployments
  – Dedicated clusters in academic and corporate settings
  – PlayStation 3 clusters
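As a rough, illustrative sketch (not taken from the slides), the snippet below writes a minimal PBS job script for an MPI run and hands it to qsub. The job name, queue, resource request, and mpirun command line are all placeholders.

```python
# Sketch: generate a small PBS/Torque job script and submit it with qsub.
# Assumes qsub is on the PATH; all values below are placeholders.
import subprocess

job_script = """#!/bin/bash
#PBS -N hello_mpi
#PBS -q batch
#PBS -l nodes=4:ppn=2,walltime=00:10:00
cd $PBS_O_WORKDIR
mpirun -np 8 ./hello_mpi
"""

with open("hello.pbs", "w") as f:
    f.write(job_script)

# qsub prints the new job identifier on stdout
result = subprocess.run(["qsub", "hello.pbs"], capture_output=True, text=True)
print("Submitted:", result.stdout.strip())
```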

Condor
- Used for dedicated and non-dedicated resources
  – Typically used to "scavenge" CPU cycles in places where many workstations are available
  – Handles heterogeneous environments
- Separates Condor's two tasks: resource management and job management
- Simple user interface: command-line tools driven by small submit description files
- Not well suited for MPI jobs
- Deployments
  – Campus workstations and desktops
  – Corporate servers
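To make the "small config files" concrete, here is an illustrative sketch (not from the slides) of a minimal Condor submit description written out and passed to condor_submit; the executable and file names are placeholders, and the HTCondor command-line tools are assumed to be installed.

```python
# Sketch: a minimal vanilla-universe Condor submit description.
import subprocess

submit_description = """universe   = vanilla
executable = analyze.sh
arguments  = input.dat
output     = job.out
error      = job.err
log        = job.log
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
queue
"""

with open("job.sub", "w") as f:
    f.write(submit_description)

# Submit to the local Condor pool
subprocess.run(["condor_submit", "job.sub"], check=True)
```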

Grid tools in Condor
- Condor-G
  – Replicates Condor's job management functionality
  – Submits to a grid resource using the Globus Toolkit
  – NOT a grid service itself, just a way to submit to a grid
- Flocking
  – Allows jobs queued in one Condor pool to be executed on another Condor pool
  – Directional flocking (A => B but not B => A)
  – Bidirectional flocking (A <=> B)
- Glidein
  – Dynamically adds machines to a Condor pool
  – Can be used to create your own personal Condor pool on the TeraGrid!
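For comparison with the vanilla submit file above, this illustrative sketch (not from the slides) shows how a Condor-G job targets a remote Globus GT2 gatekeeper instead of the local pool; the gatekeeper hostname is a placeholder and a valid grid proxy is assumed.

```python
# Sketch: a Condor-G (grid universe) submit description.
import subprocess

condor_g_description = """universe      = grid
grid_resource = gt2 gatekeeper.example.org/jobmanager-pbs
executable    = analyze.sh
output        = job.out
error         = job.err
log           = job.log
queue
"""

with open("gridjob.sub", "w") as f:
    f.write(condor_g_description)

# Same condor_submit command, but the job runs on the remote grid resource
subprocess.run(["condor_submit", "gridjob.sub"], check=True)
```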

Clusters in Action

Ganglia

BOINC
- Desktop-based grid computing - "volunteer computing"
  – Centralized grid system
  – Users are encouraged by earning credits for their computations
  – A machine can take part in one or many different projects
- Open access for contributing resources, closed access for using the grid
- Allows organizations to gain enormous amounts of computational power at very little cost
- BOINC is really a cluster and a grid system in one!
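As an illustrative sketch (not from the slides), contributing a machine boils down to attaching the BOINC client to a project; the project URL and account key below are placeholders for a real project and your own credentials.

```python
# Sketch: attach a running BOINC client to a project via the boinccmd tool.
import subprocess

project_url = "http://boinc.example.org/myproject/"   # placeholder project
account_key = "YOUR_ACCOUNT_KEY"                      # placeholder credential

subprocess.run(["boinccmd", "--project_attach", project_url, account_key],
               check=True)

# Afterwards, list the tasks the client has picked up
subprocess.run(["boinccmd", "--get_tasks"], check=True)
```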

BOINC Projects

BOINC Projects (2) Full List of Projects:

The Lattice Project

The Open Science Grid - OSG
- Large national-scale grid computing infrastructure
  – 5 DOE labs, 65 universities, 5 regional/campus grids
  – 43,000 CPUs, 6 Petabytes of disk space
- Uses the Globus Toolkit
  – GT4, although the pre-WS services (GT2) are what is actually used
  – Typically connects to Condor pools
- Virtual Data Toolkit (VDT) & OSG Release Tools
  – NMI + VOMS, CEMon, MonALISA, AuthZ, VO management tools, etc.
  – VORS – Resource Selector:
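To illustrate the pre-WS (GT2) style of submission mentioned above, here is a rough sketch (not from the slides) using the Globus command-line clients; the compute element hostname is a placeholder and a grid certificate is assumed.

```python
# Sketch: GT2 pre-WS GRAM submission with the Globus CLI tools.
import subprocess

# Create a short-lived proxy from your X.509 grid certificate
subprocess.run(["grid-proxy-init"], check=True)

# Run a trivial command through a site's GT2 gatekeeper
subprocess.run(["globus-job-run",
                "ce.example.edu/jobmanager-condor",   # placeholder compute element
                "/bin/hostname"],
               check=True)
```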

The TeraGrid
- NSF-funded national-scale grid infrastructure
  – 11 locations – LONI, NCAR, NCSA, NICS, ORNL, PSC, IU, PU, SDSC, TACC, UC/ANL
  – 1.1 Petaflops, 161 thousand CPUs, 60 Petabytes of disk space
  – Dedicated 10G fiber lines to each location
  – Specialized visualization servers
- Uses Globus Toolkit 4's basic WS services and security protocols
- Grid Infrastructure Group (GIG) at U. Chicago
  – Committee for TeraGrid planning, management, and coordination
- Science Gateways
  – Independent services for specialized groups and organizations
  – "TeraGrid Inside" capabilities
  – Web portals, desktop apps, coordinated access points
  – Not Virtual Organizations (VOs)
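In contrast to OSG's pre-WS usage, a GT4 WS-GRAM submission looks like the rough sketch below (not from the slides); the factory endpoint is a placeholder and a valid grid proxy is assumed.

```python
# Sketch: WS-GRAM (GT4) submission via the globusrun-ws client.
import subprocess

factory = ("https://gram.example.teragrid.org:8443"
           "/wsrf/services/ManagedJobFactoryService")   # placeholder endpoint

subprocess.run(["globusrun-ws", "-submit",
                "-F", factory,          # which site's job factory to use
                "-c", "/bin/date"],     # command to run remotely
               check=True)
```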

TeraGrid Overview (site map)
- Sites: SDSC, TACC, UC/ANL, NCSA, ORNL, PU, IU, PSC, NCAR, Caltech, UNC/RENCI, UW, LONI, NICS
- Legend: Resource Provider (RP), Software Integration Partner, Grid Infrastructure Group (UChicago)

TeraGrid User Portal

EGEE
- European Commission-funded international grid system
  – 250 resource locations, 40,000 CPUs, 20 Petabytes of storage
  – Originally a European grid, but expanded to the US and Asia
- Uses the gLite middleware system
  – Uses Globus' Grid Security Infrastructure (GSI)
  – Specialized elements to utilize the underlying hardware
  – Groups are organized as Virtual Organizations (VOs); VOMS membership services enable user privileges
- Originally based on the old LHC Grid
  – EGEE-I ended in April; continued on as EGEE-II
  – Now part of WLCG
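As an illustrative sketch (not from the slides), a gLite job is described in a small JDL file and submitted through the Workload Management System; the VO name below is a placeholder, and a gLite user interface machine with VOMS configured is assumed.

```python
# Sketch: a minimal gLite JDL description submitted via the WMS.
import subprocess

jdl = """Executable    = "/bin/hostname";
StdOutput     = "std.out";
StdError      = "std.err";
OutputSandbox = {"std.out", "std.err"};
"""

with open("hello.jdl", "w") as f:
    f.write(jdl)

# Create a VOMS proxy carrying your VO membership, then submit
subprocess.run(["voms-proxy-init", "--voms", "myvo"], check=True)
subprocess.run(["glite-wms-job-submit", "-a", "hello.jdl"], check=True)
```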

Worldwide LHC Computing Grid
- Large grid to support the massive computational needs of the Large Hadron Collider at CERN
  – The project produces >15 Petabytes of data per year!
- WLCG is really a mashup of other grids
  – EGEE, OSG, GridPP, INFN Grid, NorduGrid
  – Uses specialized upperware to manage these grids
- Multi-tier system for efficiently distributing data to scientists and researchers around the world
- Used mostly for the ATLAS, ALICE, CMS, LHCb, LHCf, and TOTEM experiments

What if we could use all of the grids together?

Cyberaide Shell
- There are many different cyberinfrastructure deployments today
  – How do we make sense of them?
  – How do we use them for our benefit?
- Our idea for the Cyberaide Gridshell is to link these grid deployments together
  – Provide an easy, all-in-one interface to many different grids
  – Automate scheduling and resource management
  – Leverage Web 2.0 technologies
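The slides do not show Cyberaide's actual API, so purely as a hypothetical illustration of the "all-in-one interface" idea, the toy sketch below fans work out to different grid back ends through one call; all class and function names here are invented for this example.

```python
# Hypothetical sketch only: none of these names come from Cyberaide itself.
import subprocess

class CondorBackend:
    """Submit a prepared submit file to a local Condor pool."""
    def submit(self, payload):
        subprocess.run(["condor_submit", payload], check=True)

class GramBackend:
    """Submit a command to a GT4 WS-GRAM factory (endpoint is a placeholder)."""
    def __init__(self, factory):
        self.factory = factory
    def submit(self, payload):
        subprocess.run(["globusrun-ws", "-submit",
                        "-F", self.factory, "-c", payload], check=True)

def submit_anywhere(backends, payloads):
    # Hand each payload to its backend; a real metascheduler would also
    # choose the backend, stage data, and track job state.
    for backend, payload in zip(backends, payloads):
        backend.submit(payload)

submit_anywhere(
    [CondorBackend(),
     GramBackend("https://gram.example.org:8443"
                 "/wsrf/services/ManagedJobFactoryService")],
    ["job.sub", "/bin/date"],
)
```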
