TeraGrid
Simo Niskala, Teemu Pasanen

TeraGrid
- General
- Objectives
- Resources
- Service architecture
  - Grid Services
  - TeraGrid Application Services
- Using TeraGrid

General
- An effort to build and deploy the world's largest, fastest distributed infrastructure for open scientific research: the Extensible Terascale Facility (ETF)
- Funded by the National Science Foundation (NSF); a total of $90 million at the moment
- Partners:
  - Argonne National Laboratory (ANL)
  - National Center for Supercomputing Applications (NCSA)
  - San Diego Supercomputer Center (SDSC)
  - Center for Advanced Computing Research, Caltech (CACR)
  - Pittsburgh Supercomputing Center (PSC)
- New partners in September 2003:
  - Oak Ridge National Laboratory (ORNL)
  - Purdue University
  - Indiana University
  - Texas Advanced Computing Center (TACC)
- Provides terascale computing power by connecting several supercomputers with Grid technologies
- Will offer 20 TFLOPS when ready in 2004; the first 4 TFLOPS will be available around January 2004

Objectives
- Increase computational capabilities for the research community with geographically distributed resources
- Deploy a distributed "system" using Grid technologies, rather than a "distributed computer"
- Define an open and extensible infrastructure
  - focus on integrating "resources" rather than "sites"
  - adding resources will require significant, but not unreasonable, effort
  - support key protocols and specifications (e.g. authorization, accounting)
  - support heterogeneity while exploiting homogeneity
  - balance complexity and uniformity

Resources
- 4 clusters at ANL, Caltech, NCSA and SDSC
  - Itanium 2-based Linux clusters
  - total computing capacity of 15 TFLOPS
- Terascale Computing System (TCS-1) at PSC
  - AlphaServer-based Linux cluster, 6 TFLOPS
- HP Marvel system at PSC
  - a set of SMP machines, each with 32 x 1.15 GHz Alpha EV67 CPUs and 128 GB of memory
- ~1 petabyte of networked storage
- 40 Gb/s backplane network

Resources
- The backplane network consists of 4 x 10 Gb/s optical fiber channels
  - enables a "machine room" network across sites
  - optimized for peak requirements
  - designed to scale to a much smaller number of sites than a general WAN
  - a separate TeraGrid resource, dedicated to the data transfer needs of TeraGrid resources

Service architecture
- Grid Services (Globus toolkit)
- TeraGrid Application Services

Grid Services (service layer: functionality -> TeraGrid implementation)
- Advanced Grid Services: super-schedulers, resource discovery services, repositories, etc. -> SRB, MPICH-G2, distributed accounting, etc.
- Core Grid Services (Collective layer): TeraGrid information service, advanced data movement, job scheduling, monitoring -> GASS, MDS, Condor-G, NWS
- Basic Grid Services (Resource layer): authentication and access, resource allocation/management, data access/management, resource information service, accounting -> GSI-SSH, GRAM, Condor, GridFTP, GRIS

Advanced Grid Services
- on top of Core and Basic Services
- enhancements required for TeraGrid, for example the Storage Resource Broker (SRB)
- additional capabilities; new services possible in the future

Core Grid Services
- built on Basic Grid Services
- focus on the coordination of multiple services
- mostly implementations of Globus services (MDS, GASS, etc.)
- supported by most TeraGrid resources
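
As one concrete example of a Core Grid Service, the MDS information service can be queried with the Globus Toolkit's LDAP-based grid-info-search client. A minimal sketch, assuming a GT2-era MDS server on its standard port; the host name and search base are illustrative, not taken from the slides:

    # Anonymous (-x) query of a site's MDS server on the default
    # MDS port 2135; host name and base DN are illustrative.
    grid-info-search -x -h login.ncsa.teragrid.org -p 2135 \
        -b "mds-vo-name=local, o=grid"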

Basic Grid Services
- focus on sharing single resources
- implementations of e.g. GSI and GRAM
- should be supported by all TeraGrid resources
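
From a user's point of view, "supports GSI and GRAM" can be checked with the authenticate-only mode of globusrun. A minimal sketch, assuming a valid proxy already exists; the host name is illustrative:

    # Authenticate-only (-a) test against a resource's GRAM gatekeeper;
    # succeeds if the site accepts the user's GSI credentials.
    globusrun -a -r login.caltech.teragrid.org/jobmanager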

Grid Services
- provide clear specifications for what a resource must do in order to participate
- only the specifications are defined; implementations are left open

TeraGrid Application Services
- enable running applications on a heterogeneous system
- built on top of Basic and Core Grid Services
- under development; new service specifications to be added by current and new TeraGrid sites

TeraGrid Application Services (service -> objective)
- Basic Batch Runtime -> supports running statically linked binaries
- High Throughput Runtime (Condor-G) -> supports running naturally distributed applications using Condor-G (see the sketch below)
- Advanced Batch Runtime -> supports running dynamically linked binaries
- Scripted Batch Runtime -> supports scripting (including compilation)
- On-Demand / Interactive Runtime -> supports interactive applications
- Large-Data -> supports very large data sets, data pre-staging, etc.
- File-Based Archive -> supports a GridFTP interface to data services
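
To illustrate the High Throughput Runtime, a hypothetical Condor-G submit description might look as follows; the gatekeeper contact string and file names are assumptions for illustration, not part of the original slides:

    # htc.sub -- hypothetical Condor-G submit description (globus
    # universe) queuing ten instances of a statically linked binary;
    # the gatekeeper contact and file names are illustrative.
    universe        = globus
    globusscheduler = login.sdsc.teragrid.org/jobmanager-pbs
    executable      = my_static_binary
    output          = run.$(Process).out
    error           = run.$(Process).err
    log             = htc.log
    queue 10

Submitted with condor_submit htc.sub; Condor-G then forwards each instance through GRAM to the site's local batch system.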

Using TeraGrid: Access
- account
  - account request form
  - Globus certificate for authentication and a Distinguished Name (DN) entry
- logging in
  - single-site access requires SSH
  - multiple-site access requires GSI-enabled SSH (see the sketch below)
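
A minimal login sketch, assuming the Globus certificate is already installed and the DN is entered in the site's grid-map file; the host name is illustrative:

    # Create a short-lived proxy from the Globus certificate
    # (prompts for the certificate passphrase), then log in with
    # GSI-enabled SSH; no password is needed while the proxy is valid.
    grid-proxy-init
    grid-proxy-info                  # check the proxy's remaining lifetime
    gsissh login.ncsa.teragrid.org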

Using TeraGrid: Transferring files
- Storage Resource Broker (SRB)
  - data management tool for storing large data sets across distributed, heterogeneous storage
- High Performance Storage System (HPSS)
  - moving entire directory structures between systems
- SCP
  - copying a user's files to TeraGrid platforms
- globus-url-copy
  - transferring files between sites using GridFTP (see the sketch below)
- GSINCFTP
  - uses a proxy for authentication
  - additional software on top of the Globus toolkit
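
For example, a hypothetical site-to-site transfer with globus-url-copy; both GridFTP URLs are illustrative, and a valid proxy (grid-proxy-init) is assumed:

    # Third-party copy between the GridFTP servers of two sites.
    globus-url-copy \
        gsiftp://gridftp.sdsc.teragrid.org/users/me/results.dat \
        gsiftp://gridftp.ncsa.teragrid.org/users/me/results.dat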

Using TeraGrid: Programming environments
- IA-64 clusters (at NCSA, SDSC, Caltech, ANL)
  - Intel compilers (default), GNU; mpich-gm (default MPI compiler)
- PSC clusters
  - HP compilers (default), GNU
- SoftEnv
  - software that manages users' environments through symbolic keys (see the sketch below)
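
A sketch of SoftEnv in day-to-day use; the key names are illustrative, since the exact keys vary by site:

    # Add symbolic keys to ~/.soft, rebuild the environment in the
    # current shell with resoft, then use the mpich-gm compiler wrapper.
    echo "+intel" >> ~/.soft
    echo "+mpich-gm-intel" >> ~/.soft
    resoft
    mpicc -O2 -o hello hello.c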

Using TeraGrid: Running jobs
- Grid tools
  - Condor-G
  - Globus toolkit
- PBS (Portable Batch System)
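
A sketch of the two submission paths, with an illustrative gatekeeper contact and node count:

    # Through Globus GRAM, which hands the job to the site's PBS:
    globus-job-submit login.ncsa.teragrid.org/jobmanager-pbs /bin/hostname

    # Or directly to PBS when logged in at a single site:
    echo "/bin/hostname" | qsub -l nodes=4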

TeraGrid www.teragrid.org