Overview of TeraGrid Resources and Services. Sergiu Sanielevici, TeraGrid Area Director for User Services Coordination.

Presentation transcript:

Overview of TeraGrid Resources and Services
Sergiu Sanielevici, TeraGrid Area Director for User Services Coordination
Pittsburgh Supercomputing Center
April 2006 / June 2006

TeraGrid: Integrating NSF Cyberinfrastructure
Resource Provider sites: SDSC, NCAR (joining mid-2006), TACC, UC/ANL, NCSA, ORNL, PU, IU, PSC

NSF TeraGrid in Context
[Timeline figure, 1/85 through 1/10: NSF Centers Program, then PACI, then TeraGrid construction followed by operation and enhancement]

The TeraGrid Facility
Grid Infrastructure Group (GIG)
– University of Chicago
– TeraGrid integration, planning, management, coordination
Resource Providers (RPs)
– Currently NCSA, SDSC, PSC, Indiana, Purdue, ORNL, TACC, UC/ANL; additional RPs in discussion
– Systems (resources, services) support and user support
– Provide access to resources via policies, software, and mechanisms coordinated by and provided through the GIG
The Facility
– An integrated set of HPC resources providing NSF scientists with access to resources and collections of resources through unified user support, coordinated software and services, and extensive documentation and training
The Federation
– Interdependent partners working together under the direction of an overall project director, the GIG PI

It's All in These URLs
– TG home page:
– TG User Portal:

TeraGrid Resources (updated by Kelly Gaither)
100+ TF across 8 distinct architectures; 3 PB online disk; >100 data collections.

ANL/UC
– Compute: Itanium2 (0.5 TF), IA-32 (0.5 TF)
– Online storage: 20 TB; network: 30 Gb/s to CHI hub
– Visualization: RI, RC, RB; IA-32, 96 GeForce 6600GT
IU
– Compute: Itanium2 (0.2 TF), IA-32 (2.0 TF)
– Online storage: 32 TB; mass storage: 1.2 PB; network: 10 Gb/s to CHI hub
– Data collections: 5 collections, >3.7 TB; access via URL/DB/GridFTP
– Instruments: Proteomics, X-ray crystallography
NCSA
– Compute: Itanium2 (10.7 TF), SGI SMP (7.0 TF), Dell Xeon (17.2 TF), IBM p690 (2 TF), Condor Flock (1.1 TF)
– Online storage: 1140 TB; mass storage: 5 PB; network: 30 Gb/s to CHI hub
– Data collections: >30 collections; access via URL/SRB/DB/GridFTP
– Visualization: RB; SGI Prism, 32 graphics pipes; IA-32
ORNL
– Compute: IA-32 (0.3 TF)
– Online storage: 1 TB; network: 10 Gb/s to ATL hub
– Data collections: 4 collections, 7 TB; access via SRB/Portal/OPeNDAP
– Instruments: SNS and HFIR facilities
PSC
– Compute: XT3 (10 TF), TCS (6 TF), Marvel SMP (0.3 TF)
– Online storage: 300 TB; mass storage: 2.4 PB; network: 30 Gb/s to CHI hub
– Visualization: RI, RB; IA-32 + Quadro4 980 XGL
Purdue
– Compute: Hetero (1.7 TF), IA-32 (11 TF), Opportunistic
– Online storage: 26 TB; mass storage: 1.3 PB; network: 10 Gb/s to CHI hub
– Visualization: RB; IA-32, 48 nodes
SDSC
– Compute: Itanium2 (4.4 TF), Power4+ (15.6 TF), Blue Gene (5.7 TF)
– Online storage: 1400 TB; mass storage: 6 PB; network: 40 Gb/s to LA hub
– Data collections: >70 collections, >1 PB; access via GFS/SRB/DB/GridFTP
– Visualization: RB
TACC
– Compute: IA-32 (6.3 TF)
– Online storage: 50 TB; mass storage: 2 PB; network: 10 Gb/s to CHI hub
– Data collections: 4 collections; access via SRB/Web Services/URL
– Visualization: RI, RC, RB; UltraSPARC IV, 512 GB SMP, 16 graphics cards

Visualization codes: RI = remote interactive, RC = RI/collaborative, RB = remote batch.

TeraGrid Facility Today
Heterogeneous resources at autonomous Resource Provider sites, each offering a local value-added user environment on top of the common TeraGrid computing environment:
– A single point of contact for help
– Integrated documentation and training
– A common allocation process
– A common baseline user environment
– Services to assist users in harnessing the right TeraGrid platforms for each part of their work
– Enhancements driven by users
– Science Gateways to engage broader communities

Current Menu of Compute Resources
Cross-site IA-64 cluster (DTF)
– IBM Itanium2/Myrinet at NCSA, SDSC, ANL: ~15.6 TF, 5.2 TB memory combined
Single-site IA-32 and IA-64 clusters
– NCSA, TACC, Purdue, IU, ANL, ORNL: ~32 TF in all!
Tightly coupled MPP systems
– PSC XT3 (10 TF) + TCS (6 TF); SDSC Blue Gene (5.7 TF)
SMP systems
– SDSC Power4+ (15.6 TF); NCSA Altix (7 TF) + p690 (2 TF); PSC Marvel (0.3 TF)
Mix and match with data and visualization resources. A minimal code sketch follows.
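What ties this menu together is that ordinary message-passing code ports across it. As a minimal illustration (not from the original slides; compiler wrappers, module names, and batch submission differ per site), a plain MPI program in C like the following can be built and run on the clusters, MPPs, and SMPs alike:

    #include <mpi.h>
    #include <stdio.h>

    /* Minimal portability check: each rank reports which host it landed on.
       Illustrative sketch only; launch it through each site's batch system. */
    int main(int argc, char **argv)
    {
        int rank, size, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */
        MPI_Get_processor_name(name, &len);     /* host the rank runs on */
        printf("rank %d of %d running on %s\n", rank, size, name);
        MPI_Finalize();
        return 0;
    }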

Exploring the TeraGrid
Get started with a development grant: TG-DAC, up to 30K SU, Roaming*
– Experiment with your codes and tasks on the various resources for about a year
– Find the best mapping of your research task flow to the resources; the scenarios we'll present today may suggest possible answers
– Document this mapping to write a production proposal: define a science goal and discuss how many SUs you will need on which systems to accomplish it over 1 or 2 CYs
*Roaming means never having to say which system.
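To put the 30K SU cap in perspective, here is a hypothetical back-of-the-envelope sizing in the same spirit (assuming 1 SU is roughly one CPU-hour; every number below is invented for illustration):

    #include <stdio.h>

    /* Hypothetical DAC sizing: none of these numbers come from the slides. */
    int main(void)
    {
        int    cpus_per_run  = 64;   /* processors used by a typical test run (assumed) */
        double hours_per_run = 6.0;  /* wall-clock hours per run (assumed) */
        int    num_runs      = 75;   /* porting + benchmarking runs over the year (assumed) */

        double su = cpus_per_run * hours_per_run * num_runs;  /* = 28,800 SU */
        printf("estimated development request: %.0f SU (DAC cap: 30,000)\n", su);
        return 0;
    }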

Peer-Reviewed Production Grants
Large (>200K SU; start 4/1 or 10/1) or Medium (start 1/1, 4/1, 7/1, or 10/1). Supersize it!
Specific or Roaming? It depends on the outcome of your task-flow mapping:
– If a task works best on a specific system, ask for that system by name. Extrapolate your DAC benchmarks to justify the request.
– You can ask for specific allocations on several systems.
– But if it's best for you to use a large number of TG systems (e.g., any/all clusters/SMPs/MPPs), a roaming allocation will free you from the need to predict which task you will run on which machine!
– Roaming jobs may get lower priority on machines that have been assigned many specific allocations.
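To make the extrapolation step concrete with invented numbers: if DAC benchmarking shows that one production-scale run costs about 2,500 SU on a particular system, and the science goal calls for roughly 100 such runs over two years, the specific request on that system would be about 2,500 × 100 = 250,000 SU, which puts the proposal in the Large category (>200K SU).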

We're Here to Work With You!
– Personal consultant contact upon receiving a production grant
– Documentation, and help for any problems, at:
– ASTA Program: intensive help from our consultants, in a focused effort to optimize how effectively your application uses TeraGrid resources to achieve a scientific breakthrough. Tough to get into, but worth it! Talk to us…
– Science Gateways: enable entire communities of users associated with a common scientific goal to use TeraGrid resources through a common interface. Contact:

And Now: A Word About Safety and Security on the TeraGrid!
Over to Jim, our much-feared Chief of Security. Thank you, and please enjoy this tutorial and this first annual TeraGrid conference!