RDAV Update
Sean Ahern, Director of the UT RDAV Center
Science Advisory Board Meeting, July 2010

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Executive summary
- RDAV resources (Nautilus) are now in the allocations system, and several requests have been made. The largest is from Jim Kinter, who is here at the SAB meeting.
- Although much of our hardware and software is in place and ready to go, engineering delays from SGI have pushed out our timeline for full deployment.
- This SGI UltraViolet is one of the first off the assembly line, with more I/O bandwidth (30–50 GB/s) than any other UltraViolet system ordered.
- The current timeframe for full availability is mid-August.
- We continue to pursue early-customer activities and educational opportunities.

Hardware status
- Nautilus: Engineering delays from SGI have pushed our full acceptance date back a few weeks. One quarter of the full UltraViolet machine is going through acceptance testing now and seems to be working well. The rest of the machine arrives at NICS today (7/18/10) and will be fully available in mid-August.
- Graphics cards: NVIDIA has not met its deadlines for full scalability; the timeframe for the 16 Tesla S2050s is the fall. Rather than further delay delivery to the NSF, we decided to deliver the machine without the full GPU complement. Four prior-generation NVIDIA GPUs will be available in the interim.
- Parallel filesystem: We expect a full 1 PB parallel filesystem to be available on Nautilus on August 1.
- Portal: Our portal system is operational, and we are working to deploy new capabilities on it.

Software and environment status
- VisIt has been ported and runs well on our smaller system.
- Remote visualization systems (NX, VNC) are deployed and secured.
- Workflow systems appear to be working.
- Compiling environments work well, though we are waiting on SGI for the full complement (primarily the Intel compilers).
- We are exploring issues with job placement, NUMA-aware scheduling, and GPU scheduling through our job management system (MOAB/Torque).
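The job-placement and NUMA-aware scheduling issues mentioned above come down to keeping a process and its memory on the same part of a large shared-memory machine. As a minimal illustration of the underlying mechanism (outside MOAB/Torque, which handles this in production), Linux exposes per-process CPU affinity; the core numbers below are arbitrary examples, not Nautilus's actual topology:

```python
import os

def pin_to_cores(cores):
    """Restrict the calling process to the given CPU cores.

    Hypothetical sketch of NUMA-aware placement: confining a process
    to the cores of one node keeps its memory accesses local.
    """
    os.sched_setaffinity(0, set(cores))  # 0 means "this process"
    return os.sched_getaffinity(0)       # report the resulting mask

if __name__ == "__main__":
    allowed = pin_to_cores([0])          # confine to core 0 as an example
    print(sorted(allowed))
```

A batch scheduler does the same kind of thing at job launch, choosing the core set per job rather than hard-coding it.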

RDAV Portal
Completed:
- SSH-VNC portlet for remote access to RDAV resources, tested for functionality
- Integration of access to Dashboard components, tested
Continuing work:
- TeraGrid authentication
- TeraGrid queue and account information access
To do:
- RDAV visualization tool launcher
- Semantic visualization product store
- Workflow manager portlet

RDAV Portal: Main Portal

RDAV Portal: eSimMon Dashboard

Work with early users
Staff are working directly with three early users to exercise the early system:
- Bronson Messer (supernova simulation): Worked on efficiently converting large parallel data into a format suitable for exploratory visualization with VisIt; scripted batch processing to generate imagery for the Bellerophon portal system.
- Stephen Miller (Spallation Neutron Source experiment): Rewrote the rebinning filter that processes experimental data from scattering experiments into a usable format (datasets from 4 GB to several TB); wrote custom VisIt scripts to produce visualizations that smooth the workflow.
- Lou Gross (National Institute for Mathematical and Biological Synthesis): Statistical analysis of species diversity in the Great Smoky Mountains National Park; novel visualizations of species presence and absence predictions.
[Slide figures: Great Smoky Mountains National Park MaxEnt prediction for Acer saccharum; supernova entropy.]
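The rebinning step mentioned for the SNS scattering data can be sketched in plain Python: counts in fine input bins are redistributed onto coarser output bins in proportion to how much each input bin overlaps each output bin. This is a generic illustration of the technique under that assumption, not RDAV's actual filter; the function name and bin edges are invented for the example.

```python
def rebin(in_edges, counts, out_edges):
    """Redistribute histogram `counts` from bins `in_edges`
    onto the coarser bins `out_edges`, conserving total counts."""
    out = [0.0] * (len(out_edges) - 1)
    for i, c in enumerate(counts):
        lo, hi = in_edges[i], in_edges[i + 1]
        width = hi - lo
        for j in range(len(out)):
            olo, ohi = out_edges[j], out_edges[j + 1]
            # Fraction of input bin i that falls inside output bin j.
            overlap = max(0.0, min(hi, ohi) - max(lo, olo))
            if overlap > 0.0:
                out[j] += c * overlap / width
    return out

if __name__ == "__main__":
    # Four unit-width bins collapsed into two double-width bins.
    fine = [0.0, 1.0, 2.0, 3.0, 4.0]
    coarse = [0.0, 2.0, 4.0]
    print(rebin(fine, [1, 2, 3, 4], coarse))  # -> [3.0, 7.0]
```

A production filter over terabyte-scale event data would stream and parallelize this, but the overlap-weighting logic is the same.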

Education, Outreach, and Training activities
- We taught a joint visualization class with TACC at the Petascale Programming Environments and Tools classes on 9 July.
- We will teach a tutorial on using Nautilus for visualization, data analysis, and workflow management at the TeraGrid '10 conference.