
Project Overview:

Longhorn Project Overview
Program: NSF XD Vis
Purpose: provide remote interactive visualization and data analysis services to the national science community
Project Duration: August 1, 2009 – July 31, 2012
Partners and Roles:
– Kelly Gaither (TACC, PI)
– Valerio Pascucci, Chuck Hansen (University of Utah, co-PIs)
– David Ebert (Purdue University, co-PI)
– John Clyne (NCAR, co-PI)
– Hank Childs (UC Davis/LBL, software)
– Linda Akli (SURA, MSI outreach)

Longhorn Configuration:

Longhorn: First NSF XD Visualization Resource
256 Dell dual-socket, quad-core Intel Nehalem nodes:
– 240 with 48 GB shared memory/node (6 GB/core)
– 16 with 144 GB shared memory/node (18 GB/core)
– 73 GB local disk per node
– 2 NVIDIA Quadro FX 5800 GPUs per node (4 GB RAM each)
~14.5 TB aggregate memory
QDR InfiniBand interconnect
Direct connection to Ranger’s Lustre parallel file system
10G connection to a 210 TB local Lustre parallel file system
Jobs launched through SGE
Totals: 256 nodes, 2,048 cores, 512 GPUs, 14.5 TB memory

Longhorn’s Lustre File System ($SCRATCH)
– OSSs on Longhorn are built on Dell Nehalem servers connected to MD1000 storage vaults
– 15 drives total, configured into two RAID5 arrays with a wandering spare
– Peak throughput speed of the file system: 5.86 GB/s
– Peak aggregate speed of the file system: 5.43 GB/s

Longhorn Usage Modalities
Remote/interactive visualization:
– Highest-priority jobs
– Remote/interactive capabilities are provided through VNC
– Run with a 3-hour queue limit
GPGPU jobs:
– Lower priority than remote/interactive jobs
– Run with a 12-hour queue limit
CPU jobs with higher memory requirements:
– Lowest priority: run only when neither remote/interactive nor GPGPU jobs are waiting in the queue
– Run with a 12-hour queue limit

Longhorn Queue Structure
Example: qsub -q normal -P vis
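The -P option shown above selects the project class, which determines the scheduling priority described in the usage modalities. A sketch of how the three job classes might be submitted is below; only `-q normal -P vis` appears in the original, so the `gpgpu` and `data` project names and the script names are assumptions:

```shell
# Remote/interactive visualization job: highest priority, 3-hour limit
qsub -q normal -P vis /share/sge/default/pe_scripts/job.vnc

# GPGPU job: lower priority, 12-hour limit
# (the project name "gpgpu" and the script name are assumptions)
qsub -q normal -P gpgpu my_gpgpu_job.sge

# Higher-memory CPU job: lowest priority, 12-hour limit
# (the project name "data" and the script name are assumptions)
qsub -q normal -P data my_analysis_job.sge
```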

Software Available on Longhorn
Programming APIs (not natively parallel):
– OpenGL: low-level primitives, useful for programming at a relatively low level with respect to graphics
– VTK (Visualization Toolkit): open-source software system for 3D computer graphics, image processing, and visualization
– IDL
Visualization turnkey systems:
– VisIt: free, open-source parallel visualization and graphical analysis tool
– ParaView: free, open-source general-purpose parallel visualization system
– VAPOR: free flow-visualization package developed at NCAR
– EnSight: commercial turnkey parallel visualization package targeted at CFD visualization
– Amira: commercial turnkey visualization package targeted at visualizing scanned medical data (CT scan, MRI, etc.)
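Inside an interactive VNC session, the turnkey tools above are typically started from the command line; TACC systems manage installed software with environment modules. A minimal sketch, in which the exact module names are assumptions:

```shell
# Load and launch VisIt (module name is an assumption)
module load visit
visit &

# Load and launch ParaView (module name is an assumption)
module load paraview
paraview &
```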

Accessing Longhorn:

Connecting to Longhorn Using VNC
From your laptop or workstation, log into longhorn and submit the VNC job:
  qsub /share/sge/default/pe_scripts/job.vnc
  touch ~/vncserver.out
  tail -f ~/vncserver.out   (contains the VNC port info after the job launches)
The VNC server itself runs on a vis node, ivis[1-7|big].
Without SSH tunneling: point vncviewer at longhorn directly; longhorn automatically forwards the port to the vis node.
With SSH tunneling: (1) ssh -L establishes a secure tunnel from the laptop or workstation to the longhorn VNC port; (2) vncviewer localhost:: connects to the local end of the tunnel, and the localhost connection is forwarded to longhorn via the SSH tunnel.
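Concretely, the tunneled connection might look as follows. The port number 5902, the username, and the hostname longhorn.tacc.utexas.edu are placeholders; the actual port is reported in ~/vncserver.out once the job launches:

```shell
# On Longhorn: submit the VNC job and watch for the assigned port
qsub /share/sge/default/pe_scripts/job.vnc
touch ~/vncserver.out
tail -f ~/vncserver.out          # reports the VNC port, e.g. 5902

# On the local workstation: open a tunnel to the reported port
# (5902 and the username are placeholders)
ssh -f -N -L 5902:longhorn.tacc.utexas.edu:5902 username@longhorn.tacc.utexas.edu

# Point the viewer at the local end of the tunnel
vncviewer localhost::5902
```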

Longhorn Visualization Portal (portal.longhorn.tacc.utexas.edu)
– Developed to provide easy access to Longhorn and to abstract away the complexities of command-line access
– Developed in conjunction with the TeraGrid User Portal, sharing a fraction of the TGUP developers to ensure continuity
– Used for all in-person Longhorn training

Longhorn Visualization Portal (portal.longhorn.tacc.utexas.edu)
– Specify the type of session
– Specify the resolution of the VNC session
– Specify the number of nodes needed and the wayness of the nodes
– Provides a graphic of the machine load

Longhorn Visualization Portal (portal.longhorn.tacc.utexas.edu)
– The VNC session opens in a Java-enabled browser
– Behaves as if the user had a remote desktop on Longhorn

Longhorn Visualization Portal (portal.longhorn.tacc.utexas.edu)
– 3,453 jobs submitted through the portal
– All visualization training on Longhorn submits through the Longhorn Portal
[Chart of portal job submissions over time omitted]

Longhorn Documentation and Training:

Longhorn User Guides and Training Dates: user-guide

Training Statistics: 1/4/2010 – 12/31/2010
[Chart of people trained in person over time omitted]

Longhorn Usage Statistics:

Usage on Longhorn: 1/4/2010 – 1/18/2011
– … active projects
– 48,457 jobs run on the system
– 5,456,155 SUs expended on the system

Usage by Job Type: 1/4/2010 – 1/18/2011
[Chart omitted; numbers at top indicate a snapshot in time]