TeraGrid Quarterly Meeting, Arlington, VA, Sep 6-7, 2007: NCSA RP Status Report


HPCOPS
- Supplemental award to the TG RP award
- John Towns will be PI; co-PIs: Alameda, M. Butler, T. Cockerill, M. Pflugmacher
- Operational support
  - System administration and maintenance for existing NCSA resources not already on the original TG RP award
  - Networking
  - Grid software support / TeraGrid integration
  - Local allocations and account management
- User support
  - Local helpdesk
  - Consulting
  - Advanced applications support
- Training and Documentation
- Outreach and Education
- Proposal is operational in nature: no projects, no milestones
- No funds for Development Activities

Systems
- Abe
  - 90 TF Dell blade cluster: 1,200 dual-socket, quad-core Intel 64 nodes, InfiniBand, 9.6 TB memory, 180 TB storage, Red Hat Enterprise Linux 4 (kernel 2.6.9)
  - 50% of Abe's cycles available to xRAC; system procured with State of Illinois funding
  - Recommended use: jobs requiring more than 1,000 cores
  - Not a Roaming resource, but runs CTSS4 for multi-resource projects such as Coveney's GENIUS
- Tungsten
  - 17 TF Dell cluster: 1,280 dual-socket Intel Xeon (IA-32) nodes, Myrinet, 4 TB memory, 140 TB storage, Red Hat Linux 9.0
  - Recommended use: highly parallel codes needing a 32-bit environment and codes that perform well in a distributed cluster environment; requests for extended access to dual-processor nodes for larger-scale runs are encouraged
- Cobalt
  - 6.6 TF SGI SMP: 1,024 Intel Itanium 2 (IA-64) processors, NUMAlink, 4 TB memory, 250 TB storage, SGI ProPack (kernel 2.6.5)
  - Upgrade to ProPack 5 in progress
  - Recommended use: jobs with moderate levels of parallelism, large shared memory (over 250 GB), codes with high inter-processor communication or inherent load-balancing challenges that perform better in an SMP environment, large-scale interactive data analysis, and high-end visualization (via access to the Prism systems)
- Mercury
  - 10.2 TF IBM cluster: 887 dual-socket Intel Itanium 2 (IA-64) nodes, Myrinet, 4.6 TB memory, 80 TB storage, SuSE Linux SLES 8 (kernel 2.4.21)
  - OS upgrade planned this year
- Copper (IBM p690): retiring Sep 30, 2007
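To make the Abe usage guideline concrete, the following is a minimal, hypothetical Python sketch of submitting a job larger than 1,000 cores. It assumes a Torque/PBS-style batch system (qsub) and a generic "mpirun" MPI launcher; actual queue names, module setup, and the MPI launcher on Abe may differ, and the script and function names here are illustrative only.

    # Hypothetical illustration: write a PBS batch script sized above the
    # 1,000-core recommended-use threshold and submit it with qsub.
    import subprocess
    import textwrap

    def submit_large_job(exe_path, nodes=128, cores_per_node=8, walltime="12:00:00"):
        # 128 nodes x 8 cores/node = 1,024 cores, above the 1,000-core guideline
        total_cores = nodes * cores_per_node
        script = textwrap.dedent(f"""\
            #!/bin/bash
            #PBS -l nodes={nodes}:ppn={cores_per_node}
            #PBS -l walltime={walltime}
            #PBS -N large_parallel_job
            cd $PBS_O_WORKDIR
            mpirun -np {total_cores} {exe_path}
        """)
        with open("large_job.pbs", "w") as f:
            f.write(script)
        # qsub prints the new job identifier on stdout
        result = subprocess.run(["qsub", "large_job.pbs"],
                                capture_output=True, text=True, check=True)
        return result.stdout.strip()

    if __name__ == "__main__":
        print(submit_large_job("./my_mpi_app"))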

Science Gateways Support
- GISolve
  - GT4 GRAM auditing tools integration
    - Evaluate which auditing capabilities will be of interest to the GISolve SGW PI and their community of users
    - Integrate the tools necessary for accessing GRAM auditing capabilities into the GISolve SGW
  - Community restricted shell
    - Integrate use of the community restricted shell capability into the GISolve SGW
- LEAD
  - Support testing of reliability/scalability issues of GT WS GRAM
  - Support testing of reliability/scalability issues of GT GridFTP
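As an illustration of the kind of GridFTP reliability/scalability testing mentioned above, here is a hypothetical Python sketch that repeats a transfer with globus-url-copy and records timings and failures. The endpoint URL, destination path, and trial count are placeholders; it assumes globus-url-copy is on the PATH and that a valid grid proxy has already been created (e.g., with grid-proxy-init).

    # Hypothetical sketch: repeat a GridFTP transfer and report timings/failures.
    import subprocess
    import time

    SOURCE = "gsiftp://gridftp.example.teragrid.org/scratch/testfile"  # placeholder URL
    DEST = "file:///tmp/testfile"

    def run_transfer_trials(trials=20):
        failures = 0
        durations = []
        for i in range(trials):
            start = time.time()
            # raises FileNotFoundError if globus-url-copy is not installed
            proc = subprocess.run(["globus-url-copy", SOURCE, DEST],
                                  capture_output=True, text=True)
            elapsed = time.time() - start
            if proc.returncode != 0:
                failures += 1
                print(f"trial {i}: FAILED after {elapsed:.1f}s: {proc.stderr.strip()}")
            else:
                durations.append(elapsed)
                print(f"trial {i}: ok in {elapsed:.1f}s")
        if durations:
            mean = sum(durations) / len(durations)
            print(f"mean transfer time: {mean:.1f}s, failures: {failures}/{trials}")

    if __name__ == "__main__":
        run_transfer_trials()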

Projects
- NCSA's TG RP/HPCOPS effort is highly operational in nature, with projects generated as needed
- Current projects
  - LUSTRE-WAN and GPFS-WAN: adding servers and disk to enable a 10 Gb/sec connection so we can get back on the testbed and drive toward production
  - Abe integration
  - Dark Energy Survey database
  - MSI workshop
  - Science Gateway resource utilization/optimization

TeraGrid Quarterly Meeting Arlington, VA Sep 6-7, 2007  Additional slides with more detail on LEAD Weather Challenge

LEAD WxChallenge Support
- 67 institutions participating
- 120 jobs x 16 processors per job, roughly 5,000 compute hours per day
- About 0.5 million CPU hours total
- Dedicated reservations on 2 NCSA resources during contest hours
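As a rough consistency check on these figures: 120 jobs per day x 16 processors is 1,920 processors in use, so 5,000 compute hours per day works out to roughly 2.6 hours per job, and at that daily rate the ~0.5 million CPU-hour total corresponds to about 100 contest days.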

LEAD Research Support
- Four projects (3 idealized, 1 "real data")
  - Cell interaction: role of nearby cells on rotation and intensity (NSF funded)
  - Squall lines: role of latent cooling in MCS behavior (modified microphysics; NSF funded)
  - Gravity waves: study of intensity vs. environment (NSF funded; M.S. student)
  - Mesoscale ensemble modeling (cool and warm season; initial-data and physics perturbations)
- All will utilize the ensemble broker plus MyLEAD
- Brian Jewett, UIUC

LEAD Research Support (continued). Credit: Brian Jewett, UIUC