
State of the Labs: NERSC Update
Juan Meza, Lawrence Berkeley National Laboratory
SOS8, Charleston, SC, April 12-14, 2004

NERSC Center Overview
• Funded by DOE; annual budget $28M, about 65 staff
• Supports open, unclassified, basic research
• Located in the hills next to the University of California, Berkeley campus
• Close collaborations between the university and NERSC in computer science and computational science
• Close collaboration with about 125 scientists in the Computational Research Division at LBNL

NERSC System Architecture (system diagram)
• IBM SP NERSC-3 "Seaborg": 6,656 processors (peak 10 TFlop/s), 7.8 TB memory, 44 TB disk; Ratio = (8, 7)
• PDSF: 400 processors (peak 375 GFlop/s), 360 GB memory, 35 TB disk, Gigabit and Fast Ethernet; Ratio = (1, 93)
• LBNL "Alvarez" cluster: 174 processors (peak 150 GFlop/s), 87 GB memory, 1.5 TB disk, Myrinet 2000; Ratio = (0.6, 100)
• HPSS archival storage: 12 IBM SP servers, 15 TB of cache disk, 8 STK robots, 44,000 tape slots, maximum capacity 5-8 PB
• Visualization server "escher": SGI Onyx 3400, 12 processors, 2 Infinite Reality 4 graphics pipes, 24 GB memory, 4 TB disk
• Symbolic manipulation server, plus testbeds and servers
• Networking: Gigabit Ethernet (standard and jumbo frames), 10/100 Mb Ethernet, FC disk, ESnet connection at OC-48 (2400 Mbps)
• Ratio = (RAM bytes per flop, disk bytes per flop)
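As a rough guide, the sketch below (my own arithmetic, assuming the peak, memory, and disk figures listed above; not part of the original slide) shows how the (RAM bytes per flop, disk bytes per flop) ratios are derived. The ratio tuples as printed in this transcript appear partly garbled, so treat the recomputed values as approximations.

# Recompute Ratio = (RAM bytes per flop, disk bytes per flop) from the specs above.
TB = 1e12
GB = 1e9
systems = {
    "Seaborg (NERSC-3)": dict(peak_flops=10e12, ram=7.8 * TB, disk=44 * TB),
    "PDSF":              dict(peak_flops=375e9, ram=360 * GB, disk=35 * TB),
    "Alvarez":           dict(peak_flops=150e9, ram=87 * GB,  disk=1.5 * TB),
}
for name, s in systems.items():
    print(name, (round(s["ram"] / s["peak_flops"], 1),
                 round(s["disk"] / s["peak_flops"], 1)))
# e.g. Seaborg -> (0.8, 4.4), PDSF -> (1.0, 93.3), Alvarez -> (0.6, 10.0)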

Accomplishments
• High End Systems
  – NERSC 3 ("Seaborg") 10 TFlop/s system in full production
  – Increased HPSS storage capacity to more than 8 PB
  – Evaluation of alternative architectures (SX-6, X-1, ES, BG/L)
  – Initiated procurement of NCS (New Computational System)
• Comprehensive Scientific Support
  – Reached more than 95% utilization on Seaborg
  – Excellent results in the user survey
• Intensive Support for Scientific Challenge Teams
  – INCITE allocations and SciDAC projects
• Unified Science Environment
  – All NERSC systems on the grid (February 2004)

Immediate High Utilization of "Seaborg" [utilization chart; 90% marker]

NERSC FY 03 Usage by Institution Type

FY03 Leading DOE Laboratory Usage (>500,000 processor hours)

FY 03 Usage by Scientific Discipline

Terascale Simulations of Supernovae
• PI: Tony Mezzacappa, ORNL
• Allocation category: SciDAC
• Code: neutrino scattering on lattices (OAK3D)
• Kernel: complex linear equations
• Performance: 537 MFlop/s per processor (35% of peak)
• Scalability: 1.1 TFlop/s on 2,048 processors
• Allocation: 565,000 MPP hours; requested and needs 1.52 million
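A quick back-of-the-envelope check (my own arithmetic, not from the slide, assuming Seaborg's POWER3 per-processor peak of 1.5 GFlop/s) ties the per-processor and aggregate numbers together:

# Per-processor efficiency and aggregate rate for the OAK3D runs above,
# assuming a 1.5 GFlop/s per-processor peak on Seaborg.
per_proc_mflops = 537.0
assumed_peak_mflops = 1500.0
processors = 2048

print(f"{100 * per_proc_mflops / assumed_peak_mflops:.0f}% of peak")   # ~36%; the slide rounds to 35%
print(f"{per_proc_mflops * processors / 1e6:.1f} TFlop/s on {processors} processors")  # ~1.1 TFlop/s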

Simulation Matches Gamma Ray Burst
• SciDAC project by Stan Woosley et al., UC Santa Cruz
• In March 2003 the HETE satellite observed an unusually close and bright GRB
• The "Rosetta stone" of GRBs, because it conclusively established that at least some long GRBs come from supernovae
• By 1993, 135 different theories on the origin of GRBs had been published in scientific journals
• NERSC simulations show that the "collapsar" model best describes the data
[1] J. Hjorth, J. Sollerman, P. Møller, J. P. U. Fynbo, S. E. Woosley, et al., "A very energetic supernova associated with the γ-ray burst of 29 March 2003," Nature 423, 847 (2003).

ProteinShop: Computational Steering of Protein Folding
• Teresa Head-Gordon et al., UC Berkeley (optimization and protein folding), and Silvia Crivelli et al., LBNL (visualization)
• ProteinShop incorporates inverse kinematics from robotics and video gaming to let biologists manipulate proteins interactively
• Optimization then finds a local energy minimum on Seaborg
• Permits a much larger search space and the integration of intuitive knowledge
• Best paper award at the IEEE Visualization Conference; "most innovative" at CASP; submitted for an R&D 100 award
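The "pose interactively, then relax" idea can be illustrated with a toy sketch. This is not ProteinShop code; the energy function and starting angles below are invented purely for illustration.

# Toy illustration: a user-posed set of backbone dihedral angles is relaxed
# to the nearest local minimum of a made-up, many-minima "energy" function.
import numpy as np
from scipy.optimize import minimize

def toy_energy(angles):
    # Invented stand-in for a molecular-mechanics potential.
    return np.sum(1.0 + np.cos(3.0 * angles)) + 0.01 * np.sum(angles ** 2)

user_pose = np.radians([150.0, -60.0, 45.0])   # angles set by hand in a GUI
result = minimize(toy_energy, user_pose, method="L-BFGS-B")
print("relaxed angles (deg):", np.degrees(result.x))
print("local-minimum energy:", float(result.fun))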

INCITE
• INCITE (Innovative and Novel Computational Impact on Theory and Experiment) devotes 10% of NERSC resources (4.9M hours) to the most significant science, regardless of DOE affiliation
• Proposal demographics:
  – 52 proposals received
  – 130,508,660 CPU hours requested (one proposal alone asked for 71,761,920 hours; the rest were well justified, each under 5M hours)
  – An oversubscription of 13 to 29 times
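The oversubscription figure is straightforward arithmetic; here is a minimal sketch (my own numbers, using the 4.9M-hour pool quoted above, so the slide's exact 13x-29x range presumably reflects a slightly different base or rounding):

# Oversubscription = hours requested / hours available, with and without the
# single 71.8M-hour outlier request. With the 4.9M-hour pool this gives
# roughly 12x and 27x; the slide reports 13x to 29x.
requested_total = 130_508_660
largest_single = 71_761_920
available_hours = 4_900_000

print(f"{(requested_total - largest_single) / available_hours:.0f}x "
      f"to {requested_total / available_hours:.0f}x")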

FY04 INCITE Awards (Innovative and Novel Computational Impact on Theory and Experiment)
• Quantum Monte Carlo Study of Photosynthetic Centers; William Lester, Berkeley Lab
• Stellar Explosions in Three Dimensions; Tomasz Plewa, University of Chicago
• Fluid Turbulence; P.K. Yeung, Georgia Institute of Technology

Thank you