
SAN DIEGO SUPERCOMPUTER CENTER at the UNIVERSITY OF CALIFORNIA, SAN DIEGO
Michael L. Norman, Principal Investigator and Interim Director, SDSC
Allan Snavely, Co-Principal Investigator and Project Scientist

What is Gordon?
- A "data-intensive" supercomputer based on SSD flash memory and virtual shared memory
- Emphasizes memory (MEM) and I/O over FLOPS
- A system designed to accelerate access to the massive databases being generated in all fields of science, engineering, medicine, and social science
- The NSF's most recent Track 2 award to the San Diego Supercomputer Center (SDSC)
- Coming summer 2011

Why Gordon?
- Growth of digital data is exponential: a "data tsunami"
- Driven by advances in digital detectors, networking, and storage technologies
- Making sense of it all is the new imperative:
  - data analysis workflows
  - data mining
  - visual analytics
  - multiple-database queries
  - on-demand, data-driven applications

The Memory Hierarchy
- Flash SSD fills the gap between DRAM and disk: O(TB) capacity at roughly 1,000-cycle access latency
- Potential 10x speedup for random I/O to large files and databases
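To put the slide's figures in context, here is a back-of-the-envelope sketch. Only the 1,000-cycle flash latency comes from the slide; the clock rate and the DRAM and disk latencies are assumed, typical-for-2010 values.

    # latency_tiers.py: rough placement of a ~1,000-cycle flash tier between
    # DRAM and spinning disk for random accesses.
    CLOCK_HZ = 2.4e9                     # assumed core clock
    tiers = {
        "DRAM":      100e-9,             # assumed ~100 ns random access
        "flash SSD": 1000 / CLOCK_HZ,    # ~417 ns, per the slide's 1,000 cycles
        "disk":      5e-3,               # assumed ~5 ms average seek
    }
    for name, latency in tiers.items():
        print(f"{name:9s} ~{latency * 1e6:10.2f} us -> ~{1 / latency:13,.0f} random accesses/s")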

Gordon Architecture: "Supernode"
- 32 Appro Extreme-X compute nodes, each with dual Intel Sandy Bridge processors, 240 GFLOPS, and 64 GB RAM
- 2 Appro Extreme-X I/O nodes, each with 4 TB of Intel SSD drives delivering 560,000 IOPS
- ScaleMP vSMP memory virtualization presents the supernode as a single virtual shared-memory machine: 2 TB RAM aggregate, 8 TB SSD aggregate

Gordon Architecture: Full Machine
- 32 supernodes = 1,024 compute nodes
- Dual-rail QDR InfiniBand network, 3D torus (4x4x4)
- 4 PB rotating-disk parallel file system with >100 GB/s bandwidth

Gordon Peak Capabilities

Speed                 245 TFLOPS
Memory (RAM)          64 TB
Memory (SSD)          256 TB
Memory (RAM + SSD)    320 TB
Ratio (MEM/SPEED)     1.31 bytes/FLOP
I/O rate to SSDs      35 million IOPS
Network bandwidth     16 GB/s bidirectional
Network latency       1 microsecond
Disk storage          4 PB
Disk I/O bandwidth    >100 GB/s
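As a sanity check, the headline numbers follow from the per-node figures on the two architecture slides. A minimal sketch, using only values from this deck:

    # gordon_totals.py: derive the peak-capability table from per-node specs.
    SUPERNODES, NODES_PER_SN = 32, 32
    GFLOPS_PER_NODE, RAM_GB_PER_NODE = 240, 64
    IO_NODES_PER_SN, FLASH_TB_PER_IO = 2, 4

    nodes = SUPERNODES * NODES_PER_SN                          # 1,024 compute nodes
    tflops = nodes * GFLOPS_PER_NODE / 1000                    # ~245 TFLOPS
    ram_tb = nodes * RAM_GB_PER_NODE / 1024                    # 64 TB
    flash_tb = SUPERNODES * IO_NODES_PER_SN * FLASH_TB_PER_IO  # 256 TB
    # ~1.30 bytes/FLOP; the table's 1.31 divides by the rounded 245 TFLOPS
    print(f"{nodes} nodes, {tflops:.0f} TFLOPS, {ram_tb:.0f} TB RAM, "
          f"{flash_tb} TB flash, {(ram_tb + flash_tb) / tflops:.2f} bytes/FLOP")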

Gordon is designed specifically for data-intensive HPC applications
- Such applications involve "very large data-sets or very large input-output requirements"
- Two data-intensive application classes are important and growing:
  - Data mining: "the process of extracting hidden patterns from data... with the amount of data doubling every three years, data mining is becoming an increasingly important tool to transform this data into information" (Wikipedia)
  - Data-intensive predictive science: the solution of scientific problems via simulations that generate large amounts of data

High Performance Computing (HPC) vs. High Performance Data (HPD)

Attribute                  HPC                                  HPD
Key HW metric              Peak FLOPS                           Peak IOPS
Architectural features     Many small-memory multicore nodes    Fewer large-memory SMP nodes
Typical application        Numerical simulation                 Database query; data mining
Concurrency                High concurrency                     Low concurrency or serial
Data structures            Easily partitioned (e.g., grid)      Not easily partitioned (e.g., graph)
Typical disk I/O pattern   Large-block sequential               Small-block random
Typical usage mode         Batch processing                     Interactive
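The disk I/O row is the one Gordon's flash tier targets. Below is a minimal single-node sketch of how the two patterns might be measured; the file path is a placeholder, the file must already exist, and if it fits in the page cache the device-level gap will largely disappear.

    # seq_vs_random.py: contrast large-block sequential reads with
    # small-block random reads on one file (Python 3.8+).
    import os
    import random
    import time

    PATH = "testfile.bin"    # placeholder: use a file larger than RAM
    SEQ_BLOCK = 1 << 20      # 1 MiB "large block sequential"
    RND_BLOCK = 4 << 10      # 4 KiB "small block random"
    N_RANDOM = 10_000

    def sequential_mb_per_s(path):
        total, t0 = 0, time.perf_counter()
        with open(path, "rb") as f:
            while chunk := f.read(SEQ_BLOCK):
                total += len(chunk)
        return total / (time.perf_counter() - t0) / 1e6

    def random_iops(path):
        size = os.path.getsize(path)
        fd = os.open(path, os.O_RDONLY)
        t0 = time.perf_counter()
        for _ in range(N_RANDOM):
            os.pread(fd, RND_BLOCK, random.randrange(0, size - RND_BLOCK))
        dt = time.perf_counter() - t0
        os.close(fd)
        return N_RANDOM / dt

    print(f"sequential: {sequential_mb_per_s(PATH):.1f} MB/s")
    print(f"random 4K : {random_iops(PATH):.0f} IOPS")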

Data mining applications will benefit from Gordon
- De novo genome assembly from sequencer reads, and analysis of galaxies from cosmological simulations and observations: will benefit from large shared memory
- Federations of databases, and interaction-network analysis for drug discovery, social science, biology, epidemiology, etc.: will benefit from low-latency I/O from flash

Data-intensive predictive science will benefit from Gordon
- Solution of inverse problems in oceanography, atmospheric science, and seismology: will benefit from a balanced system, especially large RAM per core and fast I/O
- Modestly scalable codes in quantum chemistry and structural engineering: will benefit from large shared memory

Dash: towards a supercomputer for data-intensive computing

Project Timeline
- Phase 1: Dash development (9/09-7/11)
- Phase 2: Gordon build and acceptance (3/11-7/11)
- Phase 3: Gordon operations (7/11-6/14)

Comparison of the Dash and Gordon systems
(From an accompanying chart on storage trends: doubling capacity halves the accessibility of any random data on a given medium.)

System component                   Dash                        Gordon
Node (sockets, cores, DRAM)        2 sockets, 8 cores, 48 GB   2 sockets, TBD cores, 64 GB
Compute nodes (#)                  64                          1,024
Processor type                     Nehalem                     Sandy Bridge
Clock speed (GHz)                  2.4                         TBD
Peak speed (TFLOPS)                (not given)                 245
DRAM (TB)                          3                           64
I/O nodes (#)                      2                           64
I/O controllers per I/O node       2, with 8 ports             1, with 16 ports
Flash (TB)                         2                           256
Total memory: DRAM + flash (TB)    5                           320
vSMP                               Yes                         Yes
32-node supernodes                 2                           32
Interconnect                       InfiniBand                  InfiniBand
Disk                               0.5 PB                      4.5 PB
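The compute-node counts were blank in the transcript but follow from the supernode rows (Gordon's 1,024 nodes and 245 TFLOPS also appear earlier in the deck; Dash's peak speed does not, so it is left as not given). A minimal consistency check:

    # node_counts.py: recover the compute-node counts from the supernode rows
    # and cross-check them against the DRAM rows of the table above.
    NODES_PER_SUPERNODE = 32
    dash_nodes = 2 * NODES_PER_SUPERNODE      # 64
    gordon_nodes = 32 * NODES_PER_SUPERNODE   # 1,024
    print(dash_nodes, dash_nodes * 48 / 1024, "TB DRAM")      # 64 nodes -> 3.0 TB
    print(gordon_nodes, gordon_nodes * 64 / 1024, "TB DRAM")  # 1,024 nodes -> 64.0 TB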

Gordon project wins Storage Challenge at SC09 with Dash

We won the SC09 Storage Challenge with Dash! With these numbers:
- IOR with 4 KB transfers on RAMFS: 4 million+ IOPS on up to 0.75 TB of DRAM (one supernode's worth)
- 88K+ IOPS on up to 1 TB of flash (one supernode's worth)
- Sped up Palomar Transients database searches by 10x to 100x
- Best IOPS per dollar
Since then we have boosted flash IOPS to 540K, hitting our 2011 performance targets (it is now 2009).
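For reference, IOR (the benchmark named above) is driven from the command line. A hypothetical invocation in the spirit of the 4 KB runs might look like the following; the process count, block size, and target path are placeholders, not the parameters actually used at SC09:

    mpirun -np 16 ior -a POSIX -F -w -r -z -t 4k -b 1g -i 3 -o /dash/ssd/iortest

Here -t sets the 4 KB transfer size, -z requests random rather than sequential offsets, -F gives each MPI task its own file, and -b sets the per-task data size.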

Dash Update: early vSMP test results


Next Steps
- Continue vSMP and flash SSD assessment and development on Dash
- Prototype Gordon application profiles using Dash:
  - new application domains
  - new usage modes and operational support mechanisms
  - new user support requirements
- Work with the TRAC to identify candidate applications
- Assemble the Gordon User Advisory Committee
- International Data-Intensive Conference, Fall 2010