SIMPLE DOES NOT MEAN SLOW: PERFORMANCE BY WHAT MEASURE?

Presentation transcript:

SIMPLE DOES NOT MEAN SLOW: PERFORMANCE BY WHAT MEASURE?
Customer experience & profit drive growth.
First flight: June 1971. 30 minute turn at the gate.
First flight: January 1970. 7 hour 30 min US to Europe.
One plane: pilots, mechanics, cabin crew, and vendors train once. Common spare parts. Economies of scale.
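The slide's point is arithmetic as much as rhetoric: turn time at the gate is pure overhead, and shrinking it compounds into more flights per aircraft per day. A minimal Python sketch, assuming an illustrative 12-hour operating day and 1-hour flight legs (neither figure is from the deck):

    OPERATING_MINUTES = 12 * 60   # assumed 12-hour operating day (illustrative)
    LEG_MINUTES = 60              # assumed 1-hour flight leg (illustrative)

    def flights_per_day(turn_minutes):
        # Complete legs one aircraft can fly in a day: each leg costs
        # flight time plus the turn at the gate.
        return OPERATING_MINUTES // (LEG_MINUTES + turn_minutes)

    for turn in (30, 60, 90):
        print(f"{turn}-minute turn: {flights_per_day(turn)} flights/day")
    # Prints 8, 6, and 4 flights/day: cutting the turn from 90 to 30
    # minutes doubles aircraft utilization.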

HYPERION PETASCALE IO TESTBED
[Architecture diagram: 1,152 nodes / 9,216 cores / ~100 TF/s compute, connected over 576 InfiniBand 4x links (~2.3 TB/s aggregate) and GbE (~290 GB/s aggregate) through a torus/mesh SAN to Lustre appliances delivering ~47 GB/s.]
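As a sanity check on the diagram's aggregate figures, a minimal sketch relating per-link rates to totals. The per-link rate assumes InfiniBand 4x DDR (~2 GB/s per direction); the slide does not state the IB generation, so treat that as an illustrative assumption:

    IB_LINKS = 576                 # 4x links, from the slide
    GBPS_PER_LINK = 2.0            # GB/s per direction: 4x DDR assumption
    ib_aggregate_tbs = IB_LINKS * GBPS_PER_LINK * 2 / 1000   # both directions
    print(f"IB fabric: ~{ib_aggregate_tbs:.1f} TB/s aggregate")   # ~2.3 TB/s

    NODES = 1152
    LUSTRE_GBS = 47.0              # ~47 GB/s Lustre appliances, from the slide
    print(f"Lustre share: ~{LUSTRE_GBS * 1000 / NODES:.0f} MB/s per node")  # ~41 MB/s

The ~41 MB/s per-node share is exactly the kind of compute-to-storage balance metric a testbed like this exists to measure.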

Development & Testing: InfiniBand, open-source parallel file systems, open-source software, the "Intel Cluster Ready" process.
Petascale HW & SW Testbed: Processor, memory, networking, storage, visualization. Efficiently refresh, expand, and upgrade to future technologies.
INNOVATION THROUGH COLLABORATION
Beneficiaries of Hyperion Collaboration: Government agencies, alliances, and computing centers; end customers such as financial services, energy, pharmaceuticals; users of all sizes and resource levels.
Partners: Dell & Intel, Mellanox, RedHat, Cisco, QLogic, DDN, SuperMicro, LSI, Sun*
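To make "development and testing of open-source parallel file systems" concrete, here is a minimal sketch of the kind of parameter sweep such a testbed enables: varying Lustre stripe counts ahead of a benchmark run. The mount path and the dd-based load are hypothetical placeholders; lfs setstripe is Lustre's standard striping control:

    # Minimal sketch of a Lustre striping sweep on a testbed like Hyperion.
    import subprocess

    TEST_ROOT = "/lustre/hyperion/iotest"   # hypothetical mount point

    for stripe_count in (1, 4, 16, 64):
        target = f"{TEST_ROOT}/stripe_{stripe_count}"
        subprocess.run(["mkdir", "-p", target], check=True)
        # New files under `target` inherit this stripe count across OSTs.
        subprocess.run(["lfs", "setstripe", "-c", str(stripe_count), target],
                       check=True)
        # Placeholder load; a real run would launch a benchmark such as IOR here.
        subprocess.run(["dd", "if=/dev/zero", f"of={target}/testfile",
                        "bs=1M", "count=1024"], check=True)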

What's next?
What we will deliver in the coming months: new HPC solutions, new partnerships, new people, investments.
What we ask: consider, collaborate.