SiCortex Update IDC HPC User Forum
SiCortex Update
IDC HPC User Forum, April 14, 2008

The Company
- Founded by computer system designers
- Single focus: High Performance Computing

What We Offer
- Complete computer systems
- System-on-a-chip silicon: reduced power, maximized memory performance, integrated high-performance interconnect
- High reliability
- Software: open source (Linux, GNU, and MPI); licensed (Fortran compiler and debugger)
More delivered performance per square foot, per dollar, and per watt.

SiCortex Product Family
- SC072:  72 Gflops,   48 Gbytes,   6.5 GB/s I/O, 200 watts
- SC648:  648 Gflops,  864 Gbytes,  30 GB/s I/O,  2 KW
- SC1458: 1.5 Tflops,  1.8 Tbytes,  68 GB/s I/O,  4 KW
- SC5832: 5.8 Tflops,  7.8 Tbytes,  216 GB/s I/O, 18 KW
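The family's "performance per watt" pitch can be checked directly from the numbers on this slide. A minimal sketch (the arithmetic below is mine, not SiCortex's; figures are taken from the slide):

```python
# Peak Gflops and power draw (watts) for each SiCortex model, per the slide.
family = {
    "SC072":  (72,   200),
    "SC648":  (648,  2000),
    "SC1458": (1500, 4000),
    "SC5832": (5800, 18000),
}

# Print peak flops-per-watt for each configuration.
for model, (gflops, watts) in family.items():
    print(f"{model}: {gflops / watts:.2f} Gflops/W")
```

The ratio stays roughly flat (about a third of a Gflop per watt) from the deskside SC072 up to the full SC5832, which is the point of the slide: the architecture scales without losing power efficiency.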

What’s Inside
[Diagram labels: PCI Express I/O, Memory, Fabric Interconnect, Everything Else]
- Compute: 162 GF/sec
- Memory bandwidth: 345 GB/sec
- Fabric bandwidth: 78 GB/sec
- I/O bandwidth: 7.5 GB/sec
- Power: 500 watts
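The slide's numbers imply a notably bandwidth-rich balance. A quick sketch of the bytes-per-flop ratios (my arithmetic, using only the figures above):

```python
compute_gflops = 162.0   # GF/sec, from the slide
mem_bw_gbs     = 345.0   # GB/sec memory bandwidth
fabric_bw_gbs  = 78.0    # GB/sec fabric bandwidth

# Bytes of bandwidth available per flop of peak compute.
print(f"memory bytes/flop: {mem_bw_gbs / compute_gflops:.2f}")   # ~2.13
print(f"fabric bytes/flop: {fabric_bw_gbs / compute_gflops:.2f}") # ~0.48
```

Roughly two bytes of memory bandwidth per flop is well above what contemporary commodity clusters offered, consistent with the "maximize memory performance" claim earlier in the deck.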

Integrated HPC Linux Environment
- Operating system: Linux kernel and utilities (2.6.18+); cluster file system (Lustre)
- Development environment: GNU C, C++; PathScale C, C++, Fortran; math libraries; performance tools; debugger (TotalView); MPI libraries (MPICH2)
- System management: scheduler (SLURM); partitioning; monitoring; console, boot, and diagnostics
- Maintenance and support: factory-installed software; regular updates; open-source build environment (Gentoo Linux, GNU, MPI libraries)

Hardware Update
- SC072 Catapult: $15,000, now shipping
- Expandable configurations of SC648, SC1458, and SC5832
- Engineering work on multi-cabinet systems

Software Update
- SiCortex Release 2.2 shipping; support for all system types
- Linux 2.6.18, Lustre 1.6
- Expanded I/O device support: InfiniBand, 10G
- Compiler and library performance improvements
- Open source

Performance Update
- MPI latency: 1.4 µsec; MPI bandwidth: 1.5 GB/s
- HPC Challenge work underway; on the SC5832, using 5772 CPUs:
  - DGEMM: 72% of peak
  - HPL: 3.6 TF
  - PTRANS: 210 GB/s
  - STREAM: 345 MB/s per CPU (1.9 TB/s aggregate)
  - FFT: 174 GF
  - RandomRing: 4 µsec, 50 MB/s
  - RandomAccess: 0.74 GUPS (2.25 optimized)
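The per-CPU and aggregate figures on this slide are mutually consistent, which is worth a quick sanity check (my arithmetic, using the slide's values and the SC5832's 5.8 TF peak from the product-family slide):

```python
cpus = 5772
stream_per_cpu_mbs = 345.0  # STREAM bandwidth per CPU, MB/s

# Aggregate STREAM: MB/s -> TB/s across all CPUs.
aggregate_tbs = cpus * stream_per_cpu_mbs / 1e6
print(f"aggregate STREAM: {aggregate_tbs:.2f} TB/s")  # ~1.99, matching the quoted 1.9 TB/s

# HPL efficiency against the SC5832's peak.
peak_tf, hpl_tf = 5.8, 3.6
print(f"HPL efficiency: {hpl_tf / peak_tf:.0%}")  # ~62% of peak
```

An HPL efficiency around 62% of peak on nearly all 5832 cores is plausible for early HPCC tuning work, which matches the slide's note that the effort was still underway.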