1 Appro Xtreme-X Supercomputers
Appro International Inc.

2 :: Corporate Snapshot
Company Overview
Leading developer of high-performance servers, clusters, and supercomputers:
– Established in 1991
– Headquartered in Milpitas, CA
– Sales & service office in Houston, TX
– Hardware manufacturing in Asia
– Global presence via strategic and channel partners
– 72% profitable CAGR over the past 3 years
– Deployed the second-largest supercomputer in Japan
– Six top-ranked computing systems listed in the Top500
– Delivering balanced architecture for scalable performance
Target Markets
– Financial Services
– Government / Defense
– Manufacturing
– Oil & Gas

3 :: Appro & NEC Join Forces in HPC Market
Strategic Partnership
– NEC has a strong presence in the EMEA HPC market, with over 20 years of experience
– This is a breakthrough for Appro's entry into the EMEA HPC market
– Provides sustainable competitive advantages, enabling both companies to participate in this growing market segment
– Appro and NEC look forward to working together to offer powerful, flexible, and reliable solutions to EMEA HPC markets
Formal press announcement will go out on Tuesday, 9/16/08

4 :: Past Performance History: HPC Experience
– NOAA Cluster (2006.9): 1,424 cores, 2.8 TB system memory, 15 TFlops
– LLNL Atlas Cluster (2006.11): 9,216 cores, 18.4 TB system memory, 44 TFlops
– LLNL Minos Cluster (2007.6): 6,912 cores, 13.8 TB system memory, 33 TFlops
– DE Shaw Research Cluster (2008.2): 4,608 cores, 9.2 TB system memory, 49 TFlops

5 :: Past Performance History: HPC Experience
– LLNL Hera Cluster (2008.4): 13,824 cores, 120 TFlops
– Tsukuba University Cluster (2008.6): 10,784 cores, quad-rail IB, 95 TFlops
– Renault F1 CFD Cluster (2008.7): 4,000 cores, dual-rail IB, 38 TFlops
– TLCC Cluster, LLNL / LANL / SNL (2008.8): 48,384 cores, 426 TFlops

6 :: Changes in the Industry
HPC Challenges
Petascale deployments (4,000+ node deployments)
– Balanced systems (CPU / memory / network)
– Scalability (software & network)
– Reliability (real RAS: network, node, software)
– Facilities (space, power & cooling)
Integrated exotics (GPU clusters)
– Solutions still being evaluated

7 :: Based on a Scalable Multi-Tier Architecture
Petascale Deployments

8 :: Scalable Cluster Management Software
Petascale Deployments
Appro Cluster Engine (ACE) features:
– Middleware hooks
– Job scheduling
– IB subnet manager
– Virtual cluster manager
– Instant software provisioning
– BIOS synchronization
– 3D torus network topology support
– Dual-rail networks
– Stateless operation
– Remote lights-out management
– Standard Linux OS support
– Failover & recovery
"Appro Cluster Engine™ software turns a cluster of servers into a functional, usable, reliable and available computing system." – Jim Ballew, CTO, Appro

9 :: Innovative Cooling and Density Needed
Petascale Deployments
– Delivers cold air directly to the equipment for optimum cooling efficiency
– Delivers comfortable air temperature to the room for return to the chillers
– Back-to-back rack configuration saves floor space in the datacenter and encloses the cold aisles inside the racks
– FRU replacement and maintenance are done from the front side of the rack cabinet
Up to 30% improvement in density with greater cooling efficiency

10 :: Path to PetaFLOP Computing
Petascale Deployments
Appro Xtreme-X Supercomputer - Modular Scalable Performance

Number of Racks        1        2        8        48        96        192
Number of Processors   128      256      1,024    5,952     11,904    23,808
Number of Cores        512      1,024    4,096    23,808    47,616    95,232
Peak Performance       6 TF/s   12 TF/s  49 TF/s  279 TF/s  558 TF/s  1.1 PF/s
Memory Capacity        1.5 TB   3 TB     12 TB    72 TB     143 TB    286 TB

Memory BW ratio: 0.68 GB/s per GF/s
Memory capacity ratio: 0.26 GB per GF/s
IO fabric interconnect: dual-rail QDR
IO BW ratio: 0.17 GB/s per GF/s
Usable node-to-node BW: 6.4 GB/s
Node-to-node latency: <2 us
Performance numbers are based on 2.93 GHz Intel Nehalem processors and include only compute nodes.
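
The peak-performance row can be roughly reproduced from the stated 2.93 GHz Nehalem clock. Below is a minimal sanity-check sketch in Python, assuming quad-core processors and 4 double-precision FLOPs per core per cycle; neither assumption is stated on the slide.

# Rough sanity check of the slide's peak-performance figures.
CLOCK_GHZ = 2.93
FLOPS_PER_CORE_PER_CYCLE = 4   # assumption: Nehalem SSE peak (2 x 2-wide)
CORES_PER_PROCESSOR = 4        # assumption: quad-core parts

configs = [  # (racks, processors, cores) taken from the slide
    (1, 128, 512),
    (2, 256, 1_024),
    (8, 1_024, 4_096),
    (48, 5_952, 23_808),
    (96, 11_904, 47_616),
    (192, 23_808, 95_232),
]

for racks, procs, cores in configs:
    assert cores == procs * CORES_PER_PROCESSOR
    peak_tf = cores * CLOCK_GHZ * FLOPS_PER_CORE_PER_CYCLE / 1_000  # GF -> TF
    print(f"{racks:>3} racks: {cores:>6} cores -> ~{peak_tf:,.0f} TF/s peak")

With these assumptions the computed peaks come out near 6, 12, 48, 279, 558 and 1,116 TF/s, matching the slide's figures apart from rounding in the 8-rack column.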

11 :: Possible Path to PetaFLOP GPU Computing
Xtreme-X Supercomputer
GPU computing cluster - solution still being evaluated

Number of Racks        3        5        10       18       34
Number of Blades       64       128      256      512      1,024
Number of GPUs         32       64       128      256      512
Peak GPU Performance   128 TF   256 TF   512 TF   1 PF     2 PF
Peak CPU Performance   6 TF     12 TF    24 TF    48 TF    96 TF
Max Memory Capacity    1.6 TB   3.2 TB   6.4 TB   13 TB    26 TB

Bandwidth to GPU: 6.4 GB/sec
Node memory bandwidth: 32 GB/sec
Max IO bandwidth (2x QDR x4 IB): 6.4 GB/sec
Node-to-node latency: 2 us
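
Dividing the table's rows against each other gives the per-unit figures the configuration implies; a minimal sketch using only the slide's own numbers (the GPU hardware itself is not identified on the slide, consistent with "solution still being evaluated"):

# Implied per-unit figures from the GPU-cluster table above.
blades = [64, 128, 256, 512, 1_024]
gpus   = [32, 64, 128, 256, 512]
gpu_tf = [128, 256, 512, 1_000, 2_000]  # peak GPU performance, TF
cpu_tf = [6, 12, 24, 48, 96]            # peak CPU performance, TF

for b, g, gt, ct in zip(blades, gpus, gpu_tf, cpu_tf):
    print(f"{b:>5} blades, {g:>3} GPUs: ~{gt / g:.0f} TF per listed GPU, "
          f"~{ct / b * 1000:.0f} GF per blade (CPU)")

Every column implies roughly 4 TF of peak GPU performance per listed GPU and about 94 GF of peak CPU performance per blade.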

12 Appro Xtreme-X Supercomputers
Thank you. Questions?
Appro International Inc.

