1 Agenda …
− HPC Technology & Trends
− HPC Platforms & Roadmaps
− HP Supercomputing Vision
− HP Today
2 HPC Trend: Faster processors
Processors inching ahead of each other: Itanium … Xeon … Opteron … Xeon … Big leap happens this year.
[Chart: application performance gains by segment (CAD, Visual Studio, Fin Model, O&G, DCC, CAE) – 2005 Opteron over Xeon vs. 2006 Xeon over Opteron, with gains ranging from 1% to 55%.]
3 Industry Standard Processor choice & leadership – Choices at end of 2006
Positioning: Woodcrest and Rev F offer price/performance leadership with 32/64-bit co-existence; Montecito is the highest performance 64-bit processor core for sustained performance.
Woodcrest
− Dual-Core, 4 FLOPs/Tick, DDR2 FBD memory
− New higher performance chipsets
− Highest clock speed, peak performance, large cache
− Extensive 32-bit, and growing 64-bit, ecosystems
− 2p/4c nodes for highly parallel Scale-out workloads
Rev F
− Dual-Core, 2 FLOPs/Tick, DDR2 memory
− 1GHz HyperTransport
− High bandwidth for sustained performance
− Extensive 32-bit, and growing 64-bit, ecosystems
− 2p/4c & 4p/8c nodes for moderate Scale-out workloads
Montecito
− Dual-Core, 4 FLOPs/Tick, DDR2 memory
− New higher performance sx2000 and zx2 chipsets
− Highest SMP scalability (to 64p/128c); HP-UX for mission-critical technical computing
− Extensive 64-bit ecosystem (and 32/64-bit on HP-UX)
− Scale-up and scale-out for complex workloads
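As a rough guide to what the "FLOPs/Tick" figures above mean for peak throughput, here is a back-of-envelope sketch: peak GFLOPS per node is sockets × cores per socket × FLOPs per tick per core × clock rate in GHz. The clock rates used below (3.0, 2.6 and 1.6 GHz) are illustrative assumptions, not figures from the slide.

/* peak_flops.c – back-of-envelope node peak; clock rates below are assumed examples. */
#include <stdio.h>

static double peak_gflops(int sockets, int cores_per_socket,
                          int flops_per_tick_per_core, double clock_ghz)
{
    return sockets * cores_per_socket * flops_per_tick_per_core * clock_ghz;
}

int main(void)
{
    /* dual-socket, dual-core nodes; FLOPs/tick per core taken from the slide */
    printf("Woodcrest node (assumed 3.0 GHz): %.1f GFLOPS peak\n", peak_gflops(2, 2, 4, 3.0));
    printf("Rev F node     (assumed 2.6 GHz): %.1f GFLOPS peak\n", peak_gflops(2, 2, 2, 2.6));
    printf("Montecito node (assumed 1.6 GHz): %.1f GFLOPS peak\n", peak_gflops(2, 2, 4, 1.6));
    return 0;
}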
4 Itanium, Opteron, Xeon comparative results – 1HCY06
Application performance is a qualitative number based on HP benchmarking results. Results are normalized to the faster Itanium operating environment and sorted by the Opteron:Itanium ratio. ISV compiler choices and optimization levels influence results as well as raw microprocessor capabilities.
5 Itanium, Opteron, Xeon comparative results – 2HCY06
HP Confidential. Contains Intel Confidential Information.
EDA – Other: simulation, verification, synthesis, physical design.
Application performance is a qualitative number based on HP & Intel benchmarking results. Results are normalized to the faster Itanium operating environment and sorted by the Opteron:Itanium ratio. ISV compiler choices and optimization levels influence results as well as raw microprocessor capabilities.
6 Broadest Suite of HPC platforms
− HP Technical Workstations: xw8200, nw8240, xw9300, c8000
− HP Technical Clusters (Blade Clusters): BL2xp / BL3xp, BL45p, BL60p, BL460, BL680
− HP Cluster Platforms: HP Cluster Platform 3000, HP Cluster Platform 4000, HP Cluster Platform 6000
− HP Technical SMP Servers, ProLiant Family: DL140, DL145, DL360, DL380, DL385, DL580, DL585
− HP Technical SMP Servers, Integrity Family: rx1620, rx2620, rx4640, rx7620, rx8620, Superdome
7 HP Blades for HPC
Blades are the ideal platform for clusters
− Simplified management
− Designed for performance and scalability
− Reduced interconnect and network complexity
− High density
− Centralized power management
Factors for blades adoption in HPC clusters:
− Performance parity with racked systems
− Price advantage shifts to blades
− Interconnect choice expands to cover range of HPC workloads
8 Agenda …
− Grid Initiative at HP
− HPC Focus & Trends
− HP Supercomputing Vision
− HP Today
9 What is the "Supercomputing Utility" Vision?
Develop and offer an open-standards, open-systems based Supercomputing Utility that can expand and grow over time and truly adapt to the changing enterprise and environment. The utility can deliver high computational throughput and support multiple applications with different characteristics and workloads. The fabric of this utility is a high-speed network, all linked to a large-scale data store. The environment is managed and controlled as a single system, and it supports a dispersed workforce, either through direct login or grid access.
10 HP Vision for a Supercomputing facility
Computation, Data Management, and Visualization on Industry Standard Servers – Integration is the Key!
11 HP Unified Cluster Portfolio strategy
Advancing the power of clusters with integrated solutions spanning computation, storage and visualization
− Choice of industry standard platforms, operating systems, interconnects, etc.
− HP engineered and supported solutions that are easy to manage and use
− Scalable application performance on complex workloads
− Extensive use of open source software
− Extensive portfolio of qualified development tools and applications
[Diagram: HP Integrity & ProLiant Servers underpin HP Cluster Platforms (Computation), the Scalable Visualization Array (Visualization), and HP StorageWorks Scalable File Share / Storage Grid (Data Management).]
12 HP XC software for Linux – Leveraged Open Source (Function / Technology / Features and Benefits)
− Distribution and Kernel – RHEL 3.0 compatible: Red Hat compatible shipping product, POSIX enhancements, support for Opteron, ISV support
− Batch Scheduler – LSF 6.0 (Platform LSF HPC): Premier scheduler, policy driven, allocation controls, MAUI support. Provides migration for AlphaServer SC customers
− Resource Management – SLURM (Simple Linux Utility for Resource Management): Fault tolerant, highly scalable, uses standard kernel
− MPI – HP-MPI 2.1 (HP's Message Passing Interface): Provides standard interface for multiple interconnects, MPICH compatible, support for MPI-2 functionality
− Inbound Network / Cluster Alias – LVS (Linux Virtual Server): High availability virtual server project for managing incoming requests, with load balancing
− System Files Management – SystemImager, configuration tools, cluster database: Automates Linux installs, software distribution, and production deployment. Supports complete, bootable images; can use multicast; used at PNNL and Sandia
− Console – Telnet based console commands, power control: Adaptable for HP integrated management processors – no need for terminal servers, reduced wiring
− Monitoring – Nagios, SuperMon: Nagios is a browser based, robust host, service and network monitor from open source. SuperMon supports high speed, high sample rate, low perturbation monitoring for clusters
− High Performance I/O – Lustre 1.2.x (Lustre Parallel File System): High performance parallel file system – efficient, robust, scalable
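To make the stack above concrete, here is a minimal sketch of the kind of job it runs: a trivial MPI program in C. The mpicc compile wrapper and the srun launch line are typical for an HP-MPI/SLURM setup but are shown here as assumptions; the file name and rank count are illustrative.

/* hello_mpi.c – minimal MPI job for an XC-style cluster (illustrative sketch).
   Assumed build/run commands:  mpicc -o hello_mpi hello_mpi.c
                                srun -n 16 ./hello_mpi            */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char node[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                  /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of ranks */
    MPI_Get_processor_name(node, &len);      /* compute node this rank landed on */

    printf("rank %d of %d running on %s\n", rank, size, node);

    MPI_Finalize();
    return 0;
}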
13 High performance interconnects
Infiniband
− Emerging industry standard
− IB 4x – speeds 1.8GB/s, <5μsec MPI latency
− 24 port, 288 port switches
− Scalable topologies with federation of switches
Myrinet
− Speeds up to 800MB/s, <6μsec MPI latency
− 16 port, 128 port, 256 port switches
− Scalable topologies with federation of switches
Quadrics
− Elan 4 – 800MB/s, <3μsec MPI latency
− 8 port, 32 port, 64 port, 128 port switches
− Scalable topologies with federation of switches
GigE
− 60-80MB/s, >40μsec MPI latency
[Diagram: federated switch topologies with PCI-e host connections – 128-port node-level switches (64 nodes each) under top-level switches; 24-port node-level switches (12 nodes each) under 288-port top-level switches; 128-port node-level switches (64 nodes each) under 264-port top-level switches.]
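MPI latencies like those quoted above are commonly measured with a two-rank ping-pong test; the sketch below (not from the original slide) times many 1-byte round trips and reports half the average round-trip time as the one-way latency. Run it with at least two ranks, placed on different nodes, so the fabric rather than shared memory is measured.

/* pingpong.c – rough MPI one-way latency estimate (illustrative sketch).
   Needs >= 2 ranks; put the two ranks on different nodes to exercise the interconnect. */
#include <mpi.h>
#include <stdio.h>

#define ITERS 1000

int main(int argc, char **argv)
{
    int rank;
    char byte = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)   /* half the average round trip ~ one-way latency */
        printf("average one-way latency: %.2f usec\n",
               (t1 - t0) * 1.0e6 / (2.0 * ITERS));

    MPI_Finalize();
    return 0;
}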
14 HP Cluster Platforms
Factory pre-assembled hardware solution with optional software installation
− Includes nodes, interconnects, network, racks, etc., integrated & tested
Configure to order from 5 nodes to 512 nodes (more by request)
− Uniform, worldwide specification and product menus
− Fully integrated, with HP warranty and support
Platform / Compute Nodes / Operating Systems / Interconnects:
− HP Cluster Platform 3000: ProLiant DL140 G2, ProLiant DL360 G4 servers; Linux, Windows; GigE, IB, Myrinet
− HP Cluster Platform 4000: ProLiant DL145 G2, ProLiant DL585; Linux, Windows; GigE, IB, Myrinet, Quadrics
− HP Cluster Platform 6000: Integrity rx1620, Integrity rx2620; Linux, HP-UX; GigE, IB, Quadrics
15 Data Management – HP StorageWorks Scalable File Share (HP SFS)
Customer challenge: I/O performance limitations
HP SFS provides
Scalable performance
− Aggregate parallel read or write bandwidth from >1 GB/s to "tens of GB/s"
− 100-fold increase over NFS
Scalable access
− Shared, coherent, parallel access across a huge number of clients – 1000's today, "10's of thousands" in the future
Scalable capacity
− Multiple terabytes to multiple petabytes
Based on breakthrough Lustre technology
− Open source, industry standards based
[Diagram: Linux cluster connected to HP Scalable File Share – a scalable storage grid (smart cells) delivering scalable bandwidth.]
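The bandwidth scaling described above comes from many clients writing disjoint regions of a shared file in parallel; a minimal MPI-IO sketch of that access pattern follows. The file path /sfs/data.bin and the 4 MB block size are assumptions for illustration, not HP SFS defaults.

/* sfs_write.c – each rank writes its own block of one shared file (illustrative sketch).
   The mount point /sfs is an assumption; any parallel file system path works. */
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

#define BLOCK (4 * 1024 * 1024)   /* 4 MB written by each rank */

int main(int argc, char **argv)
{
    int rank;
    MPI_File fh;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char *buf = malloc(BLOCK);
    memset(buf, rank & 0xff, BLOCK);          /* fill the payload with a rank marker */

    MPI_File_open(MPI_COMM_WORLD, "/sfs/data.bin",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* every rank writes a disjoint region; the parallel file system spreads
       the traffic across its object storage servers */
    MPI_Offset offset = (MPI_Offset)rank * BLOCK;
    MPI_File_write_at_all(fh, offset, buf, BLOCK, MPI_BYTE, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}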
16 Scalable Visualization
Customer challenge: visualization solutions too expensive, proprietary, not scalable
HP Scalable Visualization Array (SVA)
− Open, scalable, affordable, high-end visualization solution based on industry standard Sepia technology
− Innovative approach combining standard graphics adapters with accelerated compositing
− Yields a system that scales to clusters capable of displaying 100 million pixels or more
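Compositing here means merging the partial images rendered by each visualization node into one frame. The SVA's Sepia hardware performs this in the compositing network; the fragment below is only a conceptual CPU sketch (all names invented for illustration) of the core operation, depth-based merging of two RGB+depth tiles.

/* composite.c – conceptual sort-last depth compositing of two image tiles.
   Illustrative only: the real SVA/Sepia pipeline does this in hardware. */
#include <stdio.h>

typedef struct {
    unsigned char r, g, b;   /* pixel color */
    float depth;             /* distance from the viewer */
} Pixel;

/* Merge tile b into tile a: for each pixel keep whichever fragment is nearer. */
static void composite_depth(Pixel *a, const Pixel *b, int npixels)
{
    for (int i = 0; i < npixels; i++)
        if (b[i].depth < a[i].depth)
            a[i] = b[i];
}

int main(void)
{
    Pixel a[2] = { {255, 0, 0, 1.0f}, {0, 255, 0, 5.0f} };   /* tile from node A */
    Pixel b[2] = { {0, 0, 255, 2.0f}, {0, 0, 255, 3.0f} };   /* tile from node B */

    composite_depth(a, b, 2);
    printf("pixel 0 keeps node A's red (depth 1.0); pixel 1 takes node B's blue (depth 3.0)\n");
    return 0;
}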
17 Delivering the Vision
[Architecture diagram: an XC compute cluster (compute nodes, an application node, and service nodes for admin, log-in and services) serves users over inbound connections; a high speed interconnect links the cluster to HP SFS servers (object storage servers and meta data servers on a scalable HA storage farm, running the SFS/Lustre Scalable File Share) and to SVA visualization nodes that perform rendering & compositing and drive a multi-panel display device over a pixel network.]
18 TIFR – Tata Institute of Fundamental Research, Computational Mathematics Laboratory (CML)
Industry: Scientific Research – Pune
Challenges
− Current AlphaServer based system: increase computational power
− Explosive growth in new research: massive increase in performance needed
− Partnership for support services
HP Solution
− 1 teraflop peak HP XC based on: CP6000 with (77) 2CPU/4GB Integrity rx1620 1.6GHz compute nodes, an Integrity rx2620 service node, and a 288 port Infiniband switch
− HP Math Libraries for Linux on Itanium
− New CCN for collaboration on algorithms
Results
− First step to a massive supercomputer
− Improved ability to solve computationally demanding algorithms
"We need partners who complement our core competency in areas like complex hardware system design, microelectronics, nanotechnology and system software. This is where HP steps in, as it has been investigating HPC concepts for more than a decade and this has led to the creation of Itanium processors jointly with Intel. There is a need to build a giant hardware accelerator to address fundamental questions in computer science, which could not be answered until now, either by theory or experiment, to influence future development of the subject, facilitate scientific discoveries and solve grand challenges in various disciplines. This supercomputer, which will help us understand how to structure our algorithms for a larger system, is only a first step in that direction."
– Professor Naren Karmarkar, Head CML, TIFR (Dr Karmarkar is a Bell Labs Fellow)
19 TI – Texas Instruments
Industry: Semiconductor Engineering / EDA – Bangalore
Challenges
− 5,000 processors already installed; additional cluster computing required
− Reduce design cycle time by 10X
− Datacenter now full; will turn to industry for Utility Computing
HP Solution
− 5.6 teraflop peak Beowulf clusters based on Cluster Platform 4000
− 500 compute nodes: ProLiant DL145 G2 2.8GHz, 2P/2GB
− Gigabit Ethernet interconnect
− Support services
− Adding to 100+ existing DL585 servers
Result
− Additional 1,000 processor cluster for development requirements
www.ti.com/asia/docs/india/index.html
20 IGIB – Institute of Genomics & Integrative Biology
Industry: BioTechnology / LMS – Delhi
Challenges
− Current AlphaServer based system: increase computational power
− Explosive growth in new research: massive increase in performance needed
− Partnership for support
− Improve cost efficiencies
HP Solution
− 4½ teraflop peak HP XC based on: CP3000 with (288) 2CPU/4GB ProLiant DL140 G2 3.6GHz nodes using Infiniband; CP3000 with (24) 2CPU/4GB ProLiant DL140 G2 nodes as a test cluster; Superdome; 12 TB StorageWorks EVA SAN
− Single point support service
− IGIB research staff collaboration
Results
− HP India's largest supercomputer
− One of the world's most powerful research systems dedicated to Life Sciences
"HP's Cluster Platform provides a scalable architecture that allows us to complete large, complex simulation experiments such as molecular interactions and dynamics, virtual drug screening, protein folding, etc. much more quickly. This technology, combined with HP's experience and expertise in life sciences, helps IGIB speed access to information, knowledge, and new levels of efficiency, which we hope will ultimately culminate in the discovery of new drug targets and predictive medicine for complex disorders with minimum side effects."
– Dr. Samir Brahmachari, Director, IGIB