Presentation transcript:


New Server Virtualization Paradigm

ENTERPRISE APPLICATIONS: workloads requiring a fraction of one physical server's resources. The existing model is partitioning: a hypervisor or VMM carves one physical server into multiple virtual machines, each running its own OS and applications.

HIGH PERFORMANCE COMPUTING: workloads requiring a superset of one physical server's resources. The new model is aggregation: a hypervisor or VMM combines multiple physical servers into a single virtual machine running one OS.

Existing HPC Deployment Models

For applications requiring a superset of the physical server's resources, two deployment models exist today:
- Scale-Up: fit the hardware to the problem size.
- Scale-Out: break the problem to fit the hardware.

Existing HPC Deployment Models: Pros and Cons

Scale-Up (fit the hardware to the problem size)
+ Simplified IT infrastructure: simple and flexible programming, a single system to manage, consolidated I/O.
- Proprietary hardware design: high cost, architecture lock-in.

Scale-Out (break the problem to fit the hardware)
+ Leverages industry-standard servers: low cost, open architecture.
- High installation and management cost: complex parallel programming, multiple operating systems, cluster file systems, etc.

Existing HPC Deployment Models: Pros and Cons

Aggregation (a single virtual machine, app plus OS, running on a hypervisor or VMM that spans multiple servers) combines the strengths of both models:
+ From Scale-Up: simplified IT infrastructure, simple and flexible programming, a single system to manage, consolidated I/O.
+ From Scale-Out: leverages industry-standard servers, low cost, open architecture.

vSMP Foundation – Background: The Need for Aggregation and Typical Use Cases

vSMP Foundation aggregates up to 16 nodes into a single virtual machine: up to 32 processors (128 cores) and 4 TB RAM. More at: http://www.scalemp.com/spec

Cluster Management use case: driven by IT's need to simplify cluster deployment. Requirements: a single OS, removal of InfiniBand complexity, and simplified I/O (faster scratch storage); large memory is a plus. The payoff is OPEX savings.

SMP Replacement use case: driven by end users' application characteristics. Requirements: large memory and high core count; IT simplification is a plus. The payoff is CAPEX savings.
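From the application's perspective, the aggregate is simply one large shared-memory machine. As a rough illustration (not from the deck; assumes Linux and glibc), a single OS running on the aggregate reports the combined resources of all boards through ordinary interfaces:

```c
/* Compile with: cc resources.c  (assumes Linux/glibc) */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    long cores = sysconf(_SC_NPROCESSORS_ONLN);  /* cores the OS sees */
    long pages = sysconf(_SC_PHYS_PAGES);        /* physical memory pages */
    long psize = sysconf(_SC_PAGESIZE);

    /* On a maximum 16-node aggregate, these values would approach
       the 128 cores and 4 TB listed above. */
    printf("online cores: %ld\n", cores);
    printf("physical RAM: %.1f GB\n",
           (double)pages * (double)psize / (1024.0 * 1024.0 * 1024.0));
    return 0;
}
```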

Why Aggregate? Overcoming Limitations of Existing Deployment Models

Fit the hardware to the problem size:
- Alternative to costly, proprietary RISC systems: a large-memory x86 resource that enables larger workloads which cannot be run otherwise.
- A high core-count x86 shared-memory resource with high memory bandwidth: allows threaded applications to benefit from shared-memory systems.
- Reduced development time for custom code using OpenMP instead of MPI (see the sketch below).
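To make the OpenMP-vs-MPI point concrete, here is a minimal sketch (illustrative only; the array size is an arbitrary example) of a parallel reduction. In OpenMP a single directive suffices, and on an aggregated system it scales across every core the single OS exposes; the MPI version of the same loop would need explicit data decomposition, MPI_Init/MPI_Reduce calls, and a cluster launcher:

```c
/* Compile with: cc -O2 -fopenmp reduce.c */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const long n = 100000000L;               /* arbitrary example size */
    double *a = malloc(n * sizeof *a);
    if (!a) { fprintf(stderr, "out of memory\n"); return 1; }
    for (long i = 0; i < n; i++) a[i] = 1.0;

    double sum = 0.0;
    /* One directive parallelizes the loop across all cores the
       single OS exposes; no message passing, no manual partitioning. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < n; i++)
        sum += a[i];

    printf("sum = %.0f (max threads: %d)\n", sum, omp_get_max_threads());
    free(a);
    return 0;
}
```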

Why Aggregate? Overcoming Limitations of Existing Deployment Models

Break the problem to fit the hardware:
- Ease of use: one system to manage; fewer, larger nodes mean less cluster-management overhead.
- A single operating system; no cluster file systems; InfiniBand complexities are hidden.
- Shared I/O: a single process can utilize the I/O bandwidth of multiple systems (see the sketch below).
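As a sketch of the shared-I/O point (illustrative only; assumes POSIX and OpenMP, and the file name, stream count, and chunk size are made-up values), a single process can issue several concurrent writes, which the aggregated OS is free to spread across the I/O adapters of the underlying boards:

```c
/* Compile with: cc -O2 -fopenmp shared_io.c */
#include <fcntl.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    const size_t chunk = (size_t)64 << 20;   /* 64 MiB per stream */
    const int nstreams = 8;                  /* concurrent writers */

    int fd = open("scratch.bin", O_CREAT | O_WRONLY, 0644);
    if (fd < 0) { perror("open"); return 1; }

    char *buf = malloc(chunk);
    if (!buf) { fprintf(stderr, "out of memory\n"); return 1; }
    memset(buf, 0xAB, chunk);

    /* Each thread writes its own region of the same file with pwrite,
       so one process drives several I/O streams in parallel. */
    #pragma omp parallel for num_threads(nstreams)
    for (int i = 0; i < nstreams; i++) {
        ssize_t n = pwrite(fd, buf, chunk, (off_t)i * (off_t)chunk);
        if (n != (ssize_t)chunk)
            fprintf(stderr, "stream %d: short write\n", i);
    }

    free(buf);
    close(fd);
    unlink("scratch.bin");                   /* clean up the scratch file */
    return 0;
}
```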

Simplified Cluster - Example (diagram)

Customers and Partners: federal, educational, and commercial customers; supported platforms. (Logo slide.)

Target Environments and Applications

Target environments:
- Users seeking to simplify cluster complexities.
- Applications that use a large memory footprint (even with one processor).
- Applications that need multiple processors and shared memory.

Typical end-user applications:
- Manufacturing, CSM (Computational Structural Mechanics): ABAQUS/Explicit, ABAQUS/Standard, ANSYS Mechanical, LSTC LS-DYNA, ALTAIR Radioss.
- Manufacturing, CFD (Computational Fluid Dynamics): FLUENT, ANSYS CFX, STAR-CD, AVL FIRE, Tgrid. Other: inTrace OpenRT.
- Life Sciences: Gaussian, VASP, AMBER, Schrödinger Jaguar, Schrödinger Glide, NAMD, DOCK, GAMESS, GOLD, mpiBLAST, GROMACS, MOLPRO, OpenEye FRED, OpenEye OMEGA, SCM ADF, HMMER.
- Energy: Schlumberger ECLIPSE, Paradigm GeoDepth, 3DGEO 3DPSDM, Norsar 3D.
- EDA: Mentor, Cadence, Synopsys.
- Finance: Wombat, KX, others.
- Others: The MathWorks MATLAB, R, Octave, Wolfram MATHEMATICA, ISC STAR-P.

vSMP Foundation 2.0

Support for the Intel® Nehalem processor family:
- First Nehalem solution with more than 2 processors.
- Up to 3x better performance compared to Harpertown-based systems.
- Optimized performance with intra-board memory placement and QDR InfiniBand.

High availability with dual-rail InfiniBand:
- Two InfiniBand switches (dual-rail) in an active-active configuration.
- Automatic failover on link errors (cable) or switch failure.
- Improved performance with switch load-balancing (both switches used in parallel).

Partitioning (requires an add-on license):
- Hardware-level isolated partitions, each able to run a different OS.
- Up to 8 partitions, with a minimum of 2 servers per partition.

Emulex LightPulse® Fibre Channel HBA support.

(Diagram: Servers A, B, and C connected to InfiniBand Switch 1 and Switch 2 with automatic failover and load-balancing, shown in single-partition and multiple-partition configurations.)

vSMP Foundation 2.0: Complete System View, now available for academic institutions. (Before/after diagram.)

Some Performance Data: Gaussian (benchmark charts)

vSMP Foundation Performance: STREAM (OpenMP), MB/s, higher is better (chart)

HW characteristics (source: ScaleMP):
- 1333 MHz FSB: 32 x Intel Xeon E5345 QC (Clovertown), 2.33 GHz, 2x4 MB L2; 900/960 GB RAM (vSMP Foundation 1.7).
- 1600 MHz FSB: 32 x Intel Xeon E5472 QC (Harpertown), 3.00 GHz, 2x6 MB L2; 249/288 GB RAM (vSMP Foundation 1.7).
- QPI 6.4 GT/s: 4 x Intel Xeon X5570 QC (Nehalem), 2.93 GHz, 8 MB L3; 9/16 GB RAM (vSMP Foundation 1.7).
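For reference, the kernel this chart measures is simple. A hedged sketch of the STREAM triad (illustrative only; the array size here is an arbitrary example, not the size used in the runs above) in C with OpenMP:

```c
/* Compile with: cc -O2 -fopenmp triad.c */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N 20000000L                          /* arbitrary example size */

int main(void) {
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) { fprintf(stderr, "out of memory\n"); return 1; }
    const double q = 3.0;

    #pragma omp parallel for
    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    double t = omp_get_wtime();
    #pragma omp parallel for
    for (long i = 0; i < N; i++)
        a[i] = b[i] + q * c[i];              /* the triad kernel */
    t = omp_get_wtime() - t;

    /* STREAM counts 24 bytes of memory traffic per triad iteration:
       one 8-byte write (a) plus two 8-byte reads (b, c). */
    printf("triad: %.0f MB/s (check: %.1f)\n", 24.0 * N / t / 1e6, a[N / 2]);

    free(a); free(b); free(c);
    return 0;
}
```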

vSMP Foundation Performance: SPECint_rate_base2000, higher is better (chart)

HW characteristics (source: ScaleMP):
- vSMP Foundation™ (QC, 8 cores): 2 x Intel Xeon 5345 QC (Clovertown), 2.33 GHz, 2x4 MB L2; 908/960 GB RAM (vSMP Foundation 1.7).
- vSMP Foundation™ (QC, 128 cores): 32 x Intel Xeon 5345 QC (Clovertown), 2.33 GHz, 2x4 MB L2; 908/960 GB RAM (vSMP Foundation 1.7).

vSMP Foundation Performance: SPECint_rate_base2006, higher is better (chart)

HW characteristics (source: ScaleMP):
- QPI 6.4 GT/s: 4 x Intel Xeon X5570 QC (Nehalem), 2.93 GHz, 8 MB L3; 9/16 GB RAM (vSMP Foundation 1.7).

vSMP Foundation Performance: SPECfp_rate_base2000, higher is better (chart)

HW characteristics (source: ScaleMP):
- vSMP Foundation™ (QC, 8 cores): 2 x Intel Xeon 5345 QC (Clovertown), 2.33 GHz, 2x4 MB L2; 908/960 GB RAM (vSMP Foundation 1.7).
- vSMP Foundation™ (QC, 128 cores): 32 x Intel Xeon 5345 QC (Clovertown), 2.33 GHz, 2x4 MB L2; 908/960 GB RAM (vSMP Foundation 1.7).

vSMP Foundation Performance: SPECfp_rate_base2006, higher is better (chart)

HW characteristics (source: ScaleMP):
- QPI 6.4 GT/s: 4 x Intel Xeon X5570 QC (Nehalem), 2.93 GHz, 8 MB L3; 9/16 GB RAM (vSMP Foundation 1.7).

Shai Fultheim, Founder and President. Shai@ScaleMP.com, +1 (408) 480 1612