Cray Announces Cray Inc.

Presentation transcript:

Cray Announces Cray Inc

Cray Strategy for Growth in HPC
- Cray Linux Environment Version 3 (CLE3)
- Cray moves to x86 with the AMD Opteron-based XT3
- Announces that Intel and AMD processors will be incorporated into the future Cascade system
- Cray moves to a Linux-based OS
- Announces the XT5m "mini" for the mid-range
- Cray scales production systems to the petascale
- Cray Adaptive Supercomputing Vision
- Cray introduces the CX1, and the CX1000 for the departmental cluster/hybrid market
Customers include ORNL, NICS, HECToR, NERSC-5 and -6, DoD HPCMP (AFRL, ERDC, NAVO, ARL, ARSC), Cielo, KMA, CSCS, CSC, NCAR, Indiana U, U of Duisburg-Essen, ISM, SWIFT, SNL, JRTRI, U of Bergen, HLRS, and many others.

Cray Products and Markets
- Cray XT6: High-End Supercomputing (Production Petascale), 100 TF to petascale, $2M+
- Cray XT6m: Mid-Range Supercomputing (Production Scalable), 10 TF to 100+ TF, $0.5M to $3M
- Cray CX1000: Capability Clusters (Hybrid Capable), 2 TF to 10+ TF, $100K to $800K
- Cray CX1: Deskside ("Ease of Everything"), $15K to $120K

Cray Linux Environment: Designed for HPC
For the first time in a long time, the most powerful supercomputers on the planet enable applications to run at over a sustained petaflop, and can also run your key ISV applications.

CLE3 is a Scalable, Adaptive Linux OS
Benefits of CLE3 and Cray's Adaptive Supercomputing Vision:
- Performance: supports ultimate scalability
  - Scalability to run sustained petascale applications
  - New storage and I/O features with improved Lustre support
  - Improved scheduler features and support
  - Better support for high-core-count nodes with "Core Specialization" (see the sketch after this list)
- Reliability: industry-leading system availability
  - Node health checking with "Advanced NodeKARE"
  - Improved network resiliency
  - Improved serviceability and node replacement
- Compatibility: industry-standard OS base
  - SUSE-based Linux
  - Improved parallel I/O support with the "Data Virtualization Service (DVS)"
  - Introducing CCM, Cluster Compatibility Mode
- Flexibility: the OS adapts at job runtime
  - Extreme Scalability Mode and Cluster Compatibility Mode
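A minimal sketch of how Core Specialization is typically requested at job launch; the -r flag to aprun is the standard CLE mechanism for reserving cores, but the node and rank counts below are hypothetical, not taken from the slides:

    # Run on 64 XT6 nodes (24 cores each), placing 23 MPI ranks per node
    # and reserving 1 core per node for OS/service work (-r 1), so that
    # kernel noise does not interrupt the compute cores.
    aprun -n 1472 -N 23 -r 1 ./my_mpi_app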

CLE3: An Adaptive Linux OS Designed Specifically for HPC
ESM, Extreme Scalability Mode: no-compromise scalability
- Low-noise kernel for scalability
- Native communication and optimized MPI
- Application-specific performance tuning and scaling
CCM, Cluster Compatibility Mode: no-compromise compatibility
- Fully standard x86/Linux
- Standardized communication layer
- Out-of-the-box ISV installation; ISV applications simply install and run
The CLE3 run mode is set by the user on a job-by-job basis to provide full flexibility.

CLE3 Allows Simultaneous CCM and ESM Modes
On a Cray XT6/Baker system, compute nodes may be running in ESM mode, running in CCM mode, or idle, alongside the service nodes. A typical flow (a hedged submission sketch follows the list):
1. Many applications are running in Extreme Scalability Mode (ESM).
2. A CCM application is submitted through the batch scheduler and nodes are reserved: qsub -q ccm Qname AppScript
3. Nodes are assigned by the batch software and configured for CCM.
4. The batch script and application execute.
5. Other nodes are scheduled for ESM or CCM applications as available.
6. After the CCM job completes, the CCM nodes are cleared and become available again for ESM or CCM applications.
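A minimal sketch of the submission flow above, assuming a PBS-style scheduler as implied by the qsub line on the slide; the script name, the mppwidth resource request, the ccmrun launcher, and the application name reflect typical CLE usage and are assumptions, not details stated here:

    # app_ccm.pbs: illustrative batch script for a CCM job
    #PBS -q ccm            # route the job to the CCM queue
    #PBS -l mppwidth=32    # hypothetical request for 32 PEs
    cd $PBS_O_WORKDIR
    # Within a CCM job, the application is launched with ccmrun,
    # which runs it on the CCM-configured compute nodes.
    ccmrun ./isv_app -input run.cfg

The script is then submitted as on the slide, for example: qsub -q ccm app_ccm.pbs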

Installing a Standard ISV Application is Simple!
- Install out of the box: no changes to the install procedure
- Install on shared storage: any globally shared storage available on the compute nodes
- Networked licenses are supported (see the sketch below)
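A hedged sketch of what such an installation can look like; the install path, the installer invocation, and the FlexNet-style license variable are illustrative assumptions, not details from the slide:

    # Install onto globally shared storage visible from the compute nodes
    # (hypothetical Lustre path), using the ISV's unmodified installer.
    ./install.sh --prefix=/lus/scratch/apps/isv_app

    # Point the application at a networked license server
    # (standard FlexNet/FlexLM port@host convention).
    export LM_LICENSE_FILE=27000@license-server.example.com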

Cray Scalable Systems OS Roadmap, 2009-2013
Cray systems and their OS lines: XT5/XT5m on CLE 2.x; XT6/XT6m (G34/SS) and Baker (G34/Gemini) on CLE 3.x; Cascade on future OS releases.
- CLE 2, released 2008-2009: XT5/XT5m
- CLE 3, released 2010: XT5/XT5m/XT6/XT6m/Baker
  - AMD Opteron™ 6100 Series processors
  - Cluster Compatibility Mode 1.0
  - Core Specialization
  - Lustre 1.8
  - DVS enhancements
- 2011: Baker+, future AMD Opteron™ 6000 Series processors
  - Cluster Compatibility Mode 2.0
  - ISV application acceleration
CLE3 offers our 3rd-generation SUSE Linux-based OS, with new features for performance, scalability, reliability, and compatibility, and a roadmap for further improvements.

Cray Software Ecosystem
Some applications, such as CD-adapco Star-CD and LSTC LS-DYNA, are compiled and run in Extreme Scalability Mode (ESM) using Cray's native MPI libraries for maximum scalability. Cluster Compatibility Mode (CCM) is a new feature of the Cray Linux Environment (CLE) that allows other applications to run on Cray systems out of the box. Scalability and performance may vary depending upon the application.
The ecosystem also includes DVS, CrayPat, Cray Apprentice2, the Iterative Refinement Toolkit, and Cray PETSc and CASK.

Cray Programming Environment
Compilers, libraries, programming aids, and tools to improve application performance and programmer productivity.
Every Cray XT6 system includes the Cray Integrated Tools:
- Cray Compilation Environment: Fortran/C/UPC/CAF/C++
- Optimized OpenMP/MPI libraries
- CrayPat, Cray Apprentice2
- Optimized math libraries
- Iterative Refinement Toolkit
- Cray PETSc, CASK
Customer-selected options:
- Compilers: PGI, PathScale
- Debuggers: TotalView, Allinea DDT
- Schedulers: Moab, PBS Professional, LSF
A hedged build-and-run sketch follows the lists.
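The PrgEnv modules, the ftn/cc compiler wrappers, the pat_build instrumenter, and the aprun launcher are standard Cray PE tools, but the program name, rank count, and the particular module swap below are hypothetical:

    # Select the Cray Compilation Environment and build with the
    # compiler wrappers, which link the optimized MPI and math
    # libraries automatically.
    module swap PrgEnv-pgi PrgEnv-cray
    ftn -o simulate simulate.f90        # use cc/CC for C/C++ sources

    # Instrument the binary for CrayPat's automatic profiling analysis;
    # results can be examined graphically in Cray Apprentice2.
    pat_build -O apa simulate

    # Launch the instrumented binary on the compute nodes.
    aprun -n 256 ./simulate+pat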