1
Windows HPC: Launching to expand grid usage. Windows vs. Linux, or Windows with Linux?
2
Agenda
- Launching Windows HPC Server 2008
- Windows HPC: who is it for?
- Windows and Linux for the best fit to your needs
- Windows HPC Server 2008: characteristics
- Tools for parallel programming
3
Windows HPC launching
Available on November 1. Evaluation version for download at: www.microsoft.com/france/hpc
4
Windows HPC: Who is it for?
- Research groups who work on Windows workstations and need local computing capability for their day-to-day calculations, often as a complement to the large shared cluster in their institution that requires reservation. Their requirements: use of the cluster must be as easy as possible for a non-IT-skilled scientist, and the cluster must be managed by their standard local IT.
- Scientists and engineers who develop code on their Windows workstations and want to scale it out onto a cluster with as little effort as possible.
- Academic institutions that want to open their grid infrastructure to new types of users, optimize their investment in hardware, and foster new research initiatives.
- Academic institutions that want to prepare their students for all the computing environments they will encounter in their careers.
5
Open approach
6
Interoperability
- At the application level: Services for UNIX Applications, Novell agreements
- At the OS level: dual-OS clusters
- At the scheduling level: ProActive integration, Open Grid Forum
7
Interoperability & Open Grid Forum
OGF: in existence since 2001
- Mission: pervasive adoption of grid technologies
- Standards through an open community process
HPC Profile Working Group (started 2006)
- Commercial: Microsoft, Platform, Altair, …
- Research: U. of Virginia, U. of Southampton, Imperial College London, …
- Grid middleware: OMII-UK, Globus, EGEE, CROWN, …
8
Cross-platform integration: [diagram] a gSOAP web-services client talks to multiple schedulers through the HPC Basic Profile, each exposing an HPC Basic Profile endpoint: LSF 7.0.1, SGE 6.1 on SUSE, SGE, PBS, Windows HPC v1, and Windows HPC v2.
9
Scenario: Metascheduling. [diagram] On one side, isolated application and resource silos: Scheduler A (Windows HPC Server) and Scheduler B each serve their own applications on separate Windows and Linux cluster resources. On the other side, integrated resources: end users submit through either scheduler, and HPCBP lets Windows Compute Cluster Server (v2) dispatch work across both Windows and Linux cluster resources.
10
Opening to new usages and users at Umeå University
Requirements:
- Increase parallel computing capacity to support more demanding research projects
- Expand services to a new set of departments and researchers that have expressed demand for support of Windows-based applications
Solution:
- Deployment of a dual-OS system (Windows HPC Server 2008 and Linux) on a cluster of 672 blades / 5,376 cores
Results:
- Linpack on Windows HPC Server 2008 achieves 46.04 TFlops at 85.6% efficiency
- Ranked 39th on the June 2008 Top500 list
11
NCSA at the University of Illinois at Urbana-Champaign
Requirements:
- Meet the evolving needs of both academic and private-industry users of its supercomputing center
- Enable HPC for a broader set of users than the traditional ones
Solution:
- Add Windows HPC Server to the options on a 1,200-node, 9,472-core cluster
Results:
- Linpack on Windows HPC Server 2008 achieves 68.5 TFlops at 77.7% efficiency
- Ranked 23rd on the June 2008 Top500 list
12
The prize: NCSA's Abe cluster, #14 on the November 2007 Top500 at roughly 70% efficiency. The goal: unseat the #13 Barcelona cluster at 63.8 TFlops. The result: #23 on the Top500.
13
1,184 nodes online; 4 hours from bare metal to Linpack
14
Using Excel to Drive Linpack
15
============================================================================
HPLinpack 1.0a -- High-Performance Linpack benchmark -- January 20, 2004
Written by A. Petitet and R. Clint Whaley, Innovative Computing Labs., UTK
============================================================================
The following parameter values will be used:
N      : 1008384
NB     : 192
PMAP   : Row-major process mapping
P      : 74
Q      : 128
PFACT  : Crout
NBMIN  : 4
NDIV   : 2
RFACT  : Right
BCAST  : 1ring
DEPTH  : 0
SWAP   : Mix (threshold = 192)
L1     : transposed form
U      : transposed form
EQUIL  : yes
ALIGN  : 16 double precision words
============================================================================
T/V               N    NB     P     Q      Time       Gflops
----------------------------------------------------------------------------
W00R2C4     1008384   192    74   128   9982.08   6.848e+004
----------------------------------------------------------------------------
||Ax-b||_oo / ( eps * ||A||_1  * N        ) = 0.0005611 ...... PASSED
||Ax-b||_oo / ( eps * ||A||_1  * ||x||_1  ) = 0.0009542 ...... PASSED
||Ax-b||_oo / ( eps * ||A||_oo * ||x||_oo ) = 0.0001618 ...... PASSED
============================================================================
After 2.5 hours: 68.5 TFlops, 77.7% efficiency
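As a sanity check on that efficiency figure: Linpack efficiency is measured R_max divided by theoretical R_peak. Assuming Abe's roughly 2.33 GHz quad-core Intel processors deliver 4 floating-point operations per cycle per core (an assumption about the hardware, not stated on the slide):

R_peak ≈ 9,472 cores × 2.33 GHz × 4 flops/cycle ≈ 88.3 TFlops
efficiency = R_max / R_peak ≈ 68.5 TF / 88.3 TF ≈ 77.6%

which matches the reported 77.7% to within rounding.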
16
Windows on the Top500:
Windows Compute Cluster Server 2003
- Winter 2005, Microsoft: 4 processors, 9.46 GFlops
- Spring 2006, NCSA, #130: 896 cores, 4.1 TF
- Spring 2007, Microsoft, #106: 2,048 cores, 9 TF, 58.8% efficiency
- Fall 2007, Microsoft, #116: 2,048 cores, 11.8 TF, 77.1% efficiency
Windows HPC Server 2008
- Spring 2008, NCSA, #23: 9,472 cores, 68.5 TF, 77.7% efficiency
- Spring 2008, Umeå, #39: 5,376 cores, 46 TF, 85.5% efficiency
- Spring 2008, Aachen, #100: 2,096 cores, 18.8 TF, 76.5% efficiency
Overall: a 30% efficiency improvement.
17
Other examples
- The Center for Computing and Communication (CCC) at RWTH Aachen expanded its cluster to include Windows HPC to support a growing number of Windows-based parallel applications. "A lot of Windows-based development is going on with the Microsoft Visual Studio development system, and most researchers have a Windows PC on their desk," says Christian Terboven, Project Lead for HPC on Windows at the CCC. "In the past, if they needed more compute power, these researchers were forced to port their code to UNIX because we offered HPC services primarily on UNIX." Dual-boot system, 256 nodes at 18.81 TFlops, ranked 100th on the June 2008 Top500 list.
- A facility for breakthrough science seeks to expand its user base with Windows-based HPC. "Windows HPC Server 2008 will help us extend our user base by taking high-performance computing to new user communities in a way we were unable to do before." "Porting codes from Linux to Windows HPC Server 2008 was very easy and painless. I was running the ported code within a day." 32 nodes, 256 cores, 2 head nodes, dual-boot Windows HPC Server 2008 / SuSE Linux 10.1 system.
- Windows HPC integrated into the ProActive middleware, with the goal of offering identical access to computing resources for users of Windows- and Linux-based applications.
- A leading supercomputing center in Italy improves access to supercomputing resources for more researchers from private-industry sectors, many of whom were unfamiliar with its Linux-based tools and interfaces. "Most researchers do not have time to acquire specialized IT skills. Now they can work with an HPC cluster that has an interface similar to the ones they use in their office environments. The interface is a familiar Windows feature, and it's very easy to understand from the beginning." 16 nodes, 128 cores, dedicated additional Windows cluster.
18
Windows HPC Server 2008
Systems management: rapid large-scale deployment and a built-in diagnostics suite; integrated monitoring, management, and reporting; familiar UI and rich scripting interface; integrated security via Active Directory.
Job scheduling: support for batch, interactive, and service-oriented applications; high-availability scheduling; interoperability via OGF's HPC Basic Profile.
MPI: MS-MPI stack based on the MPICH2 reference implementation (a minimal example follows below); performance improvements for RDMA networking and multi-core shared memory; MS-MPI integrated with Windows Event Tracing.
Storage: access to SQL, Windows, and UNIX file servers; key parallel file server vendor support (GPFS, Lustre, Panasas); in-memory caching options.
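Because MS-MPI is based on the MPICH2 reference implementation, standard MPI C code compiles and runs unchanged on Windows HPC Server 2008. A minimal sketch (program and file names are illustrative, not from the deck):

/* hello_mpi.c: minimal MPI program; builds against MS-MPI or MPICH2 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);                /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* id of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                        /* shut the runtime down */
    return 0;
}

On the 2008 scheduler such a binary would typically be launched with something like "job submit /numprocessors:8 mpiexec hello_mpi.exe"; the exact flags are an assumption here, so check the HPC Pack documentation.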
19
Large Scale Deployments
20
And an out-of-the-box, integrated solution for smaller environments…
21
Parallel Programming
Available now:
- Development and parallel debugging in Visual Studio
- Third-party compilers, debuggers, runtimes, etc. available
Emerging technologies:
- Parallel Framework
- LINQ/PLINQ: natural object-oriented language for SQL-style queries in .NET
- C# futures: a way to explicitly make loops parallel
For the future: Parallel Computing Initiative (PCI)
- Triple the investment, with a new engineering team
- Focused on common tools for developing multi-core code from desktops to clusters
Tooling on Windows:
- Compilers: Visual Studio, Intel C++, gcc, PGI Fortran, Intel Fortran, Absoft Fortran, Fujitsu
- Profilers and tracers: PerfMon, ETW (for MS-MPI), VSPerf/VSCover, CLRProfiler, Vampir (being ported to Windows), Intel Collector/Analyzer (runs on CCS with Intel MPI), VTune & CodeAnalyst, Marmot (being ported to Windows), MPI Lint++
- Debuggers: Visual Studio, WinDbg, DDT
- Runtimes and libraries: MPI, OpenMP (see the sketch after this list), C# futures, MPI.C++ and MPI.NET, PLINQ
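Among the runtimes listed above, OpenMP is the one Visual Studio's C/C++ compiler supports directly (via the /openmp switch). A minimal sketch of a loop parallelized across cores, assuming nothing beyond standard OpenMP (the loop bound is illustrative):

/* sum_omp.c: parallel reduction with OpenMP;
   compile with /openmp (MSVC) or -fopenmp (gcc) */
#include <omp.h>
#include <stdio.h>

int main(void)
{
    long i;
    double sum = 0.0;

    /* iterations are split across threads; each thread keeps a private
       partial sum, combined at the end by the reduction clause */
    #pragma omp parallel for reduction(+:sum)
    for (i = 0; i < 1000000; i++)
        sum += (double)i;

    printf("sum = %.0f using up to %d threads\n", sum, omp_get_max_threads());
    return 0;
}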
22
Resources
- Microsoft HPC web site (download the evaluation version): http://www.microsoft.com/france/hpc
- Windows HPC community site: http://www.windowshpc.net
- Dual-OS cluster white paper: online soon
23
© 2008 Microsoft Corporation. All rights reserved. This presentation is for informational purposes only. Microsoft makes no warranties, express or implied, in this summary.