1 NSF High Performance Computing (HPC) Activities
- Established an HPC SWOT Team as part of the Cyber Planning Process (August 2005)
- Held a meeting to request information on HPC acquisition models (September 9, 2005)
- Held a meeting to request input on HPC performance requirements (October 18, 2005)
- Released a program solicitation for 1 or 2 HPC machines (November 10, 2005)
- Scheduled future releases of HPC acquisition solicitations (November 2007, 2008, and 2009)
2 NSF High Performance Computing (HPC) Background
NSF planned to release one or more solicitations for the acquisition of high-performance computing (HPC) systems and support of subsequent HPC services. Prior to the release of the solicitation(s), NSF invited input on:
1. Processes for machine acquisition and service provision
2. Machine performance requirements of the S&E research community
3 HPC System Acquisition and Service Provision Meeting
Goal: receive feedback from machine vendors and resource providers (RPs) on the pros and cons of three possible acquisition models:
1. Solicitation for an RP(s), who then selects the machine
2. Solicitation for RP-Vendor team(s)
3. Separate solicitations for machine(s) and RP(s)
4 Other Topics for Discussion
- Metrics that could be used to define machine performance and reliability requirements (see the sketch after this list)
- Selection criteria that might be used as a basis for proposal evaluation
- Pros and cons of acquiring an HPC system that meets a specified performance curve as a one-time purchase or in phases
- Strengths and weaknesses of alternatives such as leasing HPC systems
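To make the metrics discussion concrete, here is a minimal sketch of one common way to collapse multi-benchmark results into a single acquisition score: the geometric mean of per-code speedups over a reference system (the approach used by suites such as SPEC). This is purely illustrative; the benchmark names and timings are hypothetical, and nothing in the solicitation prescribes this formula.

    # Hypothetical composite performance metric: geometric mean of
    # per-benchmark speedups of a candidate system over a reference system.
    # Benchmark names and runtimes below are made up for illustration.
    from math import prod

    # Runtimes in seconds: reference system vs. candidate system.
    reference = {"climate": 900.0, "molecular_dynamics": 450.0, "cfd": 1200.0}
    candidate = {"climate": 300.0, "molecular_dynamics": 250.0, "cfd": 500.0}

    speedups = [reference[name] / candidate[name] for name in reference]
    score = prod(speedups) ** (1.0 / len(speedups))  # geometric mean
    print(f"Geometric-mean speedup: {score:.2f}x")

The geometric mean is a common choice for such composites because it is insensitive to the absolute scale of individual benchmarks and penalizes a system that does very poorly on any single code.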
5 Participants
Universities: Case Western Reserve U., Cornell Univ., Georgia Inst. of Technology, Indiana University, Louisiana State Univ., NCSA, Ohio Supercomputer Center, Purdue, PSC, SDSC, TACC, Univ. of NC, Univ. of Utah, USC
Vendors: Cray, DELL, Hewlett Packard, IBM, Intel, Linux Networx, Rackable Systems, SGI, Sun Microsystems
Other: Argonne National Lab, CASC, DOD HPCMP, Hayes Consulting, NCAR, ORNL, Raytheon
6 HPC System Acquisition and Service Provision Meeting
Outcome:
- Vendors said any of the models would work for them, but the RP-Vendor team model was least favored
- RPs said all three would work but favored Model 1 (solicitation for RP(s))
- The acquisition solicitation used Model 1
7 HPC System Performance Requirements Meeting
Goal: obtain input from the S&E research community on:
- Performance metrics appropriate for use in HPC system acquisition
- Potential benchmark codes representative of classes of S&E applications
8 BIO Participants
David Bader - Georgia Institute of Technology
James Beach - University of Kansas
James Clark - Duke University
William Hargrove - Oak Ridge National Lab
Gwen Jacobs - University of Montana
Phil LoCascio - Oak Ridge National Lab
B.S. Manjunath - UC Santa Barbara
Neo Martinez - Rocky Mountain Biological Lab
Dan Reed - University of North Carolina
Bruce Shapiro - JPL-NASA-Cal Tech
Mark Schildhauer - UCSB-NCEAS
9 HPC System Performance Meeting
S&E Community Comments:
- Many S&E codes are “boutique” and not good for benchmarking
- Machines that can run coupled codes are needed
- Speed is not the problem; latency is (a measurement sketch follows this list)
- Usability (staying up and running) is a priority
- HPC needs are not uniform (e.g., faster vs. more)
- The flexibility and COTS cost of clusters/desktop systems make them the systems of choice
- Software is a big bottleneck
- Benchmarks should include 20-30 factors
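The latency point above is usually quantified with a ping-pong microbenchmark: two MPI ranks bounce a tiny message back and forth, so the measured time is dominated by per-message latency rather than bandwidth. A minimal sketch using mpi4py follows; it is illustrative only and not one of the solicitation's benchmark codes.

    # Ping-pong latency microbenchmark sketch; illustrative only.
    # Requires mpi4py and numpy. Run with: mpiexec -n 2 python pingpong.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    assert comm.Get_size() == 2, "run with exactly 2 ranks"

    reps = 1000
    buf = np.zeros(1, dtype="b")  # 1-byte message isolates latency from bandwidth

    comm.Barrier()
    start = MPI.Wtime()
    for _ in range(reps):
        if rank == 0:
            comm.Send(buf, dest=1)
            comm.Recv(buf, source=1)
        else:
            comm.Recv(buf, source=0)
            comm.Send(buf, dest=0)
    elapsed = MPI.Wtime() - start

    if rank == 0:
        # Each rep is one round trip (two messages), so one-way latency
        # is half the average round-trip time.
        print(f"one-way latency ~ {elapsed / reps / 2 * 1e6:.2f} microseconds")

For tightly coupled codes, this one-way latency, rather than peak FLOP/s, is often what bounds scalability.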
10 HPC System Performance Meeting
Outcome:
- More community workshops are needed
- The current solicitation uses a mixture of “tried and true” benchmarking codes