Welcome To The 33rd HPC User Forum Meeting, September 2009.

Presentation transcript:

Welcome To The 33rd HPC User Forum Meeting September 2009

Important Dates For Your Calendar
FUTURE HPC USER FORUM MEETINGS:
October 2009 International HPC User Forum Meetings:
- HLRS/University of Stuttgart, October 5-6, 2009 (midday to midday)
- EPFL, Lausanne, Switzerland, October 8-9, 2009 (midday to midday)
US Meetings:
- April 12 to 14, 2010, Dearborn, Michigan, at the Dearborn Inn
- September 2010, Seattle, Washington

Thank You To Our Meal Sponsors!
Wednesday Breakfast -- Hitachi Cable America
Wednesday Lunch -- Altair Engineering & AMD
Wednesday Break -- Appro International
Thursday Breakfast -- Mellanox Technologies
Thursday Lunch -- Microsoft
Thursday Break -- ScaleMP

A Petascale Trivia Question
How many years would 1,000 scientists have to calculate by hand to equal 1 second of work on a 0.1 PFLOPS supercomputer?
- Assuming that they can do 1 calculation every second, with no rest time (and a long life)

A Petascale Trivia Answer
About 3,200 years
- 0.1 PF ≈ 1,000 x 365 x 24 x 60 x 60 x 3,200
From DOD's new Mana supercomputer in Hawaii:
- A Dell PowerEdge M610 cluster with 1,152 nodes
- Each node contains two 2.8 GHz Intel Nehalem processors, for a total of 9,216 compute cores
- That gives it a PEAK performance of 103 TFLOPS, or 0.1 PFLOPS
- From MHPCC Acting Director David L. Stinson
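The slide's arithmetic can be checked directly. The short Python sketch below (not part of the original deck) recomputes both figures from the numbers quoted above; the cores-per-socket and flops-per-clock values are assumptions chosen to be consistent with the quoted 9,216-core and 103 TFLOPS totals, not numbers taken from the slide.

    # Sanity check for the petascale trivia slide (assumptions noted inline).
    PEAK_FLOPS = 0.1e15                 # 0.1 PFLOPS = 1e14 operations per second
    PEOPLE = 1_000                      # scientists calculating by hand, 1 calculation/second each
    SECONDS_PER_YEAR = 365 * 24 * 60 * 60

    # Years of hand calculation equal to one second of machine time
    years = PEAK_FLOPS / (PEOPLE * SECONDS_PER_YEAR)
    print(f"Hand-calculation equivalent: ~{years:,.0f} years")      # ~3,171, i.e. roughly 3,200

    # Peak performance of the Mana system described on the slide
    nodes, sockets_per_node = 1_152, 2
    cores_per_socket, ghz, flops_per_clock = 4, 2.8, 4              # assumed: quad-core Nehalem, 4 flops/clock
    cores = nodes * sockets_per_node * cores_per_socket
    peak_tflops = cores * ghz * flops_per_clock / 1_000
    print(f"{cores:,} cores, peak ~{peak_tflops:.0f} TFLOPS")       # 9,216 cores, ~103 TFLOPS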

Tuesday Dinner Vendor Updates: 10 Min. Only
IBM, Appro, Hitachi Cable, Luxtera, Mellanox, ScaleMP, Tech-X, Mitrionics

Welcome To The 33rd HPC User Forum Meeting September 2009

Introduction: Logistics
Ask Mary if you need a receipt
Meals and events
- Wednesday tour and dinner plans
We have a very tight agenda (as usual)
- Please help us keep on time!
Review handouts
- Note: We will post most of the presentations on the web site
- Please complete the evaluation form

HPC User Forum Mission: To improve the health of the high-performance computing industry through open discussions, information-sharing and initiatives involving HPC users in industry, government and academia, along with HPC vendors and other interested parties.

HPC User Forum Goals
Assist HPC users in solving their ongoing computing, technical and business problems
Provide a forum for exchanging information, identifying areas of common interest, and developing unified positions on requirements
- By working with users in other sectors and with vendors
- To help direct and push vendors to build better products
- Which should also help vendors become more successful
Provide members with a continual supply of information on:
- Uses of high-end computers, new technologies, high-end best practices, market dynamics, computer systems and tools, benchmark results, vendor activities and strategies
Provide members with a channel to present their achievements and requirements to interested parties

Important Dates For Your Calendar
FUTURE HPC USER FORUM MEETINGS:
October 2009 International HPC User Forum Meetings:
- HLRS/University of Stuttgart, October 5-6, 2009 (midday to midday)
- EPFL, Lausanne, Switzerland, October 8-9, 2009 (midday to midday)
US Meetings:
- April 12 to 14, 2010, Dearborn, Michigan, at the Dearborn Inn
- September 2010, Seattle, Washington

Thank You To Our Meal Sponsors!
Wednesday Breakfast -- Hitachi Cable America
Wednesday Lunch -- Altair Engineering
Wednesday Break -- Appro International & AMD
Thursday Breakfast -- Mellanox Technologies
Thursday Lunch -- Microsoft
Thursday Break -- ScaleMP

1Q 2009 HPC Market Update

Q109 HPC Market Result – Down 16.8%
Workgroup (under $100K): $282M
Departmental ($100K - $250K): $754M
Divisional ($250K - $500K): $237M
Supercomputers (over $500K): $802M
HPC Servers total: $2.1B
Source: IDC, 2009
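As a quick cross-check (a minimal sketch, not from the slide), the four competitive segments can be tallied to confirm they sum to the quoted ~$2.1B and to see each segment's share of Q1 2009 revenue:

    # Q1 2009 HPC server revenue by competitive segment, in $M (figures from the slide above)
    segments = {
        "Workgroup (under $100K)":     282,
        "Departmental ($100K-$250K)":  754,
        "Divisional ($250K-$500K)":    237,
        "Supercomputers (over $500K)": 802,
    }

    total = sum(segments.values())
    print(f"HPC servers total: ${total:,}M (~$2.1B)")
    for name, revenue in segments.items():
        print(f"  {name}: ${revenue}M ({revenue / total:.0%})")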

Q109 Vendor Share in Revenue

Q109 Cluster Vendor Shares

HPC Compared To IDC Server Numbers

HPC Qview Tie To Server Tracker: 1Q 2009 Data
All WW Servers As Reported In IDC Server Tracker: $9.9B
HPC Qview Compute Node Revenues: ~$1.05B*
HPC Special Revenue Recognition Services (includes those sold through custom engineering, R&D offsets, or paid for over multiple quarters): ~$474M
HPC Computer System Revenues Beyond The Base Compute Nodes (includes interconnects and switches, inbuilt storage, scratch disks, OS, middleware, warranties, installation fees, service nodes, special cooling features, etc.): ~$576M
* This number ties the two data sets on an apples-to-apples basis
Tracker QST Data Focus: Compute Nodes
HPC Qview Data Focus: The Complete System -- "Everything needed to turn it on"

OEM Mix Of HPC Special Revenue Recognition Services
Notes:
Includes product sales that are not reported by OEMs as product revenue in a given quarter
- Sometimes HPC systems are paid for across a number of quarters or even years
Includes NRE, if required for a specific system
Includes custom engineering sales
Some examples: Earth Simulator, ASCI Red, ASCI Red Storm, DARPA systems, and many small and medium HPC systems that are sold through a custom engineering or services group because they need extra things added

Areas Of HPC “Uplift” Revenues

Notes:
* Computer hardware (in cabinet) -- hybrid nodes, service nodes, accelerators, GPGPUs, FPGAs, internal interconnects, in-built disks, in-built switches, special cabinet doors, special signal processing parts, etc.
* External interconnects -- switches, cables, extra cabinets to hold them, etc.
* External storage -- scratch disks, interconnects to them, cabinets to hold them, etc. (This excludes user file storage devices)
* Software -- includes both bundled and separately charged software if sold by the OEM or on the purchase contract -- the operating system, license fees, the entire middleware stack, compilers, job schedulers, etc. (it excludes all ISV applications unless sold by the OEM and in the purchase contract)
* Bundled warranties
* Misc. items -- since the HPC taxonomy includes everything required to turn on the system and make it operational, this covers items like bundled installation services, special features and other add-on hardware, and even a special paint job if required

Special Paint Jobs Are Back …

2010 IDC HPC Research Areas
Quarterly HPC Forecast Updates
- Until the world economy recovers
New HPC End-user Based Reports:
- Clusters, processors, accelerators, storage, interconnects, system software, and applications
- The evolution of government HPC budgets
- China and Russia HPC trends
Power and Cooling Research
Developing a Market Model For Middleware and Management Software
Extreme Computing Data Center Assessment and Benchmarking
Tracking Petascale and Exascale Initiatives

Agenda: Day One, Wednesday Morning
8:10am Introductions and Welcome, Steve Finn and Earl Joseph
Morning Session Chair: Steve Finn
8:15am Weather/climate presentation from ORNL, Jim Hack
8:45am Weather/climate presentation from NCAR, Henry Tufo
9:15am Weather/climate presentation from NASA/Goddard, Phil Webster
9:45am Two short vendor technology updates (Altair and Sun)
10:15am Break
10:30am Weather/climate presentation from NRL Monterey, Jim Doyle
11:00am Weather and Climate Directions from an IBM perspective, Jim Edwards
11:25am Panel on HPC Weather/Climate/Earth Sciences Requirements & Directions; Moderators: Steve Finn and Earl Joseph
12:00pm Networking Lunch

Lunch Break
Thanks to Altair Engineering
Please Return Promptly at 1:00pm

Thank You Altair Engineering For Lunch

Agenda: Day One, Wednesday Afternoon
Afternoon Session Chair: Paul Muzio
1:00pm HPC in Europe, HECToR Update, Andrew Jones, NAG
1:30pm DOD HPCMP Program Update, Larry Davis
2:00pm Weather/climate Research at Northrop Grumman, Glenn Higgins
2:25pm Weather and Climate Directions from a Cray perspective, Per Nyberg
2:50pm Panel on Government and Political Issues, Concerns and Ideas for New Directions; Moderator: Charlie Hayes
3:30pm DICE Parallel File System Project, Tracey Wilson
4:00pm NCAR HPC User Site Tour, return by 6:00pm
6:00pm Networking break and time for 1-on-1 meetings
6:30pm Special Dinner Event

Welcome To Day 2 Of The HPC User Forum Meeting

Thank You To Our Meal Sponsors!
Wednesday Breakfast -- Hitachi Cable America
Wednesday Lunch -- Altair Engineering
Wednesday Break -- Appro International
Thursday Breakfast -- Mellanox Technologies
Thursday Lunch -- Microsoft
Thursday Break -- ScaleMP

Agenda: Day Two, Thursday Morning
8:10am Welcome, Earl Joseph and Steve Finn
Morning Session Chair: Douglas Kothe
8:15am Power Grid Research at PNNL, Mo Khaleel
8:45am HPC Data Center Power and Cooling Issues, and New Ways to Measure HPC Systems, Roger Panton, Avetec
9:15am Compiler and Tools: User Requirements from ARSC, Edward Kornkven
9:45am New HPC Directions at Microsoft, Roger Barga
10:15am Break
10:30am Technical Panel on HPC Front-End Compiler Requirements and Directions; Moderators: Robert Singleterry, Vince Scarafino
12:15pm Networking Lunch

Lunch Break Thanks to Microsoft Please Return Promptly at 1:00pm

Thank You Microsoft For Lunch

Agenda: Day Two, Thursday Afternoon
Afternoon Session Chair: Jack Collins
1:00pm ARL HPC User Site Update, Thomas Kendall
1:30pm Weather/climate presentation from NCAR, John Michelakes
2:00pm Technical Panel on HPC Application Scaling Issues, Requirements and Trends; Moderators: Doug Kothe and Paul Muzio. Panel members:
3:15pm Short vendor technology update (Microsoft)
3:30pm Break
4:00pm Weather/climate presentation from NASA Langley, Mike Little
4:30pm "Spider," the Largest Lustre File System, ORNL, Galen Shipman
5:00pm Meeting Wrap-Up and Future Meeting Dates, Earl Joseph and Steve Finn
5:00pm Meeting Ends

Important Dates For Your Calendar
FUTURE HPC USER FORUM MEETINGS:
October 2009 International HPC User Forum Meetings:
- HLRS/University of Stuttgart, October 5-6, 2009 (midday to midday)
- EPFL, Lausanne, Switzerland, October 8-9, 2009 (midday to midday)
US Meetings:
- April 12 to 14, 2010, Dearborn, Michigan, at the Dearborn Inn
- September 2010, Seattle, Washington

Thank You For Attending The 33rd HPC User Forum Meeting

Please email us, or check out our web site. Questions?

HPC User Forum Steering Committee Meeting September 2009

How Did The Meeting Go?
What worked well?
What needs to be changed or improved?
Dates and locations for the next Steering Committee meetings?
- SC09 – Monday
- January 2010 at NASA

Important Dates For Your Calendar
FUTURE HPC USER FORUM MEETINGS:
October 2009 International HPC User Forum Meetings:
- HLRS/University of Stuttgart, October 5-6, 2009 (midday to midday)
- EPFL, Lausanne, Switzerland, October 8-9, 2009 (midday to midday)
US Meetings:
- April 12 to 14, 2010, Dearborn, Michigan, at the Dearborn Inn
- September 2010, Seattle, Washington

Please email us, or check out our web site. Questions?

Q408 vs. Q109

HPC Qview Tie To Server Tracker: 1Q 2009 Data (revenue in $M)
Vendor | HPC Server Revenue | HPC Special Revenue Recognition Services | Revenue Beyond Base Nodes | Q1 HPC Total | Q1 Share
HP     | $268   | $140 | $190 | $598   | 29%
IBM    | $178   | $207 | $140 | $525   | 25%
Dell   | $102   | $57  | $90  | $249   | 12%
Sun    | $36    | $20  | $19  | $75    | 4%
Other  | $464   | $50  | $137 | $651   | 31%
Total  | $1,048 | $474 | $576 | $2,098 | 100%
The $1,048 column total ties to the Server Tracker; the $2,098 grand total ties to the HPC Qview.
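To make the tie-out explicit, the sketch below (figures transcribed from the table above, in $M) rebuilds each vendor's Q1 total from its three revenue components and recomputes the shares; the compute-node column total is the number that matches the IDC Server Tracker, while the grand total matches the HPC Qview.

    # 1Q 2009 revenue per vendor, in $M:
    # (compute-node revenue, special revenue recognition services, revenue beyond base nodes)
    vendors = {
        "HP":    (268, 140, 190),
        "IBM":   (178, 207, 140),
        "Dell":  (102,  57,  90),
        "Sun":   ( 36,  20,  19),
        "Other": (464,  50, 137),
    }

    grand_total = sum(sum(parts) for parts in vendors.values())     # $2,098M -> ties to the HPC Qview
    tracker_total = sum(parts[0] for parts in vendors.values())     # $1,048M -> ties to the Server Tracker

    for name, parts in vendors.items():
        total = sum(parts)
        print(f"{name}: ${total}M ({total / grand_total:.0%} of Q1 HPC revenue)")
    print(f"Compute-node revenue (Server Tracker): ${tracker_total:,}M")
    print(f"Complete-system revenue (HPC Qview):   ${grand_total:,}M")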

Government Panel Questions

#1 If you believe that the US's greatest asset in the next 25 years will be our ability to lead the world in the development of intellectual property:
- Do you believe the USG is providing sufficient investment to ensure US competitiveness in science and technology in general, and HPC in particular? Elaborate.
- What do you think the USG should or should not do to help HPC?

Government Panel Questions
#2 Most hardware vendors will agree that profit margins on USG HPC procurements, especially those at the high end, are often negligible at best.
a. While it is generally understood that the USG is obligated to try to get the best value for its money, is there a greater obligation, beyond any specific procurement, regarding the USG's behavior toward the industry in general?
b. If you believe a healthy US HPC community is important for US competitiveness, what, if anything, should the USG specifically do to help the financial or business health of the US HPC vendors?
c. Should the vendors, via one or more of the industry groups, lobby for more lenient procurement terms, less stringent benchmarks, and lower penalties in advanced system procurements?
d. Or should the vendors simply "no bid" more frequently, until the USG relaxes its procurement terms?

Government Panel Questions
#3 Do you agree that the USG emphasis, especially within DOE and the NSF, in the area of petascale and exascale computing is appropriate and the best use of USG funding for support of the US HPC industry and HPC technology development? Please elaborate.

Government Panel Questions
#4 Over the past forty years or so, up to about the mid-1990s, industry traditionally followed the lead of the USG in adopting HPC technology. For example, Cray Research sold more Y-MP supercomputers to industry than to governments. Why hasn't US industry followed the lead of the USG in the race to petascale computing?
a. Is it because their traditional applications don't need to scale that high?
b. Is it because ISV software (and their own software) doesn't scale?
c. Is it because of the software per-CPU costs?
d. Will this affect US competitiveness?
e. What action should the USG take, if any, to encourage industry adoption of high-end HPC specifically, or HPC of any size in general?

Government Panel Questions
#5 At the National Science Foundation there have been two major HPC system funding programs over the past three years:
a. The Track 1 Program, to fund the world's most powerful "leadership class" petascale supercomputer, an IBM system developed under the DARPA HPCS Program, planned for installation at NCSA. (It is important to note that Cray is also developing a multi-petaflop system under the DARPA HPCS Program, which is currently expected to be installed at the Oak Ridge National Laboratory, funded by DOE.)
b. The Track 2 Program, four annual procurements to install "mid range" systems smaller than the Track 1 system but of a size to bridge the gap between current HPC systems and more advanced petascale systems. The first Track 2 system was installed at TACC at the University of Texas. The second and third systems are scheduled for the University of Tennessee at ORNL and for the University of Pittsburgh and Carnegie Mellon at the Pittsburgh Supercomputing Center. The results of the fourth annual procurement, promised to be a multiple buy of up to four systems, have yet to be announced.
Questions:
a. Do you agree specifically with the NSF Track 1 and 2 programs, or do you think NSF's resources should have been, or should in the future be, distributed more broadly throughout academia? Why?
b. Now that the fourth and last Track 2 procurement is about over, what do you recommend NSF should do next with respect to HPC?

Government Panel Questions
#6 Do you believe the USG is funding HPC software to the degree necessary to ensure US leadership?
a. Specifically with respect to the petascale programs?
b. With respect to assisting the ISVs or industrial corporations themselves?
c. What other actions would you recommend to improve the US posture with respect to software, especially for US competitiveness?

Government Panel Questions
#7 Do you think programs like DARPA HPCS will lead the mainstream HPC industry toward higher productivity and performance, or will the technologies developed for these programs split off from most of HPC and go their own way?

Government Panel Questions
#8 In summation, if any panel members have comments regarding HPC public policy not previously expressed, please take a few minutes to summarize the points that are important to you.