What’s New in the Cambridge High Performance Computer Service? Mike Payne, Cavendish Laboratory. Director: Dr Paul Calleja.

Presentation transcript:

What’s New in the Cambridge High Performance Computer Service? Mike Payne, Cavendish Laboratory. Director: Dr Paul Calleja (pjc82). More staff.

Darwin: a Dell PowerEdge 1950 cluster of 2,340 cores (3.0 GHz Intel Woodcrest processors), with InfiniPath (InfiniBand) interconnect throughout the cluster and the ClusterVision software stack (plus HPC add-ons).

New paradigm for HPC?
- 20th in the Top500 list, with 18.3 TFlop/s Linpack
- Cost less than £2 million, on the order of 1/10th the cost per TFlop of comparable machines in the Top500 list
- Explained by the commoditisation of high performance computing
- Performance (for the same cost) in HPC doubles every 9 months, twice the rate of Moore’s Law
- This will continue for the next 5 years, driven primarily by multicore technology
- HPC will soon be almost free!
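To make the compounding concrete, here is a minimal sketch in Python. Only the 9-month doubling rate comes from the slide; the starting point of roughly 18 TFlop/s for roughly £2 million is an assumption taken from the Darwin figures above.

```python
# Sketch of the slide's extrapolation: performance per unit cost doubling
# every 9 months ("twice Moore's Law"). The starting figures are assumptions
# based on Darwin (~18 TFlop/s for ~GBP 2M in 2006); only the doubling
# rate is taken from the slide.
base_tflops_per_million_gbp = 18.0 / 2.0   # ~9 TFlop/s per GBP 1M in 2006
doubling_period_years = 0.75               # 9 months

for years in range(0, 6):
    factor = 2 ** (years / doubling_period_years)
    print(f"{2006 + years}: ~{base_tflops_per_million_gbp * factor:,.0f} "
          f"TFlop/s per GBP 1M (x{factor:,.1f})")
```

Compounded over the slide's 5-year horizon this is roughly a 100-fold improvement in performance per pound, which is the sense in which HPC becomes "almost free".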

Strategic Alliances
- UHECSS (University High End Computing Support Service): an alliance between Cambridge, University College London, Bristol and Southampton, together with Daresbury Laboratory and Heriot-Watt University, to develop a model of shared support services (HEFCE).
- HPC-SIG (UK HPC Special Interest Group): a lobby group for any UK university providing an HPC service; these centres will probably be delivering 300 TFlops by the end of 2007.
- HECToR: 60 TFlops from October 2007.

Paying for it
The new era of ‘Full Economic Costs’ (FEC) demands that all University facilities be ‘sustainable’, which means they must recover their costs from users, who in turn must raise the money from research grants... but this will destroy innovation, discourage use, and so on.

Importance of Innovation in HPC
Quantum mechanical atomistic simulations: 2 atoms in 1981, around 400 atoms today. The increase in computational effort for this calculation using 1981 techniques would be at least 10^8. The increase in the power of hardware over this period is ~100. Hence, innovative software technologies increased the efficiency of the calculation over this period by at least 10^6.
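The arithmetic on this slide is worth making explicit: the hardware and software factors multiply, so the software factor is the total effort increase divided by the hardware gain. A minimal check, using only the slide's own numbers:

```python
# The slide's factors multiply: total speedup = hardware gain x software gain.
total_effort_increase = 1e8   # 400-atom calculation done with 1981 techniques
hardware_gain = 1e2           # ~100x hardware improvement since 1981
software_gain = total_effort_increase / hardware_gain
print(f"Software efficiency gain: at least {software_gain:.0e}")  # 1e+06
```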

Paying for it
So the Cambridge HPCS will continue to offer free, rapid access to all users, particularly to new users of the system and/or of HPC. BUT large and established users of HPC will be expected to pay for access, and will be given priority in the queues to ensure they receive the time they have paid for. As new users expand their use of the service, they will be expected to apply for funding to pay for time. 7p per core hour ≈ £9 per TFlop hour (Linpack).
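A short sketch of how the 7p-per-core-hour rate plausibly converts to roughly £9 per Linpack TFlop hour. The per-core Linpack figure below is an assumption derived from Darwin's Top500 entry (about 18.3 TFlop/s over 2,340 cores), not something stated on this slide; only the 7p rate is quoted above.

```python
# Deriving GBP per TFlop-hour from the quoted 7p per core-hour rate.
# Assumption: Darwin's Linpack performance spread evenly over its cores.
pence_per_core_hour = 7.0
linpack_tflops = 18.27        # assumed: Darwin's Top500 Rmax, Nov 2006
cores = 2340                  # assumed: Darwin's core count

tflops_per_core = linpack_tflops / cores            # ~0.0078 TFlop/s per core
core_hours_per_tflop_hour = 1.0 / tflops_per_core   # ~128 core-hours
cost_gbp = core_hours_per_tflop_hour * pence_per_core_hour / 100.0
print(f"~GBP {cost_gbp:.0f} per TFlop hour")        # ~GBP 9
```

In other words, one Linpack TFlop hour corresponds to roughly 128 core hours on a machine of Darwin's per-core performance, and 128 x 7p is about £9.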