University of Southampton Clusters: Changing the Face of Campus Computing


University of Southampton Clusters: Changing the Face of Campus Computing
Kenji Takeda, School of Engineering Sciences
Ian Hardy and Oz Parchment, Southampton University Computing Services
Simon Cox, Department of Electronics and Computer Science

University of Southampton Talk Outline
–Introduction
–Clusters background
–Procurement
–Configuration, installation and integration
–Performance
–Future prospects
–Changing the landscape

University of Southampton Introduction
University of Southampton:
–20,000+ students (3,000+ postgraduate)
–1,600+ academic and research staff
–£182 million turnover 1999/2000

University of Southampton "to acquire, support and manage general-purpose computing, data communications facilities and telephony services within available resources, so as to assist the University to make the most effective use of information systems in teaching, learning and research activities".

University of Southampton HEFCE Computational and Data Handling Project
–Existing facilities outdated and overloaded
–£1.01 million total bid, including infrastructure costs and Origin 2000 upgrade
–Large compute facility to provide significant local HPC capability
–Large data store: several terabytes
–Upgraded networking: Gigabit to the desktop
–Staff costs to support the new facility

University of Southampton Cluster Computing
–Extremely attractive price/performance
–Good scalability achievable with high-performance memory interconnects
–Fast serial nodes with lots of memory (up to 4 Gbytes) affordable
–High throughput: nodes are cheap
–Still require SMP for large (>4 Gbytes) memory jobs, for now (the distributed-memory alternative is sketched below)
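To make the distributed-memory model behind these bullets concrete, here is a minimal sketch in C with MPI, added for this write-up rather than taken from the talk. Each process allocates and fills only its own slice of a large array, so the aggregate memory of the cluster grows with the number of nodes, and a single reduction combines the per-node partial sums. The mpicc/mpirun commands in the comment are the usual conventions, not anything specific to IRIDIS.

```c
/* Illustrative only: distributed-memory partial sums with MPI.
 * Each rank owns a private slice of the data, so total memory use
 * scales with the number of nodes rather than with one SMP box.
 * Typical build/run: mpicc sum.c -o sum && mpirun -np 8 ./sum
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    long i, local_n = 1000000;            /* elements held on this node only */
    double local_sum = 0.0, total = 0.0;
    double *x;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    x = malloc(local_n * sizeof(double));  /* lives in this node's own RAM */
    for (i = 0; i < local_n; i++)
        x[i] = (double)(rank * local_n + i);

    for (i = 0; i < local_n; i++)
        local_sum += x[i];

    /* Combine the per-node partial sums on rank 0 */
    MPI_Reduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d processes = %e\n", size, total);

    free(x);
    MPI_Finalize();
    return 0;
}
```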

University of Southampton Clusters at Southampton
–ECS: 8-node Alpha NT and 8-node AMD Athlon clusters
–Social Statistics/ECS/SUCS: 19-node Intel PIII cluster
–Chemistry: 39 AMD Athlon and 4 dual Intel PIII node cluster
–Computational Engineering and Design Centre: 21 dual-node and 10 dual-node Intel PIII clusters
–Aerodynamics and Flight Mechanics Group: 11 dual-node Intel PIII cluster with Myrinet 2000
–ISVR: 9 dual-node Intel PIII Windows 2000 cluster
–Several high-throughput workstation clusters on campus
–Windows Clusters research

University of Southampton User Profiles
Users from many disciplines:
–Engineering, Chemistry, Biology, Medicine, Physics, Maths, Geography, Social Statistics
Many different requirements:
–Scalability, memory, throughput, commercial apps
Want to encourage new users and new applications

University of Southampton Procurement
–Ask users what they want: open discussion
–General-purpose cluster specification
–Open tender process
–Vendors ranging from big-iron companies to home PC suppliers
–Shortlist vendors for detailed discussions
Timeline: Bid Oct 99, Go-ahead Aug 00, PTQ Dec 00, Replies Jan 01, Clarifications Feb 01, Order Apr 01, Install Jun 01

University of Southampton Configuration
–Varied user requirements
–Limited budget: value for money crucial
–Heterogeneous configuration optimal
–Balanced system: CPU, memory, disk
–Boxes-on-shelves or racks?
–Management options: serial network, power strips, fast Ethernet backbone

University of Southampton IRIDIS Cluster
–Boxes-on-shelves
–178 nodes: 146 × dual 1 GHz PIII, 32 × 1.5 GHz P4
–Myrinet 2000 connecting 150 CPUs
–100 Mbit/s fast Ethernet
–APC power strips
–3.2 TB IDE-Fibre disk

University of Southampton Installation & Integration
–Initial installation by vendor, Compusys plc
–One-week burn-in; still had 3 DOAs
–Major switch problem fixed by supplier
–Swap space increased on each node
–No problems since
–Pallas, Linpack and NAS benchmarks plus user codes for a thorough system shakedown (see the ping-pong sketch below)
–Scheduler for flexible partitioning of jobs
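The Pallas (PMB) suite mentioned in the shakedown is built around message-passing micro-benchmarks. The following is a hedged sketch of its simplest test, a ping-pong between two ranks, written in plain C with MPI; it illustrates the technique rather than reproducing the actual benchmark code, and the message size and repetition count are arbitrary choices. Half the average round-trip time approximates latency, and message size over one-way time gives effective bandwidth, which is how the Myrinet 2000 and fast Ethernet fabrics would typically be compared.

```c
/* Illustrative ping-pong micro-benchmark (in the spirit of Pallas/PMB).
 * Rank 0 sends a buffer to rank 1 and waits for it to come back;
 * half the average round-trip time estimates latency, and message
 * size divided by one-way time estimates effective bandwidth.
 * Run with two processes, e.g.: mpirun -np 2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define REPS      100
#define MSG_BYTES (1 << 20)   /* 1 MB message, arbitrary choice */

int main(int argc, char **argv)
{
    int rank, i;
    char *buf = malloc(MSG_BYTES);
    double t0, t1, one_way;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < REPS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    one_way = (t1 - t0) / (2.0 * REPS);   /* seconds per one-way transfer */
    if (rank == 0)
        printf("avg one-way time %.6f s, bandwidth %.1f MB/s\n",
               one_way, (MSG_BYTES / 1.0e6) / one_way);

    free(buf);
    MPI_Finalize();
    return 0;
}
```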

University of Southampton NAS Serial Benchmarks [chart; bigger is better]

University of Southampton Chemistry Codes [chart; smaller is better]

University of Southampton Amber 6 Scalability [chart]
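The scalability chart itself is not reproduced in this transcript. As a reminder of how such curves are derived, the short sketch below computes speedup, T(1)/T(p), and parallel efficiency, speedup/p, from wall-clock timings; the numbers in the array are purely hypothetical placeholders, not measured Amber 6 results.

```c
/* Illustrative only: speedup and efficiency from wall-clock timings.
 *   speedup(p)    = T(1) / T(p)
 *   efficiency(p) = speedup(p) / p
 * The timings below are hypothetical placeholders, NOT measured
 * Amber 6 numbers from the IRIDIS cluster.
 */
#include <stdio.h>

int main(void)
{
    int    procs[]  = {1, 2, 4, 8, 16};
    double t_wall[] = {1000.0, 520.0, 270.0, 145.0, 85.0};  /* seconds, hypothetical */
    int    n = sizeof(procs) / sizeof(procs[0]);

    printf("procs  time(s)  speedup  efficiency\n");
    for (int i = 0; i < n; i++) {
        double speedup    = t_wall[0] / t_wall[i];
        double efficiency = speedup / procs[i];
        printf("%5d  %7.1f  %7.2f  %9.2f\n",
               procs[i], t_wall[i], speedup, efficiency);
    }
    return 0;
}
```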

University of Southampton Future Prospects
Roll out Windows 2000/XP service:
–In response to user requirements
–Increase HPC user base
–Drag-and-drop supercomputing
Expand as part of Southampton Grid:
–Integration with other compute resources on and off campus
–Double in size over the next few years

University of Southampton Changing the Landscape
–Availability of serious compute power to many more users: HPC for the masses
–Heterogeneous systems: tailored partitions for different types of users are easy to provide
–Compatibility between desktops and servers improved, so the systems are less intimidating
–New pricing model for vendors: costs are transparent to the customer
–Affordable, expandable, Grid-able