Introduction to NW-GRID – R.J. Allan, CCLRC Daresbury Laboratory – NW-GRID Training Event, 25th January 2007

Presentation transcript:

Introduction to NW-GRID
R.J. Allan, CCLRC Daresbury Laboratory
NW-GRID Training Event, 25th January 2007

NW-GRID Core

Four compute clusters funded by NWDA are accessible using Grid middleware and connected using a dedicated fibre network:
● Daresbury – dl1.nw-grid.ac.uk – 96 nodes (some locally funded)
● Lancaster – lancs1.nw-grid.ac.uk – nodes (96 dedicated to local users)
● Liverpool – lv1.nw-grid.ac.uk – 44 nodes
● Manchester – man2.nw-grid.ac.uk – 27 nodes
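As a quick orientation check, here is a minimal sketch (Python, not part of the original slides) that probes whether each of the four login nodes listed above accepts TCP connections on port 22. It assumes SSH is the access route, which this slide does not state.

import socket

# Login nodes quoted on the slide above.
LOGIN_NODES = [
    "dl1.nw-grid.ac.uk",     # Daresbury
    "lancs1.nw-grid.ac.uk",  # Lancaster
    "lv1.nw-grid.ac.uk",     # Liverpool
    "man2.nw-grid.ac.uk",    # Manchester
]

for host in LOGIN_NODES:
    try:
        # Assumption: the clusters are reached over SSH (TCP port 22).
        with socket.create_connection((host, 22), timeout=5):
            print(f"{host}: reachable")
    except OSError as err:
        print(f"{host}: not reachable ({err})")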

Pictures

Anatomy of a Cluster

● Compute clusters from Streamline Computing
● 2x Sun x4200 head nodes
  – dual-processor single-core AMD Opteron 2.6GHz (64-bit)
  – 16GB memory
● Panasas file store – around 3-10TB per cluster (Manchester client only)
● Sun x4100 compute nodes
  – dual-processor dual-core AMD Opteron 2.4GHz (64-bit)
  – 2-4GB memory per core
  – 2x 73GB disks per node
  – Gbit/s ethernet between nodes
● Sun Grid Engine and Streamline cluster management
● Software – will be discussed during the rest of this course
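Because the slide names Sun Grid Engine as the batch system, a hypothetical sketch of preparing and submitting an SGE job from Python follows. The parallel-environment name ("mpi") and the choice of 4 slots (one x4100 node's worth of cores) are assumptions, not NW-GRID policy; the actual queues and submission procedure are covered later in the course.

import subprocess
import textwrap

# A minimal SGE job script.  '#$' lines are standard Grid Engine directives;
# the PE name "mpi" and the 4-slot request are assumptions.
job_script = textwrap.dedent("""\
    #!/bin/bash
    #$ -cwd                # run in the submission directory
    #$ -N hello_nwgrid     # job name
    #$ -j y                # merge stdout and stderr
    #$ -l h_rt=00:10:00    # 10-minute wall-clock limit
    #$ -pe mpi 4           # request 4 slots (assumed PE name)
    echo "Running on $(hostname) with $NSLOTS slots"
""")

with open("hello.sge", "w") as f:
    f.write(job_script)

# qsub is the standard Grid Engine submission command.
subprocess.run(["qsub", "hello.sge"], check=True)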

Connecting the Nodes

● within a “module” and between modules
● dual Gbit-E network
  – inter-node communications via stacked Nortel Gbit switch
  – command, control and file access via 2nd stacked Nortel Gbit switch
● Ethernet network – diagnostics via standard ethernet switch
● Link to Panasas if installed

DL Cluster Detail

The DL cluster has 4 racks, with 32 nodes per compute rack plus 3 switches. Gbit-E (80Gb/s) links between the compute racks and into the head node are shown in green. The 2x head nodes, Panasas storage and external connection are in the central rack.
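The figures above can be cross-checked with a little arithmetic. The sketch below assumes, as the text implies but does not state outright, that 3 of the 4 racks are compute racks and the fourth is the central rack, and it takes the dual-processor dual-core node spec from the earlier slide.

# Cross-check of the DL cluster figures quoted above.
compute_racks = 3        # assumption: 4 racks = 3 compute racks + 1 central rack
nodes_per_rack = 32      # from this slide
cores_per_node = 2 * 2   # dual-processor, dual-core Opterons (earlier slide)

nodes = compute_racks * nodes_per_rack
cores = nodes * cores_per_node
print(f"{nodes} compute nodes, {cores} cores")  # 96 nodes, 384 cores - matches dl1's 96 nodes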

Connecting the Clusters

● Map of the NW – on next slide
● HP ProCurve / 3Com switch is part of each control module
● External connection
  – private fibre network
  – SuperJANET
● Commercial access is possible by VPN and will use the fibre network

(Map of the NW-GRID sites – image slide)

NW-GRID Fibre Network

● 1Gb/s between sites – latency?
  – DL-Lancs trunk – 10x 1Gb/s links
  – DL-Manchester – single link
  – DL-Liverpool – single link
● What can it enable us to do?
  – scheduling, job migration, visualisation etc. – do you have other ideas?
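To get a feel for what these links support, here is a back-of-envelope sketch (an idealised estimate, not a measurement) of bulk-transfer times over a single 1Gb/s link versus the 10x 1Gb/s DL-Lancs trunk. It ignores latency, protocol overhead and imperfect striping across the trunk.

def transfer_minutes(gigabytes, links=1, gbits_per_link=1.0):
    """Idealised transfer time assuming full line rate on every link."""
    bits = gigabytes * 8e9
    seconds = bits / (links * gbits_per_link * 1e9)
    return seconds / 60.0

for size_gb in (10, 100):
    print(f"{size_gb} GB: {transfer_minutes(size_gb, links=1):.1f} min on one link, "
          f"{transfer_minutes(size_gb, links=10):.1f} min on the 10-link trunk")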

External Connections

Other Resources

● Manchester – see:
  – man1.nw-grid.ac.uk – Weyl cluster: a 44-node system plus login and master nodes. The address of the login node is weyl.mc.man.ac.uk. Compute nodes are dual-processor AMD Opteron 248 2GHz with 78GB of disk and 2GB memory; the fat nodes have the same spec except the memory is 8GB. The user, scratch and software disks are RAID, 2.5TB, NFS mounted.
  – man3.nw-grid.ac.uk – Escher is a Silicon Graphics Prism (IA64) with 8 Itanium 2 processors and 4 FireGL 3 graphics pipelines, running SUSE Linux Enterprise Server 9 with Silicon Graphics ProPack 4.
  – man4.nw-grid.ac.uk – Biruni is a Web server with Ganglia monitoring results.

Other Resources

● Daresbury – BlueGene/L – 1000 processors, for software development, available soon.
● Liverpool – POL Cluster – 192x Xeon (32-bit), likely to become part of NW-GRID in April.
● Preston – 8-node SGI Altix. Discussions underway with the Dept. of Theoretical Astrophysics, U. Central Lancashire.