IPPP Grid Cluster Phil Roffe David Ambrose-Griffith

Why Use The Cluster?
 – 672 job slots (> 1M SpecInt2k)
 – Dual processor, quad core (2.66 GHz), providing 8 cores per machine
 – Low-power Xeon L5430 for greater power efficiency and lower running costs
 – 16 GB RAM per machine, providing 2 GB per core
 – Dual bonded gigabit Ethernet
 – 0.5 TB hard disk
 – Installed with Scientific Linux 4.7 (64 bit)
 – Large amount of storage: 29 TB currently unused in Durham alone
 – New high-powered cluster

Increased Resources

IPPP and Grid Systems
IPPP Desktops
 – Ubuntu 8.04 (32 bit)
 – S1 – storage server
    – home
    – mail
    – data1
    – share
    – /usr/local (Ubuntu s/w)
Grid Nodes
 – SL 4.7 (64 bit)
 – master – storage server
    – data-grid
    – /usr/local (SL s/w)
    – grid s/w area
 – ui

Why can't I qsub?
Actually, you can
 – Only from /mt/data-grid on the ui (see the example below)
But...
Home directories are not available on the nodes
 – Security reasons: the setup relies on Unix file permissions, so a grid breach would open up wider consequences
 – Performance reasons: a badly written job can cripple the home file space, with potentially >600 simultaneous jobs
The cluster runs a different OS to the desktops
 – Code needs to be recompiled for use on different architectures
 – /usr/local is therefore recompiled for SL4 64-bit
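As an illustrative example only, a direct qsub submission would look something like the following; the per-user directory under /mt/data-grid and the job script name are assumptions, not documented paths.

    ssh ui                        # qsub is only available on the ui
    cd /mt/data-grid/$USER        # assumed working area; jobs must be submitted from /mt/data-grid
    cp -r ~/myanalysis .          # stage code and inputs here, since home directories are not visible on the nodes
    qsub myanalysis/run_job.sh    # submit to the local batch system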

Job Submission
 – ssh (log in to the UI)
 – batch-session-init – converts the grid cert, initialises the proxy/myproxy
 – batch-package-directory – packages the application to grid storage
 – batch-job-submit – creates the JDL and submits the job(s)
 – batch-job-status – checks status (running, finished, etc.)
 – batch-job-get-output – gets output (sandbox and grid storage results)
(By default all jobs go to Durham only)
 – A small modification to the submit line targets the whole grid (currently in development)
A typical session is sketched below.
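This is a minimal sketch of a session using these tools. The directory and script names (mycode, run.sh) are placeholders, and the exact arguments each batch-* command takes are not documented here; take the real usage from the examples under /usr/local/gridtools/examples/ on the UI.

    ssh ui                            # log in to the user interface node
    batch-session-init                # convert the grid certificate, initialise the proxy/myproxy
    batch-package-directory mycode    # package the application directory to grid storage (argument assumed)
    batch-job-submit mycode/run.sh    # create the JDL and submit the job (argument assumed)
    batch-job-status                  # check whether the job is running or finished
    batch-job-get-output              # retrieve the sandbox and grid storage results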

Job Submission: Demonstration

Future Development
/mt/data-grid
 – Use for shared local storage (rather than using grid storage)
WMS
 – Local to Durham
gsissh support
 – Allows you to log in to the ui as your pheno user in order to prepare your jobs (see the sketch below)
Map to local user rather than phenoXXX
 – A pheno job could run as an IPPP user (for file permissions)
Other requests? Please ask
MPI support
 – For parallel or high-memory jobs
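For the gsissh item above, a session might look like the following once support is in place; this assumes standard GSI-OpenSSH usage (a grid proxy created beforehand) and is a sketch, not a confirmed description of the Durham setup.

    grid-proxy-init    # create a short-lived proxy from your grid certificate
    gsissh ui          # authenticate with the proxy; you would be mapped to your pheno (or local) account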

Resources
 – /usr/local/gridtools/examples/ on the UI