Overview of Midrange Computing Resources at LBNL
Gary Jung, March 26, 2002

A Look at Existing Midrange Resources
Purpose of this exercise:
– Illustrate the usage of and demand for midrange computing cycles at LBNL
  – Lab usage of NERSC computing resources
  – Division-owned resources
– Show which platforms/architectures are used
– Observe current trends

What is of interest?
– NERSC resources:
  – Find out the number of LBNL projects using NERSC
  – Look at the size of their allocations
– Division-owned resources, meaning systems larger than a desktop (4 or more processors/nodes):
  – Linux clusters
  – Distributed memory systems
  – SMP systems

Findings - NERSC Usage
– Use of NERSC resources through the standard allocation process has been increasing:
  – 50 LBNL projects awarded 709,850 MPP hours in FY00
  – 52 projects awarded 2,128,822 MPP hours in FY01
  – 50 projects awarded 9,471,200 MPP hours in FY02
– Use of NERSC resources through the LBNL T3E agreement increased each year during the FY98-FY00 period:
  – 13 LBNL projects awarded 86,000 MPP hours in FY98
  – 15 LBNL projects awarded 100,000 MPP hours in FY99
  – 12 LBNL projects awarded 150,000 MPP hours in FY00
– NERSC PDSF is used by many Physics and Nuclear Science projects, including STAR, E871, E895, ATLAS, Amanda, Babar, and SNO.
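
For a sense of scale, the standard-allocation awards roughly tripled from FY00 to FY01 and grew more than fourfold from FY01 to FY02. The short sketch below simply recomputes those year-over-year factors from the figures quoted on this slide.

```python
# Year-over-year growth in LBNL's NERSC MPP-hour awards under the standard
# allocation process, using only the totals quoted on the slide above.
awards = {"FY00": 709_850, "FY01": 2_128_822, "FY02": 9_471_200}

years = list(awards)
for prev, curr in zip(years, years[1:]):
    factor = awards[curr] / awards[prev]
    print(f"{prev} -> {curr}: {factor:.1f}x")
# Output: FY00 -> FY01: 3.0x, FY01 -> FY02: 4.4x
```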

Findings - Division-owned Resources

Observations
– Increased usage of NERSC resources through the standard allocation process and PDSF.
– Purchases of large SMP systems have declined, as evidenced by the survey; most existing SMP systems surveyed use older-generation processors.
– Recently, the trend has been for Divisions to purchase and operate their own Linux clusters. Most of these are loosely coupled systems using Fast Ethernet as the interconnect.
– There have been fewer purchases of parallel clusters with a high-speed interconnect such as Myrinet, because of the added cost and complexity.

Example of a Division-owned Resource: Berkeley Drosophila Genome Project
– Usage: genomic sequence annotation using BLAST
– Hardware: Linux cluster from Linux Networx; 20 nodes, 2 x 700 MHz Intel Pentium III CPUs, 512 MB per node
– Interconnect: 100BaseT (Fast Ethernet)
– Scheduler: batch processing using PBS
– Procurement:
  – 2 months to prepare the RFP
  – 3 weeks to get quotes from 4 vendors and select one
  – 2 months to get funding approval (outside agency)
– Setup:
  – 3 weeks to semi-operational status; 2 months to get in shape for users
  – 0.75 FTE of effort
– Purchase costs:
  – $65K ($95K less a $30K academic/showcase rebate), plus a $3K vendor installation fee
  – Another 12 nodes added in August 2001, bringing the total to 32 nodes; $32K purchase including vendor installation
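
To make the workflow concrete, below is a minimal sketch of how BLAST searches are typically farmed out on a PBS-scheduled cluster like this one: split the query FASTA file into chunks and submit one batch job per chunk with qsub. The file names, database path, node request, and choice of BLAST program are hypothetical, not taken from the slides; the sketch assumes the classic NCBI blastall command line and standard PBS job submission.

```python
# Hypothetical sketch (not from the slides): farm BLAST searches out to a
# PBS-scheduled Linux cluster by splitting the query FASTA file into chunks
# and submitting one batch job per chunk with qsub. Paths, the database name,
# and the BLAST program choice are placeholders.
import subprocess
from pathlib import Path

QUERIES = Path("queries.fa")          # hypothetical input FASTA file
BLAST_DB = "/data/blast/dmel_genome"  # hypothetical formatted BLAST database
N_CHUNKS = 32                         # e.g. one chunk per cluster node

def split_fasta(path, n):
    """Distribute FASTA records round-robin into n buckets of record text."""
    buckets = [[] for _ in range(n)]
    record, idx = [], 0
    for line in path.read_text().splitlines():
        if line.startswith(">") and record:
            buckets[idx % n].append("\n".join(record))
            record, idx = [], idx + 1
        record.append(line)
    if record:
        buckets[idx % n].append("\n".join(record))
    return buckets

for i, bucket in enumerate(split_fasta(QUERIES, N_CHUNKS)):
    if not bucket:          # fewer records than chunks; skip empty chunks
        continue
    chunk = Path(f"chunk_{i:02d}.fa").resolve()
    chunk.write_text("\n".join(bucket) + "\n")
    # One single-node PBS job per chunk; blastall is the classic NCBI 2.x CLI.
    script = (
        "#!/bin/sh\n"
        f"#PBS -N blast_{i:02d}\n"
        "#PBS -l nodes=1:ppn=2\n"
        f"blastall -p blastn -d {BLAST_DB} -i {chunk} -o {chunk}.out\n"
    )
    job_file = Path(f"blast_{i:02d}.pbs")
    job_file.write_text(script)
    subprocess.run(["qsub", str(job_file)], check=True)  # submit to PBS
```

With loosely coupled, Fast Ethernet clusters such as this one, the work is embarrassingly parallel: each node runs an independent BLAST process against its own chunk, so no high-speed interconnect is needed.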