CERN openlab for DataGrid applications

Presentation transcript:

CERN openlab for DataGrid applications
2nd Workshop on Fabric Management, July 7-8, 2003
Frédéric Hemmer, CERN/IT (July 8, 2003)
Welcome!

GRIDs: the next step in the evolution of large-scale computing, from supercomputer through cluster to Grid.
[Chart: performance and capacity growing over time, 1980 - 1990 - 2000]

CERN “Fabric Management” history

1980s: mainframe era
- ~5-10 analysts or system programmers, plus on-site help from the company
- Dedicated room for … system manuals
- The network is the proprietary backplane/switch/interconnect

1990s: Unix RISC -> PC Linux
- 1 system administrator per 100s of machines
- Started to invest in automation/recovery
- Outsourcing model (MCHF/year), now being insourced
- High-speed multivendor local area network (Gigabit networking as of 1990!)

2000s: Grids
- Must go one step further in automation/configuration… (a minimal sketch follows below)
- Complex infrastructure problems
- Operation centres
- High-speed, high-latency multivendor wide area networks
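What "one step further in automation/configuration" means in practice is a declarative description of every node that an agent converges the machine towards, which is the direction the EDG WP4 tools took. The sketch below is a hypothetical Python illustration of that idea only; the profile format, package names and versions are invented and are not the actual WP4/quattor templates or API.

    # Declarative node configuration, minimal sketch (hypothetical format):
    # a central profile describes the desired state, a per-node agent
    # compares it with what is installed and plans the corrective actions.

    PROFILE = {
        "cluster": "lxbatch",   # illustrative values only
        "packages": {"kernel": "2.4.21", "openssh": "3.6.1", "lsf-client": "5.1"},
    }

    def installed_packages():
        """Stand-in for querying the local package database (e.g. rpm)."""
        return {"kernel": "2.4.18", "openssh": "3.6.1"}

    def plan_actions(profile, installed):
        """List the package installs/upgrades needed to reach the profile."""
        actions = []
        for name, wanted in profile["packages"].items():
            have = installed.get(name)
            if have is None:
                actions.append(f"install {name}-{wanted}")
            elif have != wanted:
                actions.append(f"upgrade {name} {have} -> {wanted}")
        return actions

    if __name__ == "__main__":
        for action in plan_actions(PROFILE, installed_packages()):
            print(action)

The real tools add versioned profiles, dependency resolution and service management on top, but this converge-to-declared-state loop is the core of the automation step.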

Integrated Management

1994: proposal to install a Convex Exemplar
- Was ~100% HP-UX compatible
- Single system image, single management
- In theory, 1 system administrator for 1000 nodes
- But 2-3 times more expensive than HP Kittyhawks
- Project finally abandoned, but it triggered the adoption of outsourcing

Integrated Management (2)

1994: IBM SP2
- Not a single system image; very similar to a single-vendor Unix/RISC distributed system
- Integrated management: a set of (customizable) tools giving a single system-management view
- A few people to manage the 64-node system (and it could probably have grown to many more nodes without a noticeable manpower increase)
- Things started to become difficult when we tried to integrate non-IBM equipment (e.g. standard SCSI disks; difficult to enable tagged command queuing)
- More expensive than commodity RISC (~x2)

Integrated Management (3)

1998: SGI Origin series
- Single system image
- 1 system administrator per 1024 CPUs
- But actually only ~20 CPUs installed at CERN
- Complex performance issues
- Poor support from the company
- Too expensive

Today

- 1000s of PCs running Linux
- Complex interdependencies
- Still not able to shut down the computer centre in 5 minutes
- Took > 1 day to recover from the last Computer Centre power failure
- But…
  - able to reinstall all systems overnight (see the sketch below)
  - EDG WP4 tools in real use
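"Able to reinstall all systems overnight" implies fully unattended, network-driven installation: each node is flagged to network-boot into an installer and then rebooted, with nobody at the console. The sketch below illustrates the orchestration loop only; the host names and the set-next-boot command are placeholders, not the actual CERN or EDG WP4 installation tools.

    # Hypothetical sketch of triggering an overnight mass reinstallation.
    import subprocess

    NODES = ["lxb0001", "lxb0002", "lxb0003"]   # illustrative host names

    def reinstall(node):
        """Flag the node to network-boot into the installer, then reboot it."""
        # "set-next-boot network-install" is a placeholder for whatever
        # mechanism marks the node for PXE installation on its next boot.
        subprocess.run(["ssh", node, "set-next-boot", "network-install"], check=True)
        subprocess.run(["ssh", node, "shutdown", "-r", "now"], check=True)

    if __name__ == "__main__":
        failed = []
        for node in NODES:
            try:
                reinstall(node)
            except subprocess.CalledProcessError:
                failed.append(node)   # keep going, report in the morning
        print(f"triggered on {len(NODES) - len(failed)} nodes, failed on {failed}")

The important property is that failures are collected rather than stopping the run, so a few broken boxes do not prevent the rest of the farm from being reinstalled by morning.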

Monitoring

- Cray: 6 monitoring consoles per machine
- IBM: could call IBM in Montpellier
- Distributed Unix/Linux: SURE 1/2/3, PEM, PVSS, EDG WP4 (a minimal agent sketch follows below)

Management suites
- Looked at CA Unicenter, IBM Tivoli, BMC: too complex, too expensive

New initiatives
- Intel WfM, PXE
- DMI (DMTF) / WBEM
- HP UDC, IBM Eliza, SUN N1
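The distributed monitoring systems listed above all reduce to the same pattern: a small agent on every node samples local metrics and pushes them to a central collector, where displays and alarms are produced. The sketch below shows that pattern only; the collector address, port and metric names are illustrative and are not the interfaces of SURE, PEM, PVSS or the EDG WP4 monitoring components.

    # Minimal per-node monitoring agent: sample a few cheap metrics and
    # push them to a central collector over UDP (illustrative sketch).
    import json
    import os
    import socket
    import time

    COLLECTOR = ("monitor.example.org", 9125)   # hypothetical collector

    def sample():
        """Collect a few cheap local metrics (Unix only)."""
        load1, _, _ = os.getloadavg()
        disk = os.statvfs("/")
        return {
            "host": socket.gethostname(),
            "time": int(time.time()),
            "load1": load1,
            "root_free_frac": disk.f_bavail / disk.f_blocks,
        }

    def main(interval=60):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        while True:
            sock.sendto(json.dumps(sample()).encode(), COLLECTOR)
            time.sleep(interval)

    if __name__ == "__main__":
        main()

Pushing over UDP keeps the agent trivially cheap on the monitored node; aggregation, correlation and alarm logic live entirely in the collector, which is where the complexity (and cost) of the commercial suites mentioned above tends to sit.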

Reality check

What you would like to see
- reliable
- available
- powerful
- calm
- cool
- easy to use
- … and nice to look at

What you get
[Diagram: the multi-tier LHC computing model: CERN Tier 0, national Tier 1 centres (France, Germany, Italy, Japan, UK, USA, ...), Tier 2 centres, labs and universities, down to the physics department and the individual physicist. Diagram: les.robertson@cern.ch]

Summary

- Fabric management is an issue we need to look at
- The problem has been around for > 10 years
- It will be of importance for most Tier 0/1 centers
- New ideas/initiatives are popping up
- Costs are not negligible
  - e.g. infrastructure costs
  - manpower costs are even more important
  - a TCO study will be launched later in the year
- Let's try to address some of these issues in the next two days…