CERN's openlab Project
Sverre Jarp, Wolfgang von Rüden – IT Division, CERN
29 November 2002

Our ties to IA-64 (IPF) – a long history already
- Nov. 1992: visit to HP Labs (Bill Worley): "We shall soon launch PA-Wide Word!"
- CERN becomes one of the few external definition partners for IA-64, by then a joint effort between Intel and HP
- Creation of a vector math library for IA-64: a full prototype to demonstrate the precision, versatility, and speed of execution (with HP Labs)
- Port of Linux onto IA-64 (the "Trillian" project): glibc, real applications; demonstrated at Intel's "Exchange" exhibition in Oct. 2000

openlab status – industrial collaboration
Enterasys, HP, and Intel are our partners
Technology aimed at the LHC era:
- Network switch at 10 Gigabits; connect via both 1 Gbit and 10 Gbits
- Rack-mounted HP servers with Itanium processors
- Storage subsystem may be coming from a 4th partner
Cluster evolution:
- 2002: cluster of 32 systems (64 processors)
- 2003: 64 systems ("Madison" processors)
- 2004: 64 systems ("Montecito" processors)

The compute nodes: HP rx2600
- Rack-mounted (2U) systems
- Two Itanium-2 processors, 900 or 1000 MHz, field-upgradable to the next generation
- 4 GB memory (max 12 GB)
- 3 hot-pluggable SCSI discs (36 or 73 GB)
- On-board 100 and 1000 Mbit Ethernet
- 4 full-size 133 MHz/64-bit PCI-X slots
- Built-in management processor, accessible via serial port or Ethernet interface

openlab SW strategy
Exploit the existing CERN infrastructure, which is based on:
- RedHat Linux, GNU compilers
- OpenAFS
- SUE (Standard Unix Env.) systems maintenance tools
Native 64-bit port:
- Key LHC applications: CLHEP, GEANT4, ROOT, etc.
- Important subsystems: Castor, Oracle, MySQL, LSF, etc.
Intel compiler where it is sensible: performance
32-bit emulation mode wherever it makes sense: low usage, no special performance need, non-strategic areas
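As a purely illustrative aside (not part of the original slides), the kind of problem a native 64-bit port has to catch is code that assumes ints, longs, and pointers are all 32 bits wide; on IA-64 Linux (an LP64 platform) long and pointers are 64 bits. A minimal sketch, with hypothetical variable names and no connection to the actual CERN code base:

```cpp
// Illustrative only: a typical 32-bit assumption that a native
// LP64 (IA-64/Linux) port has to fix.  Not taken from any CERN code.
#include <cstdio>
#include <stdint.h>   // uintptr_t, int32_t, ...

int main() {
    double energy = 91.2;   // arbitrary value, just something to point at

    // Broken on LP64: a pointer no longer fits in a 32-bit int.
    // int handle = (int)&energy;            // would truncate the address

    // Portable: use a pointer-sized integer type instead.
    uintptr_t handle = reinterpret_cast<uintptr_t>(&energy);

    // sizeof(long) is 4 on IA-32 Linux but 8 on IA-64 Linux, so on-disk
    // formats should use fixed-width types (int32_t, int64_t) explicitly.
    std::printf("sizeof(long) = %u, sizeof(void*) = %u\n",
                static_cast<unsigned>(sizeof(long)),
                static_cast<unsigned>(sizeof(void*)));
    std::printf("handle = %lu\n", static_cast<unsigned long>(handle));
    return 0;
}
```

Fixed-width and pointer-sized integer types keep persistent data formats and handles portable between IA-32 and IA-64 builds.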

openlab – phase 1
Integrate the openCluster:
- 32 nodes + development nodes: rack-mounted DP Itanium-2 systems
- RedHat 7.3 (AW2.1 beta) – kernel at …
- OpenAFS 1.2.7, LSF 4
- GNU and Intel compilers (+ ORC?)
- Database software (MySQL, Oracle?)
- CERN middleware: Castor data mgmt
- GRID middleware: Globus, Condor, etc.
CERN applications: porting, benchmarking, performance improvements (CLHEP, GEANT4, ROOT, CERNLIB)
Cluster benchmarks: 1 and 10 Gigabit interfaces
Also: prepare porting strategy for phase 2
Estimated time scale: 6 months
Awaiting recruitment of: 1 system programmer
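As a hedged illustration of the benchmarking step (again not from the original slides), a minimal wall-clock timing harness of the sort used to compare binaries built with the GNU and Intel compilers on the Itanium nodes; the kernel below is only a stand-in for real CLHEP/GEANT4/ROOT/CERNLIB code:

```cpp
// Minimal wall-clock timing harness: compare the same kernel built with
// different compilers (e.g. g++ vs. icc) on the Itanium nodes.
// Illustrative sketch only; the kernel is a stand-in for real HEP code.
#include <cstdio>
#include <sys/time.h>   // gettimeofday

static double wallclock_seconds() {
    timeval tv;
    gettimeofday(&tv, 0);
    return tv.tv_sec + tv.tv_usec * 1e-6;
}

// Stand-in for a real routine from CLHEP, GEANT4, ROOT or CERNLIB.
static double kernel(int n) {
    double sum = 0.0;
    for (int i = 1; i <= n; ++i)
        sum += 1.0 / (static_cast<double>(i) * i);
    return sum;
}

int main() {
    const int repetitions = 10;
    const int n = 10000000;

    double best = 1e30, result = 0.0;
    for (int r = 0; r < repetitions; ++r) {
        const double t0 = wallclock_seconds();
        result = kernel(n);
        const double t1 = wallclock_seconds();
        if (t1 - t0 < best) best = t1 - t0;   // keep the best of N runs
    }
    std::printf("result = %.12f  best time = %.4f s\n", result, best);
    return 0;
}
```

Reporting the best of several runs reduces the influence of other activity on the node; for full application benchmarks the same wrapping can be applied around an event loop.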

openlab – phase 2: European DataGrid (EDG)
Integrate the openCluster alongside the EDG testbed
Porting and verification of the relevant software packages:
- Large number of RPMs
- Document prerequisites
- Understand the dependency chain
- Decide when to use 32-bit emulation mode
Interoperability with WP6:
- Integration into the existing authentication scheme
- Interoperability with other partners
GRID benchmarks (as available)
Estimated time scale: 9 months (may be subject to change!)
Awaiting recruitment of: 1 GRID programmer
Also: prepare porting strategy for phase 3

openlab – phase 3: LHC Computing Grid
- Need to understand the software architectural choices, to be made between now and mid-2003
- Need a new integration process for the selected software
- Time scales:
  Disadvantage: possible porting of new packages
  Advantage: aligned with key choices for LHC deployment
Impossible at this stage to give firm estimates for the time scale and required manpower

openlab time line (end-2002 to end-2005)
- Order/install 32 nodes; systems experts in place – start phase 1
- Complete phase 1 (openCluster); start phase 2; order/install Madison upgrades + 32 more nodes (EDG)
- Complete phase 2; order/install Montecito upgrades (LCG); start phase 3

IA-64 wish list
For IA-64 (IPF) to establish itself solidly in the market-place:
- Better compiler technology, offering better system performance
- Wider range of systems and processors: for instance, really low-cost entry models, low-power systems
- State-of-the-art process technology
- Similar "commoditization" as for IA-32

openlab starts with: [diagram] CPU servers on a multi-gigabit LAN

… and will be extended: [diagram] CPU servers on a multi-gigabit LAN, plus a gigabit long-haul link across the WAN to a remote fabric

… step by step: [diagram] a storage system joins the CPU servers on the multi-gigabit LAN, with the gigabit long-haul link across the WAN to the remote fabric

Annexes
- The potential of openlab
- The openlab "advantage"
- The LHC
- Expected LHC needs
- The LHC Computing Grid Project – LCG

The potential of openlab
Leverage CERN's strengths:
- It integrates perfectly into our environment: OS, compilers, middleware, applications
- Integration alongside the EDG testbed
- Integration into the LCG deployment strategy
Show with success that the new technologies can be solid building blocks for the LHC computing environment

The openlab "advantage"
openlab will be able to build on the following strong points:
1) CERN/IT's technical talent
2) CERN's existing computing environment
3) The size and complexity of the LHC computing needs
4) CERN's strong role in the development of GRID "middleware"
5) CERN's ability to embrace emerging technologies

The Large Hadron Collider – 4 detectors: ALICE, ATLAS, CMS, LHCb
Huge requirements for data analysis:
- Storage: raw recording rate of 0.1 – 1 GByte/sec; accumulating data at 5-8 PetaBytes/year (plus copies); 10 PetaBytes of disk
- Processing: 100,000 of today's fastest PCs
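As a rough cross-check, not from the slide, assuming the commonly quoted figure of about $10^7$ seconds of data taking per year:

\[
0.1\text{--}1\ \mathrm{GB/s} \times 10^{7}\ \mathrm{s/year} \approx 1\text{--}10\ \mathrm{PB/year},
\]

which brackets the 5-8 PetaBytes/year accumulation estimate quoted above.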

Expected LHC needs
[Chart: expected LHC computing needs compared with a Moore's-law extrapolation (based on 2000)]

The LHC Computing Grid Project – LCG
Goal: prepare and deploy the LHC computing environment
1) Applications support: develop and support the common tools, frameworks, and environment needed by the physics applications
2) Computing system: build and operate a global data analysis environment integrating large local computing fabrics and high-bandwidth networks, to provide a service for ~6,000 researchers in over ~40 countries
This is not "yet another grid technology project" – it is a grid deployment project