CERN’s openlab Project

CERN’s openlab Project
François Fluckiger, Sverre Jarp, Wolfgang von Rüden
IT Division, CERN, November 2002

CERN site: next to Lake Geneva. [Aerial photo with labels: Mont Blanc (4810 m), downtown Geneva, Lake Geneva, the LHC tunnel.]

What is CERN?
- European Centre for Nuclear Research (European Laboratory for Particle Physics)
- Frontier of human scientific knowledge: creation of ‘Big bang’-like conditions
- Accelerators with the latest super-conducting technologies; the tunnel is 27 km in circumference
- Large Electron/Positron ring (used until 2000); Large Hadron Collider (LHC) as of 2007
- Detectors ‘as big as cathedrals’: four LHC detectors (ALICE, ATLAS, CMS, LHCb)
- World-wide participation: Europe, plus USA, Canada, Brazil, Japan, China, Russia, Israel, etc.

Member States: 20 countries. Founded in 1954, initially with Western European members from Norway to Greece; in recent years joined by Poland, the Czech Republic, Slovakia, Hungary, and Bulgaria. Scientific collaboration often precedes economic exploitation or political cooperation.

CERN in more detail: an organisation with 2400 staff, plus 6000 visitors per year. Inventor of the World-Wide Web; Tim Berners-Lee’s vision: “Tie all the physicists together – no matter where they are”. CERN’s budget: 1000 MCHF (~685 M€/US$), split roughly 50-50 between materials and personnel. Computing budget: ~25 MCHF for the central infrastructure, with desktop/departmental computing in addition.

Why CERN may be interesting to the computing industry. Several arguments:
- Huge computing requirements: the Large Hadron Collider will need unprecedented computing resources, driven by scientists’ needs, at CERN, even more in regional centres, and in hundreds of institutes world-wide
- Early adopters: the scientific community is willing to ‘take risk’, hoping it is linked to ‘rewards’; lots of source-based applications, easy to port
- Reference site: many institutes adopt CERN’s computing policies; applications and libraries are ported and the software runs well; lots of expertise available in relevant areas

CERN's users and collaborating institutes: another problem, or a challenge? Unite the computing resources of the LHC community. Europe: 267 institutes, 4603 users; elsewhere: 208 institutes, 1632 users.

Our ties to IA-64 (IPF): a long history already.
- Nov. 1992: visit to HP Labs (Bill Worley): “We will soon launch PA-Wide Word!”
- 1994-96: CERN becomes one of the few external definition partners for IA-64, by now a joint effort between HP and Intel
- 1997-99: creation of a vector math library for IA-64; a full prototype to demonstrate the precision, versatility, and unbeatable speed of execution
- 2000-01: port of Linux onto IA-64 (the “Trillian” project) with real applications, demonstrated at Intel’s “Exchange” exhibition in Oct. 2000

IA-64 wish list. For IA-64 (IPF) to establish itself solidly in the market-place:
- Better compiler technology, offering better system performance
- A wider range of systems and processors, for instance really low-cost entry models and low-power systems
- State-of-the-art process technology
- A similar “commoditization” as for IA-32

The openlab “advantage”. openlab will be able to build on:
- CERN-IT’s technical talent
- CERN’s existing computing environment
- The size and complexity of the LHC computing needs
- CERN’s strong role in the development of GRID “middleware”
- CERN’s ability to embrace emerging technologies

IT Division: 250 people, about 200 of them engineers, in 11 groups:
- Advanced Projects’ Group (part of DI)
- Applications for Physics and Infrastructure (API)
- (Farm) Architecture and Data Challenges (ADC)
- Fabric Infrastructure and Operations (FIO)
- (Physics) Data Services (DS)
- Databases (DB)
- Internet (and Windows) Services (IS)
- Communications Services (CS)
- User Services (US)
- Product Support (PS)
- (Detector) Controls (CO)
Groups have both a development and a service responsibility.

CERN’s computer environment (today): high-throughput computing, based on reliable “commodity” technology. More than 1400 (dual-processor) PCs with RedHat Linux; more than 1 Petabyte of data (on disk and tapes).

The Large Hadron Collider: four detectors (ALICE, ATLAS, CMS, LHCb) with huge requirements for data analysis. Storage: raw recording rate of 0.1 – 1 GByte/sec, accumulating data at 5-8 PetaBytes/year (plus copies), with 10 PetaBytes of disk. Processing: 100,000 of today’s fastest PCs.
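
To make the quoted numbers concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes an effective running time of about 10^7 seconds per accelerator year, which is a common rule of thumb and not stated on the slide.

```python
# Minimal sketch (assumption: ~1e7 seconds of effective beam time per year):
# relate the quoted raw recording rate to a yearly data volume.
SECONDS_PER_RUN_YEAR = 1e7  # assumed effective running time per year

for rate_gb_per_s in (0.1, 1.0):  # quoted raw recording rate range, GByte/sec
    petabytes_per_year = rate_gb_per_s * SECONDS_PER_RUN_YEAR / 1e6  # 1 PB = 1e6 GB
    print(f"{rate_gb_per_s} GB/s -> ~{petabytes_per_year:.0f} PB/year")

# Output: 0.1 GB/s -> ~1 PB/year and 1.0 GB/s -> ~10 PB/year,
# which brackets the 5-8 PB/year quoted on the slide.
```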

Expected LHC needs. [Chart comparing the projected LHC computing requirements against growth at the rate of Moore’s law, baselined to 2000.]
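
As a hedged illustration of what “Moore’s law (based on 2000)” implies, the sketch below assumes a capacity doubling time of roughly 18 months (my assumption, not from the slide) and projects the growth factor relative to the 2000 baseline; the slide’s point is presumably that the expected LHC needs grow faster than this curve.

```python
# Minimal sketch (assumption: capacity doubles every ~18 months) of the
# growth factor relative to the year-2000 baseline used on the slide.
def moores_law_factor(years_since_2000, doubling_time_years=1.5):
    return 2 ** (years_since_2000 / doubling_time_years)

for year in (2003, 2005, 2007):
    print(year, round(moores_law_factor(year - 2000), 1))

# 2003 -> ~4x, 2005 -> ~10x, 2007 -> ~25x the 2000 baseline.
```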

The LHC Computing Grid Project (LCG). Goal: prepare and deploy the LHC computing environment. (1) Applications support: develop and support the common tools, frameworks, and environment needed by the physics applications. (2) Computing system: build and operate a global data analysis environment integrating large local computing fabrics and high-bandwidth networks, to provide a service for ~6,000 researchers in ~40 countries. This is not “yet another grid technology project”; it is a grid deployment project.

European Data Grid work packages:
- WP1: Workload Management
- WP2: Grid Data Management
- WP3: Grid Monitoring Services
- WP4: Fabric Management
- WP5: Mass Storage Management
- WP6: Integration Testbed – production-quality international infrastructure
- WP7: Network Services
- WP8: High-Energy Physics Applications
- WP9: Earth Observation Science Applications
- WP10: Biology Science Applications
- WP11: Information Dissemination and Exploitation
- WP12: Project Management
[Slide annotations mark which work packages are managed by CERN and where the main activity at CERN lies.]

“Grid” technology providers. [Logos of American and European grid projects, including CrossGrid.] The LHC Computing Grid will choose the best parts and integrate them!

Back to openlab: industrial collaboration. Enterasys, HP, and Intel are our partners today; additional partner(s) joining soon. Technology aimed at the LHC era: a network switch at 10 Gigabits, rack-mounted HP servers, Itanium processors; a disk subsystem may be coming from a 4th partner. Cluster evolution: 2002: a cluster of 32 systems (64 processors); 2003: 64 systems (“Madison” processors); 2004: 64 systems (“Montecito” processors).

openlab staffing. Management in place:
- W. von Rüden: Head (Director)
- F. Fluckiger: Associate Head
- S. Jarp: Chief Technologist, HP & Intel liaison
- F. Grey: Communications & Development
- D. Foster: LCG liaison
- J.M. Jouanigot: Enterasys liaison
Additional part-time experts and help from the general IT services. Actively recruiting the systems experts: 2-3 persons.

openlab - phase 1: integrate the openCluster.
- 32 nodes + development nodes: rack-mounted DP Itanium-2 systems
- RedHat 7.3 Advanced Server; OpenAFS, LSF
- GNU and Intel compilers (+ ORC?)
- Database software (MySQL, Oracle?)
- CERN middleware: Castor data management
- CERN applications: porting, benchmarking, and performance improvements of CLHEP, GEANT4, ROOT, Anaphe (CERNLIB?); see the timing sketch after this list
- Cluster benchmarks
- 1 to 10 Gigabit interfaces
Also: prepare the porting strategy for phase 2. Estimated time scale: 6 months. Prerequisite: 1 system administrator.
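
For the porting and benchmarking item, a timing harness along the following lines could be used. This is a minimal sketch only; the binary names bench_gcc and bench_icc are hypothetical placeholders for GNU- and Intel-compiled builds of the same benchmark.

```python
# Minimal sketch of a timing harness for comparing builds of a benchmark
# (e.g. a physics kernel) compiled with the GNU and Intel compilers.
# The binary paths below are hypothetical placeholders.
import statistics
import subprocess
import time

def time_binary(path, runs=5):
    """Run a benchmark binary several times; return (best, mean) wall time in seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run([path], check=True, stdout=subprocess.DEVNULL)
        samples.append(time.perf_counter() - start)
    return min(samples), statistics.mean(samples)

if __name__ == "__main__":
    for build in ("./bench_gcc", "./bench_icc"):  # hypothetical GNU/Intel builds
        best, mean = time_binary(build)
        print(f"{build}: best {best:.3f} s, mean {mean:.3f} s")
```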

openlab - phase 2: European Data Grid.
- Integrate the openCluster alongside the EDG testbed
- Porting and verification of the relevant software packages (hundreds of RPMs); understand the chain of prerequisites (a dependency-walking sketch follows this list)
- Interoperability with WP6; integration into the existing authentication scheme
- Grid benchmarks, to be defined later
Also: prepare the porting strategy for phase 3. Estimated time scale: 9 months (may be subject to change!). Prerequisite: 1 system programmer.
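
For “understand the chain of prerequisites”, one way to get a first view of what an RPM drags in is to walk its Requires list with standard rpm queries. A minimal sketch follows; the starting package name is purely hypothetical.

```python
# Minimal sketch: walk the Requires chain of an installed RPM to get a first
# view of the prerequisites a package drags in. Uses only standard rpm query
# options (-qR, --whatprovides); the starting package name is hypothetical.
import subprocess

def rpm_query(*args):
    out = subprocess.run(["rpm", *args], capture_output=True, text=True)
    return [line.strip() for line in out.stdout.splitlines() if line.strip()]

def requirement_chain(package, seen=None):
    """Recursively collect the installed packages that satisfy a package's Requires."""
    if seen is None:
        seen = set()
    for capability in rpm_query("-qR", package):
        # Drop version constraints such as ">= 1.2" before resolving the capability.
        for provider in rpm_query("-q", "--whatprovides", capability.split()[0]):
            if provider.startswith("no package provides"):
                continue
            if provider not in seen:
                seen.add(provider)
                requirement_chain(provider, seen)
    return seen

if __name__ == "__main__":
    for pkg in sorted(requirement_chain("edg-testbed-node")):  # hypothetical package
        print(pkg)
```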

openlab - phase 3: LHC Computing Grid. Need to understand: the software architectural choices, to be made by the LCG project by mid-2003; the new integration process for the selected software; time scales. Disadvantage: possible porting of new packages. Advantage: aligned with the key choices for LHC deployment. Impossible at this stage to give firm estimates for timescale and required manpower.

openlab time line. [Timeline chart, End-02 to End-05, with tracks for the openCluster, EDG, and LCG phases.] Milestones: order/install 32 nodes; systems experts in place – start phase 1; complete phase 1 – start phase 2; order/install Madison upgrades + 32 more nodes; complete phase 2 – start phase 3; order/install Montecito upgrades.

openlab starts with: [diagram: CPU servers on a multi-gigabit LAN].

… and will be extended: [diagram: a gigabit long-haul link over the WAN connects a remote fabric to the CPU servers on the multi-gigabit LAN].

… step by step: [diagram: the remote fabric, the WAN gigabit long-haul link, the CPU servers on the multi-gigabit LAN, and a storage system].

The potential of openlab: leverage CERN’s strengths. It integrates perfectly into our environment (OS, compilers, middleware, applications), alongside the EDG testbed, and into the LCG deployment strategy. Show with success that the new technologies can be solid building blocks for the LHC computing environment.

The next step in the evolution of large-scale computing: from super-computers through clusters to grids. [Chart of performance/capacity growth, 1980 to 2000.] Thank you!