CERN’s openlab Project

1 CERN’s openlab Project
François Fluckiger, Sverre Jarp, Wolfgang von Rüden (IT Division, CERN), November 2002

2 CERN site: Next to Lake Geneva
[Aerial photo: the LHC tunnel next to Lake Geneva and downtown Geneva, with Mont Blanc (4810 m) in the background]

3 What is CERN?
The European Organization for Nuclear Research (European Laboratory for Particle Physics), at the frontier of human scientific knowledge: creation of 'Big Bang'-like conditions. Accelerators built with the latest super-conducting technologies, in a tunnel 27 km in circumference: the Large Electron/Positron ring (used until 2000) and the Large Hadron Collider (LHC) as of 2007. Detectors 'as big as cathedrals': four LHC detectors, ALICE, ATLAS, CMS, and LHCb. World-wide participation: Europe, plus USA, Canada, Brazil, Japan, China, Russia, Israel, etc.

4 Member States
20 countries. Initially Western Europe, from Norway to Greece; in recent years: Poland, the Czech Republic, Slovakia, Hungary, and Bulgaria. Founded in 1954. Scientific collaboration often precedes economic exploitation or political cooperation.

5 CERN in more detail
An organisation with 2400 staff, plus 6000 visitors per year. Inventor of the World-Wide Web; Tim Berners-Lee's vision: "Tie all the physicists together, no matter where they are". CERN's budget: 1000 MCHF (~685 M€/US$), split roughly 50-50 between materials and personnel. Computing budget ~25 MCHF (central infrastructure); desktop and departmental computing in addition.

6 Why CERN may be interesting to the computing industry
Several arguments. Huge computing requirements: the Large Hadron Collider will need unprecedented computing resources, driven by scientists' needs, at CERN, but even more in regional centres and in hundreds of institutes world-wide. Early adopters: the scientific community is willing to 'take risk', hoping it is linked to 'rewards'; lots of source-based applications that are easy to port. Reference site: many institutes adopt CERN's computing policies, so once applications and libraries are ported the software runs well, and lots of expertise is available in relevant areas.

7 CERN's Users and Collaborating Institutes
Another problem, or a challenge? Unite the computing resources of the LHC community. Europe: 4603 users at collaborating institutes; elsewhere: 208 institutes, 1632 users.

8 Our ties to IA-64 (IPF): a long history already
Nov. 1992: visit to HP Labs (Bill Worley): "We will soon launch PA-Wide Word!" 1994-96: CERN becomes one of the few external definition partners for IA-64, by now a joint effort between HP and Intel. 1997-99: creation of a vector math library for IA-64, a full prototype to demonstrate the precision, versatility, and unbeatable speed of execution. 2000-01: port of Linux onto IA-64 (the "Trillian" project) and real applications, demonstrated at Intel's "Exchange" exhibition in Oct. 2000.
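A minimal sketch of what a routine in such a vector math library can look like: one call evaluates a function over a whole array, giving the compiler a single loop it can unroll and software-pipeline. The interface and names are illustrative assumptions, not CERN's actual IA-64 library, which relied on architecture-specific scheduling and fused multiply-add for its speed.

```cpp
#include <cmath>
#include <cstddef>
#include <cstdio>

// Evaluate exp() over an entire input array in a single call.
// A production IA-64 version would software-pipeline this loop and
// exploit fused multiply-add; this sketch only shows the interface shape.
void vector_exp(const double* x, double* result, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        result[i] = std::exp(x[i]);
}

int main()
{
    double x[4] = {0.0, 0.5, 1.0, 2.0};
    double y[4];
    vector_exp(x, y, 4);
    for (std::size_t i = 0; i < 4; ++i)
        std::printf("exp(%.1f) = %.6f\n", x[i], y[i]);
    return 0;
}
```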

9 IA-64 wish list
For IA-64 (IPF) to establish itself solidly in the market-place: better compiler technology, offering better system performance; a wider range of systems and processors, for instance really low-cost entry models and low-power systems; state-of-the-art process technology; a similar "commoditization" as for IA-32.

10 The openlab “advantage”
openlab will be able to build on: CERN-IT's technical talent; CERN's existing computing environment; the size and complexity of the LHC computing needs; CERN's strong role in the development of Grid "middleware"; CERN's ability to embrace emerging technologies.

11 IT Division
250 people, about 200 of them engineers, in 11 groups: Advanced Projects' Group (part of DI); Applications for Physics and Infrastructure (API); (Farm) Architecture and Data Challenges (ADC); Fabric Infrastructure and Operations (FIO); (Physics) Data Services (DS); Databases (DB); Internet (and Windows) Services (IS); Communications Services (CS); User Services (US); Product Support (PS); (Detector) Controls (CO). Groups have both a development and a service responsibility.

12 CERN’s Computer Environment (today)
High-throughput computing (based on reliable "commodity" technology): more than 1400 (dual-processor) PCs with RedHat Linux, and more than 1 Petabyte of data (on disk and tape).

13 The Large Hadron Collider - 4 detectors
Huge requirements for data analysis. Storage: a raw recording rate of 0.1-1 GByte/sec, accumulating data at 5-8 PetaBytes/year (plus copies), with 10 PetaBytes of disk. Processing: the equivalent of 100,000 of today's fastest PCs.
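A quick back-of-the-envelope check of the storage figure, assuming roughly 1e7 seconds of live data-taking per year (a common accelerator rule of thumb, not a number from the slides):

```cpp
#include <cstdio>

int main()
{
    // Assumption: ~1e7 seconds of live data-taking per year.
    const double seconds_per_year = 1e7;
    const double rate_bytes_per_s = 0.5e9;   // 0.5 GByte/sec sustained raw rate
    const double petabyte         = 1e15;    // 10^15 bytes

    const double pb_per_year = rate_bytes_per_s * seconds_per_year / petabyte;
    std::printf("%.1f PB/year\n", pb_per_year);   // prints 5.0 PB/year
    return 0;
}
```

At the upper end of the quoted rate the same estimate lands near the top of the 5-8 PetaBytes/year range.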

14 Expected LHC needs
[Chart: projected LHC computing needs compared with a Moore's-law extrapolation of year-2000 capacity]
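A Moore's-law curve of this kind typically extrapolates the computing capacity affordable at constant cost from a reference year; with 2000 as the base year and the usual 18-month doubling time (an assumption here, not a figure from the slide), the extrapolation reads:

```latex
C(t) = C(2000) \cdot 2^{(t - 2000)/1.5}
```

where C(t) is the capacity affordable at constant cost in year t.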

15 The LHC Computing Grid Project – LCG
Goal: prepare and deploy the LHC computing environment. 1) Applications support: develop and support the common tools, frameworks, and environment needed by the physics applications. 2) Computing system: build and operate a global data-analysis environment integrating large local computing fabrics and high-bandwidth networks, to provide a service for some 6,000 researchers in about 40 countries. This is not "yet another grid technology project"; it is a grid deployment project.

16 European Data Grid Work Packages
WP1: Workload Management
WP2: Grid Data Management
WP3: Grid Monitoring Services
WP4: Fabric Management
WP5: Mass Storage Management
WP6: Integration Testbed (production-quality international infrastructure)
WP7: Network Services
WP8: High-Energy Physics Applications
WP9: Earth Observation Science Applications
WP10: Biology Science Applications
WP11: Information Dissemination and Exploitation
WP12: Project Management
The original slide marks which work packages are managed by CERN and where the main activity is at CERN.

17 “Grid” technology providers
[Logos of European and American grid projects, among them CrossGrid.] The LHC Computing Grid will choose the best parts and integrate them!

18 Back to: openlab Industrial Collaboration
Enterasys, HP, and Intel are our partners today; additional partner(s) are joining soon. The technology is aimed at the LHC era: a network switch at 10 Gigabits, rack-mounted HP servers, Itanium processors; a disk subsystem may be coming from a 4th partner. Cluster evolution: 2002: a cluster of 32 systems (64 processors); 2003: 64 systems ("Madison" processors); 2004: 64 systems ("Montecito" processors).

19 openlab Staffing
Management in place: W. von Rüden: Head (Director); F. Fluckiger: Associate Head; S. Jarp: Chief Technologist, HP & Intel liaison; F. Grey: Communications & Development; D. Foster: LCG liaison; J.M. Juanigot: Enterasys liaison. Additional part-time experts, plus help from general IT services. Actively recruiting the systems experts: 2-3 persons.

20 openlab - phase 1: Integrate the openCluster
32 nodes plus development nodes: rack-mounted, dual-processor Itanium-2 systems; RedHat 7.3 Advanced Server; OpenAFS, LSF; GNU and Intel compilers (+ ORC?); database software (MySQL, Oracle?); CERN middleware: Castor data management. CERN applications: porting, benchmarking, and performance improvements for CLHEP, GEANT4, ROOT, Anaphe (and CERNLIB?), as in the sketch below. Cluster benchmarks with 1 and 10 Gigabit interfaces. Also: prepare the porting strategy for phase 2. Estimated time scale: 6 months. Prerequisite: 1 system administrator.
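As an illustration of the kind of micro-benchmark used when porting code and comparing compilers (for example GNU versus Intel) on a new platform, here is a minimal timing harness. The kernel and array sizes are invented for the example; the real openlab work benchmarked full packages such as CLHEP, GEANT4 and ROOT.

```cpp
#include <cstddef>
#include <cstdio>
#include <ctime>
#include <vector>

int main()
{
    // Fill two large arrays and time a simple floating-point kernel.
    const std::size_t n = 10 * 1000 * 1000;
    std::vector<double> a(n, 1.000001), b(n, 0.999999);

    const std::clock_t start = std::clock();
    double sum = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        sum += a[i] * b[i];                  // dot-product kernel
    const double seconds =
        static_cast<double>(std::clock() - start) / CLOCKS_PER_SEC;

    std::printf("dot product = %.6f, time = %.3f s\n", sum, seconds);
    return 0;
}
```

Compiling the same source with each compiler and comparing the reported times gives a first, crude view of relative floating-point performance on the new hardware.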

21 openlab - phase 2: European Data Grid
Integrate the openCluster alongside the EDG testbed. Porting and verification of the relevant software packages (hundreds of RPMs), understanding the chain of prerequisites. Interoperability with WP6: integration into the existing authentication scheme. Grid benchmarks: to be defined later. Also: prepare the porting strategy for phase 3. Estimated time scale: 9 months (may be subject to change!). Prerequisite: 1 system programmer.

22 openlab - phase 3: LHC Computing Grid
Need to understand: the software architectural choices, to be made by the LCG project by mid-2003; the new integration process needed for the selected software; time scales. Disadvantage: possible porting of new packages. Advantage: aligned with the key choices for LHC deployment. It is impossible at this stage to give firm estimates for the timescale and required manpower.

23 openlab time line
[Timeline chart spanning end-2002 to end-2005, with tracks for the openCluster, EDG and LCG. Milestones in order: order/install 32 nodes; systems experts in place, start phase 1; complete phase 1, start phase 2; order/install Madison upgrades plus 32 more nodes; complete phase 2, start phase 3; order/install Montecito upgrades.]

24 openlab starts with …
[Diagram: CPU servers connected by a multi-gigabit LAN]

25 … and will be extended …
[Diagram: CPU servers on a multi-gigabit LAN, connected over a gigabit long-haul WAN link to a remote fabric]

26 … step by step
[Diagram: the same setup with a storage system added on the multi-gigabit LAN, still connected over the gigabit long-haul WAN link to the remote fabric]

27 The potential of openlab
Leverage CERN's strengths: the technology integrates perfectly into our environment (OS, compilers, middleware, applications); integration alongside the EDG testbed; integration into the LCG deployment strategy. Demonstrate successfully that the new technologies can be solid building blocks for the LHC computing environment.

28 The next step in the evolution of large scale computing
From super-computers through clusters to Grids, with increasing performance and capacity. Thank you!

