The LHC Computing Grid Visit of Her Royal Highness

Presentation transcript:

The LHC Computing Grid. Visit of Her Royal Highness Princess Maha Chakri Sirindhorn, The Kingdom of Thailand. Monday 16th March 2009. Frédéric Hemmer, IT Department Head.

The ATLAS experiment: 7,000 tons, 150 million sensors generating data 40 million times per second, i.e. about a petabyte per second.

A collision at the LHC.

The Data Acquisition.

Tier 0 at CERN: acquisition, first-pass processing, storage and distribution. The next two slides illustrate what happens to the data as it moves out from the experiments. CMS and ATLAS each produce data at the rate of one DVD-worth every 15 seconds or so, while the rates for LHCb and ALICE are somewhat lower. However, during the part of the year when the LHC accelerates lead ions rather than protons, ALICE (an experiment dedicated to this kind of physics) alone will produce data at over 1 gigabyte per second, about 1.25 GB/sec (one DVD every 4 seconds). Initially the data is sent to the CERN Computer Centre, the Tier 0, for storage on tape. Storage also implies guardianship of the data for the long term, the lifetime of the LHC, at least 20 years. This is not passive guardianship but requires migrating data to new technologies as they arrive. We need large-scale, sophisticated mass storage systems that not only manage the incoming data streams but also allow for the evolution of technology (tapes and disks) without hindering access to the data. The Tier 0 centre provides the initial level of data processing: calibration of the detectors and the first reconstruction of the data.
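
The DVD comparisons in the note above can be reproduced with a couple of divisions; this is a minimal sketch assuming a 4.7 GB single-layer DVD and the rates quoted on the slide:

DVD_GB = 4.7                     # assumed capacity of a single-layer DVD

atlas_cms_rate = DVD_GB / 15     # "one DVD-worth every 15 seconds or so"
alice_ion_rate_gb_s = 1.25       # GB/s quoted on the slide for heavy-ion running

print(f"ATLAS/CMS: ~{atlas_cms_rate:.2f} GB/s each")                        # ~0.31 GB/s
print(f"ALICE (ions): one DVD every {DVD_GB / alice_ion_rate_gb_s:.1f} s")  # ~3.8 s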

The LHC Data Challenge. The accelerator was completed in 2008 and will run for 10-15 years. The experiments will produce about 15 million gigabytes of data each year (about 20 million CDs!). LHC data analysis requires computing power equivalent to ~100,000 of today's fastest PC processors. This requires many cooperating computer centres, as CERN can only provide ~20% of the capacity.
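
The CD comparison follows directly from the yearly data volume; a quick back-of-the-envelope check (assuming a 700 MB CD) gives roughly the quoted figure:

data_per_year_gb = 15_000_000   # ~15 million gigabytes of data per year
cd_capacity_gb = 0.7            # assumed 700 MB per CD

cds_per_year = data_per_year_gb / cd_capacity_gb
print(f"about {cds_per_year / 1e6:.0f} million CDs per year")   # ~21 million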

Solution: the Grid. Use the Grid to unite the computing resources of particle physics institutes around the world. The World Wide Web provides seamless access to information stored in many millions of different geographical locations. The Grid is an infrastructure that provides seamless access to computing power and data storage capacity distributed over the globe.

How does the Grid work? It makes multiple computer centres look like a single system to the end user. Advanced software, called middleware, automatically finds the data the scientist needs and the computing power to analyse it. Middleware balances the load on different resources. It also handles security, accounting, monitoring and much more.
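
To make the matchmaking idea concrete, here is a minimal, illustrative sketch of how a broker might pick a site that holds the needed dataset and has the most free CPUs. This is not the actual gLite/WLCG middleware; the Site structure, site names and dataset labels are invented for illustration.

from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    free_cpus: int
    datasets: set[str]          # datasets replicated at this site

def pick_site(sites: list[Site], needed_dataset: str) -> Site | None:
    """Toy matchmaking: among sites holding the data, prefer the least loaded one."""
    candidates = [s for s in sites if needed_dataset in s.datasets and s.free_cpus > 0]
    return max(candidates, key=lambda s: s.free_cpus, default=None)

# Hypothetical sites and dataset names, purely for illustration.
sites = [
    Site("CERN-Tier0", free_cpus=120, datasets={"atlas-raw-2009"}),
    Site("Tier1-A",    free_cpus=800, datasets={"atlas-raw-2009", "cms-reco-2009"}),
    Site("Tier2-B",    free_cpus=300, datasets={"cms-reco-2009"}),
]

best = pick_site(sites, "atlas-raw-2009")
print(best.name if best else "no suitable site")   # -> Tier1-A

Real middleware also has to handle authentication, accounting and failure recovery, but the core decision, matching a job to a site with the data and spare capacity, has this shape.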

Tier 0 – Tier 1 – Tier 2. Tier-0 (CERN): data recording, initial data reconstruction, data distribution. Tier-1 (11 centres): permanent storage, re-processing, analysis. Tier-2 (~130 centres): simulation, end-user analysis. The Tier 0 centre at CERN stores the primary copy of all the data. A second copy is distributed between the 11 so-called Tier 1 centres. These are large computer centres in different geographical regions of the world that also have a responsibility for long-term guardianship of the data. The data is sent from CERN to the Tier 1s in real time over dedicated network connections. In order to keep up with the data coming from the experiments, this transfer must be capable of running at around 1.3 GB/s continuously, equivalent to a full DVD every 3 seconds. The Tier 1 sites also provide the second level of data processing and produce data sets that can be used to perform the physics analysis. These data sets are sent from the Tier 1 sites to the roughly 130 Tier 2 sites. A Tier 2 is typically a university department or physics laboratory; Tier 2s are located all over the world, in most of the countries that participate in the LHC experiments. Often, Tier 2s are associated with a Tier 1 site in their region. It is at the Tier 2s that the real physics analysis is performed.
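
As a quick sanity check of the export rate quoted above, sustaining 1.3 GB/s out of CERN corresponds to well over 100 terabytes leaving the Tier 0 per day; a one-line sketch:

rate_gb_s = 1.3                         # sustained CERN -> Tier 1 export rate quoted above
tb_per_day = rate_gb_s * 86_400 / 1_000

print(f"{tb_per_day:.0f} TB exported per day")   # ~112 TB/day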

WLCG Grid Activity. WLCG ran ~44 million jobs in 2007, and the workload has continued to increase: the May total of 10.5 million jobs corresponds to an average of ~340k jobs per day. ATLAS averaged more than 200k jobs per day, and CMS more than 100k jobs per day with peaks up to 200k; this is the level needed for 2008/9. Data distribution from CERN to the Tier-1 sites: all experiments exceeded the required rates for extended periods, and simultaneously; against the 1.3 GB/s target, well above 2 GB/s was achievable, and all Tier 1s achieved (or exceeded) their target acceptance rates. The latest tests in May show that the data rates required for LHC start-up have been reached and can be sustained over long periods.
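
The daily average quoted above is consistent with the monthly total; a one-line check, assuming the 10.5 million jobs refer to the 31 days of May:

jobs_in_may = 10_500_000    # total jobs quoted for May
days_in_may = 31

print(f"{jobs_in_may / days_in_may:,.0f} jobs/day")   # ~338,710, i.e. ~340k jobs/day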

Example: the Grid attacks avian flu. The Grid has been used to analyse 300,000 potential drug compounds against the bird flu virus H5N1. 2,000 computers at 60 computer centres in Europe, Russia, Asia and the Middle East ran for four weeks, the equivalent of 100 years on a single computer. BioSolveIt donated 6,000 FlexX licenses. Results: for avian flu, 20% of compounds performed better than Tamiflu; for malaria, 6 of 30 compounds performed similarly to or better than pepstatin A. Tests are ongoing with compounds from later calculations. (Image: neuraminidase, one of the two major surface proteins of influenza viruses, which facilitates the release of virions from infected cells. Courtesy Ying-Ta Wu, Academia Sinica.)
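
Screening campaigns like this are classic task-farming workloads: the candidate compounds are split into independent work units that can be docked on whichever grid worker is free. The snippet below is a minimal, generic sketch of that partitioning, not the actual WISDOM/EGEE workflow; the chunk size and index-based work units are invented for illustration.

def make_work_units(n_compounds: int, compounds_per_unit: int = 100):
    """Split a virtual-screening campaign into independent work units.

    Each unit is a (start, end) range of compound indices that one grid worker
    can dock against the target protein independently of all the others.
    """
    return [
        (start, min(start + compounds_per_unit, n_compounds))
        for start in range(0, n_compounds, compounds_per_unit)
    ]

units = make_work_units(300_000)              # campaign size quoted on the slide
print(len(units), "independent work units")   # -> 3000 units of 100 compounds each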

For more information about the Grid: www.cern.ch/lcg, www.eu-egee.org, www.gridcafe.org, www.eu-egi.org. Thank you for your kind attention!