
EC Review – 01/03/2002 – F.Carminati – Accomplishments of the project from the end user point of view– n° 1 Accomplishments of the project from the end user point of view
General Perspective

EC Review – 01/03/2002 – F.Carminati – Accomplishments of the project from the end user point of view– n° 2 Application objectives for year 1
- Define use cases for the applications
- Define application requirements
- Deploy realistic applications on TestBed 1
- Evaluate TestBed 1

[Diagram: online system, multi-level trigger to filter out background and reduce data volume]
- Level 1 (special hardware): 40 MHz (40 TB/sec)
- Level 2 (embedded processors): 75 kHz (75 GB/sec)
- Level 3 (PCs): 5 kHz (5 GB/sec)
- Data recording & offline analysis: 100 Hz (100 MB/sec)
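The rates above are mutually consistent if one assumes an average event size of about 1 MB. A minimal back-of-envelope check in Python; the 1 MB event size is an inferred assumption, all other numbers are taken from the slide:

    # Back-of-envelope check of the trigger-level data rates quoted on the slide.
    # Assumption (not stated on the slide): an average event size of ~1 MB.
    EVENT_SIZE_MB = 1.0

    levels = [
        ("Level 1 (special hardware)", 40e6),          # 40 MHz collision rate
        ("Level 2 (embedded processors)", 75e3),       # 75 kHz
        ("Level 3 (PCs)", 5e3),                        # 5 kHz
        ("Data recording & offline analysis", 100.0),  # 100 Hz
    ]

    for name, rate_hz in levels:
        mb_per_s = rate_hz * EVENT_SIZE_MB
        print(f"{name:35s}: {rate_hz:>12,.0f} Hz -> {mb_per_s:>12,.0f} MB/s")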

EC Review – 01/03/2002 – F.Carminati – Accomplishments of the project from the end user point of view– n° 4 The LHC Detectors
- CMS, ATLAS, LHCb
- ~6-8 PetaBytes / year
- ~10^8 events/year
- ~10^3 batch and interactive users

EC Review – 01/03/2002 – F.Carminati – Accomplishments of the project from the end user point of view– n° 5 CERN’s Network in the World
- Europe: 267 institutes, 4603 users
- Elsewhere: 208 institutes, 1632 users

EC Review – 01/03/2002 – F.Carminati – Accomplishments of the project from the end user point of view– n° 6 HEP & Distributed computing
- The investment for LHC computing is massive
  - 1.25 GB/s in HI mode
  - ~5 PB/y of tape
  - ~1.5 PB of disk
  - ~1800 kSI95/exp (~70,000 PC2000; see the sketch below)
  - Of the order of 60 MSFr of hardware
    - Not including media, personpower, infrastructure and networking
- Politically, technically and sociologically it cannot be concentrated in a single location
  - It is unlikely that countries will deploy massive computing at CERN
  - Competence is naturally distributed
  - We cannot ask people to travel to CERN so often
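As a rough illustration of how the ~70,000 PC2000 figure follows from the ~1800 kSI95 per experiment, the sketch below assumes that a PC of the year 2000 delivers roughly 25 SI95; that per-box rating is an assumption used only to reproduce the slide's arithmetic.

    # Reproduce the slide's arithmetic: ~1800 kSI95 per experiment ~ 70,000 PC2000 boxes.
    # Assumption (not on the slide): one PC2000-class box delivers roughly 25 SI95.
    KSI95_PER_EXPERIMENT = 1800      # quoted on the slide
    SI95_PER_PC2000 = 25.0           # assumed rating of a year-2000 PC

    pcs_needed = KSI95_PER_EXPERIMENT * 1000 / SI95_PER_PC2000
    print(f"~{pcs_needed:,.0f} PC2000-class boxes per experiment")  # ~72,000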

EC Review – 01/03/2002 – F.Carminati – Accomplishments of the project from the end user point of view– n° 7 The Monarc Model
[Diagram: the hierarchical MONARC computing model]
- Tier 0+1 centre at CERN: 800 kSI95, disk, tape robot
- Tier 1 centres (Lyon, RAL, BNL): 200 kSI95, disk, tape robot
- Tier 2 centres: 20 kSI95, 20 TB disk, tape robot
- Tier 3: University 1 … University N
- Link bandwidths shown: 622 MB/s and 1500 MB/s
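For illustration only, the same hierarchy written out as a small data structure; the disk capacities of the Tier 0+1 and Tier 1 centres did not survive in this transcript and are left unset.

    # The MONARC tier hierarchy with the figures that survive on the slide.
    # None marks numbers lost in the transcript.
    monarc = {
        "Tier 0+1": {"site": "CERN", "cpu_kSI95": 800, "disk_TB": None, "tape_robot": True},
        "Tier 1":   {"sites": ["Lyon", "RAL", "BNL"], "cpu_kSI95": 200, "disk_TB": None,
                     "tape_robot": True},
        "Tier 2":   {"cpu_kSI95": 20, "disk_TB": 20, "tape_robot": True},
        "Tier 3":   {"sites": "University 1 .. University N"},
    }
    link_bandwidths_MB_s = [622, 1500]   # link figures shown on the diagram

    for tier, spec in monarc.items():
        print(f"{tier}: {spec}")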

EC Review – 01/03/2002 – F.Carminati – Accomplishments of the project from the end user point of view– n° 8 Why do we need the GRID?
- Every physicist should have equal access to data and resources
- The system will be extremely complex
  - Number of sites and components in each site
  - Different tasks performed in parallel: simulation, reconstruction, scheduled and unscheduled analysis
- We need transparent access to dynamic resources
- The bad news is that the basic tools are missing
  - Distributed resource management, file and object namespace and authentication
  - Local resource management of large clusters
  - Data replication and caching
- The good news is that we are not alone
  - All the above issues are central to the new developments going on in the US and Europe under the collective name of GRID

EC Review – 01/03/2002 – F.Carminati – Accomplishments of the project from the end user point of view– n° 9 The Grid Vision
The GRID: networked data processing centres and "middleware" software as the "glue" of resources. Researchers perform their activities regardless of geographical location, interact with colleagues, and share and access data. Scientific instruments and experiments provide huge amounts of data.

EC Review – 01/03/2002 – F.Carminati – Accomplishments of the project from the end user point of view– n° 10 Biomedical Applications
- Genomics, post-genomics, and proteomics: explore strategies that facilitate the sharing of genomic databases and test grid-aware algorithms for comparative genomics
- Medical image analysis: process the huge amount of data produced by digital imagers in hospitals

EC Review – 01/03/2002 – F.Carminati – Accomplishments of the project from the end user point of view– n° 11 Grid added value for biomedical applications
- Data mining on genomics databases (exponential growth)
- Indexing of medical databases (TB/hospital/year)
- Collaborative framework for large scale experiments (e.g. epidemiological studies)
- Parallel processing for
  - Database analysis
  - Complex 3D modelling

EC Review – 01/03/2002 – F.Carminati – Accomplishments of the project from the end user point of view– n° 12 Earth Observations
- ESA missions: about 100 GB of data per day (ERS 1/2), about 500 GB per day for the next ENVISAT mission (2002)
- DataGrid contributes to EO:
  - enhance the ability to access high-level products
  - allow reprocessing of large historical archives
  - improve complex Earth-science applications (data fusion, data mining, modelling …)
- Source: L. Fusco, June 2001
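A short back-of-envelope for the yearly archive growth these daily volumes imply, assuming (as the parallel phrasing suggests) that the ENVISAT figure is also per day:

    # Yearly archive growth implied by the per-day volumes quoted on the slide.
    for mission, gb_per_day in [("ERS 1/2", 100), ("ENVISAT (2002)", 500)]:
        tb_per_year = gb_per_day * 365 / 1000
        print(f"{mission:15s}: {gb_per_day} GB/day -> ~{tb_per_year:.1f} TB/year")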

EC Review – 01/03/2002 – F.Carminati – Accomplishments of the project from the end user point of view– n° 13 Achievements
- A wide-ranging dialogue has been launched within the applications on the opportunities opened by GRID technology
  - Each HEP experiment is a world-wide distributed community of researchers
  - Dialogue and coordination at several levels have been necessary to reach a common understanding
  - Understand how to do things differently, not just more of the same
- To improve their understanding of the issues, the applications asked to have a GLOBUS-only TestBed deployed immediately
  - Applications produced a set of requirements for TestBed 0 in 1Q 2001
  - TestBed 0 has been in operation since 2Q 2001
  - A very large amount of unfunded effort, particularly from WP8, has made this possible

EC Review – 01/03/2002 – F.Carminati – Accomplishments of the project from the end user point of view– n° 14 Achievements
- Realistic applications using GLOBUS authentication have been deployed on TestBed 0
- Applications expressed their requirements in June 2001
  - For this it was necessary to develop detailed "long term use cases"
  - A very large exercise leveraging essentially unfunded effort (some person-months)
  - This left the developers only a few months to produce the middleware
- Funded WP8 effort has participated in the deployment and configuration of the TestBed
- Funded WP8 effort has performed a thorough validation of the TestBed with "generic" HEP applications developed by them for this purpose (see the sketch below)
  - Resource Broker
  - Replica Catalogue functionality
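For illustration, a sketch of how such a generic validation job could be described and handed to the Resource Broker on TestBed 1. The JDL attribute names, the logical file name and the dg-job-submit command are recalled from EDG 1.x usage and should be read as assumptions, not as the exact files or commands WP8 used.

    # Hypothetical sketch: write a minimal EDG-style JDL file and submit it to the
    # Resource Broker. Attribute names and the dg-job-submit CLI are assumptions
    # based on EDG 1.x documentation; the logical file name is invented.
    import subprocess
    import textwrap

    jdl = textwrap.dedent("""\
        Executable    = "validate.sh";
        Arguments     = "--events 100";
        StdOutput     = "std.out";
        StdError      = "std.err";
        InputSandbox  = {"validate.sh"};
        OutputSandbox = {"std.out", "std.err"};
        InputData     = {"LF:generic-hep-input-0001"};  // resolved via the Replica Catalogue
        """)

    with open("validate.jdl", "w") as f:
        f.write(jdl)

    # Actual submission needs a TestBed User Interface machine and a valid grid proxy:
    # subprocess.run(["dg-job-submit", "validate.jdl"], check=True)
    print(jdl)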

EC Review – 01/03/2002 – F.Carminati – Accomplishments of the project from the end user point of view– n° 15 Achievements
- Detailed TestBed evaluation plans have been elaborated and applied
  - A few hours after the official opening of the TestBed on December 9, a standard physics simulation job was run
  - Although TB1 had not reached stability, much middleware functionality was demonstrated by the applications in a short period of time

EC Review – 01/03/2002 – F.Carminati – Accomplishments of the project from the end user point of view– n° 16 Achievements
- Mechanisms to provide feedback to the developers have been put in place
  - Priority list constantly updated by WP8 and discussed at weekly WP manager meetings
  - Feedback on the release plan
  - Detailed user requirements for crucial components (e.g. Storage Element) in preparation
- Large unfunded participation from the experiments
  - Probably as much as 200 person-months in the first year

EC Review – 01/03/2002 – F.Carminati – Accomplishments of the project from the end user point of view– n° 17 Issues and actions
- Configuration and management of the VOs on the TestBed has been unexpectedly difficult (see the sketch below)
  - The responsibility is shared: some of our needs were not expressed clearly enough and were not given enough attention
  - Procedures should be worked out to automate it
- Assignment and escalation of problems is complicated
  - It has worked well thanks to the heroic efforts of all the people involved, but it will have to be better formalised
  - Tight coordination is needed between site managers, the TestBed and the applications
- Coexistence of a pseudo-production and a development environment
  - We have done this for years in local environments
  - In a distributed environment it is more difficult, and we are learning
- Deployment plans to all TestBed sites are still unclear
  - This will be clarified in the next minor releases
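Much of the VO-configuration pain was about mapping certificate subjects onto local accounts at every site. A minimal sketch of the kind of automation the first bullet asks for, regenerating a Globus grid-mapfile from a VO membership list; the DNs, the pool-account name and the idea of reading members from a hard-coded list (rather than the VO membership service, as production tools did) are hypothetical.

    # Hypothetical sketch: regenerate a Globus grid-mapfile from a VO membership list.
    # In production this was automated by tools querying the VO membership service
    # (e.g. an LDAP server); here the member DNs are simply hard-coded examples.
    members = [
        "/O=Grid/O=CERN/OU=cern.ch/CN=Jane Doe",    # hypothetical certificate subjects
        "/O=Grid/O=CERN/OU=cern.ch/CN=John Smith",
    ]
    POOL_ACCOUNT = ".alice"   # leading-dot pool account; VO name assumed for illustration

    with open("grid-mapfile", "w") as gridmap:
        for dn in members:
            # grid-mapfile format: "<certificate subject>" <local account>
            gridmap.write(f'"{dn}" {POOL_ACCOUNT}\n')

    print("wrote grid-mapfile with", len(members), "entries")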

EC Review – 01/03/2002 – F.Carminati – Accomplishments of the project from the end user point of view– n° 18 What we want from a GRID
- This is the result of our experience on TB0 & TB1
[Diagram: layered view of the GRID architecture, from top to bottom]
  - Specific application layer: ALICE, ATLAS, CMS, LHCb (and other apps)
  - LHC VO common application layer (and other apps)
  - High level GRID middleware
  - Basic Services (from the GLOBUS team)
  - OS & Net services

EC Review – 01/03/2002 – F.Carminati – Accomplishments of the project from the end user point of view– n° 19 Common use cases
[Diagram: the layered stack of the previous slide (OS & Net services, Bag of Services (GLOBUS) from the GLOBUS team, middleware components MW1..MW5, specific application layer for ALICE, ATLAS, CMS, LHCb and other apps) drawn for successive scenarios of VO use cases & requirements]
- If we manage to define a common core use case behind the VO use cases & requirements of ALICE, ATLAS, CMS, LHCb and the other applications,
- or even better common LHC VO use cases & requirements shared with the other applications,
- it will be easier to arrive at common use cases for LHC and the other applications.

EC Review – 01/03/2002 – F.Carminati – Accomplishments of the project from the end user point of view– n° 20 Plan for the next year
- Continue exploitation of TB1
  - Provide continuous feedback to the developers
  - Deploy larger and more realistic applications on TB1
  - Use TB1 for "data challenges" and "production challenges"
  - Use more sites as TB1 expands
- Refine the user requirements and long term views
  - We should be able to express them in a more uniform way
  - A common set of requirements / use cases to define common solutions
  - This will also facilitate the relation with other GRID MW projects
    - Collaboration with DataTag WP4 has MW interoperability as a specific goal
  - A Requirement Technical Assessment Group (RTAG) has been launched in the context of the LHC GRID Computing Project with this mandate
  - This has been possible because of the work of the DataGRID project and is heavily based on WP8's findings to date

EC Review – 01/03/2002 – F.Carminati – Accomplishments of the project from the end user point of view– n° 21 Summary
- Applications have been able to deploy and demonstrate large applications on the TestBed
  - The LHC experiments are participating in the project with enthusiasm
- The side effect is that pressure from the users is high for
  - Support
  - A stable environment
  - Functionality
  - Documentation
  - Wide deployment
- These are healthy signs
  - Continued positive reaction to user feedback will maintain their high level of enthusiasm and participation
- The evolution of TB1 leading to TB2 will be a crucial test of the project’s ability to reach its final goals