LISHEP, Rio de Janeiro, 20 February 2004 Russia in LHC DCs and EDG/LCG/EGEE V.A. Ilyin Moscow State University.

Presentation transcript:


LHC Computing GRID: the "cloud" view (MONARC project regional group). Diagram of the tier hierarchy: CERN Tier1 at the centre, national Tier1 centres (USA, Germany, UK, France, Italy), Tier2 regional centres (universities and labs), Tier3 physics-department clusters and desktops; Russia appears as a Tier2 region. "The opportunity of Grid technology."

Russian Tier2-Cluster
The Russian regional center for LHC computing (RRC-LHC): a cluster of institutional computing centers with Tier2 functionality and summary resources at the …% level of the canonical Tier1 center for each experiment (ALICE, ATLAS, CMS, LHCb): analysis; simulations; users' data support.
Participating institutes:
- Moscow: ITEP, RRC KI, MSU, LPI, MEPhI, …
- Moscow region: JINR, IHEP, INR RAS
- St. Petersburg: PNPI RAS, …
- Novosibirsk: BINP SB RAS
Coherent use of distributed resources by means of LCG (EDG+VDT, …) technologies. Active participation in the LCG Phase 1 prototyping and Data Challenges (at the 5% level).
Resource table (Q4 2007): CPU (kSI95), Disk (TB), Tape (TB), international connectivity to CERN (… Mbps → … Gbps).

Russia country map. Three regions where HEP centers are located are indicated on the map: Moscow, St. Petersburg and Novosibirsk.

Russian HEP sites: accelerator/collider HEP facilities, other experiments, and participation in major international HEP collaborations.

BINP SB RAS (Novosibirsk)
  Facilities: VEPP-2M (e+e- collider at 1.4 GeV), VEPP-4 (e+e- collider up to 6 GeV)
  Other: non-accelerator HEP experiments (neutrino physics, etc.), synchrotron radiation facility
  Collaborations: CERN: ATLAS, LHC-acc, CLIC; FNAL: Tevatron-acc; DESY: TESLA; KEK: BELLE; SLAC: BaBar

IHEP (Protvino, Moscow Region)
  Facilities: U-70 (fixed target, 70 GeV proton beam)
  Other: medical experiments
  Collaborations: BNL: PHENIX, STAR; CERN: ALICE, ATLAS, CMS, LHCb; DESY: ZEUS, HERA-B, TESLA; FNAL: D0, E-781 (Selex)

ITEP (Moscow)
  Facilities: U-10 (fixed target, 10 GeV proton beam)
  Other: non-accelerator HEP experiments (neutrino physics, etc.)
  Collaborations: CERN: ALICE, ATLAS, CMS, LHCb, AMS; DESY: H1, HERMES, HERA-B, TESLA; FNAL: D0, CDF, E-781 (Selex); KEK: BELLE; DAFNE: KLOE

JINR (Dubna, Moscow Region)
  Facilities: Nuclotron (heavy-ion collisions at 6 GeV/n)
  Other: low-energy accelerators, nuclear reactor, synchrotron radiation facility, non-accelerator HEP experiments (neutrino physics), medical experiments, heavy-ion physics
  Collaborations: BNL: PHENIX, STAR; CERN: ALICE, ATLAS, CMS, NA48, COMPASS, CLIC, DIRAC; DESY: H1, HERA-B, HERMES, TESLA; FNAL: D0, CDF; KEK: E391a

Russian HEP sites (continued):

INR RAS (Troitsk, Moscow region, research centre)
  Facilities: low-energy accelerators, non-accelerator HEP experiments (neutrino physics)
  Collaborations: CERN: ALICE, CMS, LHCb; KEK: E-246; TRIUMF: E-497

RRC KI (Moscow, research centre)
  Facilities: low-energy accelerators, nuclear reactors, synchrotron radiation facility
  Collaborations: BNL: PHENIX; CERN: ALICE, AMS

MEPhI (Moscow, university)
  Facilities: low-energy accelerators, nuclear reactor
  Collaborations: BNL: STAR; CERN: ATLAS; DESY: ZEUS, HERA-B, TESLA

PNPI RAS (Gatchina, St. Petersburg region, research centre)
  Facilities: mid/low-energy accelerators, nuclear reactor
  Collaborations: BNL: PHENIX; CERN: ALICE, ATLAS, CMS, LHCb; DESY: HERMES; FNAL: D0, E-781 (Selex)

SINP MSU (Moscow, university)
  Facilities: low-energy accelerators, non-accelerator HEP experiment (EAS-1000)
  Collaborations: CERN: ATLAS, CMS, AMS, CLIC; DESY: ZEUS, TESLA; FNAL: D0, E-781 (Selex)

Goals of the Russian (distributed) Tier2:
- to provide full-scale participation of Russian physicists in the analysis — only in this case will Russian investments in the LHC lead to the final goal of obtaining new fundamental knowledge about the structure of matter;
- to open wide possibilities for participation of students and young scientists in research at the LHC, and thereby to support and improve the high level of scientific schools in Russia;
- participation in the creation of the international LHC Computing GRID will give Russia access to new advanced computing techniques.

Functions of the Russian (distributed) Tier2:
- physics analysis of AOD (Analysis Object Data);
- access to (external) ESD/RAW and SIM databases for preparing the necessary (local) AOD sets;
- replication of AOD sets from the Tier1/Tier2 grid (cloud);
- event simulation at the level of 5-10% of the whole SIM database for each experiment;
- replication and storage of 5-10% of the ESD, required for testing the procedures of AOD creation;
- storage of data produced by users;
- participation in distributed storage of the full ESD data (a Tier1 function)…?

Architecture of the Russian (distributed) Tier2:
- RRC-LHC will be a cluster of institutional centers with Tier2 functionality: a distributed system, a DataGrid cloud of Tier2(/Tier3) centers;
- a coherent interaction of the computing centers of the participating institutes: each institute knows its own resources but can get significantly more if the others agree;
- for each Collaboration the summary resources (of about 4-5 basic institutional centers) will reach the level of 50-70% of a canonical Tier1 center: each Collaboration knows its summary resources but can get significantly more if the other Collaborations agree;
- RRC-LHC will be connected to the Tier1 at CERN and/or to other Tier1(s) in the context of a global grid for data storage and access: each institute and each Collaboration can get significantly more if other regional centers agree.

Russian Regional Center: the DataGrid cloud. Diagram: the institutional centers (SINP MSU, ITEP, RRC KI, JINR, IHEP, PNPI) form the RRC-LHC Tier2 cluster with GRID access, connected at Gbit/s to the LCG Tier1/Tier2 cloud (CERN, FZK, …). Regional connectivity: cloud backbone at Gbit/s, links to the labs at 100-1000 Mbit/s. "The opportunity of Grid technology."

"Users"-"Tasks" and resources (analysis from 2001 — needs to be updated to the conception of Tier2s).
The number of active users is the main parameter for estimating the resources needed. We made some estimates, in particular based on extrapolation of Tevatron analysis tasks performed by our physicists (single top production at D0, …). Thus, in some "averaged" figures, a "user task" is the analysis of 10^7 events per day (8 hours) by one physicist. (The slide tabulates the assumed numbers of active users for ALICE, ATLAS, CMS and LHCb.) In the following we estimate the RRC resources (Phase 1) under the assumption that our participation in SIM database production is at the 5% level for each experiment. Very poor understanding of this key (for Tier2) characteristic!
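As a rough illustration of how such "user task" figures translate into CPU capacity, here is a minimal back-of-the-envelope sketch. It is not from the original slides: the per-event analysis cost (in SI95·s) and the number of concurrent users are placeholder assumptions, chosen only to show the arithmetic.

```python
# Back-of-the-envelope CPU estimate for concurrent analysis "user tasks".
# ASSUMPTIONS (not from the slides): per-event cost and number of users.

EVENTS_PER_TASK = 1e7          # one "user task": 10^7 events analysed ...
TASK_DURATION_S = 8 * 3600     # ... within one working day (8 hours)
COST_SI95_S_PER_EVENT = 0.25   # assumed analysis cost per AOD event, in SI95*s
N_CONCURRENT_USERS = 20        # assumed number of simultaneously active users

def required_capacity_ksi95(n_users: int) -> float:
    """CPU capacity (in kSI95) needed so that n_users tasks finish in time."""
    per_task = EVENTS_PER_TASK * COST_SI95_S_PER_EVENT / TASK_DURATION_S  # SI95
    return n_users * per_task / 1000.0  # -> kSI95

if __name__ == "__main__":
    print(f"{required_capacity_ksi95(N_CONCURRENT_USERS):.1f} kSI95 "
          f"for {N_CONCURRENT_USERS} concurrent user tasks")
```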

Resources required by 2008 (the slide tabulates CPU in kSI95, disk in TB and tape in TB for ALICE, ATLAS, RDMS CMS, LHCb and the total).
We suppose that:
- each active user will create local AOD sets ~10 times per year, and keep these sets on disk during the year;
- the general AOD sets will be replicated from the Tier1 cloud ~10 times per year, with the previous sets moved to tape.
The disk space usage will be partitioned as:
- 15% to store the general AOD+TAG sets;
- 15% to store local sets of AOD+TAG;
- 15% to store users' data;
- 15% to store current sets of simulated data (SIM-AOD, partially SIM-ESD);
- 30-35% to store the 10% portion of the ESD;
- 5-10% cache.
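A minimal sketch of how this partitioning rule turns a total disk budget into per-category volumes. The 100 TB total below is an arbitrary placeholder, not a figure from the slide, and the quoted ranges are taken at their lower bounds.

```python
# Split a total disk budget according to the partitioning rule on the slide.
# The total (100 TB) is a placeholder; ranges are taken at their lower bound.

DISK_SHARES = {
    "general AOD+TAG":        0.15,
    "local AOD+TAG":          0.15,
    "user data":              0.15,
    "current simulated data": 0.15,
    "10% portion of ESD":     0.30,   # slide quotes 30-35%
    "cache":                  0.05,   # slide quotes 5-10%
}

def partition_disk(total_tb: float) -> dict:
    """Return TB per category for a given total disk volume."""
    return {name: total_tb * share for name, share in DISK_SHARES.items()}

if __name__ == "__main__":
    for name, tb in partition_disk(100.0).items():
        print(f"{name:25s} {tb:6.1f} TB")
```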

Construction timeline
Timeline for the RRC-LHC resources in the construction phase: …% / 30% / 55% by year.
After 2008, investments will be necessary for supporting the computing and storage facilities and for increasing the CPU power and storage space: about 30% of the expenses in 2008. Every following year: renewal of 1/3 of the CPU, a 50% increase of the disk space, and a 100% increase of the tape storage space.
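To make the yearly maintenance rule concrete, here is a small sketch that projects the installed resources under that rule. The starting values are placeholders, not the slide's figures, and CPU renewal is modelled simply as replacing one third of the capacity while keeping the total constant.

```python
# Project resources under the post-2008 maintenance rule:
# each year renew 1/3 of the CPU, grow disk by 50%, grow tape by 100%.
# Starting values are placeholders, not figures from the slide.

def project(years: int, cpu_ksi95: float, disk_tb: float, tape_tb: float):
    """Yield (year_offset, cpu, disk, tape, cpu_renewed) year by year."""
    for y in range(1, years + 1):
        cpu_renewed = cpu_ksi95 / 3.0   # 1/3 of the capacity is replaced
        disk_tb *= 1.5                  # +50% disk per year
        tape_tb *= 2.0                  # +100% tape per year
        yield y, cpu_ksi95, disk_tb, tape_tb, cpu_renewed

if __name__ == "__main__":
    for y, cpu, disk, tape, renewed in project(3, cpu_ksi95=100.0,
                                               disk_tb=200.0, tape_tb=300.0):
        print(f"year +{y}: CPU {cpu:.0f} kSI95 ({renewed:.0f} renewed), "
              f"disk {disk:.0f} TB, tape {tape:.0f} TB")
```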

Financial aspects
- Phase 1 (…): 2.5 MCHF equipment, 3.5 MCHF network, plus initial investments in some regional networks
- Construction phase (…): 10 MCHF equipment, 3 MCHF network
- In total (…): 19 MCHF
- 2009 - 200x: 2 MCHF/year
In December 2003 a new Protocol was signed by Russia and CERN on the framework for Russia's participation in the LHC project in the period from 2007, including: 1) M&O, 2) computing in the experiments, 3) RRC-LHC and LCG.

LHCb DC03 resource usage (cf. DC02 - 3.3M events - 49 days):
CERN 44%, Bologna 30%, Lyon 18%, RAL 3.9%, Cambridge 1.1%, Moscow 0.8%, Amsterdam 0.7%, Rio 0.7%, Oxford 0.7%.
Russian sites: ITEP Moscow, IHEP Protvino, JINR Dubna, SINP MSU.

CMS Productions (2001). Table of production sites and their status for simulation, digitization (GDMP) and the common production tools (IMPALA), with and without pile-up (PU): CERN (fully operational), FNAL, Moscow (first!), INFN, Caltech, UCSD, UFL, Bristol, Wisconsin; IN2P3 and Helsinki not yet operational.

Man power for CMS computing in Russia (Sept. 2003). The slide tabulates FTE per institute (SINP MSU & RCC, JINR, ITEP, IHEP, Kharkov (Ukraine), LPI) across the activities: farm administration, installation & production running, production tools, PRS SW code (ORCA), physics generators. In total: 25.3 FTE.

IMPALA/BOSS integration with GRID (SINP MSU (Moscow) - JINR (Dubna) - INFN (Padova), 2002). Diagram of the production chain components: CMKIN and IMPALA on the UI, a Job Executer, the Resource Broker, the Gatekeeper and Batch Manager of the CE, worker nodes WN1, WN2, …, WNn, BOSS with Dolly tracking the jobs in a MySQL DB, the CERN RefDB and the production environment.

Russia in LCG
We started our LCG activity in autumn …. Russia has joined the LCG-1 infrastructure (CERN press release): first SINP MSU, soon RRC KI, JINR, ITEP and IHEP (already in LCG-2).
Manpower contribution to LCG (started in May 2003): the Agreement is being signed by CERN, Russia and JINR officials. Three tasks are under our responsibility:
1) testing new GRID middleware to be used in LCG;
2) evaluation of new-on-the-market GRID middleware (first task: evaluation of OGSA/GT3);
3) common solutions for event generators (event databases).
Twice per year (spring and autumn) there are meetings of the Russia-CERN Joint Working Group on Computing; the next meeting is on 19 March at CERN.

Information System testing for LCG-1 — Elena Slabospitskaya, Institute for High Energy Physics, Protvino, Russia

Information System testing for LCG-1. Diagram: the schema of job submission via the RB and directly to the CE via Globus GRAM. Via the RB: edg-job-submit on the UI → Network Server → Workload Manager → CondorG → Globus Gatekeeper on the CE → local batch system (PBS, LSF, …) → WN. Direct submission: EDG globusrun to the Gatekeeper.
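For illustration, a minimal sketch of what a submission through the RB looked like from the UI side: a JDL description of the job handed to the edg-job-submit command. The JDL attributes and the helper script below are generic examples, not taken from the IHEP tests, and command-line options varied between EDG/LCG releases.

```python
# Sketch: submit a trivial job through the EDG Resource Broker from the UI.
# The JDL content and file names are illustrative only.
import subprocess
import tempfile

JDL = """\
Executable    = "/bin/hostname";
StdOutput     = "hostname.out";
StdError      = "hostname.err";
OutputSandbox = {"hostname.out", "hostname.err"};
Requirements  = other.GlueCEPolicyMaxCPUTime > 60;
Rank          = -other.GlueCEStateEstimatedResponseTime;
"""

def submit(jdl_text: str) -> str:
    """Write the JDL to a temporary file and hand it to edg-job-submit."""
    with tempfile.NamedTemporaryFile("w", suffix=".jdl", delete=False) as f:
        f.write(jdl_text)
        jdl_path = f.name
    # On success edg-job-submit prints the job identifier (an https://... URL).
    result = subprocess.run(["edg-job-submit", jdl_path],
                            capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    print(submit(JDL))
```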

An OGSA/GT3 testbed (named 'Beryllium') was designed and set up on the basis of PCs located at CERN and SINP MSU, modelling a GT3-based Grid system.
Software was created for GENSER, the common library of MC generators.
A new project, MCDB (Monte Carlo Data Base), is proposed for LCG AA under Russian responsibility, as a common solution for storing and providing access across the LCG sites to samples of events at the partonic level.

The simplified schema of the Beryllium testbed (CERN-SINP). The resource broker plays the central role:
- accepts requests from the user;
- using the Information Service data, selects a suitable Computing Element;
- reserves the selected Computing Element;
- communicates to the user a "ticket" that allows job submission;
- maintains a list of all running jobs and receives confirmation messages on the ongoing processing from the CEs;
- at job end, updates the table of running jobs / CE status.
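A conceptual sketch of that broker loop follows. This is only a toy illustration of the listed responsibilities, not the actual Beryllium/GT3 code; the data structures and the ticket format are invented for the example.

```python
# Toy resource broker illustrating the responsibilities listed above:
# match a job against the information system, reserve a CE, issue a ticket,
# and track the job until completion. Purely illustrative.
import uuid
from dataclasses import dataclass, field

@dataclass
class ComputingElement:
    name: str
    free_cpus: int
    reserved: bool = False

@dataclass
class Broker:
    ces: list                                      # "information service" view of the CEs
    running: dict = field(default_factory=dict)    # ticket -> (job name, CE)

    def submit(self, job_name: str, cpus_needed: int) -> str:
        # Select a suitable, unreserved CE with enough free CPUs.
        for ce in self.ces:
            if not ce.reserved and ce.free_cpus >= cpus_needed:
                ce.reserved = True                 # reserve the CE
                ticket = str(uuid.uuid4())         # "ticket" returned to the user
                self.running[ticket] = (job_name, ce)
                return ticket
        raise RuntimeError("no suitable Computing Element found")

    def job_finished(self, ticket: str) -> None:
        # Confirmation from the CE at job end: update the job/CE table.
        job_name, ce = self.running.pop(ticket)
        ce.reserved = False

if __name__ == "__main__":
    broker = Broker([ComputingElement("cern-ce", 4), ComputingElement("sinp-ce", 2)])
    t = broker.submit("hello-grid", cpus_needed=2)
    print("submitted with ticket", t)
    broker.job_finished(t)
```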

Externally Funded LCG Personnel at CERN

EU-DataGrid
Russian institutes participated in the EU-DataGrid project in WP6 (Testbed and Demonstration) and WP8 (HEP Applications).
2001: Grid information service (GRIS-GIIS), DataGrid Certificate Authority (CA) and Registration Authority (RA). WP6 Testbed0 (spring-summer 2001): 2 sites. WP6 Testbed1 (autumn 2001): 4 active sites (SINP MSU, ITEP, JINR, IHEP) with significant resources (160 CPUs, 7.5 TB of disk).
2002: a new active Testbed1 site, PNPI. Testbed1 Virtual Organizations (VO): WP6, ITeam; WP8: CMS VO, ATLAS and ALICE VOs. WP8 CMS MC run (spring): ~1 TByte of data transferred to CERN and FNAL. Resource Broker (RB) experiment: SINP MSU + CERN + INFN. Metadispatcher (MD): collaboration with the Keldysh Institute of Applied Mathematics (Moscow) on algorithms for dispatching (scheduling) jobs in the DataGrid environment.

EDG software deployment at SINP MSU (example: CMS VO, 7 June 2002). SINP MSU site: CE lhc01.sinp.msu.ru, WN lhc02.sinp.msu.ru, SE lhc03.sinp.msu.ru, RB + Information Index lhc20.sinp.msu.ru, User Interface node lhc04.sinp.msu.ru; connected to CERN (lxshare0220.cern.ch) and Padova (grid011.pd.infn.it).

EGEE
Enabling Grids for e-Science in Europe (EGEE): an EU project approved to provide partial funding for the operation of a general e-Science grid in Europe, including the supply of suitable middleware. EGEE is proposed as a project funded by the European Union under contract IST. The budget is about 32 MEuro. EGEE provides funding for 70 partners, the large majority of which have strong HEP ties.
Russia: 8 institutes (SINP MSU, JINR, ITEP, IHEP, RRC KI, PNPI, KIAM RAS, IMPB RAS), budget 1 MEuro. Russian matching of the EC budget is in good shape (!)

EGEE Partner Federations: integrate regional Grid efforts.

EGEE Timeline

Distribution of Service Activities over Europe:
- Operations Management at CERN;
- Core Infrastructure Centres in the UK, France, Italy, Russia (PM12) and at CERN, responsible for managing the overall Grid infrastructure;
- Regional Operations Centres, responsible for coordinating regional resources, regional deployment and support of services.
Russia: CIC - SINP MSU, RRC KI; ROC - IHEP, PNPI, IMPB RAS; Dissemination & Outreach - JINR.

ICFA SCIC, February 2004: S.E. Europe and Russia are catching up; Latin America, the Middle East and China are keeping up; India and Africa are falling behind.

LHC Data Challenges. A typical example: transferring 100 GBytes of data from Moscow to CERN within one working day requires ~50 Mbps of bandwidth!
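A quick sketch of the arithmetic behind that figure. The 50% effective link utilisation is an assumption added here to reconcile the raw average rate with the ~50 Mbps quoted on the slide.

```python
# Bandwidth needed to move 100 GByte from Moscow to CERN in one working day.
# The link efficiency factor is an assumption, not a number from the slide.

DATA_GBYTE = 100.0
WORKING_DAY_S = 8 * 3600          # one working day of 8 hours
LINK_EFFICIENCY = 0.5             # assumed effective utilisation of the link

bits = DATA_GBYTE * 8e9           # GByte -> bits
raw_mbps = bits / WORKING_DAY_S / 1e6
needed_mbps = raw_mbps / LINK_EFFICIENCY

print(f"raw average rate: {raw_mbps:.0f} Mbps")                             # ~28 Mbps
print(f"provisioned bandwidth at 50% efficiency: {needed_mbps:.0f} Mbps")   # ~56 Mbps
```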

GLORIAD: 10 Gbps.
Regional connectivity for Russian HEP:
- Moscow: 1 Gbps
- IHEP: 8 Mbps (m/w); a 100 Mbps fibre-optic link is under construction (Q1-Q2 2004?)
- JINR: 45 Mbps; … Mbps (Q1-Q2 2004); … Gbps (…)
- INR RAS: 2 Mbps + 2x4 Mbps (m/w)
- BINP: 1 Mbps; 45 Mbps (2004?); …; GLORIAD
- PNPI: 512 Kbps (commodity Internet), and a 34 Mbps fibre-optic link, but (!) the budget covers only 2 Mbps
International connectivity for Russian HEP:
- USA: NaukaNET, 155 Mbps
- GEANT: 155 Mbps basic link, plus a 155 Mbps additional link for GRID projects
- Japan: through the USA by FastNET, 512 Kbps, Novosibirsk (BINP) - KEK (Belle)