
December 10, 1999: MONARC Plenary Meeting - Harvey Newman (CIT)
Phase 3 Letter of Intent (1/2)

- Short: N pages; may refer to MONARC Internal Notes to document progress
- Suggested format: similar to the PEP Extension
  - Introduction: deliverables are realistic technical options and the associated resource requirements for LHC Computing, to be presented to the experiments and CERN in support of Computing Model development for the Computing TDRs
  - Brief status; existing notes
  - Motivations for a Common Project --> Justification (1)
  - Goals and scope of the Extension --> Justification (2)
  - Schedule: the preliminary estimate is 12 months from the completion of Phase 2, which will occur with the submission of the final Phase 1+2 Report. The final report will contain a proposal for the Phase 3 milestones and a detailed schedule
    - Phase 3A: decision on which prototypes to build or exploit
      - MONARC/Experiments/Regional Centres working meeting
    - Phase 3B: specification of resources and prototype configurations
      - Setup of the simulation and prototype environment
    - Phase 3C: operation of the prototypes and of the simulation; analysis of results
    - Phase 3D: feedback; strategy optimization

December 10, 1999: MONARC Plenary Meeting - Harvey Newman (CIT)
Phase 3 Letter of Intent (2/2)

- Equipment needs (scale specified further in Phase 3A)
  - MONARC Sun E450 server upgrade
    - TB RAID array, GB memory upgrade
    - To act as a client to the system in CERN/IT, for distributed-system studies
  - Access to a substantial system in the CERN/IT infrastructure, consisting of a Linux farm and a Sun-based data server over Gigabit Ethernet
  - Access to a multi-terabyte robotic tape store
  - Non-blocking access to WAN links to some of the main potential Regional Centres (e.g. 10 Mbps reserved to Japan; some tens of Mbps to the US)
  - Temporary use of a large volume of tape media
- Relationship to other projects and groups
  - Work in collaboration with the CERN/IT groups involved in databases and large-scale data and processing services
  - Our role is to seek common elements that may be used effectively in the experiments' Computing Models
  - Computational Grid projects in the US; cooperate in upcoming EU Grid proposals
  - Other US nationally funded efforts with R&D components
- Submitted to Hans Hoffmann for information on our intention to continue
  - Copy to Manuel Delfino

December 10, 1999: MONARC Plenary Meeting - Harvey Newman (CIT)
Phase 3 LoI Status

- MONARC has met its milestones up to now
  - Progress Report
  - Talks in Marseilles: general + simulation
  - Testbed notes: 99/4, 99/6, Youhei's note --> MONARC number
  - Architecture group notes: 99/1-3
  - Simulation: appendix of the Progress Report
  - Short papers (titles) for CHEP 2000 by January 15

December 10, 1999: MONARC Plenary Meeting - Harvey Newman (CIT)
MONARC Phase 3: Justification (1)

General: TIMELINESS and USEFUL IMPACT

- Facilitate the efficient planning and design of mutually compatible site and network architectures and services
  - Among the experiments, the CERN Centre and the Regional Centres
- Provide modelling consultancy and service to the experiments and Centres
- Provide a core of advanced R&D activities, aimed at LHC computing system optimisation and production prototyping
- Take advantage of this year's work on distributed data-intensive computing for HENP in other "next generation" projects [*]
  - For example in the US: the "Particle Physics Data Grid" (PPDG) of DoE/NGI, plus the joint "GriPhyN" proposal on Computational Data Grids by ATLAS/CMS/LIGO/SDSS. Note the EU plans as well.

[*] See H. Newman,

December 10, 1999: MONARC Plenary Meeting - Harvey Newman (CIT)
MONARC Phase 3 Justification (2A)

More realistic Computing Model development (LHCb and ALICE notes)

- Confrontation of models with realistic prototypes
- At every stage: assess use cases based on actual simulation, reconstruction and physics analyses
  - Participate in the setup of the prototypes
  - We will further validate and develop the MONARC simulation system using the results of these use cases (positive feedback)
    - Continue to review the key inputs to the model (see the sketch after this list):
      - CPU times at the various phases
      - Data rate to storage
      - Tape storage: speed and I/O
- Employ the MONARC simulation and testbeds to study Computing Model variations, and suggest strategy improvements
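To make the role of these inputs concrete, the following is a minimal back-of-the-envelope sketch, not MONARC code, of how CPU time per event, event size and storage write rate might combine into a first-order throughput estimate for a Regional Centre farm; every parameter value (farm size, SPECint95 ratings, event size, write rate) is an invented placeholder.

    # Hypothetical sketch: combine the model inputs above into an events/day
    # estimate.  Every number is an invented placeholder, not a MONARC figure.
    SECONDS_PER_DAY = 86400.0

    def cpu_bound_events_per_day(n_cpus, si95_per_cpu, si95_sec_per_event):
        """Events/day the farm can process if CPU is the only limit."""
        return n_cpus * si95_per_cpu * SECONDS_PER_DAY / si95_sec_per_event

    def io_bound_events_per_day(event_size_mb, storage_write_mb_per_s):
        """Events/day the storage system can absorb if I/O is the only limit."""
        return storage_write_mb_per_s * SECONDS_PER_DAY / event_size_mb

    if __name__ == "__main__":
        cpu_limit = cpu_bound_events_per_day(n_cpus=200, si95_per_cpu=20,
                                             si95_sec_per_event=250)
        io_limit = io_bound_events_per_day(event_size_mb=1.0,
                                           storage_write_mb_per_s=30.0)
        print(f"CPU-bound limit:      {cpu_limit:12,.0f} events/day")
        print(f"Storage-bound limit:  {io_limit:12,.0f} events/day")
        print(f"Effective throughput: {min(cpu_limit, io_limit):12,.0f} events/day")

Checking which limit dominates, and how that changes as the measured inputs are updated, is the kind of sanity check against which the full simulation results can be compared.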

December 10, 1999: MONARC Plenary Meeting - Harvey Newman (CIT)
MONARC Phase 3 Justification (2B)

- Technology studies
  - Data model and data structures
    - Reclustering, restructuring; transport operations
    - Replication
    - Caching, migration (HMSM), etc. (see the cache sketch after this list)
  - Network
    - QoS mechanisms: identify which are important
  - Distributed-system resource management and query estimators (queue management and load balancing)
- Development of MONARC simulation visualization tools for interactive Computing Model analysis (forward reference)
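As an illustration of the caching and migration studies listed above, here is a small hypothetical sketch in Python (not part of the MONARC framework) that replays an invented access pattern against a fixed-size LRU disk cache in front of tape and reports the resulting tape recall rate; the access pattern, cache sizes and object counts are all assumptions.

    # Hypothetical caching/migration sketch: how often would a disk cache of a
    # given size have to recall object collections from tape?
    from collections import OrderedDict
    import random

    def lru_tape_recalls(accesses, cache_slots):
        """Count cache misses (tape recalls) for an LRU disk cache."""
        cache = OrderedDict()              # collection id -> None, by recency
        misses = 0
        for obj in accesses:
            if obj in cache:
                cache.move_to_end(obj)     # hit: refresh recency
            else:
                misses += 1                # miss: recall from tape
                if len(cache) >= cache_slots:
                    cache.popitem(last=False)   # evict least recently used
                cache[obj] = None
        return misses

    if __name__ == "__main__":
        random.seed(1)
        # Invented skewed pattern: a few "hot" collections dominate accesses.
        accesses = [int(random.paretovariate(1.2)) % 1000 for _ in range(50000)]
        for slots in (50, 200, 800):
            rate = lru_tape_recalls(accesses, slots) / len(accesses)
            print(f"{slots:4d} cached collections -> {rate:5.1%} tape recall rate")

The same skeleton can be re-run with different eviction or migration policies to compare them on an equal footing.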

December 10, 1999: MONARC Plenary Meeting - Harvey Newman (CIT)
MONARC Phase 3: Justification (3)

Meet near-term milestones for LHC computing

- For example, the CMS data handling milestone ORCA4 (March 2000): ~1 million-event fully simulated data sample(s)
  - Simulation of data access patterns, and of the mechanisms used to build and/or replicate compact object collections (a break-even sketch follows this list)
  - Integration of database and mass storage use (including a caching/migration strategy for limited disk space)
  - Other milestones will be detailed and/or brought forward to meet the actual needs of the HLT studies and of the TDRs for the Trigger, DAQ, Software and Computing, and Physics
- ATLAS Geant4 studies
- Event production and analysis must be spread amongst the Regional Centres and candidate centres
  - Learn about RC configurations, operations and network bandwidth by modelling real systems and the analyses actually run on them
  - Feed information from real operations back into the simulations
  - Use progressively more realistic models to develop future strategies
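For the replication mechanisms mentioned in the first sub-item, a hypothetical break-even helper (not a CMS or MONARC tool) is sketched below: it compares reading a compact collection repeatedly over the WAN with copying it once to a Regional Centre and reading it locally; the collection size, transfer rates and read count are invented.

    # Hypothetical replicate-vs-remote-read comparison; all rates in MB/s.
    def transfer_hours(size_gb, rate_mb_per_s):
        """Hours needed to move size_gb at a sustained rate_mb_per_s."""
        return size_gb * 1024.0 / rate_mb_per_s / 3600.0

    def replication_options(size_gb, wan_mb_per_s, local_mb_per_s, n_reads):
        """Total transfer hours for (WAN reads only, replicate once + local reads)."""
        remote_only = n_reads * transfer_hours(size_gb, wan_mb_per_s)
        replicate = (transfer_hours(size_gb, wan_mb_per_s)            # one copy
                     + n_reads * transfer_hours(size_gb, local_mb_per_s))
        return remote_only, replicate

    if __name__ == "__main__":
        remote, local = replication_options(size_gb=100.0, wan_mb_per_s=2.0,
                                            local_mb_per_s=30.0, n_reads=20)
        print(f"20 WAN reads:   {remote:6.1f} h")
        print(f"Replicate once: {local:6.1f} h (copy + local reads)")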

December 10, 1999: MONARC Plenary Meeting - Harvey Newman (CIT)
MONARC: Computing Model Constraints Drive Strategies

- Latencies and queuing delays
  - Resource allocations and/or advance reservations
  - Time to swap disk space in/out
  - Tape handling delays: get a drive, find a volume, mount the volume, locate the file, read or write
  - Interaction with local batch and device queues
  - Serial operations: tape/disk, cross-network, disk-disk and/or disk-tape after a network transfer
- Networks
  - Usable fraction of bandwidth (congestion, overheads): 30-60% (?); fraction available for event-data transfers: 15-30% (?) (a worked example follows this list)
  - Nonlinear throughput degradation on loaded or poorly configured network paths
- Inter-facility policies
  - Resources available to remote users
  - Access to some resources in quasi-real time
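To show how the network and tape constraints above compound, here is a small worked example in Python; the 30% usable-bandwidth and 15% event-data fractions are the pessimistic ends of the ranges quoted on this slide, while the 155 Mbps link speed and the individual tape delays are invented illustrative values.

    # Worked example of compounding constraints; the link speed and tape delays
    # are assumptions, the fractions come from the ranges on this slide.
    def effective_event_bandwidth_mbps(link_mbps, usable_frac, event_frac):
        """Bandwidth left for event data after overheads and sharing."""
        return link_mbps * usable_frac * event_frac

    def tape_recall_seconds(get_drive, mount, locate, read):
        """Serial tape delays: each step must complete before the next starts."""
        return get_drive + mount + locate + read

    if __name__ == "__main__":
        eff = effective_event_bandwidth_mbps(link_mbps=155,
                                             usable_frac=0.30, event_frac=0.15)
        print(f"Event-data share of a 155 Mbps link: {eff:.1f} Mbps "
              f"(~{eff / 8:.2f} MB/s)")
        delay = tape_recall_seconds(get_drive=60, mount=90, locate=45, read=30)
        print(f"One serial tape recall: {delay} s")

Even before any queuing is modelled, the pessimistic case leaves only a few Mbps of a nominal wide-area link for event data, which is exactly the kind of effect the simulation has to capture.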