MONARC Meeting, CERN, July 26, 1999

MONARC July 26 Agenda
- Introduction (HN, LP), 10'
- Discussion of results from the RD45 Workshop; items most relevant to MONARC (Eva Arderiu, Youhei Morita), 25'
- Validation Procedure (K. Sliwa), 15'
- Recent Objectivity tests and implications for the validation milestone (Youhei Morita, 15'; M. Sgaravatto, 15')
- Strawman Computing Facility for One Large Experiment at CERN (Les Robertson), 30'
- Tier1 Regional Centre Facility (Irwin Gaines), 20'
- Items from the WG Chairs, 15'
- Preparations and Policy for the Marseilles Meeting, Worldwide Computing Session (HN, LP), 20'
- AOB; adjourn by 19:30 or earlier
- BREAK, 15'
- Steering Committee (adjourns by 21:00 or earlier)

MONARC Phase 1 and 2 Possible Deliverables
- Summer 1999: Benchmark test validating the simulation
- Fall 1999: A Baseline Model representing a possible (somewhat simplified) solution for LHC Computing
  - Baseline numbers for a set of system and analysis-process parameters
  - Reasonable "ranges" of parameters
    - "Derivatives": how the effectiveness depends on some of the more sensitive parameters (a toy sensitivity sketch follows this list)
  - Agreement of the experiments on the reasonableness of the Baseline Model
  - Progress towards the Baseline shown at the Marseilles LCB Meeting at the end of September
- Chapter on Computing Models in the CMS and ATLAS Computing Progress Reports
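The "derivatives" deliverable asks how a model's effectiveness responds to its most sensitive parameters. Purely as an illustration (the real MONARC studies ran in the Java-based simulation framework, and the parameter names, values, and effectiveness model below are hypothetical placeholders), a finite-difference sensitivity sweep might look like this:

```python
# Minimal sketch of a "derivatives" study: estimate how a model's
# effectiveness changes with each baseline parameter.  The parameter
# names, baseline values, and effectiveness() model are illustrative
# assumptions, not MONARC's actual figures.

baseline = {
    "wan_bandwidth_mbps": 100.0,   # CERN <-> Tier1 link (assumed)
    "cpu_per_job_si95": 50.0,      # CPU power available per job (assumed)
    "esd_event_size_kb": 100.0,    # ESD event size (assumed)
}

def effectiveness(params):
    """Toy figure of merit: relative throughput for a fixed workload.
    A real study would replace this with a full simulation run."""
    transfer = params["wan_bandwidth_mbps"] / params["esd_event_size_kb"]
    compute = params["cpu_per_job_si95"] / 10.0
    return min(transfer, compute)  # throughput limited by the slower stage

def derivative(param, step=0.05):
    """Relative change in effectiveness per relative change in one parameter."""
    up = dict(baseline, **{param: baseline[param] * (1 + step)})
    down = dict(baseline, **{param: baseline[param] * (1 - step)})
    return (effectiveness(up) - effectiveness(down)) / (2 * step * effectiveness(baseline))

for p in baseline:
    print(f"{p}: sensitivity = {derivative(p):+.2f}")
```

A sensitivity near zero flags a parameter the baseline can quote loosely; a sensitivity near one flags a parameter that needs a defensible range.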

MONARC Phase 2 Milestones
- July 1999: Complete Phase 1; begin the second cycle of simulations with more refined models

MONARC Issues to Discuss at this Meeting
- Basic parameters: starting with the ESD size
- Architecture: LAN/SAN only, or with an SMP "I/O server"
- Simulation Modelers' Team
  - CERN/US/Japan/Italy/Russia coordination
  - Set-up and maintenance of the simulation software base
  - Division of labor for implementing "Objects" and performance/load characteristics
  - Release tool and support for simulation software releases
  - Coordination of runs and reporting of results
    - Repository of models and results on the Web

Projects Aimed at LHC Data Analysis + Other

APPROVED PROJECTS
- PPDG: Particle Physics Data Grid [DoE/NGI]: HENP labs, ANL (CS), Caltech, UWisc (CS), SDSC
- ALPhAD: Access to Large Physics and Astronomy Databases [NSF/KDI]: Johns Hopkins, Caltech and FNAL (SDSS)

PROPOSAL IN PROGRESS
- HENPVDS: HENP Virtual Data System [DoE/SSI ?]: US ATLAS / US CMS / LIGO proposal
- GRAND: Grid Analysis of Networked Data, or
- I(O)DA(LL): Internetworked (Object) Data Analysis (for LHC and LIGO), or ...

Additional Projects or Services
- CLIPPER, NILE, I2-DSI, Condor, GLOBUS; Data Grids

Architectural Sketch: One Major LHC Experiment, at CERN
See

MONARC Analysis Process WG: A "Short" List of Upcoming Issues
- Review event sizes: how much data is stored, and how much is accessed for different analyses?
- Review CPU times: tracking at full luminosity
- How much reprocessing, and where (sharing scheme)?
- Priorities, schedules and policies
  - Production vs. analysis-group vs. individual activities
  - Allowed percentage of access to higher data tiers (TAG / Physics Objects / Reconstructed / RAW)
- Including MC production; simulated data storage and access
- Understanding how to manage persistent data: e.g. storage / migration / transport / re-compute strategies
- Deriving a methodology for Model testing and optimisation
  - Metrics for evaluating the global efficiency of a Model: cost vs. throughput; turnaround; reliability of data access (a toy scoring sketch follows this list)
- Determining the role of institutes' workgroup servers (Tier3) and desktops (Tier4) in the Regional Centre hierarchy
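The metrics item above calls for a single figure that trades off cost, throughput, turnaround, and reliability when comparing candidate models. A minimal sketch of such a score, with entirely assumed weights and example numbers rather than anything fixed by the WG, could be:

```python
# Toy global-efficiency metric for comparing candidate computing models.
# The functional form, weights, and example inputs are illustrative
# assumptions; MONARC's actual metric would come out of the WG work.

def model_score(cost_chf, events_per_day, median_turnaround_h, data_access_success):
    """Higher is better: throughput per unit cost, discounted by slow
    turnaround and unreliable data access."""
    throughput_per_cost = events_per_day / cost_chf          # events/day/CHF
    turnaround_penalty = 1.0 / (1.0 + median_turnaround_h)   # favour short turnaround
    return throughput_per_cost * turnaround_penalty * data_access_success

# Example comparison of two hypothetical models:
centralised = model_score(cost_chf=20e6, events_per_day=1e8,
                          median_turnaround_h=12.0, data_access_success=0.98)
distributed = model_score(cost_chf=25e6, events_per_day=1e8,
                          median_turnaround_h=4.0, data_access_success=0.95)
print(f"centralised: {centralised:.3f}  distributed: {distributed:.3f}")
```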

MONARC Testbeds WG: Some Parameters to Be Measured, Installed in the MONARC Simulation Models, and Used in the First Round of Model Validation

Isolation of "key" parameters via studies of:
- Objectivity AMS response-time function, and its dependence on
  - Object clustering, page size, data class hierarchy and access pattern
  - Mirroring and caching with the DRO option
- Scalability of the system under "stress"
  - Performance as a function of the number of jobs, relative to the single-job performance (a measurement sketch follows this list)
- Performance and bottlenecks for a variety of data access patterns
  - Frequency of following TAG → AOD, AOD → ESD, and ESD → RAW associations
  - Data volume accessed remotely
    - Fraction on tape, and on disk
    - As a function of network bandwidth; use of QoS
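The "scalability under stress" measurement amounts to comparing the mean job time at N concurrent clients with the single-job case. A hedged sketch of how such a test could be driven and summarised is below; the ./run_query client and its options are placeholders, not the actual Objectivity/AMS test harness used by the Testbeds WG.

```python
# Sketch of a scalability test: launch N identical client jobs against the
# AMS server and report the slowdown relative to the single-job case.
# "./run_query" is a hypothetical stand-in for the real test client.

import subprocess
import time
from concurrent.futures import ThreadPoolExecutor

def one_job():
    """Run a single client job and return its wall-clock time in seconds."""
    start = time.time()
    subprocess.run(["./run_query", "--pattern", "tag-to-aod"], check=True)
    return time.time() - start

def stress(n_jobs):
    """Run n_jobs concurrent clients; return the mean job time."""
    with ThreadPoolExecutor(max_workers=n_jobs) as pool:
        times = list(pool.map(lambda _: one_job(), range(n_jobs)))
    return sum(times) / len(times)

single = stress(1)
for n in (2, 4, 8, 16, 32):
    mean = stress(n)
    print(f"{n:3d} jobs: mean {mean:6.1f}s  slowdown x{mean / single:.2f} vs single job")
```

Repeating the sweep for different access patterns (and with the data on disk vs. tape) yields the curves that feed the simulation models.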

MONARC Strategy and Tools for Phase 2

Strategy: vary system capacity and network performance parameters over a wide range
- Avoid complex, multi-step decision processes that could require protracted study
  - Keep these for a possible Phase 3
- Majority of the workload satisfied in an acceptable time
  - Up to minutes for interactive queries, up to hours for short jobs, up to a few days for the whole workload
- Determine requirements "baselines" and/or flaws in certain Analysis Processes in this way

Tools and operations to be designed in Phase 2:
- Query estimators
- Affinity evaluators, to determine the proximity of multiple requests in space or time (a sketch follows this list)
- Strategic algorithms for caching, reclustering, mirroring, or pre-emptively moving data (or jobs)
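One possible reading of the "affinity evaluator" tool: score pairs of pending requests by how much data they share, where that data lives, and how close in time they arrive, so that high-affinity requests can be batched or co-scheduled. The request fields, weights, and scoring function below are assumptions for illustration, not a MONARC design.

```python
# Minimal affinity evaluator: score pairs of pending requests by overlap in
# the datasets they touch, the site holding the data, and submission time.
# Field names and weights are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Request:
    user: str
    site: str                                   # regional centre holding the data
    datasets: set = field(default_factory=set)  # dataset identifiers touched
    submit_time_s: float = 0.0

def affinity(a: Request, b: Request, time_window_s: float = 3600.0) -> float:
    """Return a score in [0, 1]; higher means the requests should be co-scheduled."""
    if not a.datasets or not b.datasets:
        return 0.0
    data_overlap = len(a.datasets & b.datasets) / len(a.datasets | b.datasets)
    same_site = 1.0 if a.site == b.site else 0.0
    time_closeness = max(0.0, 1.0 - abs(a.submit_time_s - b.submit_time_s) / time_window_s)
    return 0.5 * data_overlap + 0.3 * same_site + 0.2 * time_closeness

r1 = Request("alice", "CERN", {"esd_run_42", "aod_run_42"}, submit_time_s=0.0)
r2 = Request("bob", "CERN", {"aod_run_42"}, submit_time_s=600.0)
print(f"affinity = {affinity(r1, r2):.2f}")   # high score: batch these together
```

The same score could then drive the caching and reclustering algorithms listed above, e.g. by replicating a dataset to the site where most high-affinity requests originate.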

MONARC Possible Phase 3: Timeliness and Useful Impact
- Facilitate the efficient planning and design of mutually compatible site and network architectures and services
  - Among the experiments, the CERN Centre and the Regional Centres
- Provide modelling consultancy and service to the experiments and Centres
- Provide a core of advanced R&D activities, aimed at LHC computing system optimisation and production prototyping
- Take advantage of work on distributed data-intensive computing for HENP this year in other "next generation" projects [*]
  - PPDG, ALPhAD, HENPVDS, and our project and joint proposal to NSF by ATLAS/CMS/LIGO in the US

[*] See H. Newman,

MONARC Phase 3 Possible Technical Goal: System Optimisation

Maximise throughput and/or reduce long turnaround
- Include long and potentially complex decision processes in the studies and simulations
  - Potential for substantial gains in the work performed or resources saved

Phase 3 system design elements (a fall-back sketch follows this list):
- RESILIENCE, resulting from flexible management of each data transaction, especially over WANs
- FAULT TOLERANCE, resulting from robust fall-back strategies to recover from abnormal conditions
- SYSTEM STATE & PERFORMANCE TRACKING, to match and co-schedule requests and resources, and to detect or predict faults

Synergy with PPDG and other advanced R&D projects. Potential importance for scientific research and industry: simulation of distributed systems for data-intensive computing.
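As one concrete reading of the resilience and fault-tolerance elements, a data-access layer can retry a failed transfer against replicas at other centres before falling back to recomputing the object locally. The sketch below is only an assumption about how such a fall-back chain might be expressed; the replica list and the fetch/recompute helpers are hypothetical placeholders.

```python
# Sketch of a resilient data-access strategy: try replicas in order of
# preference, back off between attempts, and fall back to recomputing the
# object locally if every replica fails.  fetch() and recompute() stand in
# for the real data-access and reconstruction calls.

import time

REPLICA_SITES = ["CERN", "FNAL", "INFN-CNAF"]   # assumed replica locations

class TransferError(Exception):
    pass

def fetch(site, object_id):
    """Placeholder: fetch an event collection from one site, or raise."""
    raise TransferError(f"{site} unreachable")   # simulate a WAN failure

def recompute(object_id):
    """Placeholder: rebuild the collection from RAW/ESD held locally."""
    return f"{object_id} (recomputed locally)"

def resilient_get(object_id, retries_per_site=2, backoff_s=0.5):
    for site in REPLICA_SITES:
        for attempt in range(retries_per_site):
            try:
                return fetch(site, object_id)
            except TransferError:
                time.sleep(backoff_s * (attempt + 1))   # simple linear back-off
    # Robust fall-back: recover from abnormal conditions by recomputing.
    return recompute(object_id)

print(resilient_get("aod_run_42_coll_7"))
```

System state and performance tracking would refine the same loop, e.g. by reordering REPLICA_SITES according to measured link quality before each transaction.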