Slide 1: Phase 3 Letter of Intent (1/2)
December 10, 1999: MONARC Plenary Meeting, Harvey Newman (CIT)

Short: N pages
- May refer to MONARC internal notes to document progress

Suggested format: similar to the PEP extension
- Introduction: the deliverables are realistic technical options and the associated resource requirements for LHC Computing, to be presented to the experiments and CERN in support of Computing Model development for the Computing TDRs
- Brief status; existing notes
- Motivations for a common project --> Justification (1)
- Goals and scope of the extension --> Justification (2)
- Schedule: the preliminary estimate is 12 months from the completion of Phase 2, which will occur with the submission of the final Phase 1+2 report. The final report will contain a proposal for the Phase 3 milestones and a detailed schedule:
  - Phase 3A: decision on which prototypes to build or exploit
    - MONARC/Experiments/Regional Centres working meeting
  - Phase 3B: specification of resources and prototype configurations
    - Setup of the simulation and prototype environment
  - Phase 3C: operation of the prototypes and of the simulation; analysis of results
  - Phase 3D: feedback; strategy optimization
Slide 2: Phase 3 Letter of Intent (2/2)

Equipment needs (scale to be specified further in Phase 3A)
- MONARC Sun E450 server upgrade
  - TB RAID array, GB memory upgrade
  - To act as a client to the system in CERN/IT, for distributed-system studies
- Access to a substantial system in the CERN/IT infrastructure, consisting of a Linux farm and a Sun-based data server over Gigabit Ethernet
- Access to a multi-terabyte robotic tape store
- Non-blocking access to WAN links to some of the main potential Regional Centres (e.g. 10 Mbps reserved to Japan; some tens of Mbps to the US); a rough transfer-time estimate follows below
- Temporary use of a large volume of tape media

Relationship to other projects and groups
- Work in collaboration with the CERN/IT groups involved in databases and large-scale data and processing services
- Our role is to seek common elements that may be used effectively in the experiments' Computing Models
- Computational Grid projects in the US; cooperate in upcoming EU Grid proposals
- Other US nationally funded efforts with R&D components

Submitted to Hans Hoffmann for information on our intention to continue
- Copy to Manuel Delfino
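To put the quoted WAN figures in perspective, a back-of-the-envelope transfer-time estimate is sketched below. The link speeds are those quoted above; the 100 GB sample size and the 40% usable-bandwidth fraction are illustrative assumptions, not figures from the LoI.

```python
# Rough transfer-time estimate for replicating a data sample to a
# Regional Centre over the WAN links quoted above. Sample size and
# usable fraction are illustrative assumptions only.

def transfer_days(data_gb, link_mbps, usable_fraction=0.4):
    """Days needed to move data_gb gigabytes over a link of link_mbps,
    of which only usable_fraction is effectively available."""
    data_bits = data_gb * 8e9
    effective_bps = link_mbps * 1e6 * usable_fraction
    return data_bits / effective_bps / 86400

for name, mbps in [("Japan (10 Mbps reserved)", 10), ("US (30 Mbps)", 30)]:
    print(f"{name}: {transfer_days(100, mbps):.1f} days for a 100 GB sample")
```

Even a modest 100 GB sample takes on the order of days over a 10 Mbps link, which is why non-blocking (reserved) access to these links is listed as an equipment need.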
Slide 3: Phase 3 LoI Status

MONARC has met its milestones up until now
- Progress report
- Talks in Marseilles: general + simulation
- Testbed notes: 99/4, 99/6, Youhei's note --> MONARC number
- Architecture group notes: 99/1-3
- Simulation: appendix of the progress report
- Short papers (titles) for CHEP 2000 by January 15
Slide 4: MONARC Phase 3: Justification (1)

General: TIMELINESS and USEFUL IMPACT
- Facilitate the efficient planning and design of mutually compatible site and network architectures and services
  - Among the experiments, the CERN Centre and the Regional Centres
- Provide modelling consultancy and service to the experiments and Centres
- Provide a core of advanced R&D activities aimed at LHC computing system optimisation and production prototyping
- Take advantage of work on distributed data-intensive computing for HENP this year in other "next generation" projects [*]
  - For example in the US: the "Particle Physics Data Grid" (PPDG) of DoE/NGI, plus the joint "GriPhyN" proposal on Computational Data Grids by ATLAS/CMS/LIGO/SDSS. Note EU plans as well.

[*] See H. Newman, http://www.cern.ch/MONARC/progress_report/longc7.html
Slide 5: MONARC Phase 3: Justification (2A)

More realistic Computing Model development (LHCb and ALICE notes)

Confrontation of models with realistic prototypes; at every stage, assess use cases based on actual simulation, reconstruction and physics analyses
- Participate in the setup of the prototypes
- Further validate and develop the MONARC simulation system using the results of these use cases (positive feedback)
  - Continue to review key inputs to the model (a small illustrative sketch follows this slide):
    - CPU times at the various phases
    - Data rate to storage
    - Tape storage: speed and I/O

Employ the MONARC simulation and testbeds to study Computing Model variations, and suggest strategy improvements
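As an illustration of what "key inputs to the model" look like in practice, the sketch below parameterises a single processing phase by CPU time per event and data written per event, and scales it to a yearly load. All names and numbers are hypothetical placeholders, not MONARC parameters.

```python
# Minimal sketch of "key inputs" to a computing-model estimate.
# All values are illustrative placeholders, not MONARC inputs.

from dataclasses import dataclass

@dataclass
class PhaseInput:
    cpu_sec_per_event: float    # CPU time per event for this phase
    output_kb_per_event: float  # data written to storage per event

def yearly_load(events_per_year, phase):
    """Return (CPU-seconds, TB written to storage) for one phase."""
    cpu = events_per_year * phase.cpu_sec_per_event
    tb = events_per_year * phase.output_kb_per_event / 1e9
    return cpu, tb

reco = PhaseInput(cpu_sec_per_event=250.0, output_kb_per_event=100.0)
cpu_s, storage_tb = yearly_load(1e9, reco)   # 10^9 events/year, assumed
print(f"CPU: {cpu_s/3.15e7:.0f} CPU-years, storage: {storage_tb:.0f} TB")
```

Reviewing inputs of this kind (and confronting them with measurements from the prototypes) is what keeps the simulated Computing Models tied to reality.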
Slide 6: MONARC Phase 3: Justification (2B)

Technology studies
- Data model
- Data structures
  - Reclustering, restructuring; transport operations
  - Replication
  - Caching, migration (HMSM), etc.
- Network
  - QoS mechanisms: identify which are important
- Distributed system resource management and query estimators
  - (Queue management and load balancing; toy sketch below)

Development of MONARC simulation visualization tools for interactive Computing Model analysis (forward reference)
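To make the queue-management and load-balancing item concrete, here is a minimal sketch of the kind of policy a distributed-system resource manager might apply: each job is dispatched to the site with the least outstanding work. This is an illustrative toy under assumed inputs, not the MONARC resource-management design.

```python
# Toy load balancer: dispatch each job to the site with the least
# pending work. Illustrative only; not the MONARC design.
import heapq

def dispatch(jobs, sites):
    """jobs: list of (job_id, cpu_hours); sites: list of site names.
    Returns {site: [job_ids]} using a least-loaded-first policy."""
    heap = [(0.0, s) for s in sites]          # (pending cpu-hours, site)
    heapq.heapify(heap)
    assignment = {s: [] for s in sites}
    for job_id, cost in sorted(jobs, key=lambda j: -j[1]):  # big jobs first
        load, site = heapq.heappop(heap)
        assignment[site].append(job_id)
        heapq.heappush(heap, (load + cost, site))
    return assignment

print(dispatch([("a", 10), ("b", 4), ("c", 7)], ["CERN", "FNAL", "KEK"]))
```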
Slide 7: MONARC Phase 3: Justification (3)

Meet near-term milestones for LHC Computing

For example, CMS data handling milestones: ORCA4, March 2000: ~1 million-event fully simulated data sample(s)
- Simulation of data access patterns, and of the mechanisms used to build and/or replicate compact object collections
- Integration of database and mass storage use, including a caching/migration strategy for limited disk space (illustrative sketch below)
- Other milestones will be detailed, and/or brought forward, to meet the actual needs of the HLT studies and of the TDRs for the Trigger, DAQ, Software and Computing, and Physics

ATLAS Geant4 studies: event production and analysis must be spread amongst the regional centres and candidate centres
- Learn about RC configurations, operations and network bandwidth by modelling real systems and the analyses actually run on them
- Feed information from real operations back into the simulations
- Use progressively more realistic models to develop future strategies
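As an illustration of the "caching/migration strategy for limited disk space" item, the sketch below keeps a fixed-size disk cache in front of a tape store and evicts the least recently used object collections when the disk budget is exceeded. The sizes and the policy are assumptions for illustration, not the ORCA4 design.

```python
# Toy disk cache in front of a tape store: least-recently-used eviction
# when the disk budget is exceeded. Sizes and policy are illustrative.
from collections import OrderedDict

class DiskCache:
    def __init__(self, capacity_gb):
        self.capacity = capacity_gb
        self.used = 0.0
        self.objects = OrderedDict()   # name -> size_gb, ordered by recency

    def access(self, name, size_gb):
        """Return 'disk' on a cache hit, or 'tape' when the collection
        must be staged in (possibly evicting older collections first)."""
        if name in self.objects:
            self.objects.move_to_end(name)
            return "disk"
        while self.used + size_gb > self.capacity and self.objects:
            _, evicted_size = self.objects.popitem(last=False)  # drop oldest
            self.used -= evicted_size
        self.objects[name] = size_gb
        self.used += size_gb
        return "tape"

cache = DiskCache(capacity_gb=100)
for coll in ["jets", "muons", "jets", "tracks"]:
    print(coll, cache.access(coll, 40))
```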
Slide 8: MONARC: Computing Model Constraints Drive Strategies

Latencies and queuing delays
- Resource allocations and/or advance reservations
- Time to swap disk space in/out
- Tape handling delays: get a drive, find a volume, mount a volume, locate the file, read or write
- Interaction with local batch and device queues
- Serial operations: tape/disk, cross-network, disk-disk and/or disk-tape after a network transfer

Networks
- Usable fraction of bandwidth (congestion, overheads): 30-60% (?); fraction for event-data transfers: 15-30% (?) (worked example below)
- Nonlinear throughput degradation on loaded or poorly configured network paths

Inter-facility policies
- Resources available to remote users
- Access to some resources in quasi-real time
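A short worked example of how those bandwidth fractions constrain a computing model: the usable and event-data fractions are the ones quoted above, while the link speed and the conversion to TB/day are illustrative assumptions.

```python
# Effective event-data throughput on a shared WAN path, using the
# fractions quoted above. The link speed is an assumed example value.

link_mbps = 622.0          # assumed OC-12-class link, for illustration
usable_fraction = 0.45     # 30-60% usable after congestion/overheads
event_fraction = 0.20      # 15-30% of the link left for event data

effective_mbps = link_mbps * usable_fraction * event_fraction
tb_per_day = effective_mbps * 1e6 * 86400 / 8 / 1e12

print(f"Effective event-data rate: {effective_mbps:.0f} Mbps "
      f"(~{tb_per_day:.1f} TB/day)")
```

Even a nominally large link delivers well under 1 TB/day of event data once these fractions are applied, which is why the constraints above drive the replication and scheduling strategies rather than the raw link speed.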