MONARC: results and open issues. Laura Perini, Milano.

MONARC: results and open issues
Laura Perini, Milano
Lund, 16 March 2000

Layout of the talk (most material from Irwin Gaines' talk at CHEP 2000)
- The basic goals and structure of the project
- The Regional Centers: motivation, characteristics, functions
- Some results from the simulations
- The need for more realistic "implementation oriented" models: Phase-3
- Relations with GRID
Status of the project: the Phase-3 LoI was presented in January, the Phase-2 Final Report is to be published next week; milestones and basic goals have been met.

MONARC
A joint project (the LHC experiments and CERN/IT) to understand the issues associated with distributed data access and analysis for the LHC:
- Examine the distributed data plans of current and near-future experiments
- Determine the characteristics and requirements for LHC regional centers
- Understand the details of the analysis process and the data access needs for LHC data
- Measure critical parameters characterizing distributed architectures, especially database and network issues
- Create modeling and simulation tools
- Simulate a variety of models to understand the constraints on architectures

MONARC: Models Of Networked Analysis At Regional Centres
Caltech, CERN, FNAL, Heidelberg, INFN, Helsinki, KEK, Lyon, Marseilles, Munich, Orsay, Oxford, RAL, Tufts, ...
Goals:
- Specify the main parameters characterizing the Model's performance: throughputs, latencies
- Determine the classes of Computing Models feasible for LHC (matched to network capacity and data handling resources)
- Develop "Baseline Models" in the "feasible" category
- Verify the resource requirement baselines (computing, data handling, networks)
Corollaries:
- Define the Analysis Process
- Define Regional Center Architectures
- Provide guidelines for the final Models
[Diagram: a hierarchy of desktops, university centres (n x 10^6 MIPS, ~TB tape robots), large centres such as FNAL (~110 TB robot) and CERN (n x 10^7 MIPS, ~PB robot), connected by 622 Mbit/s and N x 622 Mbit/s links]

Working Groups
- Architecture WG: baseline architecture for regional centres, technology tracking, survey of the computing models of current HENP experiments
- Analysis Model WG: evaluation of the LHC data analysis model and use cases
- Simulation WG: develop a simulation tool set for performance evaluation of the computing models
- Testbed WG: evaluate the performance of ODBMS and networks in the distributed environment

General need for distributed data access and analysis
Potential problems of a single centralized computing center include:
- the scale of the LHC experiments: the difficulty of accumulating and managing all resources at one location
- the geographic spread of the LHC experiments: providing equivalent, location-independent access to data for physicists
- help desk, support and consulting in the same time zone
- the cost of the LHC experiments: optimizing the use of resources located worldwide

Motivations for Regional Centers
A distributed computing architecture based on regional centers offers:
- a way of utilizing the expertise and resources residing in computing centers all over the world
- local consulting and support
- maximizing the intellectual contribution of physicists all over the world without requiring their physical presence at CERN
- acknowledgement of the possible limitations of network bandwidth
- the freedom for people to choose how they analyze data based on the availability or proximity of various resources such as CPU, data, or network bandwidth

Future Experiment Survey: analysis/results
- The previous survey already showed many sites contributing to Monte Carlo generation; this is now the norm
- New experiments are trying to use the Regional Center concept:
  - BaBar has Regional Centers at IN2P3 and RAL, and a smaller one in Rome
  - STAR has a Regional Center at LBL/NERSC
  - CDF and D0 offsite institutions are paying more attention as the run gets closer

Future Experiment Survey: other observations/requirements
- The last survey pointed out the following requirements for RCs:
  - 24x7 support
  - a software development team
  - a diverse body of users
  - good, clear documentation of all software and software tools
- Requirements for the central site (i.e. CERN):
  - a central code repository, easy to use and easily accessible for remote sites
  - be "sensitive" to remote sites in database handling, raw data handling and machine flavours
  - provide good, clear documentation of all software and software tools
- The experiments in this survey achieving the most in distributed computing are following these guidelines

Tier 0: CERN
Tier 1: National "Regional" Center
Tier 2: Regional Center
Tier 3: Institute Workgroup Server
Tier 4: Individual Desktop
(5 levels in total)

[Diagram: "CMS Offline Farm at CERN circa 2006" (lmr, for the MONARC study, April 1999). About 1400 processor boxes organized in 160 clusters and 40 sub-farms, 5400 disks in 340 arrays, roughly 100 tape drives, connected through a farm network and a storage network with LAN-SAN and LAN-WAN routers. Indicative bandwidths range from 0.8 Gbps (DAQ and tape links) and a few Gbps (WAN and disk links) up to hundreds of Gbps aggregate inside the farm and storage networks; the starred figures assume all disk and tape traffic stays on the storage network, and double if it all goes through the LAN-SAN routers.]


Processor cluster (lmr, for the MONARC study, April 1999)
- Basic box: four 100 SI95 processors, standard network connection (~2 Gbps); 15% of the systems are configured as I/O servers (disk server, disk-tape mover, Objectivity AMS, ...) with an additional connection to the storage network
- Cluster: 9 basic boxes with a network switch (<10 Gbps)
- Sub-farm: 4 clusters with a second-level network switch (<50 Gbps); one sub-farm fits in one rack (36 boxes, 144 CPUs, 5 m2)
Cluster and sub-farm sizing is adjusted to fit conveniently the capabilities of the network switch, racking and power distribution components.
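The composition above multiplies out as follows. A minimal sketch: the per-unit numbers come from this slide, while the total of 40 sub-farms is an assumption taken from the farm diagram above for illustration.

```python
# Back-of-envelope check of the farm composition described above.
# Per-unit numbers are from this slide; the 40 sub-farm total is taken from
# the "CMS Offline Farm at CERN circa 2006" diagram and is illustrative only.

CPUS_PER_BOX = 4            # four processors per basic box
SI95_PER_CPU = 100          # 100 SI95 per processor
BOXES_PER_CLUSTER = 9
CLUSTERS_PER_SUBFARM = 4
SUBFARMS = 40               # assumption taken from the farm diagram

boxes_per_subfarm = BOXES_PER_CLUSTER * CLUSTERS_PER_SUBFARM     # 36 boxes
cpus_per_subfarm = boxes_per_subfarm * CPUS_PER_BOX              # 144 CPUs
si95_per_subfarm = cpus_per_subfarm * SI95_PER_CPU               # 14,400 SI95

total_boxes = SUBFARMS * boxes_per_subfarm                       # 1,440 boxes
total_si95 = SUBFARMS * si95_per_subfarm                         # ~576 kSI95

print(f"sub-farm: {boxes_per_subfarm} boxes, {cpus_per_subfarm} CPUs, "
      f"{si95_per_subfarm} SI95")
print(f"farm:     {total_boxes} boxes, {total_si95 / 1000:.0f} kSI95")
```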

Regional Centers
Regional Centers will:
- Provide all the technical services and data services required to do the analysis
- Maintain all (or a large fraction of) the processed analysis data; possibly only large subsets based on physics channels; maintain a fixed fraction of the fully reconstructed and raw data
- Cache or mirror the calibration constants
- Maintain excellent network connectivity to CERN and excellent connectivity to users in the region; data transfer over the network is preferred for all transactions, but transfer of very large datasets on removable data volumes is not ruled out
- Share/develop common maintenance, validation and production software with CERN and the collaboration
- Provide services to physicists in the region, contribute a fair share to post-reconstruction processing and data analysis, collaborate with other RCs and CERN on common projects, and provide services to members of other regions on a best-effort basis to further the science of the experiment
- Provide support services, training, documentation and troubleshooting to RC and remote users in the region

[Diagram: Regional Centre functional architecture. Data import and export over the network from CERN and from Tier 2 and simulation centres; mass storage and disk servers, database servers and tapes; production reconstruction (Raw/Sim --> ESD; scheduled, predictable; experiment/physics groups); production analysis (ESD --> AOD, AOD --> DPD; scheduled; physics groups); individual analysis (AOD --> DPD and plots; chaotic; physicists); support services (info, code and web servers, telepresence servers, training, consulting, help desk); physics software development, R&D systems and testbeds; connections to desktops, Tier 2 centres, local institutes and CERN.]

The same Regional Centre diagram, annotated with indicative data volumes and rates.
Data input rate from CERN:
- Raw data (5%): 50 TB/yr
- ESD data (50%): 50 TB/yr
- AOD data (all): 10 TB/yr
- Revised ESD: 20 TB/yr
Data input from Tier 2: revised ESD and AOD, 10 TB/yr
Data input from simulation centers: raw data, 100 TB/yr
Data output rate to CERN:
- AOD data: 8 TB/yr
- Recalculated ESD: 10 TB/yr
- Simulation ESD data: 10 TB/yr
Data output to Tier 2: revised ESD and AOD, 15 TB/yr
Data output to local institutes: ESD, AOD, DPD data, 20 TB/yr
Total storage (robotic mass storage): 300 TB
- Raw data: 50 TB, 5x10^7 events (5% of 1 year)
- Raw (simulated) data: 100 TB, 10^8 events
- ESD (reconstructed data): 100 TB, 10^9 events (50% of 2 years)
- AOD (physics object) data: 20 TB, 2x10^9 events (100% of 2 years)
- Tag data: 2 TB (all)
- Calibration/conditions database: 10 TB (only the latest version of most data types kept here)
Central disk cache: 100 TB (per user demand)
CPU required for AMS database servers: ??x10^3 SI95 power
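A minimal consistency sketch of the numbers above: all inputs are copied from this slide, only the sums and the conversion to an average transfer rate are computed. The ~3.15x10^7 seconds per year used for that conversion is an assumption.

```python
# Rough consistency check of the Regional Centre storage and rate budget above.
# All input numbers are copied from the slide; only sums and an average
# transfer rate are computed. Seconds per year (~3.15e7) is an assumption.

robotic_storage_tb = {
    "raw (5% of 1 year)": 50,
    "raw simulated": 100,
    "ESD (50% of 2 years)": 100,
    "AOD (100% of 2 years)": 20,
    "tag": 2,
    "calibration/conditions DB": 10,
}
print(f"robotic mass storage: {sum(robotic_storage_tb.values())} TB "
      f"(slide quotes ~300 TB; the 100 TB disk cache is listed separately)")

inbound_tb_per_year = (50 + 50 + 10 + 20) + 10 + 100   # CERN + Tier 2 + simulation
seconds_per_year = 3.15e7
print(f"total inbound data: {inbound_tb_per_year} TB/yr "
      f"~ {inbound_tb_per_year * 1e6 / seconds_per_year:.0f} MB/s sustained average")
```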

The same Regional Centre diagram, viewed from the CPU side: farms of low-cost commodity computers, with limited I/O rate and a modest local disk cache. A worked check of the capacity estimates is sketched below.
Reconstruction jobs:
- Reprocessing of raw data: 10^8 events/year (10%)
- Initial processing of simulated data: 10^8 events/year
- 1000 SI95-sec/event ==> 10^4 SI95 capacity: 100 processing nodes of 100 SI95 power
Event selection jobs:
- 10 physics groups x 10^8 events (10% samples) x 3 times/yr, based on ESD and the latest AOD data
- 50 SI95/event ==> 5000 SI95 power
Physics object creation jobs:
- 10 physics groups x 10^7 events (1% samples) x 8 times/yr, based on the selected event sample ESD data
- 200 SI95/event ==> 5000 SI95 power
Derived physics data creation jobs:
- 10 physics groups x 10^7 events x 20 times/yr, based on selected AOD samples; generates "canonical" derived physics data
- 50 SI95/event ==> 3000 SI95 power
- Total: 110 nodes of 100 SI95 power
Derived physics data creation jobs (individual analysis):
- 200 physicists x 10^7 events x 20 times/yr, based on selected AOD and DPD samples
- 20 SI95/event ==> 30,000 SI95 power
- Total: 300 nodes of 100 SI95 power
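The "==> N SI95" figures follow from a simple formula: required power = events per pass x passes per year x SI95-sec per event, divided by the seconds available in a year. A minimal sketch, assuming roughly 3.15x10^7 effective seconds per year (an assumption, not stated on the slide); the results land close to the rounded numbers quoted above.

```python
# Back-of-envelope reproduction of the CPU capacity estimates above.
# Event counts and SI95-sec/event figures are copied from the slide;
# the effective seconds per year (~3.15e7) is an assumption.

SECONDS_PER_YEAR = 3.15e7

def si95_capacity(events_per_pass, passes_per_year, si95_sec_per_event):
    """Average SI95 power needed to process the given yearly workload."""
    return events_per_pass * passes_per_year * si95_sec_per_event / SECONDS_PER_YEAR

workloads = {
    # name: (events per pass, passes per year, SI95-sec per event)
    "reconstruction (reproc. + simulation)": (2e8, 1, 1000),
    "event selection (10 groups)":           (10 * 1e8, 3, 50),
    "physics object creation (10 groups)":   (10 * 1e7, 8, 200),
    "DPD creation (10 groups)":              (10 * 1e7, 20, 50),
    "individual analysis (200 physicists)":  (200 * 1e7, 20, 20),
}

for name, (events, passes, cost) in workloads.items():
    print(f"{name:40s} ~{si95_capacity(events, passes, cost):8.0f} SI95")
```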

MONARC Analysis Process Example

Model and simulation parameters
A new set of parameters common to all simulating groups. The values are more realistic, but still to be discussed/agreed on the basis of the experiments' information (previous values in parentheses):

  Parameter              Unit              (old)   new
  Proc_Time_RAW          SI95 sec/event    (350)   25
  Proc_Time_ESD          SI95 sec/event    (2.5)   5
  Proc_Time_AOD          SI95 sec/event    (0.5)   3
  Analyze_Time_TAG       SI95 sec/event            3
  Analyze_Time_AOD       SI95 sec/event            15
  Analyze_Time_ESD       SI95 sec/event    (3)     600
  Analyze_Time_RAW       SI95 sec/event    (350)   100
  Memory of jobs         MB                        5000
  Proc_Time_Create_RAW   SI95 sec/event    (35)    1000
  Proc_Time_Create_ESD   SI95 sec/event    (1)     25
  Proc_Time_Create_AOD   SI95 sec/event    (1)
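For convenience, the same parameter set can be written as a plain configuration dictionary; this is only an illustrative encoding for the back-of-envelope checks in the next slides, not the MONARC simulation's own configuration format.

```python
# Illustrative encoding of the common simulation parameters listed above.
# This is not the MONARC simulation's configuration format; it only makes
# the numbers easy to reuse in back-of-envelope checks.

SI95_SEC_PER_EVENT = {
    # parameter: {"old": previous value (in parentheses on the slide), "new": common value}
    "Proc_Time_RAW":        {"old": 350, "new": 25},
    "Proc_Time_ESD":        {"old": 2.5, "new": 5},
    "Proc_Time_AOD":        {"old": 0.5, "new": 3},
    "Analyze_Time_TAG":     {"new": 3},
    "Analyze_Time_AOD":     {"new": 15},
    "Analyze_Time_ESD":     {"old": 3,   "new": 600},
    "Analyze_Time_RAW":     {"old": 350, "new": 100},
    "Proc_Time_Create_RAW": {"old": 35,  "new": 1000},
    "Proc_Time_Create_ESD": {"old": 1,   "new": 25},
    "Proc_Time_Create_AOD": {"old": 1},  # new value not given on the slide
}

JOB_MEMORY_MB = 5000  # "Memory of jobs" as listed above
```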

Base Model used
Basic jobs:
- Reconstruction of 10^7 events, RAW --> ESD --> AOD --> TAG, at CERN. This is the production running while the data come from the DAQ (100 days of running, collecting a billion events per year).
- Analysis by 5 working groups, each of 25 analyzers, on TAG only (no requests to higher-level data samples). Every analyzer submits 4 sequential jobs on 10^6 events. Each analyzer's work start time is a flat random choice within a range of 3000 seconds. Each analyzer's data sample of 10^6 events is a random choice from the complete TAG database of 10^7 events.
- Transfer (FTP) of the 10^7-event ESD, AOD and TAG from CERN to the RC.
Activities:
- CERN: reconstruction, 5 WG analysis, FTP transfer
- RC: 5 (uncorrelated) WG analysis, receiving the FTP transfer
Job "paper estimates" (reproduced in the sketch below):
- Single analysis job: 1.67 CPU hours at CERN = 6000 sec at CERN (the same at the RC)
- Reconstruction at CERN of 1/500 of the sample, RAW to ESD: 3.89 CPU hours = 14,000 sec
- Reconstruction at CERN of 1/500 of the sample, ESD to AOD: 0.03 CPU hours = 100 sec
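A minimal sketch reproducing these paper estimates. The node power of 500 SI95 is taken from the "more realistic values" slide further on; for the two reconstruction lines the older parenthesized per-event costs (350 and 2.5 SI95 sec/event) are used, since that is what reproduces the quoted CPU-hour figures; this is an inference from the arithmetic rather than something stated on this slide.

```python
# Reproducing the "paper estimates" of the Base Model jobs.
# NODE_SI95 comes from the "more realistic values" slide (500 SI95 per node).
# The reconstruction lines use the older parenthesized parameter values
# (350 and 2.5 SI95 sec/event), which reproduce the quoted CPU hours.

NODE_SI95 = 500  # SI95 power of one processing node

def job_seconds(n_events, si95_sec_per_event, node_si95=NODE_SI95):
    """Wall-clock seconds for one job running alone on one node."""
    return n_events * si95_sec_per_event / node_si95

analysis   = job_seconds(1e6, 3)          # TAG analysis, 3 SI95 sec/event
raw_to_esd = job_seconds(1e7 / 500, 350)  # 1/500 of the RAW sample
esd_to_aod = job_seconds(1e7 / 500, 2.5)  # 1/500 of the ESD sample

print(f"single analysis job: {analysis:6.0f} s = {analysis / 3600:.2f} CPU h")      # 6000 s, 1.67 h
print(f"RAW -> ESD (1/500):  {raw_to_esd:6.0f} s = {raw_to_esd / 3600:.2f} CPU h")  # 14000 s, 3.89 h
print(f"ESD -> AOD (1/500):  {esd_to_aod:6.0f} s = {esd_to_aod / 3600:.2f} CPU h")  # 100 s, 0.03 h
```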

Resources: LAN speeds?!
In our models the DB servers are uncorrelated, and thus one activity uses a single server. The bottlenecks are the "read" and "write" speeds to and from the server. In order to use the CPU power at a reasonable percentage we need a read speed of at least 300 MB/s and a write speed of 100 MB/s (a milestone already met today). We use 100 MB/s in the current simulations (10 Gbit/s switched LANs may be possible in 2005). The processing node link speed is negligible in our simulations. Of course the "real" implementation of the farms can be different, but the results of the simulation do not depend on the "real" implementation: they are based on usable resources. See the following slides; the relation between server speed and usable CPU is sketched below.
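The relation can be made explicit: one server reading at R MB/s delivers R divided by the event size in events per second, which keeps roughly R / event_size x (SI95-sec per event) of CPU busy. A minimal sketch, with event sizes inferred from the storage-budget slide (RAW ~1 MB, ESD ~100 kB, AOD ~10 kB, TAG ~1 kB); both the sizes and the simple formula are illustrative assumptions, and the 300 MB/s requirement quoted above comes from the full simulation, not from this formula.

```python
# How much CPU power a single DB server can keep busy for one activity:
#   usable_SI95 = read_MB_per_s / event_size_MB * SI95_sec_per_event
# Event sizes are inferred from the storage-budget slide (RAW ~1 MB,
# ESD ~100 kB, AOD ~10 kB, TAG ~1 kB); the 300 MB/s requirement quoted
# above comes from the full MONARC simulation, not from this formula.

def usable_si95(read_mb_per_s, event_size_mb, si95_sec_per_event):
    """CPU power one server can feed when a single activity reads from it."""
    return read_mb_per_s / event_size_mb * si95_sec_per_event

READ_MB_PER_S = 100  # server read speed used in the current simulations

activities = {
    # activity: (event size in MB, SI95 sec/event from the parameter table)
    "TAG analysis":       (0.001, 3),
    "AOD analysis":       (0.01, 15),
    "ESD analysis":       (0.1, 600),
    "RAW reconstruction": (1.0, 25),
}

for name, (size, cost) in activities.items():
    fed = usable_si95(READ_MB_PER_S, size, cost)
    print(f"{name:20s} one {READ_MB_PER_S} MB/s server feeds ~{fed:10,.0f} SI95")
```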

More realistic values for CERN and the RC
Data link speeds at 100 MB/s (all values) except:
- Node_Link_Speed at 10 MB/s
- WAN link speeds at 40 MB/s
CERN:
- 1000 processing nodes, each of 500 SI95: 1000 x 500 SI95 = 500 kSI95, about the CPU power of the CERN Tier 0
- disk space according to the number of DBs
RC:
- 200 processing nodes, each of 500 SI95: 100 kSI95 of processing power = 20% of CERN
- disk space according to the number of DBs

Overall conclusions
The MONARC simulation tools are:
- sophisticated enough to allow modeling of complex distributed analysis scenarios
- simple enough to be used by non-experts
Initial modeling runs are already showing interesting results. Future work will help identify bottlenecks and understand the constraints on architectures.

MONARC Phase 3
More realistic Computing Model development:
- Confrontation of the models with realistic prototypes; at every stage, assess use cases based on actual simulation, reconstruction and physics analyses
- Participate in the setup of the prototypes
- Further validate and develop the MONARC simulation system using the results of these use cases (positive feedback)
- Continue to review the key inputs to the model: CPU times at the various phases, data rate to storage, tape storage speed and I/O
- Employ the MONARC simulation and testbeds to study Computing Model variations, and suggest strategy improvements

MONARC Phase 3 (continued)
- Technology studies
- Data model and data structures: reclustering, restructuring, transport operations; replication; caching, migration (HMSM), etc.
- Network: QoS mechanisms, identify which are important
- Distributed system resource management and query estimators (queue management and load balancing)
- Development of MONARC simulation visualization tools for interactive Computing Model analysis

Relation to GRID
The GRID project is great!
- Development of the software tools needed for implementing realistic LHC Computing Models: farm management, WAN resource and data management, etc.
- Help in getting funds for real-life testbed systems (RC prototypes)
Complementarity between GRID and the MONARC hierarchical RC model:
- A hierarchy of RCs is a safe option; if GRID brings big advancements, less hierarchical models should also become possible.
Timings are well matched:
- MONARC Phase-3 is to last about 1 year: a bridge to the GRID project starting early in 2001
- Afterwards, common work by the LHC experiments to develop the computing models will surely still be needed; in which project framework, and for how long, we will see then...