Models of Networked Analysis at Regional Centres
Harvey Newman, MONARC Workshop, CERN, May 10, 1999



PROJECT ON LHC COMPUTING MODELS
MONARC: Models of Networked Analysis at Regional Centres

MONARC Primary Goals
- Determine which classes of Computing Models are feasible for the LHC experiments
  - Match the network capacity and data handling resources likely to be available
- Specify the main parameters characterizing this class of Models
- Produce example "Baseline" Models that fall into the "feasible" category

COROLLARIES:
- Help define Regional Centre architecture and functionality
- Help define the Analysis Process for the LHC experiments
- Provide guidelines to keep the final Computing Models in the feasible range

MONARC DELIVERABLES
- Specifications for a set of feasible Models
- Guidelines for the Collaborations to use in building their Computing Models
- A set of modeling tools to enable the experiments to simulate and refine their Computing Models

MONARC SCHEDULE (in PAP)
- PHASE 1: to Summer 1999
  - First-round set of modeling tools
- PHASE 2: to submission of the CTPR
  - Refined set of tools
  - Guidelines for constructing feasible Computing Models, in time for the Computing TPR
- PHASE 3: to 2001 (future R&D)
  - Prototype designs and test implementations of the Computing Models, for the second-round Computing TDRs

MONARC PHASES (Ideal)
- 0  COMPLETE: Setup of WG  8/ /98
- 1A SETUP  10/ /98
- 1B STARTUP  11/98 - 1/99
- 1C MODELING  2/99 - 5/99
- 2A REFINED MODELING  6/99 - 8/99
- 2B VERIFICATION and CONVERGENCE  9/ /99

MONARC Phase 1B STARTUP: 11/98 - 2/99
1. Validate the chosen tool(s) against an existing Model
   - Final choice of simulation tool by 1/99
2. Formulate the first detailed model for simulation: integrate the configurations, performances and workloads from Phase 1A
3. Code and test Model Study 1 (MS1)
4. MS1 simulation runs: monitor, display and refine
5. Analyze MS1 results
6. Validate MS1 results: spot-check using the testbed
7. MS1 design reviews: 1/15/99; 2/15/99
8. MS1 conclusions, and preparation for Phase 1C
9. In parallel: set up a federated Objectivity/DB on the testbed (with RD45, GIOD)

MONARC Phase 1C MODELING: 2/99 - 6/99
1. Refine coding for sites (CERN, Regional Centres), networks, workloads, ODBMS, HPSS
2. Choose the Models to be studied: a range of working methods and system input parameters ("MS2" - "MSn")
   - Set up a team to run and analyze Models in parallel, with cross-checks
   - Run and analyze MS2 - MSn
   - Evaluate MS2 - MSn
   - Extract the achievable workload (throughput) and latency as a function of network bandwidth and site performance
3. Identify key parameters, performance bottlenecks and sources of long latency; derive priorities to match the workload
4. Identify the preferred Analysis Process(es)
5. Validate key results with the testbed
   - Propose measurements to narrow the critical uncertainties in the Models
6. Set up the first Models with ODBMS and/or HPSS ("MSOHn")
   - Run and analyze MSOHn
   - Evaluate the impact of the use of ODBMS and HPSS
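The last bullet of step 2, extracting achievable workload and latency as a function of network bandwidth and site performance, can be sketched with a toy model. Every function name and number below is an illustrative assumption, not a MONARC result or part of the MONARC toolkit:

```python
# Toy model of the Phase 1C extraction step: job throughput and latency as a
# function of WAN bandwidth and site CPU power. All parameters are assumed.

def job_latency(data_mb, cpu_s, bandwidth_mbps, cpu_power):
    """One job's latency: WAN transfer time plus compute time (seconds)."""
    transfer_s = data_mb * 8.0 / bandwidth_mbps   # MB -> Mbit, then / (Mbit/s)
    compute_s = cpu_s / cpu_power                 # cpu_power 1.0 = reference CPU
    return transfer_s + compute_s

def throughput_per_hour(data_mb, cpu_s, bandwidth_mbps, cpu_power, n_slots):
    """Jobs/hour when n_slots concurrent jobs share the WAN link equally."""
    lat = job_latency(data_mb, cpu_s, bandwidth_mbps / n_slots, cpu_power)
    return n_slots * 3600.0 / lat

# Scan bandwidth tiers typical of late-1990s WANs to see where the network
# stops being the bottleneck for a hypothetical 100 MB, 300 CPU-s job mix:
scan = {bw: throughput_per_hour(100, 300, bw, 1.0, 20)
        for bw in (10, 34, 155, 622)}
```

A real study would replace these closed-form estimates with discrete-event simulation runs, but the same two inputs (bandwidth, site performance) drive the extracted curves.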

MONARC PHASE 2A (1) REFINED MODELING: 6/99 - 9/99
(MONARC Model Design/Evaluation Review in 5/99; LCB Progress Report/Review/Discussion by 6/99)
1. Build a Model-set with refinements for HPSS, ODBMS, network and dynamic workload behaviors
2. Run, analyze and evaluate the Model-set
3. Focus on "good" combinations of site configuration / Analysis Process / data handling strategy (factorizable?)
4. Agree on a standard set of evaluation criteria: system performance and working efficiency
5. Choose a promising set: the "Baseline Models"
6. Run, analyze and evaluate these in detail, using the standard criteria
7. Verify key features of the Baseline Model simulations using the testbed, including ODBMS and HPSS features

MONARC System Design and Development Task
- Use the modeling and simulation constructs and tools:
  - To design the overall system: define and make choices for
    - the Site Architecture(s)
    - the Analysis Processes
  - Lay out a complete set of user, site, and inter-site "tasks"
  - Define key limits: quotas, maximum transaction times, priorities
  - Profile the behaviors of the site and network components, and of the "Actors" (Sites and Users)
  - Set the scale of the analysis: define
    - "how much data is accessed, processed and transmitted; by how many people; and how often"
    - the major intra-site and inter-site "events"
    - high-water marks to trigger events
    - conditions that alter component response time or performance
  - Match to the foreseen level of site and network resources: a feasible overall picture of "The Analysis"
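"Setting the scale of the analysis" reduces to back-of-envelope arithmetic over the user community. The sketch below shows the shape of that calculation; every input value is an invented placeholder, not a MONARC parameter:

```python
# Aggregate daily load implied by a user community: how much data is
# accessed and how much CPU is consumed, by how many people, how often.
# All numbers are illustrative assumptions.

def daily_site_load(n_users, jobs_per_user_day, events_per_job,
                    kb_per_event, cpu_s_per_event):
    """Return (events touched, GB read, CPU-hours) per day at one site."""
    events = n_users * jobs_per_user_day * events_per_job
    data_gb = events * kb_per_event / 1e6          # kB -> GB
    cpu_hours = events * cpu_s_per_event / 3600.0
    return events, data_gb, cpu_hours

events, data_gb, cpu_hours = daily_site_load(
    n_users=150, jobs_per_user_day=4, events_per_job=100_000,
    kb_per_event=10, cpu_s_per_event=0.25)
```

Feasibility then means checking these aggregates against the foreseen site and network resources, the last bullet above.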

MONARC System Design and Development (II)
- OPTIMIZATION: choosing a system architecture and deciding how best to use it (for a given level of resources)
  - Cost-versus-value metrics:
    - Recalculation versus data transport time and resource usage
    - Time to transaction completion
    - Affinity: concurrence of requests for data
      - proximity in space (file location)
      - proximity in time
  - Strategies to allow for dynamic non-local data access
  - Caching, mirroring, preemptive data movement
  - High-water marks and "branch points" in system behavior
  - Turnaround-time and/or workflow targets
  - Isolation of key parameters: e.g. how much re-reconstruction, how much non-local data, how much access from tape
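The first cost-versus-value metric, recalculation versus data transport, is a single comparison once the costs are parameterized. This is a minimal sketch under assumed inputs (the contention model and all names are invented for illustration):

```python
# Decide whether to rebuild a derived dataset locally or pull it over the
# WAN. The linear contention penalty is an assumption, not a MONARC model.

def cheaper_to_recompute(size_mb, bandwidth_mbps, cpu_s_to_rebuild,
                         cpu_power, link_load=0.5):
    """True if local recomputation beats transport over a loaded link."""
    effective_bw = bandwidth_mbps * (1.0 - link_load)  # crude contention penalty
    transfer_s = size_mb * 8.0 / effective_bw          # MB -> Mbit / (Mbit/s)
    recompute_s = cpu_s_to_rebuild / cpu_power
    return recompute_s < transfer_s
```

The affinity metrics above refine this choice: concurrent requests for the same data in space or time shift the balance toward one transfer plus caching rather than many recomputations.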

MONARC System Design and Development (III)
- Complete top-down design: nothing significant left out
- Tools for building architectures, deployed to the team:
  - Profiling structures + utilities
  - Profile setup:
    - Site-Architecture Profile
    - Site-Task Profile
    - Site-Pair Interaction Profile (for all pairs, or a hierarchy)
    - User Workload Profiles: batch-oriented and rate-oriented
    - Response-Time Profiles (dynamic)
    - Network Profiles: base capability; performance/load characteristics
    - Priority Profiles; quotas; marginal utility factors
    - Decision High-Watermark Profile(s)
  - Adaptation mechanisms; algorithms
  - Recovery mechanisms
- Simulation Team:
  - Tool developers (developers' developers)
  - System-specification developers
  - Operations group: run, analyze, suggest modified and new tools
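One possible shape for the profile structures enumerated above, written as plain data records. The field names are guesses at what each profile would plausibly record; they are not the MONARC toolkit's actual data model:

```python
# Illustrative profile records for a simulation configuration.
# All field names are assumptions for the sake of the sketch.
from dataclasses import dataclass, field

@dataclass
class SiteArchitectureProfile:
    name: str
    cpu_si95: float      # aggregate CPU power (SPECint95, a late-90s unit)
    disk_tb: float
    tape_tb: float
    lan_mbps: float

@dataclass
class NetworkProfile:
    site_a: str
    site_b: str
    base_mbps: float          # base capability of the link
    load_factor: float = 0.0  # performance/load characteristic

@dataclass
class UserWorkloadProfile:
    group: str
    jobs_per_day: int
    batch_oriented: bool      # False = rate (interactive) oriented
    priority: int = 0

@dataclass
class ModelConfig:
    """One Model = sites + site-pair links + workloads, ready to simulate."""
    sites: list = field(default_factory=list)
    links: list = field(default_factory=list)
    workloads: list = field(default_factory=list)
```

Keeping every profile as explicit data is what makes the "run and analyze Models in parallel, with cross-checks" step of Phase 1C mechanical: each Model Study is just a different `ModelConfig`.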

MONARC: A SAMPLING OF ADDITIONAL ISSUES
- Scaling tests of the system as a function of network bandwidth
- ODBMS/HPSS interactions; custom software to enable the ODBMS to work in a WAN-distributed environment
- Adaptability of the system to the architectures likely to exist in different countries
- Impact on the overall performance of the system of different approaches to the analysis
- Minimum level of flexibility required for the sake of physicists' working efficiency, in spite of limited resources

MONARC PHASE 2A (2) REFINEMENTS: 6/99 - 8/99
1. Refine coding for the preferred site (CERN, Regional Centres), network, and workload configuration and priority schemes
2. Refine the testbed in terms of its connections (to HPSS), network performance, and measurement capability
3. Refine code for ODBMS and HPSS configuration and behaviors
4. Refine code for network performance vs. load; QoS mechanisms; congestion conditions
5. Refine the Analysis Process:
   - Production; analysis by groups and individuals
   - Calibrations and reruns of the data
   - Role of the desktop
6. Implement data transport/recompute/delete strategies:
   - Automatic and manual (?), speculative (?) replication
   - Caching and reclustering?

MONARC PHASE 2B VERIFICATION and CONVERGENCE: 8/ /99
1. In-depth study of the Baseline Models chosen in Phase 2A
2. Verification of Model prototypes using the testbed
3. Identify/extract the features that distinguish the Baseline Models as "feasible"
4. Investigate the consequences of variations in key parameters (e.g. WAN bandwidth, I/O bandwidth, desktop CPU)
5. Evaluate the Baseline Models for:
   - Adaptability: of the Models to the architectures likely to exist in different countries
   - Responsiveness: turnaround time and ability to respond to peak loads for "urgent" analysis
   - Scalability: performance vs. time as the data volumes and component performances increase
   - Flexibility: ability to adapt and/or migrate to different Models over time
   - Overall performance vs. cost: users' throughput and working efficiency; operational infrastructure, manpower, maintainability
6. Report: guidelines and recommendations; input for the CTPR by 12/99

MONARC WORKING GROUP TASKS (1)
- SYSTEMS DESIGN
  - CERN Centre architectures: CPU, storage, I/O, LAN
  - Regional Centre architectures: CPU, storage, I/O, LAN
  - MAN (regional) and WAN configurations
  - Site and network performance parameters/profiles
  - Queueing and prioritization mechanisms
- ANALYSIS + NETWORK PROCESS DESIGN
  - Analysis tasks (flow diagram)
  - Workloads: frequency, duration, day/week/month cycles
  - Other network loads
  - Formulate ODBMS and HPSS behavior for use in simulations (with RD45 and GIOD)
  - Priorities among tasks
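The "queueing and prioritization mechanisms" item above is the kind of policy a simulation has to encode explicitly. Here is a minimal sketch of one such mechanism, a priority queue with per-group quotas; the class, its policy, and the group names are invented for illustration, not taken from MONARC:

```python
# Priority queue of analysis tasks with per-group running quotas.
# Policy details (strict priority, FIFO tie-break, hard quotas) are assumptions.
import heapq

class TaskQueue:
    def __init__(self, quotas):
        self.quotas = dict(quotas)           # group -> max concurrently running
        self.running = {g: 0 for g in quotas}
        self.heap = []
        self.seq = 0                         # tie-breaker: FIFO within a priority

    def submit(self, priority, group, task):
        """Lower priority value = more urgent."""
        heapq.heappush(self.heap, (priority, self.seq, group, task))
        self.seq += 1

    def dispatch(self):
        """Start the best task whose group is under quota; None if none fits."""
        deferred, picked = [], None
        while self.heap:
            prio, seq, group, task = heapq.heappop(self.heap)
            if self.running[group] < self.quotas[group]:
                self.running[group] += 1
                picked = (group, task)
                break
            deferred.append((prio, seq, group, task))  # over quota: keep queued
        for item in deferred:
            heapq.heappush(self.heap, item)
        return picked

    def finish(self, group):
        """Release a running slot when a task completes."""
        self.running[group] -= 1
```

In a simulation study, the interesting outputs are how quotas and priorities trade group fairness against the turnaround-time targets set in the design task.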

MONARC WORKING GROUPS (2)
- STEERING GROUP
  - CLASSIFICATION of "Baseline Models", based on figures of merit
  - COORDINATE comparative Model study cycles
  - OVERSEE Model evolution
  - PERIODIC REVIEWS
  - GATHER and COORDINATE RESOURCES
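A figure of merit for classifying Baseline Models could be as simple as a weighted score over the Phase 2B evaluation criteria. The criteria come from that slide; the scoring scale and weights below are invented for illustration:

```python
# Weighted figure of merit over the Phase 2B evaluation criteria.
# The 0-10 scale and equal default weights are assumptions, not MONARC's.
CRITERIA = ("adaptability", "responsiveness", "scalability",
            "flexibility", "performance_vs_cost")

def figure_of_merit(scores, weights=None):
    """Weighted mean of per-criterion scores; higher is better."""
    weights = weights or {c: 1.0 for c in CRITERIA}
    total = sum(weights[c] * scores[c] for c in CRITERIA)
    return total / sum(weights.values())
```

Making the metric explicit is what lets the Steering Group run comparative study cycles: two Models evaluated under the same weights are directly rankable.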

MONARC Principles of Project Design: SPE
- Formality: systematic production of tools, documents and procedures; some central support (software and operations)
- Completeness: nothing important left out
- Abstraction: represent in a simplified, general (extensible) form
- Decomposition: break into tractable parts
- Hiding: encapsulation; design for variable granularity
- "Standards": tools, interfaces, methods
- Design techniques: stepwise refinement; localization (modules; OO approach)