The New MONARC Simulation Framework
Iosif Legrand, California Institute of Technology
June 2003
The GOALS of the Simulation Framework
The aim of this work is to continue and improve the development of the MONARC simulation framework:
- to perform realistic simulation and modelling of large-scale distributed computing systems, customised for specific HEP applications;
- to offer a dynamic and flexible simulation environment to be used as a design tool for large distributed systems;
- to provide a design framework to evaluate the performance of a range of possible computing systems, as measured by their ability to provide the physicists with the requested data in the required time, and to optimise the cost.
A Global View for Modelling
[Architecture diagram: the Simulation Engine at the base; Basic Components (CPU, LAN, WAN, DB, Scheduler, Jobs) built on it; Specific Components (Job Catalog, Analysis, Distributed Scheduler, MetaData) derived from them; Computing Models on top; the whole validated by MONITORING of REAL systems and testbeds.]
Design Considerations
This simulation framework is not intended to be a detailed simulator for basic components such as operating systems, database servers, or routers. Instead, based on realistic mathematical models and on parameters measured on testbed systems for all the basic components, it aims to correctly describe the performance and limitations of large distributed systems with complex interactions.
Simulation Engine
[Architecture diagram from "A Global View for Modelling", with the Simulation Engine layer highlighted.]
Design Considerations of the Simulation Engine
A process-oriented approach to discrete event simulation is well suited to describing concurrently running programs. "Active objects" (having an execution thread, a program counter, a stack, ...) provide an easy way to map the structure of a set of distributed running programs into the simulation environment. The simulation engine supports an "interrupt" scheme, which allows effective and correct simulation of concurrent processes with very different time scales by using a DES approach with a continuous process flow between events, as sketched below.
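A minimal sketch of such an engine, assuming hypothetical class names (SimEngine, Event) rather than the actual MONARC API: pending events live in a time-ordered queue, and an interrupt cancels a scheduled completion so the owning active object can recompute and reschedule it.

    import java.util.PriorityQueue;

    // One scheduled occurrence in simulation time.
    class Event implements Comparable<Event> {
        double time;           // when the event fires
        Runnable action;       // what happens then
        boolean cancelled;     // set by an interrupt

        Event(double time, Runnable action) { this.time = time; this.action = action; }
        public int compareTo(Event o) { return Double.compare(time, o.time); }
    }

    class SimEngine {
        private final PriorityQueue<Event> queue = new PriorityQueue<>();
        private double now = 0.0;

        double now() { return now; }

        // Schedule an action 'delay' time units from now; the returned
        // handle lets the owner interrupt it later.
        Event schedule(double delay, Runnable action) {
            Event e = new Event(now + delay, action);
            queue.add(e);
            return e;
        }

        // Interrupt: invalidate a pending completion so it can be
        // recomputed and rescheduled under the new system state.
        void interrupt(Event e) { e.cancelled = true; }

        void run() {
            while (!queue.isEmpty()) {
                Event e = queue.poll();
                if (e.cancelled) continue;   // skip interrupted events
                now = e.time;                // advance simulation time
                e.action.run();
            }
        }
    }

Cancelling and rescheduling, rather than editing the queue in place, keeps each interrupt cheap; dead events are simply skipped when they reach the head of the queue.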
Tests of the Engine
Processing a total of simple jobs on 1, 10, 100, 1 000, 2 000, and 4 000 CPUs, using the same number of parallel threads.
[Plot of the engine performance results; link to more tests not recoverable.]
Basic Components
[Architecture diagram, with the Basic Components layer highlighted.]
Basic Components
- These basic components are capable of simulating the core functionality of general distributed computing systems. They are built on the simulation engine and make efficient use of the interrupt functionality of the active objects.
- These components should be considered the base classes from which specific components can be derived and constructed.
Basic Components
- Computing nodes
- Network links and routers, I/O protocols
- Data containers
- Servers: database servers, file servers (FTP, NFS, ...)
- Jobs: processing jobs, FTP jobs
- Scripts & graph execution schemes
- Basic scheduler
- Activities (a time sequence of jobs)
Multitasking Processing Model
Concurrently running tasks share resources (CPU, memory, I/O). "Interrupt"-driven scheme: for each new task, or when a task finishes, an interrupt is generated and all "processing times" are recomputed. This provides:
- handling of concurrent jobs with different priorities;
- an efficient mechanism to simulate multitask processing;
- an easy way to apply different load-balancing schemes.
A sketch of the recomputation follows.
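A minimal sketch of this recomputation, with assumed names (Cpu, Task) and equal CPU sharing standing in for the priority-weighted shares mentioned above:

    import java.util.ArrayList;
    import java.util.List;

    class Task {
        double remainingWork;   // CPU work still to be done
        double finishTime;      // estimate under the current share
        Task(double work) { remainingWork = work; }
    }

    class Cpu {
        private final double power;                    // total processing power
        private final List<Task> active = new ArrayList<>();

        Cpu(double power) { this.power = power; }

        // Called on every interrupt (a task arrives or finishes):
        // the share changes, so all processing times are recomputed.
        void recompute(double now) {
            if (active.isEmpty()) return;
            double share = power / active.size();      // priorities would weight this
            for (Task t : active)
                t.finishTime = now + t.remainingWork / share;
        }

        void addTask(Task t, double now) {
            // (The work already done by running tasks since the previous
            // interrupt would be settled first; omitted for brevity.)
            active.add(t);
            recompute(now);
        }
    }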
LAN/WAN Simulation Model
[Diagram: nodes on several LANs connected through links and a router to Internet connections.]
"Interrupt"-driven simulation: for each new message an interrupt is created, and for all active transfers the speed and the estimated time to complete the transfer are recalculated. Continuous flow between events! An efficient and realistic way to simulate concurrent transfers having different sizes and protocols. A sketch follows.
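A minimal sketch of the transfer recalculation, with assumed names (Link, Transfer) and plain equal sharing of the link bandwidth; the "continuous flow between events" appears as the settle() step, which charges the bytes moved since the previous interrupt:

    import java.util.ArrayList;
    import java.util.List;

    class Transfer {
        double remainingBytes;
        double estimatedEnd;                     // recomputed on every interrupt
        Transfer(double bytes) { remainingBytes = bytes; }
    }

    class Link {
        private final double bandwidth;          // bytes per second
        private final List<Transfer> active = new ArrayList<>();
        private double lastEvent = 0.0;          // time of the previous interrupt

        Link(double bandwidth) { this.bandwidth = bandwidth; }

        // Settle the continuous flow since the last event at the old share.
        private void settle(double now) {
            if (!active.isEmpty()) {
                double share = bandwidth / active.size();   // equal sharing; a protocol-
                for (Transfer t : active)                   // specific fairness could replace this
                    t.remainingBytes -= share * (now - lastEvent);
            }
            lastEvent = now;
        }

        private void reestimate(double now) {
            if (active.isEmpty()) return;
            double share = bandwidth / active.size();
            for (Transfer t : active)
                t.estimatedEnd = now + t.remainingBytes / share;
        }

        void startTransfer(Transfer t, double now) {
            settle(now);         // interrupt: account progress under the old share
            active.add(t);
            reestimate(now);     // all concurrent transfers slow down
        }

        void finishTransfer(Transfer t, double now) {
            settle(now);
            active.remove(t);
            reestimate(now);     // the remaining transfers speed up
        }
    }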
Output of the Simulation
[Diagram: simulated components (node, DB, router, user) feed output listeners with filters, which drive log files, Excel, and graphics clients.]
Any component in the system can generate generic result objects. Any client can subscribe with a filter and will receive only the results it is interested in. The structure is very similar to that of MonALISA; we will soon integrate the output of the simulation framework into MonALISA.
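A minimal sketch of this publish/subscribe scheme, with assumed names (Result, OutputDispatcher); the actual framework and MonALISA APIs are not shown here:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Predicate;

    // A generic result object produced by any simulated component.
    class Result {
        String component;   // e.g. "CPU", "Router", "DB"
        String name;        // parameter name
        double value;
        Result(String component, String name, double value) {
            this.component = component; this.name = name; this.value = value;
        }
    }

    interface OutputListener { void onResult(Result r); }

    class OutputDispatcher {
        private final List<OutputListener> listeners = new ArrayList<>();
        private final List<Predicate<Result>> filters = new ArrayList<>();

        // A client subscribes with a filter describing what it wants.
        void subscribe(OutputListener l, Predicate<Result> filter) {
            listeners.add(l);
            filters.add(filter);
        }

        // Components publish; each client receives only matching results.
        void publish(Result r) {
            for (int i = 0; i < listeners.size(); i++)
                if (filters.get(i).test(r)) listeners.get(i).onResult(r);
        }
    }

A log-file writer, a graphics client, or an Excel exporter would each be one listener with its own filter.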
Specific Components
[Architecture diagram, with the Specific Components layer highlighted.]
Specific Components
These components should be derived from the basic components and must implement their specific characteristics and the way they operate. Major parts:
- Data model
- Data flow diagrams for production and especially for analysis jobs
- Scheduling / pre-allocation policies
- Data replication strategies
Data Model
A generic data container is characterised by its size, event type, event range, and access count. Its concrete instances can be a flat file, a database, or a custom data server, accessed over the network through FTP servers, NFS servers, or DB servers. A metadata catalog and a replication catalog track the containers and their export/import between sites.
[Diagram: a generic data container and its instances across the different server types.]
Data Model (2)
A data processing job issues a data request; the metadata and replication catalogs resolve it, selecting from the available options, to a concrete data container, and the access becomes a list of I/O transactions executed by the job. A sketch of this resolution step follows.
[Diagram: job, data request, catalogs, data container, list of I/O transactions.]
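The resolution step could look like the following minimal sketch; the class names (ReplicationCatalog, DataContainer, IOTransaction) and the "prefer a local replica, else the smallest one" selection policy are illustrative assumptions, not the framework's actual API:

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    class DataContainer {
        String site;                 // where this replica lives
        double sizeMB;
        String eventType;
        int firstEvent, lastEvent;   // event range covered
    }

    class IOTransaction {
        DataContainer source;
        double megabytes;
        IOTransaction(DataContainer c, double mb) { source = c; megabytes = mb; }
    }

    class ReplicationCatalog {
        List<DataContainer> replicas = new ArrayList<>();

        // All replicas covering the requested event type and range.
        List<DataContainer> lookup(String type, int from, int to) {
            List<DataContainer> hits = new ArrayList<>();
            for (DataContainer c : replicas)
                if (c.eventType.equals(type) && c.firstEvent <= from && c.lastEvent >= to)
                    hits.add(c);
            return hits;
        }
    }

    class DataRequestResolver {
        // "Select from the options": prefer a local replica, else the smallest.
        List<IOTransaction> resolve(ReplicationCatalog cat, String type,
                                    int from, int to, String localSite) {
            DataContainer best = cat.lookup(type, from, to).stream()
                .min(Comparator.comparingInt((DataContainer c) ->
                         c.site.equals(localSite) ? 0 : 1)
                     .thenComparingDouble(c -> c.sizeMB))
                .orElseThrow(() -> new IllegalStateException("no replica found"));
            List<IOTransaction> io = new ArrayList<>();
            io.add(new IOTransaction(best, best.sizeMB));   // one read of the container
            return io;
        }
    }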
Data Flow Diagrams for Jobs
[Diagram: Input feeds Processing 1, which fans out to Processing 2 and Processing 3; both feed Processing 4, which produces the Output; one branch is repeated 10x.]
Input and output are collections of data, described by type and range; a process is described by its name. A fine-granularity decomposition of processes that can be executed independently, and of the way they communicate, can be very useful for optimisation and parallel execution. One way to express such a graph is sketched below.
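A minimal sketch assuming a hypothetical ProcessingStep class (the framework's own "scripts & graph execution schemes" are not shown):

    import java.util.ArrayList;
    import java.util.List;

    // A node in the job's data flow graph; a process is described by its
    // name, and its inputs define the dependency edges.
    class ProcessingStep {
        final String name;
        final List<ProcessingStep> inputs = new ArrayList<>();
        ProcessingStep(String name) { this.name = name; }
        ProcessingStep after(ProcessingStep... deps) {
            for (ProcessingStep d : deps) inputs.add(d);
            return this;
        }
    }

    public class JobGraph {
        public static void main(String[] args) {
            ProcessingStep in  = new ProcessingStep("Input");
            ProcessingStep p1  = new ProcessingStep("Processing 1").after(in);
            ProcessingStep p2  = new ProcessingStep("Processing 2").after(p1);
            ProcessingStep p3  = new ProcessingStep("Processing 3").after(p1);
            // p2 and p3 share no dependency, so a scheduler may run them in parallel
            ProcessingStep p4  = new ProcessingStep("Processing 4").after(p2, p3);
            ProcessingStep out = new ProcessingStep("Output").after(p4);
            System.out.println(out.name + " <- " + out.inputs.get(0).name);
        }
    }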
Job Scheduling: Centralized Scheme
[Diagram: a GLOBAL job scheduler, implemented as a dynamically loadable module, dispatches jobs to the local job schedulers of the CPU farms at Site A and Site B.]
Job Scheduling: Distributed Scheme (Market Model)
[Diagram: the job scheduler of the CPU farm at Site A sends a request to the schedulers at the other sites; each replies with a COST, and the originating scheduler makes the DECISION on where to run the job.]
A sketch of this cost-based decision follows.
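A minimal sketch of the decision step, with assumed names (SiteScheduler, MarketScheduler) and an unspecified cost metric (a real one might combine queue length, CPU power, and the cost of moving the input data):

    import java.util.Comparator;
    import java.util.List;

    class JobDescription {
        String name;
        double cpuWork;    // required CPU work
        double inputMB;    // input data to be transferred
    }

    interface SiteScheduler {
        double cost(JobDescription job);   // the site's bid for running the job
        void submit(JobDescription job);
    }

    class MarketScheduler {
        private final List<SiteScheduler> sites;
        MarketScheduler(List<SiteScheduler> sites) { this.sites = sites; }

        void schedule(JobDescription job) {
            // Collect a COST from every site and take the DECISION:
            // dispatch the job to the cheapest bidder.
            SiteScheduler best = sites.stream()
                .min(Comparator.comparingDouble((SiteScheduler s) -> s.cost(job)))
                .orElseThrow(() -> new IllegalStateException("no sites available"));
            best.submit(job);
        }
    }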
Computing Models
[Architecture diagram, with the Computing Models layer highlighted.]
Activities: Arrival Patterns
A flexible mechanism to define the stochastic process of how users perform data processing tasks. "Activity" tasks are loaded dynamically; they are threaded objects controlled by the simulation scheduling mechanism. These dynamic objects are used to model the users' behaviour: physics activities inject "jobs" into a regional centre farm, and each "Activity" thread generates data processing jobs:

    for (int k = 0; k < jobs_per_group; k++) {
        Job job = new Job(this, Job.ANALYSIS, "TAG", 1, events_to_process);
        farm.addJob(job);   // submit the job
        sim_hold(1000);     // wait 1000 s
    }
Regional Centre Model
A regional centre is modelled as a complex composite object built from servers, CPU farms, and links.
[Diagram: simplified topology of the centres A, B, C, D, and E.]
Monitoring
[Architecture diagram, with the MONITORING of real systems and testbeds highlighted.]
Real Need for Flexible Monitoring Systems
- It is important to measure and monitor the key applications in a well-defined test environment and to extract the parameters we need for modelling.
- Monitor the farms used today, try to understand how they work, and simulate such systems.
- This requires a flexible monitoring system able to dynamically add new parameters and to provide access to historical data.
- Interface monitoring tools to obtain the parameters we need in simulations in a nearly automatic way.
- MonALISA was designed and developed based on the experience with these simulation problems.
Input for the Data Models
We need information on all the possible data types, their expected sizes and distributions. Which mechanisms for data access will be used for activities such as production and analysis:
- flat files with FTP-like transfer to the local disk;
- a network file system;
- database access (batch queries with independent threads);
- a ROOT-like file system;
- client/server;
- Web Services.
To simulate access to "hot spots" in the data, we need a range of probabilities for such activities.
Input for How Jobs Are Executed
- How is the parallel decomposition of a job done? A scheduler using a job description language? A master/slaves model (parallel ROOT)?
- A centralized or a distributed job scheduler?
- What types of policies should we consider for inter-site job scheduling?
- Which data should be replicated?
- Which are the "predefined data replication" policies?
- Should we consider dynamic replication / caching for (selected) data which are used more frequently?
Status
- The engine was tested (performance and quality) on several platforms and is working well.
- We have developed all the basic components (CPUs, servers, databases, routers, network links, jobs, I/O jobs) and are now testing and debugging them.
- A quite flexible output scheme for the simulation is now included.
- Examples built with specific components for production and analysis are being tested.
- A fairly general model for the data catalog and data replication is under development and will soon be integrated.
Still To Be Done...
- Continue the testing of basic components and network servers, and start modelling real farms, Web Services, peer-to-peer systems, ...
- Improve the documentation.
- Improve the graphical output, interface with MonALISA, and create a service to extract simulation parameters from real systems.
- Gather information from the current computing systems and future possible architectures, and start building the specific components and computing model scenarios.
- Include risk analysis in the system.
- Develop and evaluate different scheduling and replication strategies.
Summary
Modelling and understanding current systems, their performance and limitations, is essential for the design of large-scale distributed processing systems. This will require continuous iteration between modelling and monitoring. Simulation and modelling tools must provide the functionality to help design complex systems and to evaluate different strategies and algorithms for the decision-making units and for data flow management.