Preliminary Validation of MonarcSim
Youhei Morita *) KEK Computing Research Center
*) on leave to CERN IT/ASD

Monarc 99/8/26 Y. Morita

MonarcSim Validation
_ Joint effort of the Simulation WG and the Testbeds WG
_ Validate the time responses of various components in the discrete event simulation (esp. ODBMS and LAN/WAN)
  – measure the behavior of a distributed ODBMS application
  – normalize and extract "standard" sets of simulation parameters from the measurements
  – compare the results with the simulation

Local and AMS test config
_ Local disk I/O speed is measured by read() and write() system calls.
_ Network speed is measured by FTP.
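The read()-based disk measurement above can be sketched as follows. This is a minimal illustration, not the original benchmark; the 1 MB block size and the use of `time.perf_counter()` are assumptions.

```python
import os
import time

def read_throughput(path, block=1024 * 1024):
    """Sequentially read a file with raw read() calls and
    return the observed throughput in MB/s."""
    fd = os.open(path, os.O_RDONLY)
    total = 0
    start = time.perf_counter()
    while True:
        buf = os.read(fd, block)
        if not buf:           # EOF
            break
        total += len(buf)
    elapsed = max(time.perf_counter() - start, 1e-9)  # guard tiny files
    os.close(fd)
    return total / (1024 * 1024) / elapsed
```

Note that a warm page cache inflates the result, so a real measurement would read a file much larger than RAM or drop caches first.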

Measurements
[Plots: job execution time and aggregated CPU%]

Local job parameters
_ monarc01 local - 1 job
  CPU: 17.4 SI95, Disk Read: 207 MB/s
  1 job = sec, CPU% = 99.1%
_ atlobj02 local - 1 job
  CPU: SI95, Disk Read: 31 MB/s
  1 job = sec, CPU% = 94.5%
_ Assumption: T_job = T_diskread + T_process
  T_process(monarc01) / T_process(atlobj02) = / 17.4
  T_diskread(monarc01) / T_diskread(atlobj02) = 31 / 207
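The assumption T_job = T_diskread + T_process, combined with the two ratio constraints, is a 2x2 linear system for the per-machine components. A sketch of the solve; the numeric demo values are synthetic placeholders, since several of the measured numbers are elided in the transcript:

```python
def split_job_time(t_job_m, t_job_a, r_cpu, r_disk):
    """Split measured job times into CPU and disk components.

    t_job_m, t_job_a : total job time on monarc01 and atlobj02 [s]
    r_cpu  : T_process(monarc01) / T_process(atlobj02)
    r_disk : T_diskread(monarc01) / T_diskread(atlobj02)

    Solves:
      T_proc_m + T_disk_m               = t_job_m
      T_proc_m/r_cpu + T_disk_m/r_disk  = t_job_a
    """
    det = 1.0 / r_cpu - 1.0 / r_disk
    t_proc_m = (t_job_a - t_job_m / r_disk) / det
    t_disk_m = t_job_m - t_proc_m
    return t_proc_m, t_disk_m

# Synthetic example (NOT the measured values): if monarc01 spends
# 10 s on CPU and 4 s on disk, with r_cpu = 0.5 and r_disk = 2,
# atlobj02's job time is 20 + 2 = 22 s, and the solver recovers (10, 4).
t_proc, t_disk = split_job_time(14.0, 22.0, 0.5, 2.0)
```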

Deduced local parameters
_ Calculated CPU time
  T_process(monarc01) = sec --> CPU% 99.0%
  T_process(atlobj02) = sec --> CPU% 94.0%
_ CPU cycles per event
  Event size = 10.8 MB, # of events = 5
  CPU(monarc01) --> Process_Time = (14.06 sec x 17.4) / 5 events = 48.9 [SI95*s] --> Input to simulation
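The Process_Time figure is just CPU time scaled by the machine's SI95 rating and divided over the events, i.e. the per-event CPU cost in the machine-independent unit the simulation takes as input. A one-line check using the numbers stated on the slide:

```python
def process_time_per_event(t_process_s, si95, n_events):
    """Per-event CPU cost in SI95*s: CPU seconds, scaled to a
    machine-independent unit by the SI95 rating."""
    return t_process_s * si95 / n_events

# Values from the slide: 14.06 s of CPU, 17.4 SI95, 5 events of 10.8 MB
cost = process_time_per_event(14.06, 17.4, 5)   # ≈ 48.93 SI95*s
```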

Other parameters
_ monarc01 is a 4-CPU SMP machine
_ atlobj02 is a 2-CPU SMP machine
_ assume a local test on an SMP machine can be simulated with Iosif's high-speed network model

Simulation results
[Plots: 1 job and 4 jobs]

Simulation results (cont'd)
[Plot: 32 jobs]

Jobs vs time (32 jobs)
  Simulation:  ave. = sec, rms = 1.3 sec
  Measurement: ave. = sec, rms = 3.5 sec
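The averages and rms figures quoted above are the sample mean and the root-mean-square deviation of the per-job completion times. A minimal sketch of the computation (the sample list is illustrative, not the measured 32-job data):

```python
from math import sqrt

def mean_rms(times):
    """Return (mean, rms deviation) of a list of job times [s]."""
    m = sum(times) / len(times)
    rms = sqrt(sum((t - m) ** 2 for t in times) / len(times))
    return m, rms

# Illustrative only:
avg, spread = mean_rms([101.2, 99.8, 103.5, 100.1])
```

The larger rms in the measurement (3.5 s vs 1.3 s) is the spread the Summary flags as still needing tuning in the model.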

Summary
_ a simple model of the testbed local and AMS configurations was made
_ 4-CPU and 2-CPU SMP machines can be reasonably simulated with Iosif's model
_ the simulation roughly reproduces many aspects of the testbed measurements
  – job profile, CPU and I/O time
_ many fine-tunings are still necessary
  – rms in job time, network utilization, etc.