Preliminary Results - MONARC Testbeds Working Group
A. Brunengo (INFN-Ge), A. Ghiselli (INFN-Cnaf), L. Luminari (INFN-Roma1), L. Perini (INFN-Mi), S. Resconi (INFN-Mi), M. Sgaravatto (INFN-Pd), C. Vistoli (INFN-Cnaf)
14 September 1999

Configurations
- ATLFast++ stress tests: increasing number of concurrent jobs with read access to the database
- Tested configurations:
  - Single workstation (without AMS server)
  - LAN (Ethernet and Gigabit Ethernet)
  - WAN
- Measurements:
  - Client side: CPU, memory, wall-clock time
  - Server side: CPU, memory, throughput
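Such a stress test can be driven by a small harness that launches N identical client jobs concurrently and records each job's wall-clock time and exit status. The sketch below is illustrative only: the client command name is a hypothetical placeholder, not the actual ATLFast++/AMS invocation used in these tests.

```python
# Hedged sketch of a concurrency stress driver. "./atlfast_read_job" is a
# hypothetical placeholder for the real read-only client executable.
import subprocess
import time
from concurrent.futures import ThreadPoolExecutor

CLIENT_CMD = ["./atlfast_read_job"]  # placeholder command

def run_job(i):
    """Run one client job; return its index, wall-clock time, exit code."""
    t0 = time.time()
    rc = subprocess.call(CLIENT_CMD)
    return i, time.time() - t0, rc

def stress(n_jobs):
    """Start n_jobs clients at once and report per-job results."""
    with ThreadPoolExecutor(max_workers=n_jobs) as pool:
        for i, wall, rc in pool.map(run_job, range(n_jobs)):
            status = "ok" if rc == 0 else f"crashed (rc={rc})"
            print(f"job {i:3d}: {wall:8.1f} s  {status}")

if __name__ == "__main__":
    for n in (5, 10, 20, 50):  # ramp up concurrency as in the tests below
        print(f"--- {n} concurrent jobs ---")
        stress(n)
```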

Test 1: 1000BaseT (Gigabit Ethernet)
sunlab1 (server) <-> gsun (client)
sunlab1, gsun: Sun Ultra5, 333 MHz, 128 MB, Solaris 2.7

[Test 1 result plots - figures not preserved in the transcript]

Test 2: 10BaseT (Ethernet)
cmssun4 (server) <-> vlsi06 (client)
cmssun4: Sun Ultra10, 333 MHz, 128 MB, Solaris 2.6
vlsi06: Sun SPARC20, 125 MHz, 128 MB, Solaris 2.6
(fast server, slow client)

[Test 2 result plots - figures not preserved in the transcript]

Test 3: 2 Mbps WAN link
sunlab1 (server) <-> monarc01 (client)
sunlab1: Sun Ultra5, 333 MHz, 128 MB, Solaris 2.7
monarc01: Sun Enterprise 450, 4x 400 MHz, 512 MB, Solaris 2.6

[Test 3 result plots - figures not preserved in the transcript]

Comments, Test 1:
- Client CPU: 100% used with only 5 jobs
- Server CPU: 100% used with 50 jobs
- Crashes after 50 concurrent jobs (causes to be investigated: swap area + server problems?)
- Server CPU: 100% used (system time higher than user time) when client jobs start crashing
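The CPU figures quoted in these comments would have come from standard OS monitoring tools; a minimal sampler in the same spirit is sketched below. It assumes a Solaris-style vmstat whose trailing columns are user/system/idle percentages; the column layout differs on other systems, so the indices are an assumption.

```python
# Hedged sketch: sample CPU usage by parsing vmstat output. Assumes the
# last three columns are us/sy/id percentages (Solaris-style layout).
import subprocess
import time

def sample_cpu(interval=5, samples=12):
    """Yield (user%, system%, idle%) roughly every `interval` seconds."""
    for _ in range(samples):
        out = subprocess.run(["vmstat", "1", "2"],
                             capture_output=True, text=True).stdout
        last = out.strip().splitlines()[-1].split()
        us, sy, idle = (int(x) for x in last[-3:])
        yield us, sy, idle
        time.sleep(interval)

if __name__ == "__main__":
    for us, sy, idle in sample_cpu():
        # "system time higher than user" while jobs crash shows up here
        print(f"user={us}%  sys={sy}%  idle={idle}%")
```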

Comments, Test 2:
- Client CPU: 80% used with 20 jobs or more
- Server CPU: 30% used with 60 jobs
- Crashes after 70 concurrent jobs (causes to be investigated: swap area on the client + ?)
- Server CPU: 100% used (system time higher than user time) when client jobs start crashing

Comments, Test 3:
- Client CPU: 5% used
- Server CPU: 10% used
- Occasional crashes with 10 and 15 concurrent jobs

[Six slides of result plots - figures not preserved in the transcript]

Comments:
- Wall time for 1 job with 10 concurrent jobs:
  - Test 1: 400 s (1 Gb/s), about 2.5x faster than the Ethernet result
  - Test 2: 1000 s (10 Mb/s), about 6x faster than the 2 Mb/s result
  - Test 3: 6000 s (2 Mb/s)
- On the 1 Gb/s link the maximum throughput was only 25 Mbit/s
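A back-of-envelope cross-check of these numbers (an estimate only: it assumes each link moved data at the quoted rate for the whole run, and uses the observed 25 Mbit/s cap for the Gigabit case) gives a roughly consistent data volume per job. That points at the network as the bottleneck on the slow links, and at something else, such as the saturated client CPU seen in Test 1, on Gigabit Ethernet:

```python
# Implied data volume per job, assuming the link ran at the given rate
# for the whole 10-job run. Throughputs and wall times are taken from
# the slide above; the saturation assumption is mine.
BYTES_PER_MBIT = 1e6 / 8

runs = {
    # test: (throughput in Mbit/s, wall time in s for 10 jobs)
    "Test 1 (GigE, observed 25 Mbit/s cap)": (25, 400),
    "Test 2 (10BaseT)":                      (10, 1000),
    "Test 3 (2 Mbps WAN)":                   (2, 6000),
}
for name, (mbps, wall) in runs.items():
    per_job_mb = mbps * BYTES_PER_MBIT * wall / 10 / 1e6
    print(f"{name}: ~{per_job_mb:.0f} MB per job")
# -> ~125, ~125 and ~150 MB per job: roughly consistent.
```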

Test 4: one server with multiple concurrent clients over 1000BaseT, 2 Mbps and 8 Mbps links
- sunlab1, gsun, cmssun4, atlsun1, atlas4: Sun Ultra5/10, 333 MHz, 128 MB
- vlsi06: Sun SPARC20, 125 MHz, 128 MB
- monarc01: Sun Enterprise 450, 4x 400 MHz, 512 MB

[Test 4 result plots - figures not preserved in the transcript]

Test 4 summary
- Client CPU: never 100% used
- Server CPU: never 100% used
- Many jobs crash: "Timeout with AMS Server"
- Wall clock time for the workstation connected via Gigabit Ethernet is very high (e.g. >1000 s for 10 jobs, versus 400 s without other clients on the WAN): do slow clients degrade the performance of fast clients?
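The last question can be probed with a toy model (purely illustrative: the object size, link speeds, and the idea that the server handles one transfer at a time are all assumptions, since the real AMS server presumably serves clients concurrently). If transfers queue first-in-first-out and each occupies the server for size/bandwidth seconds, the fast client ends up waiting behind the slow-link transfers:

```python
# Toy FIFO-server model: each client re-requests a fixed-size object as
# soon as its previous transfer completes; the server handles one
# transfer at a time. All parameters (80 Mbit objects, speeds) assumed.
from collections import deque

def simulate(clients, n_requests=10, object_mbit=80.0):
    """clients: name -> link speed in Mbit/s; returns finish times (s)."""
    queue = deque(clients)                    # one initial request each
    remaining = dict.fromkeys(clients, n_requests)
    finish, t = {}, 0.0
    while queue:
        c = queue.popleft()
        t += object_mbit / clients[c]         # server busy for this transfer
        remaining[c] -= 1
        if remaining[c]:
            queue.append(c)                   # client immediately re-requests
        else:
            finish[c] = t
    return finish

print(simulate({"gige_client": 1000.0}))      # gige_client: ~0.8 s alone
print(simulate({"gige_client": 1000.0,        # gige_client: ~451 s when
                "wan_2mbps": 2.0,             # sharing with slow clients
                "wan_8mbps": 8.0}))
```

Even this crude model inflates the Gigabit client's wall time by orders of magnitude once slow-link clients share the queue; it illustrates only the direction of the effect, not the actual AMS serving discipline.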

Future work
- Tests on LAN (100BaseT) with multiple clients
- Tests with write access to the database
- Further tests with QoS