Benchmarking the ATLAS software through the Kit Validation Engine (ATLAS Benchmark, CHEP 2009)
Alessandro De Salvo (1), Franco Brasolin (2)
(1) Istituto Nazionale di Fisica Nucleare, Sezione di Roma; (2) Istituto Nazionale di Fisica Nucleare, Sezione di Bologna

Presentation transcript:

The measurement of the performance of the experiment's software is a very important metric for choosing the most effective resources to use and for discovering the bottlenecks of the code implementations. ATLAS uses two tools: Kit Validation (KV) and the Global Kit Validation portal (GKV).

Global Kit Validation portal (GKV):
- Collects machine logs and information (CPU architecture, family, speed, memory size)
- Benchmark values and timings are extracted from the log files using the perfmon ATLAS facilities
- Web interface and search engine based on MySQL5 and PHP5
- Multiple search options: machine name, CPU type, custom tag, versions, username, OS type
- Dynamic chart generation
- Data aggregation for building tables with benchmarking performance comparisons
- Basic information available to all users; full logs and machine details available to certified users only
- Allows comparisons of different software releases, e.g. memory or CPU used

Kit Validation (KV):
- The installation agent sw-mgr is a shell script that installs and executes KV to test the full ATLAS software chain (Generation, Simulation, Digitization, Reconstruction)
- Custom tests available through an XML file, with optional automatic web download
- Sends the results to GKV using curl or an embedded Python posting engine (via http or https), with full proxy support on certified, authenticated connections (a minimal sketch of such a posting client is shown below, after the benchmark summary)
- Parallel tests on multiple threads
- Used by the whole ATLAS collaboration for both standard and Grid software installations
- In use since 2004: 203,000 validations run by the collaboration on 158 different CPU types (32/64 bit) and 159 software releases/versions

Perfmon monitors the different stages of a job using the hooks in the Gaudi framework and retrieves data from the Auditors via dedicated tools.

The ATLAS tests have been used to define a new CPU benchmark unit for the WLCG collaboration (early 2009). The authors have represented the ATLAS collaboration in the HEPiX benchmarking group since 2008. Charts courtesy of M. Michelotto, INFN Sezione di Padova, HEPiX Benchmark Group.

In the chart a test on a dual quad-core 2.33 GHz machine is shown; the benchmark generates 10,000 events and simulates 100 events. The results show good linear scaling with the machine clock speed: about 8 events/s per core and, with 8 cores, about 64 events/s per box.

[Workflow diagram: test definition and test execution in KV; test definitions are taken from an external source (local file, web server, ...) or from the ATLAS software release.]

[Screenshot: ATLAS Benchmark Summary table from GKV ("4 records found") for the tag CERN_benchmark_IntelXeonE5410_2.33GHz_CSCalt1_8c_8t on an Intel Xeon E5410 2.33 GHz node with an EL .cernsmp kernel; tests: CSC tt Generation, Simulation, Digitization, Reconstruction and OVERALL (full chain); columns: Node, CPU, Cores, Kernel, Tag, Release, Test, Events, Max VMem [MB], Max RSS [MB], Tot Time [s], Time/event [s], Events/s, Events/s (box).]
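The embedded Python posting engine mentioned above is not reproduced on the poster; the following is a minimal, hypothetical sketch of how KV-style results could be posted to a GKV-like endpoint over http/https through a proxy and with a client certificate. The endpoint URL, payload keys, proxy address and certificate paths are illustrative assumptions, not the actual KV/GKV interface.

# Hypothetical sketch of a KV-style result upload to a GKV-like portal.
# The endpoint URL, payload keys and proxy address are illustrative
# assumptions, not the real KV/GKV interface.
import json
import ssl
import urllib.request


def post_kv_results(results, url, proxy=None, cert_file=None, key_file=None):
    """POST a dictionary of benchmark results over http/https.

    proxy     -- optional "http://host:port" proxy, as used when KV runs
                 behind a site proxy
    cert_file -- optional client certificate (with key_file) for
                 authenticated ("certified") connections
    """
    handlers = []
    if proxy:
        # Route both http and https traffic through the given proxy.
        handlers.append(urllib.request.ProxyHandler({"http": proxy,
                                                     "https": proxy}))
    if cert_file:
        # Client-certificate authentication for the https connection.
        context = ssl.create_default_context()
        context.load_cert_chain(cert_file, key_file)
        handlers.append(urllib.request.HTTPSHandler(context=context))
    opener = urllib.request.build_opener(*handlers)

    data = json.dumps(results).encode("utf-8")
    request = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"})
    with opener.open(request, timeout=60) as response:
        return response.status, response.read().decode()


if __name__ == "__main__":
    status, body = post_kv_results(
        {"host": "node01", "test": "CSC tt Simulation",
         "events": 100, "time_per_event_s": 12.5},
        url="https://gkv.example.org/upload",       # hypothetical endpoint
        proxy="http://proxy.example.org:3128")      # hypothetical proxy
    print(status, body)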
[Screenshots of the Global KitValidation Portal: per-stage charts (CSC tt Generation, Simulation, Digitization, Reconstruction and full-chain Overall) compared against the HEPiX/SPEC benchmark units (HEPIX_CINT2000, HEPIX_CINT2006, SPEC_CINT2006, HEPIX_CINT2006RATE, SPEC_CINT2006RATE, HEPIX_CFP2006, HEPIX_CFP2006RATE); "ATLAS Benchmarks - memory" plots of RSS and virtual memory size [MB] for the CSC tt and CSC Z -> e e + jet samples at each stage; "ATLAS Benchmarks - machine comparison" (power factor, events/s) between an Intel Xeon E5410 2.33 GHz box and an Intel Core 2 Quad 2.40 GHz box, with optional CPU speed / HEP-SPEC06 normalization.]
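As a worked check of the throughput figures quoted above (about 8 events/s per core and roughly 64 events/s for the 8-core box), the sketch below derives the Time/event, Events/s and Events/s (box) quantities from an event count and total wall time. The input numbers are invented for illustration, and the per-box figure simply assumes the linear scaling with core count reported in the measurements.

# Worked example of the Time/event, Events/s and Events/s (box) columns.
# The event count and wall time below are invented for illustration; only
# the formulas mirror the quantities shown in the benchmark summary.

def throughput(events, total_time_s, cores):
    """Return (time per event [s], events/s per core, events/s per box).

    The per-box number assumes the linear scaling with the number of
    cores observed in the measurements (one KV job per core)."""
    time_per_event = total_time_s / events
    events_per_s = events / total_time_s        # single-core rate
    events_per_s_box = events_per_s * cores     # whole-node rate
    return time_per_event, events_per_s, events_per_s_box


if __name__ == "__main__":
    # 10,000 generated events in 1250 s on one core of an 8-core node:
    # 0.125 s/event, 8 events/s per core, 64 events/s per box.
    print(throughput(events=10_000, total_time_s=1250.0, cores=8))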