Title of the Poster Supervised By: Prof.*********

Presentation transcript:

Students who did the study
Universities they are affiliated with

Abstract
The RAL Tier1 batch farm consists of several generations of multi-core, hyperthreading-capable CPUs. Increases in the amount of memory per node, combined with experience from other sites, made hyperthreading an attractive option for increasing job throughput. RAL supports all LHC VOs, with the prime users being ATLAS, CMS and LHCb, and about 10% of resources are devoted to non-LHC VOs. The virtual cores provided by hyperthreading could in principle double the batch farm capacity, but the amount of memory available in the batch nodes did not permit that: LHC jobs require more than 2 GB of RAM to run smoothly.

Introduction
A poster is a visual support for discussing your work with other scientists. This poster layout gives some guidelines for the preparation of posters for NRSC2016 and can be used as a template for the conference poster. The authors must produce the poster PowerPoint (PPT) file following this guideline or using this template, and print it in colour. The poster must be printed as a single portrait roll-up page with a resolution of 4750 x 2000 pixels.

Methodology & System Setup
Each generation of batch farm hardware with hyperthreading capability was benchmarked with HEPSPEC, progressively increasing the number of threads up to the total number of virtual cores (an illustrative sketch of such a sweep is appended at the end of this transcript). Benchmarks at that time were conducted under Scientific Linux 5; Scientific Linux 6 benchmarks were run later, as the batch farm was due to be migrated.

Results
- Overall, 2000 extra job slots and 9298 extra HEPSPEC were added to the batch farm using hardware that was already available.
- Average job time increases as expected, but overall job throughput increased (a rough calculation from the table below is appended at the end of this transcript).
- Network, disk, power and temperature usage did not increase in a way that could negatively affect the overall throughput or require additional intervention.
- The batch server was able to handle the extra job slots.
- Of critical importance is the sharp drop in job efficiency as the number of job slots approaches the upper hyperthreading limit: real-world VO jobs would suffer if we went for the full batch farm HEPSPEC performance.

System Parameters

Make     Job slots per WN   Efficiency   Avg job length (mins)   Std deviation (mins)   Number of jobs
Dell     12                 0.9715       297                     370                    19065
Viglen   12                 0.9757       320                     390                    23864
Dell     14                 0.9719       238                     326                    6118
Viglen   14                 0.9767       270                     341                    11249
Dell     16                 0.9859       343                     254                    6550
Viglen   16                 0.985        304                     249                    8756
Dell     18                 0.9781       377                     -                      5014
Viglen   18                 0.9808       350                     391                    6263
Dell     20                 0.9758       318                     346                    11339
Viglen   20                 0.9756       260                     285                    11229
Dell     22                 0.9747       387                     315                    6317
Viglen   22                 0.9783       305                     236                    6307
Dell     24                 0.9257       544                     373                    6650
Viglen   24                 0.9311       372                     278                    6713

Conclusions
- New procurements now take hyperthreading capabilities into account.
- For 2012, dual 8-core CPU systems go up to 32 virtual cores.
- Systems were procured with 128 GB of RAM in order to exploit the full hyperthreading capability.
- Dual gigabit network links were specified, moving to a single 10 Gb link in the future as it becomes more cost effective.
- So far, a software RAID 0 setup has proven sufficient for disk I/O.
- Performance gains so far are on a par with previous generations.
- By spending a bit extra on RAM, we save more by buying fewer nodes; this also saves machine room space, cables and power.

References

Biography
Details about each contributor and a personal photo.

33rd NATIONAL RADIO SCIENCE CONFERENCE (NRSC 2016), www.nrsc2016.aast.edu
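
The sketch below illustrates the kind of benchmark sweep described under "Methodology & System Setup": the HEPSPEC benchmark is run with a progressively larger number of concurrent copies, up to the total number of virtual cores. The wrapper script name, core counts and step size are assumptions for illustration, not part of the poster.

```python
# Minimal sketch of a HEPSPEC thread sweep on one worker node.
# "./run_hepspec.sh" is a hypothetical site wrapper (not from the poster)
# that runs the suite with N concurrent copies and prints the aggregate score.
import subprocess

PHYSICAL_CORES = 8    # example: dual quad-core worker node (assumed value)
VIRTUAL_CORES = 16    # with hyperthreading enabled (assumed value)

def run_hepspec(copies: int) -> float:
    """Run the wrapper with the given number of copies and return the score."""
    out = subprocess.check_output(["./run_hepspec.sh", str(copies)])
    return float(out.decode().strip())

def sweep() -> dict:
    """Benchmark from the physical core count up to all virtual cores."""
    scores = {}
    for copies in range(PHYSICAL_CORES, VIRTUAL_CORES + 1, 2):
        scores[copies] = run_hepspec(copies)
        print(f"{copies:2d} copies -> {scores[copies]:.1f} HEPSPEC")
    return scores

if __name__ == "__main__":
    sweep()
```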
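
As a rough check of the Results claim that jobs get longer with more slots per node yet overall throughput still rises, the following sketch uses the Dell rows for 12 and 24 slots from the System Parameters table. It ignores job-mix differences and the efficiency drop at 24 slots, so it is indicative only.

```python
# Steady-state throughput comparison from two rows of the table above.

def jobs_per_minute(slots: int, avg_job_length_min: float) -> float:
    """Job completions per minute for one fully occupied worker node."""
    return slots / avg_job_length_min

dell_12 = jobs_per_minute(12, 297.0)   # ~0.040 jobs/min
dell_24 = jobs_per_minute(24, 544.0)   # ~0.044 jobs/min
gain = 100 * (dell_24 / dell_12 - 1)   # roughly 9% more jobs per node
print(f"12 slots: {dell_12:.4f} jobs/min, 24 slots: {dell_24:.4f} jobs/min "
      f"(+{gain:.0f}%)")
```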
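
Finally, a back-of-the-envelope check of the memory constraint mentioned in the Abstract and Conclusions: the number of usable job slots is limited by whichever runs out first, virtual cores or RAM, assuming each LHC job needs at least 2 GB. The 16-core, 24 GB "older node" figures are a hypothetical example; the 32-core, 128 GB configuration is the 2012 procurement quoted in the Conclusions.

```python
# Memory-limited job slot count for a worker node.

RAM_PER_JOB_GB = 2.0   # lower bound quoted in the Abstract

def usable_slots(virtual_cores: int, ram_gb: float) -> int:
    """Job slots supported by a node given its virtual cores and memory."""
    memory_limited = int(ram_gb // RAM_PER_JOB_GB)
    return min(virtual_cores, memory_limited)

print(usable_slots(virtual_cores=16, ram_gb=24))    # 12 -> memory is the limit
print(usable_slots(virtual_cores=32, ram_gb=128))   # 32 -> full hyperthreading,
                                                    # 4 GB per slot
```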