WLCG Memory Requirements

At the MB at the end of 2015 we agreed that the experiments did not want to require more than 2 GB of RAM per core as a default, and for the most part they will continue to require only 2 GB/core. However, there are clearly several workloads that need, or benefit from, larger-memory machines. There are a number of mechanisms for provisioning this:

1. Buy machines with more than 2 GB/core. This may be an unnecessary cost if the majority of the workloads still fit within 2 GB; buying 4 GB/core costs approximately 20% more than 2 GB/core. (With recent machines that have 10 cores per CPU this problem is largely side-stepped, as the proposed configurations provide ~3 GB/core.)

2. Switch off hyper-threading so that the real cores have twice as much memory available. This costs approximately 20% in performance compared to having HT on.

3. When a job requests more than 2 GB, allocate 2 cores at execution time to provide twice as much RAM, and accept the lower efficiency of the second core not being fully utilised.

Options 1 and 2 require either planning at purchase time or up-front configuration of the machine. Option 3 is attractive because it allows the site to be flexible in its provisioning from one job to another, and the effective loss of CPU efficiency is acceptable as long as such jobs are not the majority of the workload. All three options require a mechanism to request the appropriate resources from the batch system (or equivalent), and to be able to provision them correctly.

15 Nov 2013   Ian.Bird@cern.ch
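
As an illustration only (not part of the original slide), the following is a minimal sketch of the option-3 arithmetic, assuming a hypothetical site that provisions a fixed 2 GB of RAM per core and simply scales the number of allocated cores to cover a job's memory request:

    import math

    # Hypothetical site ratio: 2 GB of RAM per core (in MB).
    MEMORY_PER_CORE_MB = 2 * 1024

    def cores_for_request(requested_memory_mb: int) -> int:
        """Number of cores to allocate so the job's memory request fits
        within the standard 2 GB/core ratio (option 3 above)."""
        return max(1, math.ceil(requested_memory_mb / MEMORY_PER_CORE_MB))

    # Example: a 3.5 GB job is given 2 cores (and hence 4 GB of RAM);
    # the second core may sit partly idle, which is the efficiency cost
    # discussed above.
    print(cores_for_request(3584))  # -> 2
    print(cores_for_request(2048))  # -> 1

The same rounding rule extends beyond two cores (a 5 GB request would map to 3 cores, i.e. 6 GB), which is the per-job flexibility that makes option 3 attractive at the price of partially idle cores.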