Faucets: Efficient Utilization of Multiple Clusters


Faucets: Efficient Utilization of Multiple Clusters
Laxmikant Kale, Jayant DeSouza, Sameer Kumar, Sindhura Bandhakavi, Mani Potnuru
Parallel Programming Laboratory, Department of Computer Science
University of Illinois at Urbana-Champaign
http://charm.cs.uiuc.edu/
Charm++ Workshop 2002

Outline
- Motivation
- The Faucets solution and the Adaptive Jobs solution
- Faucets: job submission and job monitoring
- Adaptive jobs and performance results
- Adaptive Queuing System: simulations and performance results
- Future work

Motivation
- Demand for high-end compute power, but:
  - It is dispersed: which machine will return my results quickest?
  - It is hard to use: ssh in, ftp the files, pick a queue, write a script, submit, then monitor the running job
  - Because of the hassle, users keep submitting the same script to the same machine even when a better alternative exists
- Low operational efficiency of existing computing systems

Solution 1: Faucets
Addresses Motivation #1: compute power is dispersed and hard to use.
- A central source of compute power connecting users and providers of compute resources
- A user account is not needed on every resource
- Matches users with providers: a market economy? QoS requirements, contracts, and bidding systems
- GUI or web-based interface for submission and monitoring

Faucets
[Diagram: users submit job specs and files to Faucets, which collects bids from the clusters, forwards the job and returns a job ID, and lets the user monitor the running job]
- Parallel systems need to maximize their efficiency
- The efficiency metrics are profit and utilization
http://charm.cs.uiuc.edu/research/faucets

Motivation #2: Inefficient Utilization
Example: a 16-processor system
- Job A (10 processors) is allocated
- Job B (8 processors) arrives: conflict, so B is queued while 6 processors sit idle (a form of external fragmentation)
- Current job schedulers can therefore have low system utilization

Motivation #2, contd.
- Chun and Culler compare FirstPrice (market-based scheduling) with PrioFIFO
  - Up to 2.5x improvement as the degree of job parallelism increases
  - Both suffer from "head-of-line" blocking
- Adaptive jobs fix this
Reference: Brent Chun and David Culler, "User-centric Performance Analysis of Market-based Cluster Batch Schedulers", CCGrid 2002.

Solution 2: Adaptive Jobs
- Jobs that can shrink or expand the number of processors they run on at runtime
- Goal: improve system utilization and response time
- Job properties:
  - min_pe, related to the memory requirements of the job
  - max_pe, related to its speedup

Adaptive Job Scheduler
- The scheduler can exploit this adaptivity to improve system utilization and response time
- Scheduling decisions:
  - Shrink existing jobs when a new job arrives
  - Expand jobs to use all processors when a job finishes
- A processor map is sent to each job: a bit vector specifying which processors the job is allowed to use, e.g. 00011100 means "use processors 3, 4, and 5" (decoded in the sketch below)
- Regular (non-adaptive) jobs are also handled
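To make the bit-vector processor map concrete, here is a minimal C++ sketch; the helper allowedProcessors is hypothetical and not part of the Faucets or Charm++ API, it simply decodes the example map from this slide:

```cpp
#include <iostream>
#include <string>
#include <vector>

// Decode a processor-map bit vector such as "00011100" into the ranks the
// job is allowed to use; bit i set means processor i is available to the job.
std::vector<int> allowedProcessors(const std::string &map) {
  std::vector<int> procs;
  for (std::size_t i = 0; i < map.size(); ++i)
    if (map[i] == '1') procs.push_back(static_cast<int>(i));
  return procs;
}

int main() {
  for (int p : allowedProcessors("00011100"))
    std::cout << "may use processor " << p << "\n";   // prints 3, 4, 5
  return 0;
}
```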

Two Adaptive Jobs
A 16-processor system with Job A (min_pe = 1, max_pe = 10) and Job B (min_pe = 8, max_pe = 16):
- Allocate A: Job A starts and expands up to its max_pe
- Allocate B: Job A shrinks so that Job B can be given at least its min_pe of 8 processors
- B finishes: Job A expands again onto the freed processors

Outline
- Motivation
- The Faucets solution and the Adaptive Jobs solution
- Faucets: job submission and job monitoring
- Adaptive jobs and performance results
- Adaptive Queuing System: simulations and performance results
- Future work

Faucets: Job Submission

Submission Mechanism
- QoS requirements, contract, and bidding:
  - type and number of processors
  - memory
  - estimated compute time, or a table of processors vs. compute time
  - deadline
  - price
- Authentication and security
- Accounting
- Cluster bartering
(See the illustrative job-spec sketch below.)
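The slides do not show the concrete contract format, so the following C++ struct is only a hypothetical illustration of what a QoS job specification along these lines might contain:

```cpp
#include <map>
#include <string>

// Hypothetical QoS job specification (illustration only; not the actual
// Faucets contract format).
struct JobSpec {
  std::string executable;           // program uploaded with the job
  int minProcs = 1;                 // number/type of processors requested
  int maxProcs = 1;
  long memoryMB = 0;                // memory requirement
  std::map<int, double> runtime;    // processors -> estimated compute time (s)
  double deadlineSeconds = 0;       // QoS deadline
  double priceOffered = 0;          // bid for the contract
};
```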

Faucets (architecture diagram shown again)
[Same diagram as before: job specs, bids, file upload, job IDs, and job monitoring among users, Faucets, and the clusters]

Job Monitoring: Appspector

Using Appspector
- Built on the Charm++ client-server (CCS) interface
- A default server and a default Java client are provided
- Users can also write their own:
  - program code that sends the relevant data (see the sketch below)
  - a Java class that displays the data
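As an example of the "program code to send relevant data" side, here is a minimal sketch using the Converse CCS calls CcsRegisterHandler and CcsSendReply; the handler name "perfsnapshot", the reply string, and the registration function are made up for illustration and are not Appspector's actual protocol:

```cpp
#include "charm++.h"
#include "conv-ccs.h"    // Converse client-server (CCS) interface
#include <cstring>

// CCS handler: a remote client (e.g. a Java GUI) sends a request tagged
// "perfsnapshot" and gets back whatever string the program chooses to report.
static void perfHandler(char *msg) {
  const char *reply = "step=42 utilization=0.87";   // made-up payload
  CcsSendReply(std::strlen(reply) + 1, reply);
  // (cleanup of the request message is omitted in this sketch)
}

// Call once per process at startup, e.g. from a Charm++ initproc routine.
void registerMonitoring() {
  CcsRegisterHandler("perfsnapshot", (CmiHandler) perfHandler);
}
```

A custom Java client would then connect through CCS, send a "perfsnapshot" request, and render the reply however it likes.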

Clusters Status View

Adaptive Jobs

Adaptive Job Framework
- Applications are written in MPI or Charm++
- The scheduler controls the processor map for each job
- The processor map is consumed by the job's load balancer
[Diagram: the scheduler passes the processor map to the adaptive application, which runs on AMPI / Charm++ with its load balancer on top of Converse]
- Builds on the Charm++ framework

Charm++
- Object-based virtualization: the program is written as a large number of objects that can migrate
- The number of objects is typically much larger than the number of processors
- The load balancer can remap objects to processors
- Measurement-based load balancing
- Charm++ is a data-driven, message-passing language (see the sketch below)
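To give a flavor of what a migratable Charm++ object looks like, here is a minimal sketch of a 1D chare array element with a pup routine and AtSync-based measurement load balancing. It is not Faucets code: the class name Worker is illustrative, and the .ci interface file, main chare, and build details are only indicated in comments.

```cpp
// Matching .ci declaration (the generated headers come from charmc):
//   array [1D] Worker { entry Worker(); entry void step(); };
#include "worker.decl.h"
#include "pup_stl.h"
#include <vector>

class Worker : public CBase_Worker {
  std::vector<double> data;          // per-object state; migrates with the object
public:
  Worker() { usesAtSync = true; }    // participate in AtSync load balancing
  Worker(CkMigrateMessage *m) {}     // migration constructor

  void pup(PUP::er &p) {             // serialize/deserialize for migration
    CBase_Worker::pup(p);
    p | data;
  }

  void step() {
    // ... one iteration of work on this object's share of the problem ...
    AtSync();                        // the runtime may now migrate objects
  }
  void ResumeFromSync() { thisProxy[thisIndex].step(); }  // continue after LB
};

#include "worker.def.h"
```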

Adaptive Charm++ Programs
- A Charm++ program is automatically adaptive if a shrink/expand-enabled centralized load-balancing strategy is used
- Currently CommLB and RandCentLB are shrink/expand enabled
- Compile with -module CommLB and run with +balancer CommLB

MPI Jobs
- How do we make MPI jobs adaptive? With AMPI
- AMPI maps MPI processes onto user-level threads, which can migrate
- Each thread is embedded in a Charm++ object, enabling load balancing and shrink/expand
- Builds on the Charm++ framework

Adaptive AMPI Programs
- Build AMPI with an adaptive load-balancing strategy
- Call MPI_Migrate() at regular intervals in each MPI process; otherwise the job never acts on an updated processor map (see the sketch below)
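A minimal AMPI-flavored sketch of that pattern follows; it assumes the MPI_Migrate() extension named on this slide (an AMPI collective, not part of standard MPI), and the iteration counts are arbitrary:

```cpp
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
  int rank;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  for (int iter = 0; iter < 1000; ++iter) {
    /* ... one iteration of computation and MPI communication ... */

    if (iter % 20 == 0) {
      /* Collective point at which AMPI may migrate its user-level threads
         according to the current processor map from the scheduler. */
      MPI_Migrate();
    }
  }

  MPI_Finalize();
  return 0;
}
```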

Performance Results for Adaptive Jobs

Shrink/Expand Overhead
Performance for an MD program with 10 MB of migrated data per processor on NCSA Platinum:

  Processors    Shrink time (s)    Expand time (s)
  16 <-> 8           0.56               0.49
  32 <-> 16          0.59               0.46
  64 <-> 32          0.66               0.54
  128 <-> 64         0.61               0.50

Residual Processes
- On a shrink, objects are migrated off the processors removed from the job's allocation onto the remaining allocated processors
- A residual process is left behind on each vacated processor
- More work is being done on the load balancer; many strategies have been implemented

Effect of Residual Processes
Overhead of residual processes on a 16-processor system:

  Jobs in system    Performance cost (%)
        2                  1.98
        4                  1.43
        8                  3.24

[Chart: performance of Job 1 and Job 2, plotting utilization (%) against time (s)]

Adaptive Queuing System

AQS Features
- Multithreaded
- Reliable and robust: tested on the cool.cs Linux cluster at PPL
- Supports most features of standard queuing systems
- Manages adaptive jobs, currently implemented in Charm++ and MPI
- Also handles regular (non-adaptive) jobs

AQS Scheduling Strategy
- A library component that decides which jobs to schedule
- Similar to equipartitioning [N. Islam et al.]
- On job arrival and on job completion:
  - All running jobs (and, on arrival, the new job) are first allocated their minimum number of processors
  - Leftover processors are shared equally, subject to each job's maximum processor count
  - If the new job cannot be given its minimum number of processors, it is queued
(A simplified sketch of this policy follows.)
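The following C++ function is my own simplified reconstruction of that policy, not the actual AQS strategy code: every job first receives its min_pe, then the leftover processors are handed out round-robin up to each job's max_pe.

```cpp
#include <vector>

struct Job { int min_pe; int max_pe; int alloc = 0; };

// Allocate processors to all running jobs plus the new one.  Returns false
// (i.e. queue the new job) if even the minimums do not fit.
bool equipartition(std::vector<Job> &jobs, int totalProcs) {
  int needed = 0;
  for (const Job &j : jobs) needed += j.min_pe;
  if (needed > totalProcs) return false;

  for (Job &j : jobs) j.alloc = j.min_pe;   // everyone gets its minimum
  int leftover = totalProcs - needed;

  // Share leftover processors one at a time, round-robin, capped by max_pe.
  bool gaveOne = true;
  while (leftover > 0 && gaveOne) {
    gaveOne = false;
    for (Job &j : jobs) {
      if (leftover == 0) break;
      if (j.alloc < j.max_pe) { ++j.alloc; --leftover; gaveOne = true; }
    }
  }
  return true;
}
```

The round-robin pass is just one way to "share equally"; the real strategy library may divide the surplus differently.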

Simulated Utilization

Simulated Mean Response Time (MRT)

Experimental Utilization

Experimental Mean Response Time (MRT)

Summary and Future Work
- Ease of use: Faucets
- Better utilization: Charm++/AMPI adaptive jobs
- Download from http://charm.cs.uiuc.edu/research/faucets
Future work:
- Extend the system to other parallel machines
- Eliminate residual processes
- Integrate the scheduler with Globus
- Develop more comprehensive QoS contracts
- Add more sophisticated bidding schemes to the Faucets framework, including memory, deadline, profit, etc.