Distributed Low-Latency Scheduling


Sparrow: Distributed Low-Latency Scheduling. Kay Ousterhout, Patrick Wendell, Matei Zaharia, Ion Stoica

Sparrow schedules tasks in clusters using a decentralized, randomized approach. It supports constraints and fair sharing, and provides response times within 12% of ideal.

Scheduling setting: a MapReduce/Spark/Dryad job is submitted to a scheduler and divided into many parallel tasks (diagram).

Job Latencies Rapidly Decreasing (timeline, log-scale latency axis from 10 min down to 1 ms): 2004 MapReduce batch job (~10 min), 2009 Hive query, 2010 Dremel query, 2010 in-memory Spark query, 2012 Impala query, 2013 Spark streaming.

Scheduling challenges: millisecond latency, quality placement, fault tolerance, high throughput.

Scheduler throughput needed for 1000 16-core machines, by task duration (overlaid on the job-latency timeline above): 10-min tasks → 26 decisions/second; 10-sec tasks → 1.6K decisions/second; 100-ms tasks → 160K decisions/second; 1-ms tasks → 16M decisions/second.
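A quick back-of-the-envelope check of these numbers, sketched in Python (assuming a fully utilized cluster, so required decisions/second = total cores / task duration; the slide truncates 26.7 to 26):

```python
# Required scheduling throughput for a fully utilized cluster:
# decisions/second = total cores / task duration (in seconds).
CORES = 1000 * 16  # 1000 16-core machines

for label, duration_s in [("10 min", 600), ("10 sec", 10),
                          ("100 ms", 0.1), ("1 ms", 0.001)]:
    print(f"{label:>7} tasks -> {CORES / duration_s:,.0f} decisions/second")
```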

Less centralization moves from today's completely centralized schedulers toward Sparrow's completely decentralized approach. Today (completely centralized): millisecond latency ✗, quality placement ✓, fault tolerant ✗, high throughput ✗. Sparrow (completely decentralized): millisecond latency ✓, quality placement ?, fault tolerant ✓, high throughput ✓.

Sparrow, a decentralized approach: existing randomized approaches, batch sampling, late binding, analytical performance evaluation, handling constraints, fairness and policy enforcement. Result: within 12% of ideal on 100 machines.

Scheduling with Sparrow: multiple schedulers place jobs directly on workers, with no central coordination (diagram).

Random placement: a scheduler assigns each task to a worker chosen uniformly at random (diagram).

Simulated results. Omniscient: an infinitely fast centralized scheduler. 100-task jobs in a 10,000-node cluster, exponentially distributed task durations.
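To make the comparison concrete, here is a minimal Python sketch of the idea behind these simulations. It is a toy single-snapshot model with made-up queue lengths, not the paper's event-driven simulator: a job is as slow as its longest queue, and an omniscient scheduler with global visibility simply picks the emptiest workers.

```python
import random

def job_response_proxy(load, placements):
    """A job finishes when its last task does, so the longest queue
    among its chosen workers is a proxy for job response time."""
    return max(load[w] for w in placements)

N, M = 10_000, 100                                 # workers, tasks per job
load = [random.randint(0, 4) for _ in range(N)]    # synthetic queue lengths

random_place = [random.randrange(N) for _ in range(M)]
omniscient = sorted(range(N), key=lambda w: load[w])[:M]  # global visibility

print("random placement: ", job_response_proxy(load, random_place))
print("omniscient:       ", job_response_proxy(load, omniscient))
```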

Per-task sampling (power of two choices): for each task, the scheduler probes two workers chosen at random and places the task on the less loaded one (diagram).

Simulated results: 100-task jobs in a 10,000-node cluster, exponentially distributed task durations.

Response Time Grows with Tasks/Job! (70% cluster load)

Per-task sampling: for each of the job's tasks (Task 1, Task 2), the scheduler probes separate random workers and places the task on the less loaded of its probes (diagram).
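A minimal Python sketch of per-task sampling (assumed representation: per-worker queue lengths in a list; `d`, `num_tasks`, and the demo sizes are illustrative, not Sparrow's actual data structures):

```python
import random

def per_task_sampling(queue_lengths, num_tasks, d=2):
    """For each task, probe d random workers and place the task on the
    probed worker with the shortest queue (power of two choices)."""
    placements = []
    for _ in range(num_tasks):
        probed = random.sample(range(len(queue_lengths)), d)
        best = min(probed, key=lambda w: queue_lengths[w])
        queue_lengths[best] += 1        # the task joins that worker's queue
        placements.append(best)
    return placements

queues = [random.randint(0, 4) for _ in range(10_000)]
print(per_task_sampling(queues, num_tasks=100)[:10])
```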

Batch sampling: probe d·m workers for an m-task job (here 4 probes, with m = 2 tasks and d = 2) and place the m tasks on the least loaded of the d·m probed workers (diagram).
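The same sketch adapted to batch sampling, where the probes for all of a job's tasks are pooled (again a hedged illustration, not Sparrow's actual RPC protocol):

```python
import random

def batch_sampling(queue_lengths, num_tasks, d=2):
    """Probe d*m distinct workers for an m-task job, then place the
    m tasks on the m least-loaded probed workers."""
    probed = random.sample(range(len(queue_lengths)), d * num_tasks)
    chosen = sorted(probed, key=lambda w: queue_lengths[w])[:num_tasks]
    for w in chosen:
        queue_lengths[w] += 1           # each task joins its worker's queue
    return chosen

queues = [random.randint(0, 4) for _ in range(10_000)]
print(batch_sampling(queues, num_tasks=100)[:10])
```

Pooling the probes helps because a job must wait for its slowest task: batch sampling spreads all m tasks over the best of d·m samples instead of letting each task settle for the best of only d.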

Per-task versus Batch Sampling (70% cluster load)

Simulated results: 100-task jobs in a 10,000-node cluster, exponentially distributed task durations.

Queue length is a poor predictor of wait time: a queue of two short tasks (80 ms + 155 ms) means less wait than a shorter queue holding one 530 ms task (diagram). This causes poor performance on heterogeneous workloads.

Late binding: as in batch sampling, probe d·m workers for an m-task job (4 probes, d = 2), but the probes enqueue reservations rather than tasks. When a reservation reaches the front of its queue, the worker requests a task from the scheduler, so each task is bound to the first worker actually ready to run it (diagram).
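A stripped-down sketch of the late-binding handshake (class and method names are hypothetical; the real system does this over RPCs between schedulers and worker-side daemons):

```python
from collections import deque

class LateBindingJob:
    """Late binding: probes enqueue reservations, not tasks. A task is
    chosen only when some worker is actually ready to run it."""
    def __init__(self, tasks):
        self.pending = deque(tasks)     # tasks not yet launched

    def get_task(self):
        # Called by a worker whose reservation reached the head of its
        # queue. Late responders get None (a no-op) once all tasks are
        # claimed by faster workers.
        return self.pending.popleft() if self.pending else None

# A 2-task job sends d*m = 4 reservations (d = 2); workers free up in
# some order and each asks the scheduler for work as it becomes ready.
job = LateBindingJob(tasks=["t1", "t2"])
for _ in range(4):                      # four reservations, four callbacks
    print(job.get_task())               # t1, t2, None, None
```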

Simulated results: 100-task jobs in a 10,000-node cluster, exponentially distributed task durations.

What about constraints?

Job constraints: restrict the probed machines to those that satisfy the constraint (diagram).

Per-task constraints: probe separately for each task, sampling only among the machines that satisfy that task's constraint (diagram).
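A sketch of constrained probing (the worker records and the `satisfies` predicate are invented for illustration, e.g. "holds this task's input data"):

```python
import random

def probe_with_constraint(workers, satisfies, d=2):
    """Per-task constraint handling: sample this task's d probes only
    from the workers that satisfy its constraint."""
    eligible = [w for w in workers if satisfies(w)]
    return random.sample(eligible, min(d, len(eligible)))

workers = [{"id": i, "has_input": i % 3 == 0} for i in range(12)]
print(probe_with_constraint(workers, lambda w: w["has_input"]))
```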

Technique recap: batch sampling + late binding, plus constraint handling (diagram).

How does Sparrow perform on a real cluster?

Spark on Sparrow: a query is a DAG of stages, and Spark submits each stage to the Sparrow scheduler as a separate job to be placed on workers (diagram).

How does Sparrow compare to Spark’s native scheduler? (100 16-core EC2 nodes, 10 tasks/job, 10 schedulers, 80% load)

TPC-H Queries: Background. TPC-H: common benchmark for analytics workloads. Shark: SQL execution engine. Spark: distributed in-memory analytics framework. Sparrow schedules underneath this stack (diagram).

TPC-H Queries: response-time percentiles (5th, 25th, 50th, 75th, 95th). 100 16-core EC2 nodes, 10 schedulers, 80% load.

TPC-H Queries: within 12% of ideal, with a median queueing delay of 9 ms. 100 16-core EC2 nodes, 10 schedulers, 80% load.

Fault tolerance: when Scheduler 1 fails (✗), Spark Client 1 fails over to Scheduler 2. Timeout: 100 ms; failover: 5 ms; re-launch queries: 15 ms (diagram).
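One plausible reading of those numbers as client-side logic, sketched with a stub RPC (nothing here is Sparrow's real API; the function names, addresses, and failure simulation are illustrative):

```python
import time

DEAD = {"scheduler-1"}                  # stand-in for a crashed scheduler

def send_job(addr, job, timeout):
    """Stub RPC standing in for the real scheduler connection."""
    if addr in DEAD:
        time.sleep(timeout)             # silence until the client times out
        raise TimeoutError(addr)
    return f"{job} accepted by {addr}"

def submit_with_failover(job, schedulers, timeout=0.100):
    """A scheduler silent past the 100 ms timeout is declared failed; the
    client fails over to the next scheduler and re-launches its queries."""
    for addr in schedulers:
        try:
            return send_job(addr, job, timeout)
        except TimeoutError:
            continue                    # fail over, then re-launch the query
    raise RuntimeError("no scheduler available")

print(submit_with_failover("query-1", ["scheduler-1", "scheduler-2"]))
```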

When does Sparrow not work as well? High cluster load

Related work: centralized task schedulers (e.g., Quincy); two-level schedulers (e.g., YARN, Mesos); coarse-grained cluster schedulers (e.g., Omega); load balancing (single task).

Conclusion: batch sampling + late binding with constraint handling let Sparrow provide near-ideal job response times without global visibility (diagram). www.github.com/radlab/sparrow

Backup Slides

Policy enforcement at each slave. Priorities: serve queues based on strict priorities (e.g., a high-priority queue and a low-priority queue). Fair shares: serve queues using weighted fair queuing (e.g., User A 75%, User B 25%). Can we do better without losing simplicity?
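A toy sketch of both worker-side policies (queue names, weights, and the probabilistic approximation of weighted fair queuing are all illustrative assumptions, not Sparrow's implementation):

```python
import random
from collections import deque

def pick_queue(queues, weights=None):
    """Decide which per-user queue a freed slot serves next.
    With no weights: strict priorities, serve the first non-empty queue
    (queues dict ordered highest priority first). With weights: choose a
    non-empty queue with probability proportional to its weight, which
    approximates weighted fair queuing over time."""
    nonempty = [name for name, q in queues.items() if q]
    if not nonempty:
        return None
    if weights is None:
        return nonempty[0]              # dicts preserve insertion order
    return random.choices(nonempty,
                          weights=[weights[n] for n in nonempty])[0]

queues = {"user_a": deque(["a1", "a2"]), "user_b": deque(["b1"])}
print(pick_queue(queues))                                    # strict priority
print(pick_queue(queues, weights={"user_a": 0.75, "user_b": 0.25}))
```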