Optimizing Interactive Analytics Engines for Heterogeneous Clusters


Optimizing Interactive Analytics Engines for Heterogeneous Clusters
Ashwini Raina, MS Thesis
Part of the EuroSys'18 paper "Popular is Cheaper: Curtailing Memory Costs in Interactive Analytics Engines"

Talk Outline (chronological order of the work):
- Baseline Getafix
- Query Routing
- Segment Balancing
- Capacity-aware Getafix
- Stragglers
- Auto-tiering

Interactive Analytics Engines

Druid

Replication: Goal vs. a Better Goal (figure)

Baseline Getafix
- Assumption: equal-capacity compute nodes (CNs)
- Example: segments S1, S2, S3, S4 with query loads 6, 3, 2, 1; compute nodes CN1, CN2, CN3
- Goal: a load-balanced assignment with the least amount of replication
- Per-node capacity = (6 + 3 + 2 + 1) / 3 = 4

Baseline Getafix
- Best Fit provably achieves the optimal replica count
- In the example (S1-S4 with loads 6, 3, 2, 1), the allocation places 1 replica on CN1 and 2 replicas each on CN2 and CN3
- Are there any side-effects of such an allocation?
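
The slides do not show the allocation pseudocode; the sketch below is one plausible best-fit scheme under the stated assumptions (equal-capacity nodes, segment query loads as bin items), with all names illustrative. A segment whose load exceeds any single node's remaining capacity is split, and each node holding part of its load stores one replica:

```python
def best_fit_allocate(segment_loads, num_nodes):
    """Best-fit placement of segment query loads onto equal-capacity nodes.

    Illustrative sketch, not the paper's exact algorithm. Returns one
    {segment: load_share} dict per node; a segment appearing on k nodes
    has k replicas.
    """
    capacity = sum(segment_loads.values()) / num_nodes
    remaining = [capacity] * num_nodes
    nodes = [{} for _ in range(num_nodes)]
    eps = 1e-9

    # Place the most popular (highest-load) segments first.
    for seg, load in sorted(segment_loads.items(), key=lambda kv: -kv[1]):
        while load > eps:
            fitting = [j for j in range(num_nodes) if remaining[j] >= load - eps]
            if fitting:
                # Best fit: the fullest node that can still hold the whole load.
                j = min(fitting, key=lambda n: remaining[n])
                share = load
            else:
                # No single node fits: fill the emptiest node, replicate the rest.
                j = max(range(num_nodes), key=lambda n: remaining[n])
                share = remaining[j]
            nodes[j][seg] = share
            remaining[j] -= share
            load -= share
    return nodes

# The slide example: loads 6, 3, 2, 1 on 3 nodes of capacity 4 each.
plan = best_fit_allocate({"S1": 6, "S2": 3, "S3": 2, "S4": 1}, 3)
```

On the slide's numbers this yields per-node replica counts of 1, 2, 2 (5 replicas total), matching the figure.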

Baseline Getafix
- Figures: average query latency vs. replication factor; tail (99th percentile) query latency vs. replication factor
- Tail latency is 30% worse compared to Scarlett

Query Routing

Building Druid diagnostics

Query Routing
Load based:
- Minimum Load: CNs piggyback "load indicators" on query responses to the broker; the broker routes each new query to the lowest-loaded CN
- Connection Count: each broker maintains a count of total open connections (outstanding query responses) and routes each new query to the CN with the lowest connection count
Allocation based:
- Potion routing: in each round, Getafix outputs a segment-to-CN allocation map; each broker routes queries so as to preserve the ratios in that allocation map
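
The connection-count scheme can be sketched as follows (the class and its names are illustrative, not Druid's actual broker API):

```python
class ConnectionCountBroker:
    """Illustrative sketch of connection-count query routing.

    The broker counts open connections (outstanding query responses) per
    compute node and sends each new query to the replica holder with the
    fewest open connections.
    """

    def __init__(self, replica_map):
        # replica_map: segment -> list of CNs holding a replica of it
        self.replica_map = replica_map
        all_nodes = {cn for cns in replica_map.values() for cn in cns}
        self.open_connections = {cn: 0 for cn in all_nodes}

    def route(self, segment):
        # Among replica holders, pick the CN with the fewest open connections.
        cn = min(self.replica_map[segment],
                 key=lambda n: self.open_connections[n])
        self.open_connections[cn] += 1   # query dispatched, response pending
        return cn

    def on_response(self, cn):
        self.open_connections[cn] -= 1   # response arrived, connection closed
```

Ties fall back to list order here; a production broker would also need timeouts and failure handling.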

Query Routing Observations
- Minimum Load (load based): load information is inaccurate/stale
- Connection Count (load based): the simple scheme works surprisingly well
- Potion routing (allocation based): lags segment popularity trends
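
The slides name potion routing but give no pseudocode; below is one illustrative, deterministic way for a broker to preserve the allocation map's ratios (a deficit-based scheme; the class and its details are assumptions, not the paper's exact method):

```python
class PotionRouter:
    """Illustrative ratio-preserving router for one segment's queries.

    Each round, Getafix emits a segment-to-CN allocation map with load
    shares; the broker routes queries so that realized per-node counts
    track those target ratios.
    """

    def __init__(self, allocation):
        # allocation: CN -> share of this segment's queries (any positive weights)
        total = sum(allocation.values())
        self.target = {cn: s / total for cn, s in allocation.items()}
        self.sent = {cn: 0 for cn in allocation}
        self.total = 0

    def route(self):
        self.total += 1
        # Pick the node furthest behind its target share (largest deficit).
        cn = min(self.target,
                 key=lambda n: self.sent[n] - self.target[n] * self.total)
        self.sent[cn] += 1
        return cn
```

Routing 6 queries with shares {CN1: 2, CN2: 1} sends 4 to CN1 and 2 to CN2, preserving the 2:1 ratio.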

Segment imbalance

Segment imbalance

Segment Balancer
- Greedy algorithm
- Reduces the max memory utilization of a CN
- Reduces query latency
- Segment-balanced Getafix vs. baseline Getafix: 32% reduction in max memory, 15% reduction in query latency
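
The slides say only that the balancer is greedy; the sketch below is one plausible greedy heuristic (an assumption, not the thesis's exact algorithm): repeatedly move the smallest segment from the most-loaded node to the least-loaded node while that strictly lowers the maximum utilization.

```python
def greedy_balance(nodes):
    """Illustrative greedy rebalancer (heuristic is an assumption).

    nodes: dict CN -> dict segment -> memory size. Mutates and returns
    the placement with a reduced maximum per-node memory utilization.
    """
    def util(cn):
        return sum(nodes[cn].values())

    while True:
        hot = max(nodes, key=util)
        cold = min(nodes, key=util)
        if not nodes[hot]:
            break
        seg, size = min(nodes[hot].items(), key=lambda kv: kv[1])
        # Move only if it strictly reduces the maximum utilization.
        if util(cold) + size >= util(hot):
            break
        del nodes[hot][seg]
        nodes[cold][seg] = size
    return nodes
```

Each accepted move strictly decreases the sum of squared node utilizations, so the loop terminates.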

Capacity-aware Getafix
- Same example: segments S1-S4 with loads 6, 3, 2, 1 on CN1-CN3; per-node capacity = (6 + 3 + 2 + 1) / 3 = 4
- The equal-capacity assumption does not hold in heterogeneous environments

Capacity-aware Getafix
- Estimates compute-node capacities dynamically from the CPU time spent processing queries; higher CPU time implies higher capacity
- Performs weighted allocation based on capacity
- Up to 23% reduction in tail latency, up to 27% reduction in memory, up to 16% reduction in makespan
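
The weighting step can be sketched as below (function and names are illustrative): the equal split of total load across nodes is replaced by shares proportional to each node's observed CPU time.

```python
def weighted_capacities(cpu_times, total_load):
    """Sketch of capacity-weighted allocation (names are illustrative).

    cpu_times: CN -> CPU time recently spent serving queries; a node that
    got through more CPU-time of work is treated as higher capacity.
    Returns CN -> share of the total segment load it should receive,
    replacing baseline Getafix's equal split of total_load / num_nodes.
    """
    total_cpu = sum(cpu_times.values())
    return {cn: total_load * t / total_cpu for cn, t in cpu_times.items()}

# A node with twice the observed CPU time receives twice the load share.
shares = weighted_capacities({"CN1": 30.0, "CN2": 20.0, "CN3": 10.0}, total_load=12)
```

The same mechanism handles stragglers for free: a straggling node reports less CPU time, so its share shrinks automatically.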

Stragglers
- Capacity-awareness automatically addresses stragglers: straggling nodes report lower CPU time, get classified as lower-capacity nodes, and receive a lower segment query-time allocation
- 55% reduction in tail latency, 18% reduction in memory

Cluster Auto-tiering
- Today, sysadmins manually tier the cluster, assigning hot (popular) segments to powerful nodes; this rule-based assignment is not fully tuned to changes in popularity, and is laborious and costly
- Capacity-awareness auto-tiers the cluster
- Figures compare baseline Getafix and Getafix-H; each time slice is one Getafix round, and darker color means higher popularity
- 75% tiering accuracy, 80% better than baseline Getafix

Results Summary

% improvement over Getafix baseline:
- Heterogeneous cluster: tail latency (99th) 23%, memory 27%, makespan 16%, tiering accuracy 80%
- Stragglers: tail latency (99th) 55%, memory 28%, makespan not measured

Improvement over Scarlett:
- Tail latency (99th): Getafix baseline -30% (i.e., 30% worse), Getafix-H 9%
- Memory: Getafix baseline 1.45-2.15X, Getafix-H 2-3X

Lessons
- "A plot is worth a thousand logs"
- Expand your work from the core
- Replay-able runs => consistent results
- Don't push AWS keys to GitHub

Thank You.