Lecture 29: Distributed Systems CS 105 May 8, 2019 Slides drawn heavily from Vitaly's CS 5450, Fall 2018 https://pages.github.coecis.cornell.edu/cs5450/website/schedule.html
Conventional HPC System
- Compute nodes: high-end processors, lots of RAM
- Network: specialized, very high performance
- Storage server: RAID disk array
Conventional HPC Programming Model
- Programs described at a very low level: detailed control of processing and scheduling
- Rely on a small number of software packages written by specialists, which limits the problems and solution methods that can be used
Typical HPC Operation
- Characteristics: long-lived processes, make use of spatial locality, hold all data in memory, high-bandwidth communication
- Strengths: high utilization of resources, effective for many scientific applications
- Weaknesses: requires careful tuning of applications to resources, intolerant of any variability
HPC Fault Tolerance
- Checkpoint: periodically store the state of all processes; significant I/O traffic
- Restore after failure: reset state to the last checkpoint; all intervening computation is wasted
- Performance scaling: very sensitive to the number of failing components
Datacenters
Ideal Cluster Programming Model
- Applications written in terms of high-level operations on the data
- Runtime system controls scheduling, load balancing
MapReduce
MapReduce Programming Model
- Map a computation across many data objects
- Aggregate the results in many different ways
- System deals with resource allocation and availability
Example: Word Count
- In parallel, each worker computes word counts from individual files
- Collect results, waiting until all workers have finished
- Merge the intermediate output
- Compute the word count on the merged intermediates
Parallel Map
- Process pieces of the dataset to generate (key, value) pairs in parallel
- [Figure: Map Task 1 and Map Task 2 process "Welcome everyone" and "Hello everyone", emitting pairs such as (Welcome, 1), (everyone, 1), (Hello, 1)]
Reduce
- Merge all intermediate values per key
- [Figure: the (word, 1) pairs from the map tasks are merged per key, e.g. (everyone, 2), (Welcome, 1), (Hello, 1)]
Partition
- Merge all intermediate values in parallel: partition the keys, assigning each key to one reduce task (see the sketch below)
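A minimal Scala sketch of the hash-partitioning rule just described, assuming string keys and a hypothetical numReduceTasks parameter; real frameworks pick the hash function and the number of reduce tasks themselves.

  object Partitioner {
    // Assign each intermediate key to exactly one of the R reduce tasks.
    // math.abs guards against negative hashCode values.
    def partition(key: String, numReduceTasks: Int): Int =
      math.abs(key.hashCode) % numReduceTasks

    def main(args: Array[String]): Unit = {
      // Every occurrence of "everyone" hashes to the same partition,
      // so a single reduce task sees all of its intermediate counts.
      println(partition("everyone", 3))
      println(partition("everyone", 3))   // same result every time
    }
  }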
MapReduce API
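One way to picture the API: the programmer supplies just two functions, and the runtime handles scheduling, partitioning, and fault tolerance. A hedged Scala sketch; the trait name and type parameters are illustrative assumptions, not the Hadoop or Spark API.

  // map: one input record -> list of intermediate (key, value) pairs
  // reduce: one intermediate key plus all of its values -> list of output values
  trait MapReduceJob[K1, V1, K2, V2] {
    def map(key: K1, value: V1): Seq[(K2, V2)]
    def reduce(key: K2, values: Seq[V2]): Seq[V2]
  }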
WordCount with MapReduce
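A hedged sketch of the two WordCount functions in Scala, following the interface shape sketched above; the whitespace splitting and the object name are assumptions.

  object WordCountJob {
    // map: emit (word, 1) for every word in one document
    def map(docName: String, contents: String): Seq[(String, Int)] =
      contents.split("\\s+").filter(_.nonEmpty).map(word => (word, 1)).toSeq

    // reduce: sum all the counts emitted for one word
    def reduce(word: String, counts: Seq[Int]): Seq[Int] =
      Seq(counts.sum)
  }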
MapReduce Execution
Fault Tolerance in MapReduce
- Map worker writes intermediate output to local disk, separated by partition; once completed, it tells the master node
- Reduce worker is told the locations of the map task outputs, pulls its partition's data from each mapper, and executes the reduce function across that data
- Note: there is an "all-to-all" shuffle between mappers and reducers, and data is written to disk ("materialized") before each stage
Fault Tolerance in MapReduce
- Master node monitors the state of the system; if the master fails, the job aborts
- Map worker failure: in-progress and completed tasks are marked as idle; reduce workers are notified when a map task is re-executed on another map worker (see the sketch below)
- Reduce worker failure: in-progress tasks are reset and re-executed; completed tasks have already been written to the global file system
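A hedged sketch of the bookkeeping rule in the map-worker-failure bullet: completed map tasks must be redone too, because their output lived only on the failed worker's local disk. The state names, task IDs, and function are illustrative assumptions.

  object MasterState {
    sealed trait TaskState
    case object Idle       extends TaskState   // not scheduled, or reset after a failure
    case object InProgress extends TaskState
    case object Completed  extends TaskState

    // When a map worker dies, every map task it ran (even completed ones)
    // goes back to Idle so it can be re-executed on another worker.
    def onMapWorkerFailure(mapTasksOnWorker: Map[Int, TaskState]): Map[Int, TaskState] =
      mapTasksOnWorker.map { case (taskId, _) => taskId -> Idle }
  }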
Stragglers
- A straggler is a task that takes a long time to execute: bugs, flaky hardware, poor partitioning
- For slow map tasks, execute in parallel on a second "map" worker as a backup, and race to complete the task
- When done with most tasks, reschedule any remaining in-progress tasks, keeping track of the redundant executions (see the sketch below)
- Significantly reduces overall run time
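A hedged sketch of the "race to complete" idea using Scala futures: launch a backup copy of a slow task and keep whichever attempt finishes first. The task body and names are made up; a real system would also cancel or ignore the loser and record both attempts.

  import scala.concurrent.{Await, Future}
  import scala.concurrent.duration.Duration
  import scala.concurrent.ExecutionContext.Implicits.global

  object Speculation {
    // Stand-in for a map task; a real task would read and process a file split.
    def runTask(attempt: String): String = attempt

    def main(args: Array[String]): Unit = {
      val original = Future(runTask("attempt-1"))
      val backup   = Future(runTask("attempt-2 (backup)"))
      // Keep whichever copy of the task completes first.
      val winner = Await.result(Future.firstCompletedOf(Seq(original, backup)), Duration.Inf)
      println(s"winner: $winner")
    }
  }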
Modern Data Processing
Apache Spark
- Goal 1: extend the MapReduce model to better support two common classes of analytics apps: iterative algorithms (machine learning, graphs) and interactive data mining
- Goal 2: enhance programmability: integrate into the Scala programming language, allow interactive use from the Scala interpreter, also support Java, Python, ...
Data Flow Models
- Most current cluster programming models are based on acyclic data flow from stable storage to stable storage (example: MapReduce)
- These models are inefficient for applications that repeatedly reuse a working set of data: iterative algorithms (machine learning, graphs), interactive data mining (R, Excel, Python)
- [Figure: acyclic data flow through a chain of map stages, from stable storage to stable storage]
Resilient Distributed Datasets (RDDs)
- RDDs are immutable, partitioned collections of objects spread across a cluster, stored in RAM or on disk
- Created through parallel transformations (map, filter, groupBy, join, ...) on data in stable storage
- Allow apps to cache working sets in memory for efficient reuse (see the sketch below)
- Retain the attractive properties of MapReduce: fault tolerance, data locality, scalability
- Actions on RDDs support many applications: count, reduce, collect, save, ...
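A hedged Scala sketch of these points: build an RDD through transformations on data in stable storage, then cache it so later actions reuse the in-memory copy. The app name, the local master setting, and the hdfs://... path are placeholders.

  import org.apache.spark.{SparkConf, SparkContext}

  object RDDCachingExample {
    def main(args: Array[String]): Unit = {
      // local[*] is just for trying this out on a single machine
      val sc = new SparkContext(new SparkConf().setAppName("rdd-caching").setMaster("local[*]"))

      // Created through parallel transformations on data in stable storage
      val errors = sc.textFile("hdfs://...")            // placeholder path
                     .filter(line => line.contains("ERROR"))

      errors.cache()                 // keep the working set in memory
      println(errors.count())        // first action computes and caches the RDD
      println(errors.count())        // second action reuses the cached partitions

      sc.stop()
    }
  }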
Spark Operations
- Transformations (define a new RDD): map, flatMap, filter, sample, groupByKey, sortByKey, union, join, etc.
- Actions (return a result to the driver program): collect, reduce, count, lookupKey, save (sketched below)
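A REPL-style sketch, assuming a SparkContext named sc is already in scope (as in the interactive use mentioned earlier). Transformations only define new RDDs; nothing runs until an action asks for a result.

  val nums    = sc.parallelize(1 to 1000)       // source RDD
  val squares = nums.map(x => x * x)            // transformation: defines a new RDD, runs nothing
  val evens   = squares.filter(_ % 2 == 0)      // transformation: still nothing has run
  val total   = evens.count()                   // action: the job executes, result returns to the driver
  val sample  = evens.take(5)                   // action: first few elements back to the driver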
Example: WordCount
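A hedged REPL-style sketch of WordCount in Spark, again assuming a SparkContext sc; the input path is a placeholder.

  val counts = sc.textFile("hdfs://...")            // placeholder path
    .flatMap(line => line.split("\\s+"))            // transformation: words
    .map(word => (word, 1))                         // transformation: (word, 1) pairs
    .reduceByKey(_ + _)                             // transformation: sum the counts per word

  counts.collect().foreach(println)                 // action: run the job, print results at the driver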
Example: Logistic Regression
- Goal: find the best line separating two sets of points

  val rdd = spark.textFile(...).map(readPoint)
  val data = rdd.cache()        // cache the working set: it is reused on every iteration
  var w = Vector.random(D)      // start from a random separating line

  for (i <- 1 to ITERATIONS) {
    val gradient = data.map(p =>
      (1 / (1 + exp(-p.y * (w dot p.x))) - 1) * p.y * p.x
    ).reduce(_ + _)
    w -= gradient
  }
  println("Final w: " + w)

- [Figure: a random initial line vs. the best-fit line separating the two point sets]
Spark Scheduler
- Creates a DAG of stages
- Pipelines functions within a stage
- Cache-aware: work reuse and locality
- Partitioning-aware: avoids shuffles where possible (see the sketch below)
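A hedged REPL-style sketch of the partitioning-aware point: if one side of a join is already hash-partitioned and cached, only the other side needs to be shuffled. The data and partition count are made up for illustration.

  import org.apache.spark.HashPartitioner

  val users = sc.parallelize(Seq((1, "alice"), (2, "bob")))
    .partitionBy(new HashPartitioner(4))    // fix the partitioning once...
    .cache()                                // ...and keep the partitioned data in memory

  val events = sc.parallelize(Seq((1, "click"), (2, "view"), (2, "click")))
  val joined = users.join(events)           // reuses users' partitioning; only events is shuffled
  joined.collect().foreach(println)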
RDD Fault Tolerance
- An RDD maintains lineage information that can be used to reconstruct lost partitions

  val rdd = spark.textFile(...).map(readPoint).filter(...)

- [Figure: lineage chain File → Mapped RDD → Filtered RDD]
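If the rdd above is built with concrete arguments, the lineage Spark records for it can be inspected with toDebugString; this is the chain that gets replayed, on just the lost partitions, after a failure.

  // Prints the recorded lineage, e.g. textFile -> map -> filter;
  // after a failure, only the lost partitions are recomputed along this chain.
  println(rdd.toDebugString)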
Distributed Systems Summary
- Machines fail: if you have lots of machines, machines will fail frequently
- Goals: reliability, consistency, scalability, transparency
- Abstractions are good, as long as they don't cost you too much
So what's the takeaway…