1
Spark Resilient Distributed Datasets:
A Fault-Tolerant Abstraction for In-Memory Cluster Computing Presentation by Antonio Lupher [Thanks to Matei for diagrams & several of the nicer slides!] October 26, 2011
2
The world today… Most current cluster programming models are based on acyclic data flow from stable storage to stable storage.
[Diagram: acyclic data flow from Input through Map and Reduce stages to Output]
3
The world today… Most current cluster programming models are based on acyclic data flow from stable storage to stable storage.
Benefits: the runtime can decide where to run tasks and can automatically recover from failures, so fault tolerance is easy.
Also applies to Dryad, SQL, etc.
4
… but inefficient for applications that repeatedly reuse a working set of data:
Iterative machine learning and graph algorithms: PageRank, k-means, logistic regression, etc.
Interactive data mining tools (R, Excel, Python): multiple queries on the same subset of data
Acyclic models reload data from disk on each query/stage of execution.
Both types of applications are quite common and desirable in data analytics.
5
Goal: Keep Working Set in RAM
[Diagram: input goes through one-time processing into distributed memory, which is then reused across iterations 1, 2, 3]
Not necessarily all the data – just the stuff you need from one computation to another.
6
Requirements A distributed memory abstraction must be both fault-tolerant and efficient in large commodity clusters.
How to provide fault tolerance efficiently?
7
Requirements Existing distributed storage abstractions offer an interface based on fine-grained updates: reads and writes to cells in a table (e.g. key-value stores, databases, distributed memory).
They have to replicate data or logs across nodes for fault tolerance, which is expensive for data-intensive apps and large datasets.
8
Resilient Distributed Datasets (RDDs)
Immutable, partitioned collections of records
Interface based on coarse-grained transformations (e.g. map, groupBy, join)
Efficient fault recovery using lineage: log the one operation applied to all elements, re-compute lost partitions of the dataset on failure, no cost if nothing fails
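A minimal Scala sketch of this idea, using the present-day Spark API rather than the 2011 prototype; `sc` is an assumed SparkContext and the path is a placeholder:

// Each transformation records lineage instead of mutating data in place.
val lines  = sc.textFile("hdfs://.../input")          // base RDD in stable storage
val words  = lines.flatMap(_.split(" "))              // coarse-grained op on all elements
val counts = words.map(w => (w, 1)).reduceByKey(_ + _)
// If a partition of `counts` is lost, only the lineage needed to rebuild it
// (textFile -> flatMap -> map -> reduceByKey) is re-executed; nothing is replicated.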
9
RDDs, cont’d Control persistence (in RAM vs. on disk)
Tunable via persistence priority: the user specifies which RDDs should spill to disk first
Control partitioning of data: hash data to place it in convenient locations for subsequent operations
Fine-grained reads are still allowed (only writes are restricted to coarse-grained transformations)
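A hedged sketch of these controls using today's Spark API (the 2011 prototype exposed them differently); `pairs` and `other` stand for pair RDDs assumed to exist elsewhere:

import org.apache.spark.HashPartitioner
import org.apache.spark.storage.StorageLevel

// `pairs` and `other` are assumed RDD[(K, V)]s defined elsewhere.
// Persistence: keep one RDD strictly in memory, let another spill to disk.
val hot  = pairs.persist(StorageLevel.MEMORY_ONLY)
val cold = other.persist(StorageLevel.MEMORY_AND_DISK)

// Partitioning: hash-partition by key so later joins/groupBys on the same
// key find the data already in convenient locations.
val byKey = pairs.partitionBy(new HashPartitioner(100)).persist()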
10
Implementation Spark runs on Mesos
=> shares resources with Hadoop & other apps
Can read from any Hadoop input source (HDFS, S3, …)
[Diagram: Spark, Hadoop and MPI running side by side on Mesos-managed nodes]
Mesos itself is designed to be fault-tolerant. (Though why do you need Hadoop if you have Spark?)
Language-integrated API in Scala: ~10,000 lines of code, no changes to Scala
Can be used interactively from the interpreter
11
Spark Operations Transformations
Create a new RDD by transforming data in stable storage using data flow operators: map, filter, groupBy, etc.
Lazy: transformations don’t need to be materialized at all times; lineage information is enough to compute partitions from data in storage when needed
12
Spark Operations Actions
Return a value to the application or export data to storage: count, collect, save, etc.
Require a value to be computed from the elements of the RDD => the scheduler builds an execution plan (see the sketch below)
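A small Scala illustration of laziness vs. actions (current Spark API; `sc` is an assumed SparkContext):

val nums    = sc.parallelize(1 to 1000000)   // base RDD
val squares = nums.map(x => x.toLong * x)    // transformation: lazy, only lineage is recorded
val evens   = squares.filter(_ % 2 == 0)     // transformation: still nothing has run
val total   = evens.count()                  // action: builds an execution plan and runs it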
13
Spark Operations
Transformations (define a new RDD): map, flatMap, filter, sample, groupByKey, reduceByKey, union, join, cogroup, crossProduct, mapValues, sort, partitionBy
Actions (return a result to the driver program): count, collect, reduce, lookup, save
14
RDD Representation Common interface:
Set of partitions
Preferred locations for each partition
List of parent RDDs
Function to compute a partition given its parents
Optional partitioning info (order, etc.)
This simple common interface captures a wide range of transformations: the scheduler doesn’t need to know what each operation does, users can easily add new transformations, and most transformations are implemented in ≤ 20 lines. (See the interface sketch below.)
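A paraphrase of this interface as a Scala sketch; the stand-in types and method names are illustrative, not the exact ones in the Spark source:

// Minimal stand-in types so the sketch is self-contained.
trait Partition { def index: Int }
trait Dependency[T]
trait Partitioner { def numPartitions: Int; def getPartition(key: Any): Int }

trait RDD[T] {
  def partitions: Seq[Partition]                      // set of partitions
  def preferredLocations(p: Partition): Seq[String]   // e.g. hosts holding an HDFS block
  def dependencies: Seq[Dependency[_]]                // parent RDDs, narrow or wide
  def compute(p: Partition): Iterator[T]              // build a partition from its parents
  def partitioner: Option[Partitioner]                // optional partitioning info
}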
15
RDD Representation Lineage & Dependencies Narrow dependencies
Each partition of the parent RDD is used by at most one partition of the child RDD (e.g. map, filter)
Allow pipelined execution
16
RDD Representation Lineage & Dependencies Wide dependencies
Multiple child partitions may depend on one parent RDD partition (e.g. join)
Require data from all parent partitions & a shuffle
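A short Scala illustration of the two kinds of dependencies (current Spark API; `sc` and the path are placeholders): the flatMap and map are narrow and get pipelined, while reduceByKey and join introduce wide dependencies and therefore a shuffle:

val pairs  = sc.textFile("hdfs://.../docs")            // placeholder path
  .flatMap(_.split(" "))                               // narrow: per-partition
  .map(w => (w, 1))                                    // narrow: pipelined with flatMap
val counts = pairs.reduceByKey(_ + _)                  // wide: shuffle by key
val joined = counts.join(sc.parallelize(Seq(("spark", "project"))))   // wide: shuffle/join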
17
Scheduler Task DAG (like Dryad)
Pipelines functions within a stage
Reuses previously computed data
Partitioning-aware to avoid shuffles
[Diagram: lineage graph of RDDs A–G split into Stages 1–3 by map, union, groupBy and join; shaded boxes = previously computed partitions]
NOT a modified version of Hadoop. The scheduler examines the lineage graph to build a DAG of stages to execute. Each stage tries to maximize pipelined transformations with narrow dependencies; stage boundaries are the shuffle operations required for wide dependencies. Shuffle/wide dependencies currently materialize records on the nodes holding the parent partitions (like MapReduce materializes map outputs) for fault recovery.
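In present-day Spark you can peek at how the lineage graph splits into stages; a hedged sketch (this inspection API postdates the 2011 prototype, and `sc` and the path are placeholders):

val counts = sc.textFile("hdfs://.../docs")
  .flatMap(_.split(" "))
  .map(w => (w, 1))
  .reduceByKey(_ + _)                         // wide dependency => stage boundary

// toDebugString prints the chain of RDDs and their dependencies, which
// reveals where the shuffle (and hence a new stage) occurs.
println(counts.toDebugString)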
18
RDD Recovery What happens if a task fails?
Exploit coarse-grained operations: they are deterministic and affect all elements of the collection, so just re-run the task on another node if the parents are available
Easy to regenerate RDDs given parent RDDs + lineage (lineage is the graph of transformations)
Avoids checkpointing and replication, but you might still want to (and can) checkpoint: with a long lineage chain, or when intermediate results have disappeared, recomputation gets expensive
Use the REPLICATE flag to persist with replication (see the sketch below)
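In today's Spark the same two knobs appear as replicated storage levels and explicit checkpointing; a sketch (paths are placeholders, `data` is an assumed RDD):

import org.apache.spark.storage.StorageLevel

// Replicate in-memory partitions on two nodes (the "_2" storage levels).
val cached = data.persist(StorageLevel.MEMORY_ONLY_2)

// Truncate a long lineage chain by checkpointing to reliable storage.
sc.setCheckpointDir("hdfs://.../checkpoints")   // placeholder path
cached.checkpoint()
cached.count()   // an action forces materialization, which also writes the checkpoint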
19
Example: Log Mining
Load error messages from a log into memory, then interactively search for various patterns

val lines = spark.textFile("hdfs://...")               // base RDD
val errors = lines.filter(_.startsWith("ERROR"))       // transformed RDD
val messages = errors.map(_.split('\t')(2))
messages.persist()

messages.filter(_.contains("foo")).count               // action
messages.filter(_.contains("bar")).count
. . .

[Diagram: the driver ships tasks to workers; each worker reads its block (Block 1–3) from storage, caches its partition of messages (Msgs. 1–3) in memory, and returns results to the driver]

Key idea: add "variables" to the "functions" in functional programming
Result: scaled to 1 TB data in 5-7 sec (vs 170 sec for on-disk data)
Result: full-text search of Wikipedia in <1 sec (vs 20 sec for on-disk data)
20
Fault Recovery Results
These results are for k-means
21
Performance Outperforms Hadoop by up to 20x
By avoiding I/O and Java object [de]serialization costs
Some apps see a 40x speedup (Conviva)
Can query a 1 TB dataset with 5-7 sec latencies
You write a single program, similar to DryadLINQ. Distributed datasets with parallel operations on them are fairly standard; the new thing is that they can be reused across operations. Variables in the driver program can be used in parallel operations; accumulators are useful for sending information back, and cached variables are an optimization for some workloads not shown here. It is all designed to be easy to distribute in a fault-tolerant fashion.
22
PageRank Results 2.4x speedup over Hadoop on 30 nodes; controlling partitioning raises this to 7.4x. Linear scaling to 60 nodes.
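For reference, a hedged Scala sketch of PageRank on Spark, paraphrasing the example in the RDD paper; `sc`, the input path and `parseNeighbors` (line => (url, Seq[neighborUrl])) are placeholders. Hash-partitioning and persisting the link table is the "controlling partitions" trick:

import org.apache.spark.HashPartitioner

val links = sc.textFile("hdfs://.../links")
  .map(parseNeighbors)                              // (url, Seq[neighborUrl]); placeholder parser
  .partitionBy(new HashPartitioner(64))
  .persist()                                        // link table is reused every iteration

var ranks = links.mapValues(_ => 1.0)

for (i <- 1 to 10) {
  val contribs = links.join(ranks).flatMap {
    case (url, (neighbors, rank)) =>
      neighbors.map(dest => (dest, rank / neighbors.size))
  }
  ranks = contribs.reduceByKey(_ + _).mapValues(0.15 + 0.85 * _)
}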
23
Behavior with Not Enough RAM
Logistic regression
24
Example: Logistic Regression
Goal: find the best line separating two sets of points
[Diagram: scatter of + and – points, with a random initial line converging to the target separating line]
Note that the dataset is reused on each gradient computation
25
Logistic Regression Code
val points = spark.textFile(...).map(parsePoint).persist()   // load points once, keep them in memory
var w = Vector.random(D)                                      // random initial parameter vector
for (i <- 1 to ITERATIONS) {
  val gradient = points.map(p =>
    (1 / (1 + exp(-p.y * (w dot p.x))) - 1) * p.y * p.x       // per-point gradient contribution
  ).reduce((a, b) => a + b)                                   // summed in parallel
  w -= gradient
}
println("Final w: " + w)
26
Logistic Regression Performance
Hadoop: 127 s / iteration. Spark: 174 s for the first iteration, 6 s for further iterations.
This is for a 29 GB dataset on 20 EC2 m1.xlarge machines (4 cores each)
27
More Applications
EM algorithm (expectation maximization) for traffic prediction (Mobile Millennium)
In-memory OLAP & anomaly detection (Conviva)
Twitter spam classification (Monarch)
Pregel on Spark (Bagel)
Alternating least squares matrix factorization
28
Mobile Millennium Estimate traffic using GPS on taxis
29
Conviva GeoReport Aggregations on many keys w/ same WHERE clause
[Chart: query time in hours, with and without Spark]
The 40× gain comes from:
Not re-reading unused columns or filtered records
Avoiding repeated decompression
In-memory storage of deserialized objects
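An illustrative (not Conviva's actual) Scala sketch of the pattern: apply the shared filter once, cache the deserialized records in memory, then run the many per-key aggregations against the cached RDD. `parseView`, the field names and "someShow" are made up:

case class View(country: String, video: String, bufferSecs: Double)

val views   = sc.textFile("hdfs://.../views").map(parseView)   // parseView: String => View, placeholder
val matched = views.filter(_.video == "someShow").persist()    // shared WHERE clause, applied once

// Many aggregations over the same in-memory subset:
val bufferingByCountry = matched.map(v => (v.country, v.bufferSecs)).reduceByKey(_ + _).collect()
val viewsByCountry     = matched.map(v => (v.country, 1)).reduceByKey(_ + _).collect()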
30
SPARK Use transformations on RDDs instead of Hadoop jobs
Cache RDDs for similar future queries: many queries re-use subsets of data (drill-down, etc.)
Scala makes integration with Hive (Java) easy… or easier (Cliff, Antonio, Reynold)
31
Comparisons DryadLINQ, FlumeJava
Similar language-integrated “distributed collection” API, but cannot reuse datasets efficiently across queries
Piccolo, DSM, key-value stores (e.g. RAMCloud): fine-grained writes but more complex fault recovery
Iterative MapReduce (e.g. Twister, HaLoop), Pregel: implicit data sharing for a fixed computation pattern
Relational databases: lineage/provenance, logical logging, materialized views
Caching systems (e.g. Nectar): store data in files, no explicit control over what is cached
In other cluster programming models, collections are files on disk or ephemeral – not as efficient as RDDs. Key-value stores and DSM are lower-level. Iterative MapReduce is limited to a fixed computation pattern, not general-purpose. RDBMSs need fine-grained writes, logging and replication. Nectar-style caching reuses intermediate results. MapReduce and Dryad track similar lineage, but it is lost after the job ends.
32
Comparisons: RDDs vs DSM
Concern | RDDs | Distr. Shared Mem.
Reads | Fine-grained | Fine-grained
Writes | Bulk transformations | Fine-grained
Consistency | Trivial (immutable) | Up to app / runtime
Fault recovery | Fine-grained and low-overhead using lineage | Requires checkpoints and program rollback
Straggler mitigation | Possible using speculative execution | Difficult
Work placement | Automatic based on data locality | Up to app (but runtime aims for transparency)
Behavior if not enough RAM | Similar to existing data flow systems | Poor performance (swapping?)
33
Summary Simple & efficient model, widely applicable
Express models that previously required a new framework, and do so efficiently (i.e. with the same optimizations)
Achieve fault tolerance efficiently by providing coarse-grained operations and tracking lineage
Exploit persistent in-memory storage + smart partitioning for speed
34
Thoughts: Tradeoffs
No fine-grained modification of individual elements in a collection, so not the right tool for all applications: e.g. the storage system for a web site, a web crawler, anything where you need incremental/fine-grained writes
Scala-based implementation: probably won’t see Microsoft use it anytime soon, but the concept of RDDs is not language-specific (the abstraction doesn’t even require a functional language)
35
Thoughts: Influence Factors that could promote adoption
Inherent advantages: in-memory = fast, RDDs = fault-tolerant
Easy to use & extend: already supports MapReduce, Pregel (Bagel)
Used widely at Berkeley, more projects coming soon; used at Conviva, Twitter
Scala means easy integration with existing Java applications, and (subjective opinion) is more pleasant to use than Java
36
Verdict Should spark enthusiasm in cloud crowds