Distributed Computing with Spark
Reza Zadeh. Thanks to Matei Zaharia. Tweet with #fearthespark
Problem
Data growing faster than processing speeds
Only solution is to parallelize on large clusters
» Wide use in both enterprises and web industry
How do we program these things?
Outline
Data flow vs. traditional network programming
Limitations of MapReduce
Spark computing engine
Machine learning example
Current state of the Spark ecosystem
Built-in libraries
Data flow vs. traditional network programming
Traditional Network Programming
Message-passing between nodes (e.g. MPI)
Very difficult to do at scale:
» How to split the problem across nodes? Must consider network & data locality
» How to deal with failures? (inevitable at scale)
» Even worse: stragglers (a node that has not failed, but is slow)
» Ethernet networking is not fast
» Have to write programs for each machine
Result: rarely used in commodity datacenters
Data Flow Models
Restrict the programming interface so that the system can do more automatically
Express jobs as graphs of high-level operators
» System picks how to split each operator into tasks and where to run each task
» Run parts twice for fault recovery
Biggest example: MapReduce
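The data-flow idea above — the user supplies only per-record operators while the system handles grouping and scheduling — can be sketched as a toy local simulation of the MapReduce pattern. This is plain Python standing in for a real cluster; `map_reduce`, `map_fn`, and `reduce_fn` are illustrative names, not a real API.

```python
from collections import defaultdict

# A minimal local simulation of the MapReduce pattern: the user supplies
# only a map function and a reduce function; the "system" handles grouping
# intermediate pairs by key (the shuffle) and running the reduce per key.
def map_reduce(records, map_fn, reduce_fn):
    groups = defaultdict(list)
    for record in records:                 # map phase: emit (key, value) pairs
        for key, value in map_fn(record):
            groups[key].append(value)
    # reduce phase: combine all values observed for each key
    return {key: reduce_fn(values) for key, values in groups.items()}

# Classic word count expressed as one map and one reduce.
lines = ["spark makes clusters easy", "spark is fast"]
counts = map_reduce(
    lines,
    map_fn=lambda line: [(word, 1) for word in line.split()],
    reduce_fn=sum,
)
```

In a real MapReduce system the only extra machinery is distribution: map tasks run on partitions of the input, and the shuffle moves each key's values to one reduce task.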
Example MapReduce Algorithms
Matrix-vector multiplication
Power iteration (e.g. PageRank)
Gradient descent methods
Stochastic SVD
Tall skinny QR
Many others!
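Matrix-vector multiplication, the first algorithm in this list, fits the pattern naturally. Below is a hedged local sketch: each nonzero matrix entry is "mapped" to a partial product keyed by output row, and the "reduce" step sums the partial products per row. The triple-list sparse format and function name are illustrative choices.

```python
from collections import defaultdict

# Sparse matrix-vector multiplication in MapReduce style: each nonzero
# entry A[i][j] contributes a partial product a_ij * v[j] keyed by output
# row i; summing per key plays the role of the reduce step.
def matvec_mapreduce(entries, v):
    # entries: list of (i, j, value) triples of a sparse matrix
    partials = defaultdict(float)
    for i, j, a_ij in entries:       # "map": emit (i, a_ij * v[j])
        partials[i] += a_ij * v[j]   # "reduce": sum partial products per row
    return dict(partials)

# 2x2 example: A = [[2, 0], [1, 3]], v = [1, 2]
entries = [(0, 0, 2.0), (1, 0, 1.0), (1, 1, 3.0)]
result = matvec_mapreduce(entries, [1.0, 2.0])
```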
Why Use a Data Flow Engine?
Ease of programming
» High-level functions instead of message passing
Wide deployment
» More common than MPI, especially "near" data
Scalability to the very largest clusters
» Even the HPC world is now concerned about resilience
Examples: Pig, Hive, Scalding, Storm
Limitations of MapReduce
MapReduce is great at one-pass computation, but inefficient for multi-pass algorithms
No efficient primitives for data sharing
» State between steps goes to the distributed file system
» Slow due to replication & disk storage
Example: Iterative Apps
[Diagram: iterative jobs (iter. 1, iter. 2, …) each do a file system read and write; interactive queries (query 1, 2, 3) each re-read the input from the file system]
Commonly spend 90% of time doing I/O
Example: PageRank
Repeatedly multiply sparse matrix and vector
Requires repeatedly hashing together page adjacency lists and the rank vector
Same file grouped over and over
[Diagram: Neighbors (id, edges) joined with Ranks (id, rank), repeated at iterations 1, 2, 3, …]
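As a rough local sketch of the computation being described (not the MapReduce formulation the slide critiques), each PageRank iteration joins the adjacency lists with the rank vector and redistributes rank along out-links. The tiny three-page graph and the damping factor 0.85 are illustrative.

```python
# Power iteration on a small adjacency list: each iteration "joins"
# neighbors with ranks by distributing every node's rank equally over
# its out-links, then applies the damping factor d.
def pagerank(neighbors, iterations=20, d=0.85):
    n = len(neighbors)
    ranks = {node: 1.0 / n for node in neighbors}
    for _ in range(iterations):
        contribs = {node: 0.0 for node in neighbors}
        for node, outlinks in neighbors.items():
            for dest in outlinks:
                contribs[dest] += ranks[node] / len(outlinks)
        ranks = {node: (1 - d) / n + d * c for node, c in contribs.items()}
    return ranks

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
```

In MapReduce, the join between the (static) adjacency lists and the (changing) rank vector is re-shuffled on every iteration, which is exactly the repeated grouping cost the slide points out.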
Result
While MapReduce is simple, it can require asymptotically more communication or I/O
Spark computing engine
Spark Computing Engine
Extends a programming language with a distributed collection data structure
» "Resilient distributed datasets" (RDDs)
Open source at Apache
» Most active community in big data, with 50+ companies contributing
Clean APIs in Java, Scala, Python, R
Resilient Distributed Datasets (RDDs)
Main idea: Resilient Distributed Datasets
» Immutable collections of objects, spread across a cluster
» Statically typed: RDD[T] has objects of type T

val sc = new SparkContext()
val lines = sc.textFile("log.txt")                // RDD[String]

// Transform using standard collection operations (lazily evaluated)
val errors = lines.filter(_.startsWith("ERROR"))
val messages = errors.map(_.split('\t')(2))

messages.saveAsTextFile("errors.txt")             // kicks off a computation
Key Idea: Resilient Distributed Datasets (RDDs)
» Collections of objects across a cluster with user-controlled partitioning & storage (memory, disk, ...)
» Built via parallel transformations (map, filter, …)
» The world only lets you make RDDs such that they can be automatically rebuilt on failure
Python, Java, Scala, R

// Scala:
val lines = sc.textFile(...)
lines.filter(x => x.contains("ERROR")).count()

// Java (better in Java 8!):
JavaRDD<String> lines = sc.textFile(...);
lines.filter(new Function<String, Boolean>() {
  Boolean call(String s) { return s.contains("error"); }
}).count();
Fault Tolerance
RDDs track lineage info to rebuild lost data

file.map(lambda rec: (rec.type, 1))
    .reduceByKey(lambda x, y: x + y)
    .filter(lambda (type, count): count > 10)

[Diagram: Input file → map → reduce → filter]
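A toy illustration of the lineage idea, not Spark's actual implementation: each dataset records the transformation and parent that produced it, so a lost result can be rebuilt at any time by replaying the chain from the source data. The `Dataset` class and its methods are illustrative names.

```python
# Lineage-based recovery in miniature: instead of storing results,
# each Dataset stores how to recompute them from its parent.
class Dataset:
    def __init__(self, data=None, parent=None, transform=None):
        self.data, self.parent, self.transform = data, parent, transform

    def map(self, f):
        return Dataset(parent=self, transform=lambda rows: [f(r) for r in rows])

    def filter(self, pred):
        return Dataset(parent=self, transform=lambda rows: [r for r in rows if pred(r)])

    def compute(self):
        # Walk the lineage back to the source data, then replay transforms.
        if self.parent is None:
            return self.data
        return self.transform(self.parent.compute())

source = Dataset(data=[1, 2, 3, 4, 5])
result = source.map(lambda x: x * 10).filter(lambda x: x > 20)
out = result.compute()   # can be re-run at any time to "rebuild" lost data
```

Spark does the same at partition granularity: only the lost partition's lineage is replayed, not the whole dataset.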
Partitioning
RDDs know their partitioning functions

file.map(lambda rec: (rec.type, 1))
    .reduceByKey(lambda x, y: x + y)           // known to be hash-partitioned
    .filter(lambda (type, count): count > 10)  // also known

[Diagram: Input file → map → reduce → filter]
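A minimal sketch of what "hash-partitioned" means here (an illustrative helper, not Spark's API): every record with a given key lands on partition `hash(key) % num_partitions`, so a subsequent per-record filter cannot move records between partitions — which is why the filter's output is "also known" to be hash-partitioned.

```python
# Place each (key, value) pair on the partition chosen by hashing its key.
# All occurrences of the same key end up on the same partition, which is
# what lets a later join or reduceByKey skip the shuffle.
def hash_partition(pairs, num_partitions):
    partitions = [[] for _ in range(num_partitions)]
    for key, value in pairs:
        partitions[hash(key) % num_partitions].append((key, value))
    return partitions

pairs = [("ERROR", 3), ("WARN", 1), ("ERROR", 2), ("INFO", 5)]
parts = hash_partition(pairs, 4)
```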
Machine Learning example
Logistic Regression

data = spark.textFile(...).map(readPoint).cache()
w = numpy.random.rand(D)
for i in range(iterations):
    gradient = data.map(
        lambda p: (1 / (1 + exp(-p.y * w.dot(p.x)))) * p.y * p.x
    ).reduce(lambda a, b: a + b)
    w -= gradient
print "Final w: %s" % w
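The same loop can be run locally without Spark, with a plain list standing in for the cached RDD. One caveat: the standard logistic-regression gradient carries a `- 1` inside the parenthesis, i.e. `(1/(1+exp(-y*w.x)) - 1) * y * x`, which the slide's snippet appears to have lost in extraction; the sketch below uses the standard form. The toy data, learning rate, and function name are illustrative.

```python
import math
import random

# Local (single-machine) sketch of the distributed loop above: the "map"
# computes a per-point gradient term, the "reduce" sums them, and the
# driver updates w.
def train(points, dims, iterations=100, lr=0.1):
    random.seed(0)
    w = [random.random() for _ in range(dims)]
    for _ in range(iterations):
        gradient = [0.0] * dims
        for y, x in points:                        # "map" over the data
            dot = sum(wi * xi for wi, xi in zip(w, x))
            scale = (1.0 / (1.0 + math.exp(-y * dot)) - 1.0) * y
            for d in range(dims):                  # "reduce": sum per-point terms
                gradient[d] += scale * x[d]
        w = [wi - lr * gi for wi, gi in zip(w, gradient)]
    return w

# Toy separable data: label +1 when the first feature is positive, else -1.
points = [(1, [1.0, 0.5]), (1, [2.0, -0.2]), (-1, [-1.5, 0.3]), (-1, [-0.7, -1.0])]
w = train(points, dims=2)
```

The `cache()` in the Spark version is the key to the timings on the next slide: the data is parsed once and kept in memory, so later iterations only pay for the gradient computation.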
Logistic Regression Results
[Chart: running time (s) vs. number of iterations (1-30), Hadoop vs. Spark]
Hadoop: 110 s / iteration
Spark: 80 s first iteration, 1 s further iterations
100 GB of data on 50 m1.xlarge EC2 machines
Behavior with Less RAM
[Chart: iteration time (s) vs. % of working set in memory]
0%: 68.8 s · 25%: 58.1 s · 50%: 40.7 s · 75%: 29.7 s · 100%: 11.5 s
Benefit for Users
Same engine performs data extraction, model training and interactive queries
» Separate engines: DFS read → parse → write; DFS read → train → write; DFS read → query → write; …
» Spark: DFS read → parse → train → query → DFS write
State of the Spark ecosystem
Spark Community
Most active open source community in big data
200+ developers, 50+ companies contributing
[Chart: contributors in the past year — Spark compared with Giraph and Storm]
Project Activity
[Charts: activity in 2014, commits and lines of code changed — Spark compared with HDFS, Storm, MapReduce, and YARN, leading on both measures]
Continuing Growth
Contributors per month to Spark (source: ohloh.net)
Built-in libraries
Standard Library for Big Data
Big data apps lack libraries of common algorithms
Spark's generality + support for multiple languages (Python, Scala, Java, R) make it suitable to offer this
[Diagram: SQL, ML, graph, … libraries on top of Spark Core]
Much of future activity will be in these libraries
A General Platform
Standard libraries included with Spark:
» Spark Streaming (real-time)
» MLlib (machine learning)
» Spark SQL (structured data)
» GraphX (graph processing)
» …
all built on Spark Core
Machine Learning Library (MLlib)
points = context.sql("select latitude, longitude from tweets")
model = KMeans.train(points, 10)

40 contributors in past year
MLlib algorithms
» classification: logistic regression, linear SVM, naïve Bayes, classification tree
» regression: generalized linear models (GLMs), regression tree
» collaborative filtering: alternating least squares (ALS), non-negative matrix factorization (NMF)
» clustering: k-means||
» decomposition: SVD, PCA
» optimization: stochastic gradient descent, L-BFGS
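For a sense of what the clustering entry computes, here is a plain-Python sketch of Lloyd's k-means. MLlib's k-means|| differs in its initialization step; the random initial centers, seed, and toy points below are illustrative, not MLlib's API.

```python
import random

# Lloyd's k-means: alternate between assigning each point to its nearest
# center and moving each center to the mean of its assigned points.
def kmeans(points, k, iterations=10, seed=1):
    rng = random.Random(seed)
    centers = rng.sample(points, k)   # naive init (k-means|| is smarter)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:              # assignment step
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        for i, members in enumerate(clusters):   # update step
            if members:
                centers[i] = tuple(sum(xs) / len(members) for xs in zip(*members))
    return centers

# Two well-separated toy clusters.
points = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
centers = kmeans(points, k=2)
```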
GraphX
GraphX General graph processing library
Build graphs using RDDs of nodes and edges
Large library of graph algorithms with composable steps
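A toy local analogue of the nodes-and-edges representation, with connected components computed by label propagation: each vertex repeatedly adopts the smallest id among itself and its neighbors until nothing changes. The algorithm choice and names are illustrative, not GraphX's API, though connected components is one of the algorithms GraphX ships.

```python
# Graph as two plain collections (vertices, edges), the local analogue
# of GraphX's two RDDs. Labels converge to the minimum vertex id
# reachable in each component.
def connected_components(vertices, edges):
    labels = {v: v for v in vertices}
    changed = True
    while changed:
        changed = False
        for a, b in edges:                 # propagate the smaller label
            low = min(labels[a], labels[b])
            for v in (a, b):
                if labels[v] > low:
                    labels[v] = low
                    changed = True
    return labels

vertices = [1, 2, 3, 4, 5]
edges = [(1, 2), (2, 3), (4, 5)]
labels = connected_components(vertices, edges)
```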
GraphX Algorithms
Collaborative Filtering
» Alternating Least Squares
» Stochastic Gradient Descent
» Tensor Factorization
Community Detection
» Triangle-Counting
» K-core Decomposition
» K-Truss
Structured Prediction
» Loopy Belief Propagation
» Max-Product Linear Programs
» Gibbs Sampling
Graph Analytics
» PageRank
» Personalized PageRank
» Shortest Path
» Graph Coloring
Classification
» Neural Networks
Semi-supervised ML
» Graph SSL
» CoEM
Spark Streaming
Run a streaming computation as a series of very small, deterministic batch jobs
» Chop up the live data stream into batches of X seconds
» Spark treats each batch of data as an RDD and processes it using RDD operations
» Finally, the processed results of the RDD operations are returned in batches
Spark Streaming (continued)
» Batch sizes as low as ½ second, latency ~1 second
» Potential for combining batch processing and streaming processing in the same system
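The chop-into-batches idea can be sketched locally (an illustrative function, not the Spark Streaming API): bucket incoming (timestamp, value) records by a fixed batch width, then run the same per-batch computation on each bucket.

```python
# Micro-batching in miniature: records arriving in a window of
# batch_seconds are grouped together; each group is then processed
# with an ordinary batch computation (here, a count).
def micro_batches(records, batch_seconds):
    batches = {}
    for t, value in records:
        batches.setdefault(int(t // batch_seconds), []).append(value)
    return [batches[k] for k in sorted(batches)]

# Timestamped stream: 2 records in second 0, 1 in second 1, 2 in second 2.
stream = [(0.1, "a"), (0.4, "b"), (1.2, "c"), (2.7, "d"), (2.9, "e")]
counts = [len(batch) for batch in micro_batches(stream, batch_seconds=1)]
```

Because each batch is processed by the same engine as any other RDD, the streaming and batch code paths naturally share one system — the point the slide makes.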
Spark SQL

// Run SQL statements
val teenagers = context.sql(
  "SELECT name FROM people WHERE age >= 13 AND age <= 19")

// The results of SQL queries are RDDs of Row objects
val names = teenagers.map(t => "Name: " + t(0)).collect()
Spark SQL
Enables loading & querying structured data in Spark

From Hive:
c = HiveContext(sc)
rows = c.sql("select text, year from hivetable")
rows.filter(lambda r: r.year > 2013).collect()

From JSON:
c.jsonFile("tweets.json").registerAsTable("tweets")
c.sql("select text, user.name from tweets")

tweets.json:
{"text": "hi", "user": {"name": "matei", "id": 123}}
Conclusions
Spark and Research
Spark has its roots in research, so we hope to keep incorporating new ideas!
Conclusion
Data flow engines are becoming an important platform for numerical algorithms
While early models like MapReduce were inefficient, new ones like Spark close this gap
More info: spark.apache.org