Distributed Computing with Spark

Distributed Computing with Spark Reza Zadeh Thanks to Matei Zaharia. Tweet with #fearthespark

Problem: Data is growing faster than processing speeds. The only solution is to parallelize on large clusters » wide use in both enterprises and the web industry. How do we program these things?

Outline: Data flow vs. traditional network programming; Limitations of MapReduce; Spark computing engine; Machine learning example; Current state of the Spark ecosystem; Built-in libraries

Data flow vs. traditional network programming

Traditional Network Programming: Message-passing between nodes (e.g. MPI). Very difficult to do at scale: » How to split the problem across nodes? Must consider network & data locality » How to deal with failures? (inevitable at scale) » Even worse: stragglers (a node that has not failed, but is slow) » Ethernet networking is not fast » Have to write programs for each machine. As a result, rarely used in commodity datacenters

Data Flow Models: Restrict the programming interface so that the system can do more automatically. Express jobs as graphs of high-level operators » System picks how to split each operator into tasks and where to run each task » Runs parts twice for fault recovery. Biggest example: MapReduce
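
To make the operator-graph idea concrete, here is a minimal sketch (mine, not from the slides) of a word count against Spark's Scala API; the paths are illustrative, and the system decides how to split the map and reduce operators into tasks:

// Word count as a graph of high-level operators.
// Assumes an existing SparkContext `sc`; the paths are made up.
val counts = sc.textFile("hdfs://input/docs")
  .flatMap(line => line.split(" "))  // map side: split lines into words
  .map(word => (word, 1))            // emit (word, 1) pairs
  .reduceByKey(_ + _)                // reduce side: sum counts per word
counts.saveAsTextFile("hdfs://output/counts")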

Example MapReduce Algorithms Matrix-vector multiplication Power iteration (e.g. PageRank) Gradient descent methods Stochastic SVD Tall skinny QR Many others!
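
As one worked example (a sketch under the assumption that the matrix is stored as (row, col, value) triples and the vector fits in memory on each node), sparse matrix-vector multiplication maps directly onto these operators:

// y = A * x for a sparse matrix A held as an RDD of (row, col, value) triples.
// Toy data for illustration; `x` is a local dense vector shipped via broadcast.
val entries = sc.parallelize(Seq((0, 0, 2.0), (0, 1, 1.0), (1, 1, 3.0)))
val x = Array(1.0, 2.0)
val xb = sc.broadcast(x)
val y = entries
  .map { case (row, col, value) => (row, value * xb.value(col)) } // partial products
  .reduceByKey(_ + _)                                             // sum per row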

Why Use a Data Flow Engine? Ease of programming » High-level functions instead of message passing. Wide deployment » More common than MPI, especially “near” data. Scalability to the very largest clusters » Even the HPC world is now concerned about resilience. Examples: Pig, Hive, Scalding, Storm

Limitations of MapReduce

Limitations of MapReduce: MapReduce is great at one-pass computation, but inefficient for multi-pass algorithms. No efficient primitives for data sharing: » State between steps goes to the distributed file system » Slow due to replication & disk storage

Example: Iterative Apps. [Diagram: an iterative job alternates file system reads and writes between iterations (read → iter. 1 → write → read → iter. 2 → write → ...), and interactive use re-reads the input from the file system for each of query 1, query 2, query 3, ...] Such apps commonly spend 90% of their time doing I/O

Example: PageRank. Repeatedly multiplies a sparse matrix and a vector. Requires repeatedly hashing together the page adjacency lists and the rank vector; the same file is grouped over and over. [Diagram: Neighbors (id, edges) joined with Ranks (id, rank) on each of iterations 1, 2, 3, ...]
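
For reference, the iteration looks roughly like this in Spark's Scala API (a sketch modeled on the well-known Spark PageRank example; the input path, the damping constants 0.15/0.85, and the 10 iterations are assumptions):

// links: RDD of (pageId, neighbor list), cached so the adjacency lists
// are built once rather than grouped over and over.
val links = sc.textFile("links.txt")
  .map { line => val p = line.split("\\s+"); (p(0), p(1)) }
  .groupByKey()
  .cache()
var ranks = links.mapValues(_ => 1.0)
for (i <- 1 to 10) {
  val contribs = links.join(ranks).values.flatMap {
    case (neighbors, rank) => neighbors.map(n => (n, rank / neighbors.size))
  }
  ranks = contribs.reduceByKey(_ + _).mapValues(0.15 + 0.85 * _)
}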

Result: While MapReduce is simple, it can require asymptotically more communication or I/O

Spark computing engine

Spark Computing Engine: Extends a programming language with a distributed collection data structure » “Resilient distributed datasets” (RDDs). Open source at Apache » Most active community in big data, with 50+ companies contributing. Clean APIs in Java, Scala, Python, R

Resilient Distributed Datasets (RDDs). Main idea: Resilient Distributed Datasets » Immutable collections of objects, spread across a cluster » Statically typed: RDD[T] has objects of type T

val sc = new SparkContext()
val lines = sc.textFile("log.txt") // RDD[String]

// Transform using standard collection operations (lazily evaluated)
val errors = lines.filter(_.startsWith("ERROR"))
val messages = errors.map(_.split('\t')(2))

messages.saveAsTextFile("errors.txt") // kicks off a computation

Key Idea: Resilient Distributed Datasets (RDDs) » Collections of objects across a cluster with user-controlled partitioning & storage (memory, disk, ...) » Built via parallel transformations (map, filter, …) » The world only lets you make RDDs such that they can be: Automatically rebuilt on failure
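
A small aside (not on the slide): the lineage Spark records for this rebuilding can be inspected with `toDebugString`, which prints the chain of transformations behind an RDD:

// Sketch: viewing an RDD's recorded lineage.
val errors = sc.textFile("log.txt").filter(_.startsWith("ERROR"))
println(errors.toDebugString)
// Prints something like:
// (2) MapPartitionsRDD[2] at filter ...
//  |  log.txt MapPartitionsRDD[1] at textFile ...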

Python, Java, Scala, R

// Scala:
val lines = sc.textFile(...)
lines.filter(x => x.contains("ERROR")).count()

// Java (better in Java 8!):
JavaRDD<String> lines = sc.textFile(...);
lines.filter(new Function<String, Boolean>() {
  Boolean call(String s) { return s.contains("error"); }
}).count();

Fault Tolerance. RDDs track lineage info to rebuild lost data:

file.map(lambda rec: (rec.type, 1)) \
    .reduceByKey(lambda x, y: x + y) \
    .filter(lambda (type, count): count > 10)

[Diagram: input file → map → reduce → filter]

Partitioning. RDDs know their partitioning functions:

file.map(lambda rec: (rec.type, 1)) \
    .reduceByKey(lambda x, y: x + y) \
    .filter(lambda (type, count): count > 10)

[Diagram: input file → map → reduce → filter; the output of reduceByKey is known to be hash-partitioned, and the filtered result is also known to keep that partitioning.]
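
In the Scala API this is visible directly (a sketch mirroring the Python above): `reduceByKey` returns an RDD whose `partitioner` is set, and partition-preserving operations like `filter` keep it, so no extra shuffle is needed downstream:

val counts = sc.textFile("file.txt")
  .map(rec => (rec.split("\t")(0), 1))
  .reduceByKey(_ + _)                      // hash-partitioned by key
println(counts.partitioner)                // Some(HashPartitioner@...)
val frequent = counts.filter { case (_, n) => n > 10 }  // partitioning preserved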

Machine Learning example

Logistic Regression

data = spark.textFile(...).map(readPoint).cache()
w = numpy.random.rand(D)
for i in range(iterations):
    gradient = data.map(
        lambda p: (1 / (1 + exp(-p.y * w.dot(p.x)))) * p.y * p.x
    ).reduce(lambda a, b: a + b)
    w -= gradient
print "Final w: %s" % w

Logistic Regression Results. [Chart: running time (s) vs. number of iterations (1 to 30), Hadoop vs. Spark.] Hadoop: 110 s / iteration. Spark: 80 s for the first iteration, 1 s for further iterations. 100 GB of data on 50 m1.xlarge EC2 machines

Behavior with Less RAM. [Chart: iteration time (s) vs. % of working set in memory: 68.8 s at 0%, 58.1 s at 25%, 40.7 s at 50%, 29.7 s at 75%, 11.5 s at 100%.]

Benefit for Users. The same engine performs data extraction, model training and interactive queries. [Diagram: with separate engines, each step is its own pass (DFS read → parse → DFS write; DFS read → train → DFS write; DFS read → query → DFS write; ...); with Spark, it is a single DFS read → parse → train → query pipeline.]
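
A sketch of what such a single-engine pipeline might look like in Scala (every path and parameter here is illustrative, and KMeans stands in for whatever model is being trained):

import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

val points = sc.textFile("hdfs://data/points")                      // DFS read
  .map(line => Vectors.dense(line.split(",").map(_.toDouble)))      // parse
  .cache()                                                          // keep in memory
val model = KMeans.train(points, 10, 20)        // train: k = 10, 20 iterations
val inCluster0 = points.filter(p => model.predict(p) == 0).count()  // interactive query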

State of the Spark ecosystem

Spark Community. Most active open source community in big data: 200+ developers, 50+ companies contributing. [Chart: contributors in the past year, with Spark far ahead of Giraph and Storm.]

Project Activity. [Charts: activity in 2014, measured in commits and lines of code changed. Spark leads with roughly 1,600 commits and 350,000 lines of code changed, well ahead of HDFS, MapReduce, YARN and Storm.]

Continuing Growth. [Chart: contributors per month to Spark. Source: ohloh.net]

Built-in libraries

Standard Library for Big Data. Big data apps lack libraries of common algorithms. Spark's generality + support for multiple languages (Python, Scala, Java, R) make it suitable to offer this. [Diagram: SQL, ML, graph and other libraries layered on the Spark core.] Much of future activity will be in these libraries

A General Platform. Standard libraries included with Spark: Spark Streaming (real-time), MLlib (machine learning), Spark SQL (structured data), GraphX (graphs), and more, all on top of Spark Core

Machine Learning Library (MLlib)

points = context.sql("select latitude, longitude from tweets")
model = KMeans.train(points, 10)

40 contributors in the past year

MLlib algorithms. Classification: logistic regression, linear SVM, naïve Bayes, classification tree. Regression: generalized linear models (GLMs), regression tree. Collaborative filtering: alternating least squares (ALS), non-negative matrix factorization (NMF). Clustering: k-means||. Decomposition: SVD, PCA. Optimization: stochastic gradient descent, L-BFGS
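
As a hedged usage sketch for one of these (logistic regression with SGD, using the MLlib API of this era; the data file is an assumption):

import org.apache.spark.mllib.classification.LogisticRegressionWithSGD
import org.apache.spark.mllib.util.MLUtils

// Train a classifier on LIBSVM-format data, then score the training set.
val data = MLUtils.loadLibSVMFile(sc, "data/sample.txt").cache()
val model = LogisticRegressionWithSGD.train(data, 100)  // 100 SGD iterations
val predictions = data.map(p => (model.predict(p.features), p.label))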

GraphX

GraphX. A general graph processing library. Build a graph using RDDs of nodes and edges. Large library of graph algorithms with composable steps
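
A minimal sketch of that workflow (the vertex and edge data are made up; `pageRank` runs until the given tolerance):

import org.apache.spark.graphx._

// Build a graph from RDDs of (id, attribute) vertices and Edge objects.
val vertices = sc.parallelize(Seq((1L, "alice"), (2L, "bob"), (3L, "carol")))
val edges = sc.parallelize(Seq(Edge(1L, 2L, 1), Edge(2L, 3L, 1), Edge(3L, 1L, 1)))
val graph = Graph(vertices, edges)
val ranks = graph.pageRank(0.001).vertices  // compose with other steps as needed
ranks.join(vertices).collect().foreach(println)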

GraphX Algorithms. Collaborative Filtering » Alternating Least Squares » Stochastic Gradient Descent » Tensor Factorization. Community Detection » Triangle-Counting » K-core Decomposition » K-Truss. Structured Prediction » Loopy Belief Propagation » Max-Product Linear Programs » Gibbs Sampling. Graph Analytics » PageRank » Personalized PageRank » Shortest Path » Graph Coloring. Classification » Neural Networks. Semi-supervised ML » Graph SSL » CoEM

Spark Streaming. Run a streaming computation as a series of very small, deterministic batch jobs: chop up the live data stream into batches of X seconds; Spark treats each batch of data as an RDD and processes it using RDD operations; finally, the processed results of the RDD operations are returned in batches

Spark Streaming. Batch sizes can be as low as ½ second, with latency of about 1 second. Potential for combining batch processing and streaming processing in the same system
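
A minimal streaming word count along these lines (a sketch: the socket source on localhost:9999 and the 1-second batch size are assumptions):

import org.apache.spark.streaming.{Seconds, StreamingContext}

val ssc = new StreamingContext(sc, Seconds(1))       // 1-second batches
val lines = ssc.socketTextStream("localhost", 9999)  // live data stream
val counts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
counts.print()                                       // results per batch
ssc.start()
ssc.awaitTermination()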

Spark SQL

// Run SQL statements
val teenagers = context.sql(
  "SELECT name FROM people WHERE age >= 13 AND age <= 19")

// The results of SQL queries are RDDs of Row objects
val names = teenagers.map(t => "Name: " + t(0)).collect()

Spark SQL. Enables loading & querying structured data in Spark.

From Hive:
c = HiveContext(sc)
rows = c.sql("select text, year from hivetable")
rows.filter(lambda r: r.year > 2013).collect()

From JSON:
c.jsonFile("tweets.json").registerAsTable("tweets")
c.sql("select text, user.name from tweets")

tweets.json:
{"text": "hi", "user": { "name": "matei", "id": 123 }}

Conclusions

Spark and Research Spark has all its roots in research, so we hope to keep incorporating new ideas!

Conclusion Data flow engines are becoming an important platform for numerical algorithms While early models like MapReduce were inefficient, new ones like Spark close this gap More info: spark.apache.org