
1 PySpark Tutorial - Learn to use Apache Spark with Python
Everything Data CompSci 216 Spring 2017

2 Outline
Apache Spark and SparkContext
Spark Resilient Distributed Datasets (RDD)
Transformation and Actions in Spark
RDD Partitions

3 Apache Spark and PySpark
Apache Spark is written in the Scala programming language, which compiles the program code into bytecode for the JVM for big data processing. The open source community has developed a wonderful utility for big data processing with Spark in Python, known as PySpark.

4 SparkContext
SparkContext is the object that manages the connection to the clusters in Spark and coordinates the processes running on the clusters themselves. SparkContext connects to the cluster manager, which manages the actual executors that run the specific computations.
spark = SparkContext("local", "PythonHashTag")
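A minimal sketch of this setup with the import it needs (the master "local" and the app name "PythonHashTag" come from the slide; version and stop() are standard SparkContext members):
from pyspark import SparkContext

spark = SparkContext("local", "PythonHashTag")   # master URL, application name
spark.version                                    # Spark version the context runs against
# ... build and run RDD computations with spark ...
spark.stop()                                     # release resources when finished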

5 Resilient Distributed Datasets (RDD)
An RDD is Spark's representation of a dataset that is distributed across the RAM, or memory, of many machines. An RDD object is essentially a collection of elements that you can use to hold lists of tuples, dictionaries, lists, etc.
Lazy evaluation: the ability to lazily evaluate code, postponing a computation until it is absolutely necessary.
numPartitions = 3
lines = spark.textFile("hw10/example.txt", numPartitions)
lines.take(5)
In the code above, Spark did not load the text file into an RDD when lines = spark.textFile("hw10/example.txt", numPartitions) was called; that call only created a pointer to the file. Only when lines.take(5) needed the file's contents to run its logic was the text file actually read into lines. This brings us to the concepts of actions and transformations.
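A small self-contained illustration of lazy evaluation (a sketch using parallelize() on made-up numbers instead of the homework file, assuming the SparkContext spark from slide 4):
nums = spark.parallelize(range(1000000), 3)   # transformation: returns immediately
squares = nums.map(lambda x: x * x)           # transformation: still nothing computed
squares.take(5)                               # action: the computation actually runs here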

6 Transformation and Actions in Spark
RDDs have actions, which return values, and transformations, which return pointers to new RDDs. An RDD's value is only computed once that RDD is evaluated as part of an action.
take(5) is an action.

7 Transformation and Actions
Spark Transformations: map(), flatMap(), filter(), mapPartitions(), reduceByKey()
Spark Actions: collect(), count(), take(), takeOrdered()

8 map() and flatMap()
map()
The map() transformation applies a function to each line of the RDD and returns the transformed RDD as an iterable of iterables, i.e. each line becomes an iterable and the entire RDD is itself a list of them.
flatMap()
This transformation applies a function to each line just like map(), but the return value is not an iterable of iterables; it is a single flattened iterable holding the entire RDD's contents.
Confused? OK, let's clear this up with an example.

9 map() and flatMap() examples
lines.take(2)
['#good d#ay #', '#good #weather']
words = lines.map(lambda line: line.split(' '))
[['#good', 'd#ay', '#'], ['#good', '#weather']]
words = lines.flatMap(lambda line: line.split(' '))
['#good', 'd#ay', '#', '#good', '#weather']
Instead of an anonymous function (the lambda keyword in Python), we can also use a named function; the anonymous function is easier for simple use.
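For instance, the flatMap() above with a named function could look like this (a sketch; split_words is a hypothetical helper name, not from the slides):
def split_words(line):
    # same logic as the lambda: split a line into words on spaces
    return line.split(' ')

words = lines.flatMap(split_words)
words.take(5)   # ['#good', 'd#ay', '#', '#good', '#weather']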

10 filter()
The filter() transformation is used to reduce the old RDD based on some condition.
How do we filter out the hashtags from words?
hashtags = words.filter(lambda word: "#" in word)
['#good', 'd#ay', '#', '#good', '#weather']
which is wrong.
hashtags = words.filter(lambda word: word.startswith("#")).filter(lambda word: word != "#")
['#good', '#good', '#weather']
which is a caution point in this homework.
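As a side note (a sketch, not from the slides), the two chained filter() calls can also be written as a single filter() with a combined predicate:
# equivalent to chaining the two filters above
hashtags = words.filter(lambda word: word.startswith("#") and word != "#")
hashtags.collect()   # ['#good', '#good', '#weather']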

11 reduceByKey()
reduceByKey(f) combines tuples with the same key using the function f that we specify.
hashtagsNum = hashtags.map(lambda word: (word, 1))
[('#good', 1), ('#good', 1), ('#weather', 1)]
hashtagsCount = hashtagsNum.reduceByKey(lambda a, b: a + b)
or
hashtagsCount = hashtagsNum.reduceByKey(add)   # requires: from operator import add
[('#good', 2), ('#weather', 1)]
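Putting the previous slides together, here is a minimal end-to-end sketch of the hashtag count, assuming the SparkContext spark from slide 4 and the hypothetical file hw10/example.txt:
from operator import add   # add(a, b) returns a + b

lines = spark.textFile("hw10/example.txt", 3)
hashtagsCount = (lines.flatMap(lambda line: line.split(' '))
                      .filter(lambda word: word.startswith("#") and word != "#")
                      .map(lambda word: (word, 1))
                      .reduceByKey(add))
hashtagsCount.collect()   # e.g. [('#good', 2), ('#weather', 1)]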

12 RDD Partitions
Map and reduce operations can be applied effectively in parallel in Apache Spark by dividing the data into multiple partitions. The partitions of an RDD are distributed across several workers running on different nodes of a cluster, so if a single worker fails, the lost partitions can be recomputed on another worker and the RDD remains available. Parallelism is the key feature of any distributed system: the same operation is performed on the partitions simultaneously, which helps achieve fast data processing with Spark.
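A small sketch of partitioning, assuming the SparkContext spark from slide 4 and made-up data:
rdd = spark.parallelize(range(12), 3)   # ask for 3 partitions explicitly
rdd.getNumPartitions()                  # 3
rdd.glom().collect()                    # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
                                        # glom() groups the elements by partition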

13 mapPartitions()
The mapPartitions(func) transformation is similar to map(), but it runs separately on each partition (block) of the RDD, so func must be of type Iterator<T> => Iterator<U> when running on an RDD of type T.

14 Example-1: Sum Each Partition
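The slide's code did not survive the transcript; below is a sketch of what summing each partition with mapPartitions() could look like, assuming the SparkContext spark from slide 4 and made-up data:
def sum_partition(iterator):
    # emit one value per partition: the sum of its elements
    yield sum(iterator)

rdd = spark.parallelize([1, 2, 3, 4, 5, 6], 3)     # partitions: [1, 2], [3, 4], [5, 6]
rdd.mapPartitions(sum_partition).collect()         # [3, 7, 11]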

15 Example-2: Find Minimum and Maximum
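Likewise, a hedged reconstruction rather than the slide's original code: find a per-partition minimum and maximum with mapPartitions(), then combine the partial results on the driver.
def min_max(iterator):
    elems = list(iterator)
    if elems:                                      # skip empty partitions
        yield (min(elems), max(elems))

rdd = spark.parallelize([8, 3, 5, 1, 9, 2], 3)     # partitions: [8, 3], [5, 1], [9, 2]
partial = rdd.mapPartitions(min_max).collect()     # [(3, 8), (1, 5), (2, 9)]
(min(lo for lo, hi in partial), max(hi for lo, hi in partial))   # (1, 9)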


17 Optional Reading
Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing (Zaharia et al., NSDI 2012)

