CS 294-110: Project Suggestions
Ion Stoica and Ali Ghodsi
September 14, 2015
http://www.cs.berkeley.edu/~istoica/classes/cs294/15/

Projects
- This is a project-oriented class
- Reading papers should be a means to a great project, not a goal in itself!
- We strongly prefer groups of two students
- Today, I'll present some suggestions, but you are free to come up with your own proposal
- Main goal: just do a great project

Projects
- Many projects revolve around Spark
- Local expertise
- Great platform to disseminate your work
- Short review, based on a log mining example, to provide context

Spark Example: Log Mining
Load error messages from a log into memory, then interactively search for various patterns ("add 'variables' to the 'functions' of functional programming"):

    lines = spark.textFile("hdfs://...")                    # base RDD
    errors = lines.filter(lambda s: s.startswith("ERROR"))  # transformed RDD
    messages = errors.map(lambda s: s.split("\t")[2])
    messages.cache()
    messages.filter(lambda s: "mysql" in s).count()         # action
    messages.filter(lambda s: "php" in s).count()

The driver ships tasks to the workers; each worker reads its HDFS block, computes its partition of messages, and caches it. The first action reads from HDFS and populates the caches; later queries return results straight from the cache.

Cache your data => faster results. Full-text search of Wikipedia: 60 GB on 20 EC2 machines; 0.5 sec from memory vs. 20 sec on disk.
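
For reference, a minimal self-contained PySpark version of the example above (a sketch: the file path is illustrative, and it assumes log lines are tab-separated with the message in the third field):

    from pyspark import SparkContext

    sc = SparkContext("local[*]", "LogMining")

    # Load the log and keep only ERROR lines.
    lines = sc.textFile("file:///tmp/app.log")   # illustrative path
    errors = lines.filter(lambda s: s.startswith("ERROR"))

    # Pull out the message field and keep the result in memory.
    messages = errors.map(lambda s: s.split("\t")[2])
    messages.cache()

    # The first action materializes and caches; later queries hit the cache.
    print(messages.filter(lambda s: "mysql" in s).count())
    print(messages.filter(lambda s: "php" in s).count())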

The Spark Stack
[Stack diagram: data sources (e.g., {JSON}) feed Spark Core and Spark SQL; DataFrames, ML Pipelines, Spark Streaming, MLlib, GraphX, and a "?" (room for new components) sit on top.]

Pipeline Shuffle
Problem: Right now, shuffle senders write their data to storage, after which the data is shuffled to the receivers. Shuffle is often the most expensive communication pattern and sometimes dominates job completion time.
Project: Start sending shuffle data as it is being produced.
Challenge: How do you do recovery and speculation? You could store the data as it is being sent, but it is still not easy...

Fault Tolerance & Performance Tradeoffs
Problem: Maintaining lineage in Spark provides fault recovery, but comes at a performance cost; e.g., it is hard to support very small tasks due to lineage overhead.
Project: Evaluate how much you can speed up Spark by ignoring fault tolerance. This can generalize to other cluster computing engines.
Challenge: What do you do for large jobs, and how do you treat stragglers? Maybe a hybrid method, i.e., skip lineage only for small jobs? Then you need to figure out when a job is small...

(Eliminating) Scheduling Overhead
Problem: In Spark, the driver schedules every task. Latency is 100s of milliseconds or higher, so millisecond-scale queries cannot run, and the driver can become a bottleneck.
Project: Have the workers perform scheduling.
Challenge: How do you handle faults? Maybe some hybrid solution split across the driver and the workers?
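
To make the problem concrete, here is a small sketch (not from the slides) that estimates per-job scheduling overhead by timing trivial one-task jobs; absolute numbers will vary by deployment:

    import time
    from pyspark import SparkContext

    sc = SparkContext("local[1]", "SchedulingOverhead")
    trivial = sc.parallelize([0], numSlices=1)

    # Each count() is a one-task job with negligible work, so the elapsed
    # time is dominated by driver-side scheduling and task launch overhead.
    runs = 100
    start = time.time()
    for _ in range(runs):
        trivial.count()
    print("avg job latency: %.1f ms" % ((time.time() - start) / runs * 1000))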

Cost-based Optimization in Spark SQL
Problem: Spark employs a rule-based query planner (Catalyst). This limits optimization opportunities, especially when operator performance varies widely with the input data, e.g., join and selection on skewed data.
Project: Build a cost-based optimizer: estimate the operators' costs, and use these costs to compute the query plan.
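
As a toy illustration of the idea (the function, constants, and cost formulas below are made up for exposition; they are not Catalyst APIs): a cost-based planner picks a strategy from data statistics, where a rule-based planner would always apply the same rule.

    # Hypothetical cost model choosing a join strategy from table sizes.
    BROADCAST_THRESHOLD = 10 * 1024 * 1024  # 10 MB, illustrative

    def estimate_join_cost(left_bytes, right_bytes, num_workers=20):
        """Return (strategy, cost) under simple, made-up cost formulas."""
        small = min(left_bytes, right_bytes)
        broadcast_cost = small * num_workers      # ship small table everywhere
        shuffle_cost = left_bytes + right_bytes   # repartition both sides
        if small <= BROADCAST_THRESHOLD and broadcast_cost < shuffle_cost:
            return ("broadcast", broadcast_cost)
        return ("shuffle", shuffle_cost)

    # A 2 MB table joined with a 50 MB table favors a broadcast join.
    print(estimate_join_cost(50 * 2**20, 2 * 2**20))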

Streaming Graph Processing
Problem: With GraphX, queries can be fast, but updates are typically applied in batches (slow).
Project: Incrementally update graphs; support window-based graph queries.
Note: Discuss with Anand Iyer and Ankur Dave if interested.

Streaming ML
Problem: Today, ML algorithms are typically run on static data, so the model cannot be updated in real time.
Project: Develop online ML algorithms that update the model continuously as new data is streamed in.
Note: Also contact Joey Gonzalez if interested.
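
For context, MLlib already ships a few streaming learners that follow this pattern; a minimal sketch using its streaming linear regression (the input directories, line format, and feature dimension are illustrative):

    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext
    from pyspark.mllib.regression import LabeledPoint, StreamingLinearRegressionWithSGD

    sc = SparkContext("local[2]", "StreamingML")
    ssc = StreamingContext(sc, 1)  # 1-second batches

    def parse(line):
        # Illustrative line format: "label,f0 f1 f2"
        label, feats = line.split(",")
        return LabeledPoint(float(label), [float(x) for x in feats.split()])

    train = ssc.textFileStream("/tmp/train").map(parse)  # illustrative dirs
    test = ssc.textFileStream("/tmp/test").map(parse)

    model = StreamingLinearRegressionWithSGD()
    model.setInitialWeights([0.0, 0.0, 0.0])
    model.trainOn(train)  # the model is updated with every new batch
    model.predictOnValues(test.map(lambda p: (p.label, p.features))).pprint()

    ssc.start()
    ssc.awaitTermination()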

Beyond JVM: Using Non-Java Libraries
Problem: Spark tasks are executed within JVMs, which limits performance and the use of popular non-Java libraries.
Project: Find a general way to add support for non-Java libraries; for example, use JNI to call arbitrary libraries.
Challenge: Define the interface, shared data formats, etc.
Note: Contact Guanghua and Shivaram if needed.
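
The slide proposes JNI from JVM tasks; as a rough analogue of what "calling native code from inside a task" looks like, here is a PySpark sketch that calls the C math library via ctypes within each task (the library name assumes Linux):

    import ctypes
    from pyspark import SparkContext

    def native_cos(x):
        # Load the C math library in the worker process and call cos().
        # (Reloading per call is wasteful; a real design would cache it.)
        libm = ctypes.CDLL("libm.so.6")  # Linux-specific name
        libm.cos.restype = ctypes.c_double
        libm.cos.argtypes = [ctypes.c_double]
        return libm.cos(x)

    sc = SparkContext("local[*]", "NativeCall")
    print(sc.parallelize([0.0, 1.0, 2.0]).map(native_cos).collect())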

Beyond JVM: Dynamic Code Generation
Problem: Spark tasks are executed within JVMs, which limits performance and the use of popular non-Java libraries.
Project: Generate non-Java code, e.g., C++, or CUDA for GPUs.
Challenge: API and shared data format.
Note: Contact Guanghua and Shivaram if needed.

Beyond JVM: Resource Management and Scheduling
Problem: The processes hosting non-Java code need to be scheduled, and a GPU cannot be invoked by more than one process.
Project: Develop scheduling and resource-management algorithms.
Challenge: Preserve fault tolerance and straggler mitigation.
Note: Contact Guanghua and Shivaram if needed.

Time Series for DataFrames
Inspired by pandas and R data frames, Spark recently introduced DataFrames.
Problem: Spark DataFrames don't support time series.
Project: Develop and contribute distributed time-series operations for DataFrames.
Challenge: Spark doesn't have indexes.
See http://pandas.pydata.org/pandas-docs/stable/timeseries.html
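
For a sense of what is missing, the pandas operations below all lean on a time index, which Spark DataFrames lack (the data here is synthetic):

    import numpy as np
    import pandas as pd

    # A minute-resolution series indexed by timestamps.
    idx = pd.date_range("2015-09-14", periods=120, freq="min")
    s = pd.Series(np.random.randn(120), index=idx)

    # Index-based operations with no native Spark DataFrame analog:
    hourly = s.resample("H").mean()         # downsample to hourly means
    smoothed = s.rolling(window=10).mean()  # moving average
    lagged = s.shift(1)                     # align each point with its predecessor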

ACID Transactions for Spark SQL
Problem: Spark SQL is used for analytics and doesn't support ACID transactions.
Project: Develop and add row-level ACID transactions on top of Spark SQL.
Challenge: It is hard to provide transactions and analytics in one system.
See https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions

Typed DataFrames
Problem: DataFrames in Spark, unlike Spark RDDs, do not provide type safety.
Project: Develop a typed DataFrame framework for Spark.
Challenge: SQL-like operations are inherently dynamic (e.g., filter("col")), which makes static typing hard unless fancy reflection mechanisms are used.
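
To illustrate the problem in PySpark terms (a sketch using the modern SparkSession API; the schema and data are made up): a misspelled column name is caught only when the query is analyzed at run time, never when it is written.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("TypedDF").getOrCreate()
    df = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])

    df.filter("id > 1").show()   # fine
    df.filter("idd > 1").show()  # typo: raises AnalysisException at run time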

General Pipelines for Spark
Problem: spark.ml provides a pipeline abstraction for ML; generalize it to cover all of Spark.
Project: Develop a pipeline abstraction (similar to ML pipelines) that spans all of Spark, allowing users to combine SQL operations, GraphX operations, etc.
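
For reference, the existing spark.ml pipeline that the slide proposes to generalize chains transformers and an estimator into a single fitted unit (the column names and toy data are illustrative):

    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import Tokenizer, HashingTF
    from pyspark.ml.classification import LogisticRegression

    spark = SparkSession.builder.appName("PipelineDemo").getOrCreate()
    training = spark.createDataFrame(
        [("spark is great", 1.0), ("boring text", 0.0)], ["text", "label"])

    # Stages run in order: tokenize -> hash into features -> train classifier.
    pipeline = Pipeline(stages=[
        Tokenizer(inputCol="text", outputCol="words"),
        HashingTF(inputCol="words", outputCol="features"),
        LogisticRegression(labelCol="label", featuresCol="features"),
    ])
    model = pipeline.fit(training)  # one object captures the whole pipeline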

Beyond BSP
Problem: With BSP, each worker executes the same code.
Project: Can we extend Spark (or another cluster computing framework) to support non-BSP computation? How much better is that than emulating everything with BSP?
Challenge: Maintaining simple APIs; more complex scheduling and communication patterns.

Project Idea: Cryptography & Big Data (Alessandro Chiesa)
As data and computations scale up to larger sizes... can cryptography follow? One direction: zero-knowledge proofs for big data.

Classical Setting: Zero-Knowledge Proofs on One Machine
[Diagram: a server returns "Here is the result of your computation" to a client, who replies "I don't believe you. I don't want to give you my private data. Send me a ZK proof of correctness?" With crypto magic added, the server generates a ZK proof alongside the result, and the client verifies it.]

New Setting for Big Data: Zero-Knowledge Proofs on Clusters
Problem: the ZK proof cannot be generated on one machine (as before).
Challenge: generate the ZK proof over a cluster (e.g., using Spark), which returns the result and the proof to the client for verification.
End goal: "scaling up" ZK proofs to computations on big data, and exploring security applications!

Succinct (Quick Overview)
Queries on compressed data. Basic operations:
- Search: given a substring "s", return the offsets of all occurrences of "s" within the input
- Extract: given an offset "o" and a length "l", uncompress and return "l" bytes of the original file starting at "o"
- Count: given a substring "s", return the number of occurrences of "s" within the input
A key-value store can be implemented on top of these.
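
To pin down the semantics, here is a naive reference implementation over uncompressed bytes (Succinct's contribution is answering the same queries directly on a compressed representation):

    def search(data: bytes, s: bytes):
        """Offsets of all (possibly overlapping) occurrences of s in data."""
        offsets, i = [], data.find(s)
        while i != -1:
            offsets.append(i)
            i = data.find(s, i + 1)
        return offsets

    def extract(data: bytes, o: int, l: int) -> bytes:
        """l bytes of the original input starting at offset o."""
        return data[o:o + l]

    def count(data: bytes, s: bytes) -> int:
        """Number of occurrences of s in data."""
        return len(search(data, s))

    data = b"banana"
    assert search(data, b"ana") == [1, 3]
    assert extract(data, 2, 3) == b"nan"
    assert count(data, b"ana") == 2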

Succinct: Efficient Point Query Support
Problem: The Spark implementation is expensive, as it always queries all workers.
Project: Implement Succinct on top of Tachyon (the storage layer); provide efficient key-value store lookups, i.e., query only the single worker that holds the key.
Note: Contact Anurag and Rachit if interested.

Succinct: External Memory Support
Problem: Some data grows faster than main memory, so queries need to execute on external storage (e.g., SSDs).
Project: Design and implement compressed data structures for efficient external-memory execution. There is a lot of work in the theory community that could be exploited.
Note: Contact Anurag and Rachit if interested.

Succinct: Updates
Problem: Current systems use a multi-store architecture; it is expensive to update the compressed representation.
Project: Develop a low-overhead update solution with minimal impact on memory overhead and query performance. Start from the multi-store architecture (see the NSDI paper).
Note: Contact Anurag and Rachit if interested.

Succinct: SQL
Problem: Arbitrary substring search is powerful, but not many workloads use it.
Project: Support SQL on top of Succinct. Start from Spark SQL and the Succinct Spark package?
Note: Contact Anurag and Rachit if interested.

Succinct: Genomics
Problem: The genomics pipeline is still expensive.
Project: Process genomes on a single machine (using compressed data); enable queries on compressed genomes.
Challenge: Domain-specific query optimizations.
Note: Contact Anurag and Rachit if interested.