1 Rhea: automatic filtering for unstructured cloud storage. Christos Gkantsidis, Dimitrios Vytiniotis, Orion Hodson, Dushyanth Narayanan, Florin Dinu, and Antony Rowstron. Presented by Gourav Khaneja.

2 Motivation: Unstructured data. Relational databases have a well-defined schema. With unstructured "text" data (or loosely structured data), the structure is implicit in the application, which provides flexibility.

3 Cluster design for data analytics. Hadoop, Dryad, and MapReduce co-locate storage and compute.

4 Elastic cloud. Amazon S3 & EC2: Amazon Elastic MapReduce. Microsoft Azure storage and compute clouds: Hadoop. (Diagram: scalable storage and elastic compute connected by the DC network.)

5 Why separate clusters? Security and performance isolation; independent evolution (scalability and provisioning); users don't pay for compute just to keep data alive. (Diagram: scalable storage and elastic compute as separate clusters.)

6 Bottleneck. Core DC bandwidth is scarce and oversubscribed. (Diagram: the network between scalable storage and elastic compute is the bottleneck.)

7 Execute the mapper on storage? Intuition: mappers throw away a lot of data. But: data reduction is not guaranteed; it is difficult to stop mappers when storage is overloaded; storage nodes would have to execute complicated logic (the Hadoop system and protocol); and there are dependencies on the runtime environment, libraries, etc.

8 Solution: Rhea. Rhea filters unnecessary data at the storage nodes. Filters are obtained through static analysis of the mappers' Java bytecode, and the filters themselves are executable Java code.

9 Rhea: Design. A filter generator statically analyzes the input job and extracts row (select) and column (project) filters. The filter descriptions are installed at a filter proxy on the storage side, which filters the job's data before it crosses the network to the Hadoop cluster. (Diagram: storage → filter proxy → network → Hadoop cluster; job → filter generator → filter descriptions.)
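
The sketch below illustrates the proxy idea: a generated row filter is applied to the data stream on the storage side, and only accepted lines are sent over the network. This is a minimal illustration, not Rhea's actual implementation; the RowFilter and FilterProxy names and the line-at-a-time interface are assumptions made here.

    import java.io.*;
    import java.nio.charset.StandardCharsets;

    // Hypothetical interface for a generated row filter: true means the mapper
    // could emit output for this line, so the line must be shipped to compute.
    interface RowFilter {
        boolean accept(String line);
    }

    // Storage-side proxy: streams data through the row filter before it leaves
    // the storage cluster. Best effort: if the filter is killed, the proxy can
    // simply fall back to forwarding every line unchanged.
    public class FilterProxy {
        private final RowFilter filter;

        public FilterProxy(RowFilter filter) {
            this.filter = filter;
        }

        public void transfer(InputStream storage, OutputStream network) throws IOException {
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(storage, StandardCharsets.UTF_8));
            BufferedWriter out = new BufferedWriter(
                    new OutputStreamWriter(network, StandardCharsets.UTF_8));
            String line;
            while ((line = in.readLine()) != null) {
                if (filter.accept(line)) {   // only matching rows cross the DC network
                    out.write(line);
                    out.newLine();
                }
            }
            out.flush();
        }
    }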

10 Row Filters

    public void map(… value …) {
        String[] entries = value.toString().split("\t");
        String articleName = entries[0];
        String pointType = entries[1];
        String geoPoint = entries[2];
        if (GEO_RSS_URI.equals(pointType)) {
            StringTokenizer st = new StringTokenizer(geoPoint, " ");
            String strLat = st.nextToken();
            String strLong = st.nextToken();
            double lat = Double.parseDouble(strLat);
            double lang = Double.parseDouble(strLong);
            String locationKey = ………
            String locationName = ………
            geoLocationKey.set(locationKey);
            geoLocationName.set(locationName);
            outputCollector.collect(geoLocationKey, geoLocationName);
        }
    }

11 Step 1: Label the output lines, i.e. the calls to outputCollector.collect(geoLocationKey, geoLocationName) in the mapper code above.

12 Step 2: Collect all control-flow paths that reach the output labels (loops and conditional statements create branches in the control flow).

13 Step 3: Create a flow map: for each instruction, and for each variable referenced in that instruction, record which instructions affect that variable. (A sketch of such a flow map follows.)
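
As a rough illustration of the flow-map idea, the sketch below records, for each (instruction, variable) pair, the set of instructions that define that variable. Instructions are represented by plain integer ids and variables by names; the real analysis works on Java bytecode, so this representation is purely an assumption for illustration, not Rhea's actual data structure.

    import java.util.*;

    // Illustrative flow map: (instruction id, variable name) -> defining instructions.
    // A def-use style table, sketched only for exposition.
    public class FlowMap {
        private final Map<Integer, Map<String, Set<Integer>>> defs = new HashMap<>();

        // Record that `var`, as referenced by instruction `useInstr`,
        // is affected by instruction `defInstr`.
        public void addDefinition(int useInstr, String var, int defInstr) {
            defs.computeIfAbsent(useInstr, k -> new HashMap<>())
                .computeIfAbsent(var, k -> new HashSet<>())
                .add(defInstr);
        }

        // All instructions that affect `var` as seen by instruction `useInstr`.
        public Set<Integer> definitionsOf(int useInstr, String var) {
            return defs.getOrDefault(useInstr, Collections.emptyMap())
                       .getOrDefault(var, Collections.emptySet());
        }
    }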

14 Step 4: Keep only the statements that are reaching definitions for the control-flow statements on those paths; everything else is dropped.

    public void map(… value …) {
        String[] entries = value.toString().split("\t");
        String articleName = entries[0];
        String pointType = entries[1];
        if (GEO_RSS_URI.equals(pointType)) {
            outputCollector.collect(geoLocationKey, geoLocationName);
        }
    }

15 Step 5: Take the disjunction of the paths: return true whenever control would reach an output label, and false otherwise. *This is a simplified version; the actual Rhea-generated code differs in variable names and condition checks.

    public boolean map(… value …) {
        String[] entries = value.toString().split("\t");
        String articleName = entries[0];
        String pointType = entries[1];
        if (GEO_RSS_URI.equals(pointType)) {
            return true;
        }
        return false;
    }
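
Packaged as a standalone class, the simplified filter above might look like the sketch below. This is not the code Rhea emits; the class name, the concrete GEO_RSS_URI value, and the conservative catch-all are assumptions added here to make the sketch self-contained (the catch-all reflects the "conservative" behaviour described on the next slide: when in doubt, keep the row).

    // Illustrative standalone row filter; not actual Rhea output.
    public class GeoRowFilter {
        // Illustrative constant value; the mapper's real GEO_RSS_URI is not shown above.
        private static final String GEO_RSS_URI = "http://www.georss.org/georss/point";

        public boolean accept(String value) {
            try {
                String[] entries = value.split("\t");
                String pointType = entries[1];
                return GEO_RSS_URI.equals(pointType);
            } catch (RuntimeException e) {
                // Conservative: if the line cannot be parsed (e.g. too few columns),
                // keep it and let the real mapper decide.
                return true;
            }
        }
    }

A filter like this could implement the RowFilter interface from the proxy sketch earlier and plug directly into the FilterProxy.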

16 Column Filters. Column accesses are recognized through StringTokenizer and String.split (regular-expression based); the analysis can be extended to other APIs. It is conservative: if the accesses cannot be tracked, no column filtering is done. Irrelevant tokens are replaced, and the fillers are generated dynamically. (A sketch of a column filter follows the next slide.)

17 State machine for the column filter. (Diagram: starting from START, the analysis tracks API-call sequences such as v = value.toString(), then t = new StringTokenizer(v, sep) followed by t.nextToken() calls, or T = v.split(sep), …, to determine which columns the mapper actually reads.)
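
The sketch below shows the runtime side of a column filter, assuming the analysis has already determined which column indices the mapper reads (for the mapper above, indices 1 and 2: pointType and geoPoint). Unused columns are replaced by a filler so the surviving line still splits into the same number of fields downstream. The class name, the fixed single-character filler, and the index-set interface are assumptions made here; Rhea generates its fillers dynamically.

    import java.util.Set;

    // Illustrative column (project) filter for tab-separated rows; not actual Rhea output.
    public class ColumnFilter {
        private final Set<Integer> usedColumns;   // column indices the mapper reads

        // Fixed filler for illustration only. The real system generates fillers
        // dynamically so that tokenization is preserved (e.g. StringTokenizer skips
        // empty tokens, so an empty filler would shift column positions).
        private static final String FILLER = "-";

        public ColumnFilter(Set<Integer> usedColumns) {
            this.usedColumns = usedColumns;
        }

        // Keep the needed columns, substitute the filler everywhere else,
        // preserving the total number of fields.
        public String project(String line) {
            String[] fields = line.split("\t", -1);   // -1: keep trailing empty fields
            StringBuilder out = new StringBuilder();
            for (int i = 0; i < fields.length; i++) {
                if (i > 0) out.append('\t');
                out.append(usedColumns.contains(i) ? fields[i] : FILLER);
            }
            return out.toString();
        }
    }

For instance, new ColumnFilter(Set.of(1, 2)).project(line) would keep pointType and geoPoint and replace every other field with the filler.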

18 Filter Properties. Correct. Isolation and safety: no system calls, I/O calls, etc. Fully transparent, and therefore best effort: a filter can be killed at any time. Stateless: low memory usage (unlike mappers). Output is guaranteed to be no larger than the input (unlike mappers). Termination: is there a proof?

19 Evaluation: Job Selectivity. Many jobs are very selective on rows, on columns, or both. (Chart: normalized selectivity of the example jobs; the highlighted case transfers only 30% of its data.)

20 Job Run Time. (Chart: job run time normalized to the baseline execution without Rhea.) Discussion: the filtering time is not included.

21 Throughput of the Filtering Engine. On a 2-core machine the filters can keep up with a full line rate of 1 Gbps. Optimizations were applied only to the column filter.

22 Across Datacenters: the WAN is the bottleneck. The results are similar to the LAN case; for a few jobs the LAN, rather than the WAN, is the bottleneck.

23 Dollar costs. Discussion: why is the compute cost reduced? Per-second compute cost is assumed (instead of per-hour dollar pricing).

24 Discussion. The example jobs might be biased towards high selectivity. How does the system generalize beyond Hadoop/Java (Pig, Spark, streaming)? Experiments are needed to study how much compute is actually available at the storage nodes. The filters are not optimal (throughput-wise or selectivity-wise). What is the false-positive rate? Debugging becomes harder in the case of mapper bugs.

25 Stateful Mappers. Statements may modify mapper state; for example, a mapper that emits every n-th row. Solution: treat state-accessing statements as output labels. (A sketch of such a mapper follows.)
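
The sketch below is an illustrative stateful mapper of this kind (not one of the paper's jobs, and with a simplified signature rather than the real Hadoop Mapper API). Because the counter is mapper state, the statements touching it are treated like output labels, so the generated filter must pass every row that reaches them rather than wrongly dropping rows the counter depends on.

    import java.util.List;

    // Illustrative stateful mapper: emits only every n-th input row.
    public class EveryNthRowMapper {
        private final int n;
        private long count = 0;          // mapper state shared across map() calls

        public EveryNthRowMapper(int n) {
            this.n = n;
        }

        // Simplified signature; a real Hadoop mapper would use the Mapper API.
        public void map(String value, List<String> output) {
            count++;                     // state access: treated as an output label
            if (count % n == 0) {
                output.add(value);       // the "real" output label
            }
        }
    }

Here every row reaches the count++ statement, so the derived row filter keeps everything, which is the correct (if unprofitable) behaviour.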

26 Optimizations. Merge control paths if all of the branches lead to output labels (this applies to both loops and conditionals), as in the fragment below; a sketch of the merged result follows.

    if (GEO_RSS_URI.equals(pointType)) {
        …
    } else {
        …
    }
    while (condition) {
        …
    }
    outputCollector.collect(geoLocationKey, geoLocationName);
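
As a rough sketch of what the merge buys: in the fragment above, every path through the if/else and the while loop eventually reaches the collect call, so the branch and loop conditions never need to appear in the generated row filter, and the paths collapse into one. The class and method names below are illustrative only, and the trivially-true result assumes no earlier conditions guard the fragment.

    // Illustrative result of merging paths that all reach an output label.
    public class MergedPathsFilter {
        // Without merging, the generator would enumerate one path per branch;
        // with merging, they collapse to a single path whose condition is "true".
        public boolean accept(String value) {
            return true;   // every control-flow path emits output, so keep the row
        }
    }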

27 Evaluation. (Table: input data size and run time for the 9 example jobs without Rhea.) Out of 160 mappers, 50% yield non-trivial row filters and 26% yield non-trivial column filters.

28 DC bandwidth: scarce and oversubscribed. (Figure: measured rates of 631 Mbps and 230 Mbps.)

