
1 Column-Oriented Storage Techniques for MapReduce. Avrilia Floratou (University of Wisconsin – Madison), Jignesh M. Patel (University of Wisconsin – Madison), Eugene J. Shekita (while at IBM Almaden Research Center), Sandeep Tata (IBM Almaden Research Center). Presented by: Luyang Zhang and Yuguan Li.

2 Motivation. Databases provide performance, in part through column-oriented storage, while MapReduce provides programmability and fault tolerance. The goal is to bring column-oriented storage to MapReduce without losing its programmability or fault tolerance.

3 Column-Oriented Storage. Benefits: (1) Column-oriented organizations are more efficient when an aggregate must be computed over many rows but only over a small subset of the columns. (2) They are more efficient when new values of a column are supplied for all rows at once. (3) Column data is of uniform type, which opens up opportunities for storage-size optimization such as compression.

4 Questions. How can columnar storage be incorporated into an existing MapReduce system (Hadoop) without changing its core parts? How can columnar storage operate efficiently on top of a distributed file system (HDFS)? Is it easy to apply well-studied techniques from the database field to the MapReduce framework, given that MapReduce processes one tuple at a time, is not restricted to a fixed set of operators, and is used to process complex data types?

5 Challenges. In Hadoop it is often convenient to model data with complex types such as arrays, maps, and nested records, which leads to a high deserialization cost and prevents the direct use of existing column-oriented compression techniques. Serialization: an in-memory data structure is turned into bytes that can be stored or transmitted. Deserialization: bytes are turned back into an in-memory data structure. (Since Hadoop is written in Java, deserialization is more expensive than in C++.)
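
To make the deserialization cost concrete, here is a minimal sketch (not from the paper) of the serialization round trip using Hadoop's standard Writable types; the Name/Age/Info fields mirror the running example, and the class name SerDeCost is just illustrative.

  import java.io.IOException;
  import org.apache.hadoop.io.DataInputBuffer;
  import org.apache.hadoop.io.DataOutputBuffer;
  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.MapWritable;
  import org.apache.hadoop.io.Text;

  public class SerDeCost {
    public static void main(String[] args) throws IOException {
      // A record with a nested map, like the Info column in the example.
      MapWritable info = new MapWritable();
      info.put(new Text("hobbies"), new Text("tennis"));
      info.put(new Text("friends"), new Text("Ann,Nick"));

      // Serialization: in-memory data structures -> bytes that can be stored or transmitted.
      DataOutputBuffer out = new DataOutputBuffer();
      new Text("Joe").write(out);
      new IntWritable(23).write(out);
      info.write(out);

      // Deserialization: bytes -> in-memory data structures. This is the step that
      // lazy record construction avoids for columns the map function never reads.
      DataInputBuffer in = new DataInputBuffer();
      in.reset(out.getData(), out.getLength());
      Text name = new Text();
      IntWritable age = new IntWritable();
      MapWritable infoBack = new MapWritable();
      name.readFields(in);
      age.readFields(in);
      infoBack.readFields(in);
      System.out.println(name + ", " + age + ", " + infoBack.size() + " Info fields");
    }
  }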

6 Challenges. Compression: although column data is more homogeneous and should compress well, the complex types mean that some existing compression techniques cannot be applied directly in Hadoop. Programming API: some techniques are not feasible with hand-coded map and reduce functions.

7 Outline: Column-Oriented Storage, Lazy Tuple Construction, Compression, Experimental Evaluation, Conclusions.

8 Column-Oriented Storage in Hadoop. Main idea: store each column of the dataset in a separate file. Problems: (1) How can we generate roughly equal-sized splits so that a job can be effectively parallelized over the cluster? (2) How do we make sure that the corresponding values from different columns in the dataset are co-located on the same node running the map task?

9 Column-Oriented Storage in Hadoop (example). The dataset:

  Name  | Age | Info
  Joe   | 23  | hobbies: {tennis}, friends: {Ann, Nick}
  David | 32  | friends: {George}
  John  | 45  | hobbies: {tennis, golf}
  Smith | 65  | hobbies: {swimming}, friends: {Helen}

The dataset is first horizontally partitioned into split directories under /data/2013-03-26/, e.g. /data/2013-03-26/s1 holding the Joe and David rows (1st node) and /data/2013-03-26/s2 holding the John and Smith rows (2nd node). Within each split directory, each column is then stored in its own file: a Name file, an Age file, and an Info file. Two new formats read and write this layout: ColumnInputFormat (CIF) and ColumnOutputFormat (COF).
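
CIF and COF are part of the paper's prototype, not stock Hadoop, so the job wiring below is only a sketch: the class names ColumnInputFormat, ColumnOutputFormat, and ProjectionMapper, the string-based setColumns signature, and the output path are assumptions based on what the slides show.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
  import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

  public class CifJobSetup {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      Job job = Job.getInstance(conf, "age-name-projection");
      job.setJarByClass(CifJobSetup.class);

      // Hypothetical formats from the paper: each split directory (s1, s2, ...)
      // holds one file per column, and CIF reassembles records from those files.
      job.setInputFormatClass(ColumnInputFormat.class);
      job.setOutputFormatClass(ColumnOutputFormat.class);
      ColumnInputFormat.setColumns(job, "Age", "Name"); // read only these columns

      job.setMapperClass(ProjectionMapper.class); // user map function, defined elsewhere
      job.setNumReduceTasks(0);                   // map-only projection job

      FileInputFormat.addInputPath(job, new Path("/data/2013-03-26"));
      FileOutputFormat.setOutputPath(job, new Path("/output/age-name")); // placeholder path
      System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
  }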

10 ColumnInputFormat vs. the RCFile Format. RCFile: avoids the replication and co-location problem, but uses a PAX layout rather than a true column-oriented format, so all columns are packed into a single row group per split; efficient I/O elimination becomes difficult, and the per-row-group metadata adds space overhead. CIF: must tackle replication and co-location, but enables efficient I/O elimination; also consider adding a column to a dataset, which with CIF only requires writing new column files.

11 Replication and Co-location. With the default HDFS replication policy, the replicas of the different column files of a split can land on different nodes (Node A, B, C, D), so a map task cannot read all of its columns locally. A new column placement policy (CPP) is introduced that co-locates the column files of the same split on the same set of nodes; it can be assigned via the dfs.block.replicator.classname property.
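
dfs.block.replicator.classname is the standard HDFS property for plugging in a block placement policy; the class name below is a placeholder, since the paper's CPP implementation is not part of stock Hadoop. In practice this setting lives in hdfs-site.xml on the NameNode; the Java form is only for illustration.

  import org.apache.hadoop.conf.Configuration;

  public class CppConfig {
    public static Configuration withColumnPlacement() {
      Configuration conf = new Configuration();
      // Tell the NameNode to use the column placement policy (CPP) instead of the
      // default policy, so the column files of one split directory are replicated
      // onto the same set of nodes. The class name is a placeholder.
      conf.set("dfs.block.replicator.classname",
               "com.example.ColumnPlacementPolicy");
      return conf;
    }
  }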

12 Example. ColumnInputFormat.setColumns(job, Age, Name) selects the Age and Name columns. The map method evaluates: if (age < 35) return name. For records (Age, Name) = (23, Joe), (32, David), (45, John), (30, Mary), (50, Ann), only (23, Joe) and (32, David) qualify. What if age >= 35? Can we avoid reading and deserializing the name field?

13 Outline: Column-Oriented Storage, Lazy Tuple Construction, Compression, Experiments, Conclusions.

14 Lazy Tuple Construction. Deserialization of each record field is deferred to the point where it is actually accessed, i.e. when the get() methods are called; only the columns that are actually accessed in the map function are deserialized. The map function itself is unchanged, only the value type switches from Record to LazyRecord:

  map(NullWritable key, Record value) {
    String name;
    int age = value.get("age");
    if (age < 35) name = value.get("name");
  }

  map(NullWritable key, LazyRecord value) {
    String name;
    int age = value.get("age");
    if (age < 35) name = value.get("name");
  }

15 LazyRecord implements Record. Each column keeps two pointers: curPos, the position of the current record, and lastPos, the position of the last record actually deserialized (initially lastPos = curPos). When a column such as name or age is finally accessed, the reader skips from lastPos to curPos and deserializes only that value. Why are both pointers needed? Without the lastPos pointer, each nextRecord() call would require all the columns to be deserialized just to extract the length information needed to update their respective curPos pointers.
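
A much simplified sketch of the idea (not the paper's actual LazyRecord class): nextRecord() only advances a row counter, and a column's bytes are skipped and deserialized only when get() is called on it. Length-prefixed values stand in for the real skip lists, and each column is assumed to be read at most once per row.

  import java.io.DataInputStream;
  import java.io.IOException;

  public class LazyRecordSketch {
    private final DataInputStream[] columns; // one stream per column file
    private final int[] lastRow;             // last row consumed per column (lastPos)
    private int curRow = -1;                 // row the record currently points at (curPos)

    public LazyRecordSketch(DataInputStream[] columns) {
      this.columns = columns;
      this.lastRow = new int[columns.length];
    }

    // Cheap: no column file is touched here.
    public void nextRecord() {
      curRow++;
    }

    // Deserialize column c only now, skipping the rows nobody asked for.
    public byte[] get(int c) throws IOException {
      while (lastRow[c] < curRow) {
        int len = columns[c].readInt();
        columns[c].skipBytes(len);  // skip without deserializing (assumed to skip fully)
        lastRow[c]++;
      }
      int len = columns[c].readInt();
      byte[] value = new byte[len];
      columns[c].readFully(value);
      lastRow[c]++;                 // this row's value has now been consumed
      return value;
    }
  }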

16 Skip List (Logical Behavior). Each column file is laid out as a skip list over its records R1, R2, ...: the lowest level stores every record, a middle level lets the reader skip 10 records at a time, and the top level lets it skip 100 records at a time, so long runs of values that are never accessed can be jumped over without reading them. [Figure]
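
A sketch of how the two levels of skip pointers could be consumed. The on-disk layout is assumed here (every 100th record preceded by the byte length of the next 100 values, every 10th by the length of the next 10, and every value by its own length); the real format differs, but the coarse-to-fine skipping logic is the point.

  import java.io.DataInputStream;
  import java.io.IOException;

  public class SkipListReader {
    // Skip n records in a column stream, following the coarsest pointers first.
    public static void skipRecords(DataInputStream in, int n) throws IOException {
      while (n >= 100) { in.skipBytes(in.readInt()); n -= 100; } // skip 100 records at a time
      while (n >= 10)  { in.skipBytes(in.readInt()); n -= 10; }  // skip 10 records at a time
      while (n >= 1)   { in.skipBytes(in.readInt()); n -= 1; }   // skip one value at a time
    }
  }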

17 Example: if (age < 35) return name. The Age column is read value by value (23, 39, 45, 30, ...); the Name column (Joe, Jane, David, John, Mary, Ann, ...) stores skip pointers every 10 rows (e.g. Skip10 = 1002 bytes, Skip10 = 868 bytes) and every 100 rows (e.g. Skip100 = 9017 bytes), so whenever the age predicate fails, the corresponding name bytes are skipped instead of being deserialized. [Figure]

18 Example: if (age < 35) return hobbies. The same skipping applies to the complex Info column, whose values include hobbies: {tennis} friends: {Ann, Nick}, Null, friends: {George}, and hobbies: {tennis, golf}: skip pointers every 10 rows (e.g. Skip10 = 2013 bytes, Skip10 = 1246 bytes) and every 100 rows (e.g. Skip100 = 19400 bytes) let the reader jump over Info values whose rows fail the age predicate. [Figure]

19 Outline: Column-Oriented Storage, Lazy Record Construction, Compression, Experiments, Conclusions.

20 Compression. Two schemes are used. Compressed Blocks: each column file is compressed in LZO/ZLIB blocks, and each block header records the number of records it contains and their RID range (e.g. block B1 covers RIDs 0-9, block B2 covers RIDs 10-35), so a block is decompressed only when some record in it is actually accessed. Dictionary Compressed Skip Lists: for complex columns, field names are dictionary-encoded (e.g. hobbies: 0, friends: 1, giving stored values like 0: {tennis}, 1: {Ann, Nick}, 1: {George}, 0: {tennis, golf}), and the encoded values are kept in a skip list (e.g. Skip100 = 1709 bytes, Skip10 = 210 bytes, Skip10 = 304 bytes) so that unused bytes can be skipped. [Figure]
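
A toy sketch of the dictionary part: repeated field names such as hobbies and friends are replaced by small integer codes before values are written, so the LZO/ZLIB blocks see shorter, more regular data. The encoding below is an illustration, not the paper's on-disk format.

  import java.util.LinkedHashMap;
  import java.util.Map;

  public class FieldNameDictionary {
    private final Map<String, Integer> codes = new LinkedHashMap<>();

    // Return the existing code for a field name, or assign the next free one.
    public int codeFor(String fieldName) {
      return codes.computeIfAbsent(fieldName, k -> codes.size());
    }

    public static void main(String[] args) {
      FieldNameDictionary dict = new FieldNameDictionary();
      // hobbies and friends get codes 0 and 1, matching the slide's example;
      // later records store only the code instead of the full string.
      System.out.println("hobbies -> " + dict.codeFor("hobbies")); // 0
      System.out.println("friends -> " + dict.codeFor("friends")); // 1
      System.out.println("hobbies -> " + dict.codeFor("hobbies")); // 0 again, reused
    }
  }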

21 Outline: Column-Oriented Storage, Lazy Record Construction, Compression, Experiments, Conclusions.

22 RCFile. The table is split into row groups, and within each row group the data is laid out column by column, preceded by metadata. Row Group 1: Name = (Joe, David), Age = (23, 32), Info = ({hobbies: {tennis}, friends: {Ann, Nick}}, {friends: {George}}). Row Group 2: Name = (John, Smith), Age = (45, 65), Info = ({hobbies: {tennis, golf}}, {hobbies: {swimming}, friends: {Helen}}).

23 Experimental Setup. 42-node cluster; each node has two quad-core 2.4 GHz sockets, 32 GB of main memory, and four 500 GB HDDs. Network: 1 Gbit Ethernet switch.

24 Overhead of Columnar Storage. Synthetic dataset: 57 GB, 13 columns (6 integers, 6 strings, 1 map). Query: SELECT *. Single-node experiment. [Chart]

25 Benefits of Column-Oriented Storage. Query: projection of different columns. Single-node experiment. [Chart]

26 Workload. Schema:

  URLInfo {
    String url
    String srcUrl
    time fetchTime
    String inlink[]
    Map metadata
    Map annotations
    byte[] content
  }

Query: if the url contains ibm.com/jp, find all the distinct encodings reported by the page. Dataset: 6.4 TB. Query selectivity: 6%.
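
A sketch of the map function this query implies. UrlInfoRecord, its getUrl() and getMetadata() accessors, and the "encoding" metadata key are assumptions for illustration; distinct encodings would be produced by a reducer (or combiner) keyed on the emitted encoding string.

  import java.io.IOException;
  import java.util.Map;
  import org.apache.hadoop.io.NullWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Mapper;

  public class EncodingMapper
      extends Mapper<NullWritable, UrlInfoRecord, Text, NullWritable> {

    @Override
    protected void map(NullWritable key, UrlInfoRecord value, Context context)
        throws IOException, InterruptedException {
      // With lazy record construction, only the url column is deserialized for
      // the ~94% of records that fail this predicate (query selectivity is 6%).
      String url = value.getUrl();
      if (url != null && url.contains("ibm.com/jp")) {
        Map<String, String> metadata = value.getMetadata();
        String encoding = metadata.get("encoding"); // assumed metadata key
        if (encoding != null) {
          context.write(new Text(encoding), NullWritable.get());
        }
      }
    }
  }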

27 Comparison of Column-Layouts (map phase). [Chart; baseline SEQ: 754 sec]

28 Comparison of Column-Layouts (map phase). [Chart]

29 Comparison of Column-Layouts (total job). [Chart; baseline SEQ: 806 sec]

30 Conclusions: describes a new column-oriented binary storage format for MapReduce; introduces the skip list layout; describes the implementation of lazy record construction; shows that lightweight dictionary compression for complex columns can be beneficial.

31 Comparison of Sequence Files. [Chart]

32 RCFile. [Chart]

33 Comparison of Column-Layouts.

  Layout         Data Read (GB)   Map Time (sec)   Map Time Ratio   Total Time (sec)   Total Time Ratio
  Seq - uncomp.  6400             1416             -                1482               -
  Seq - record   3008             820              -                889                -
  Seq - block    2848             806              -                886                -
  Seq - custom   3040             754              1.0x             806                1.0x
  RCFile         1113             702              1.1x             761                1.1x
  RCFile - comp  102              202              3.7x             291                2.8x
  CIF - ZLIB     36               12.8             59.1x            77                 10.4x
  CIF            96               12.4             60.8x            78                 10.3x
  CIF - LZO      54               12.4             61.0x            79                 10.2x
  CIF - SL       75               9.2              81.9x            70                 11.5x
  CIF - DCSL     61               7.0              107.8x           63                 12.8x

34 Comparison of Column-Layouts. [Chart; baseline SEQ: 754 sec] CIF - DCSL gives the highest map-time speedup and improves the total job time by more than an order of magnitude (12.8x).

35 RCFile. [Chart; baseline SEQ: 754 sec]

36 Comparison of Sequence Files. [Chart; baseline SEQ: 754 sec]

