1 Apache Kudu Zbigniew Baranowski

2 Intro

3 What is KUDU?
New storage engine for structured data (tables) – does not use HDFS!
Columnar store
Mutable (insert, update, delete)
Written in C++
Apache-licensed – open source
Quite new – the 1.0 version was recently released
First commit on October 11th, 2012 … and immature?

4 KUDU tries to fill the gap
HDFS excels at:
Scanning large amounts of data at speed
Accumulating data with high throughput
HBase (on HDFS) excels at:
Fast random lookups by key
Making data mutable

5 Table oriented storage
A Kudu table has an RDBMS-like schema:
Primary key (one or many columns), no secondary indexes
Finite and constant number of columns (unlike HBase)
Each column has a name and a type: boolean, int(8,16,32,64), float, double, timestamp, string, binary
Horizontally partitioned (range, hash); partitions are called tablets
Tablets can have 3 or 5 replicas

6 Data Consistency
Writing
Single-row mutations are done atomically across all columns
No multi-row ACID transactions
Reading
Tuneable freshness of the data: read whatever is available, or wait until all changes committed in the WAL are available
Snapshot consistency: changes made during scanning are not reflected in the results; point-in-time queries are possible (based on a provided timestamp)
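A minimal Java sketch (not from the slides) of how this trade-off surfaces in the client API, assuming the standard Kudu Java client (package org.kududb.client or org.apache.kudu.client depending on the release); the master address, table name and timestamp are placeholders:

import org.apache.kudu.client.*;

KuduClient client = new KuduClient.KuduClientBuilder("kudu-master.cern.ch:7051").build();
KuduTable table = client.openTable("kudu_example");

// Read whatever is available on the contacted replica (default)
KuduScanner latest = client.newScannerBuilder(table)
    .readMode(AsyncKuduScanner.ReadMode.READ_LATEST)
    .build();

// Snapshot read: changes made during the scan are not reflected in the results;
// providing a timestamp (in microseconds) turns it into a point-in-time query
long snapshotTimestampMicros = 1476000000000000L;  // placeholder timestamp
KuduScanner snapshot = client.newScannerBuilder(table)
    .readMode(AsyncKuduScanner.ReadMode.READ_AT_SNAPSHOT)
    .snapshotTimestampMicros(snapshotTimestampMicros)
    .build();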

7 Classical low latency design
Kudu simplifies the Big Data deployment model for online analytics (low-latency ingestion and access).
(Diagram) Classical low latency design: stream sources send events to a staging area; from there the data is flushed periodically to big files on HDFS (for batch processing) and flushed immediately to an indexed data store (for fast data access).

8 Implementing low latency with Kudu
(Diagram) Stream sources write events directly into Kudu, which serves both batch processing and fast data access; the separate staging area and periodic flushes of the classical design are no longer needed.

9 Kudu Architecture

10 Architecture overview
Master server (can be multiple masters for HA)
Stores metadata – table definitions
Tablet directory (tablet locations)
Coordinates cluster reconfigurations
Tablet servers (worker nodes)
Write and read tablets
Tablets are stored on local disks (no HDFS)
Track the status of tablet replicas (followers)
Replicate the data to followers
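As a small illustration (not from the slides): the Java client takes a comma-separated list of master addresses, so an HA deployment with several masters can be addressed directly; the hostnames below are placeholders.

import org.apache.kudu.client.KuduClient;   // org.kududb.client in older releases

// Connect to a multi-master (HA) deployment
KuduClient client = new KuduClient.KuduClientBuilder(
        "kudu-master1.cern.ch:7051,kudu-master2.cern.ch:7051,kudu-master3.cern.ch:7051")
    .build();
// The client asks any reachable master for table definitions and tablet locations,
// then talks to the tablet servers directly for reads and writes.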

11 Tables and tablets
Map of table TEST (held by the Master):
TabletID   Leader   Follower1   Follower2
TEST1      TS1      TS2         TS3
TEST2      TS4      …           …
TEST3      …        …           …
(Diagram) TabletServer1–TabletServer4 each host a mix of replicas: every tablet (TEST1, TEST2, TEST3) has one leader replica and two follower replicas spread across the tablet servers.

12 Data change propagation in Kudu (Raft consensus – https://raft.github.io)
(Diagram) Write path: the client asks the Master for the tablet locations, then sends the write (x rows) to the leader replica of tablet 1 on tablet server X. The leader commits the write to its WAL and forwards it to the follower replicas on tablet servers Y and Z, which write it to their WALs and ACK. The leader then reports success to the client, and the followers commit asynchronously.

13 Insert into tablet (without uniqueness check)
(Diagram) An INSERT (a row with Col1, Col2, Col3) first goes to the MemRowSet – an in-memory B+tree with leaves sorted by primary key. When the MemRowSet is flushed, the data is written as a DiskRowSet (32 MB): a columnar store encoded similarly to Parquet, with rows sorted by PK. Each DiskRowSet keeps its PK {min, max} range and Bloom filters for PK ranges, stored in a cached B-tree. An interval tree keeps track of the PK ranges covered by the DiskRowSets. There might be thousands of DiskRowSets per tablet.
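To make the role of the PK ranges and Bloom filters concrete, here is a deliberately simplified, self-contained Java sketch; the class and field names are illustrative only and are not Kudu's internal API:

import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Simplified model of a tablet's DiskRowSets: each set knows its PK {min, max} range and
// has a "might contain" test standing in for the PK Bloom filter.
class DiskRowSetModel {
    final long pkMin, pkMax;                  // PK {min, max} kept per DiskRowSet
    final Predicate<Long> bloomMightContain;  // stand-in for the PK Bloom filter
    final Map<Long, String> rowsOnDisk;       // stand-in for the columnar data on disk

    DiskRowSetModel(long min, long max, Predicate<Long> bloom, Map<Long, String> rows) {
        this.pkMin = min; this.pkMax = max;
        this.bloomMightContain = bloom; this.rowsOnDisk = rows;
    }

    // Only DiskRowSets whose PK range contains the key are candidates (the interval tree's job);
    // each candidate's Bloom filter is consulted before paying for a disk read.
    static String lookup(List<DiskRowSetModel> rowSets, long key) {
        for (DiskRowSetModel rs : rowSets) {
            if (key < rs.pkMin || key > rs.pkMax) continue;  // outside this set's PK range
            if (!rs.bloomMightContain.test(key)) continue;   // definitely not in this set
            String row = rs.rowsOnDisk.get(key);             // the expensive part ("disk" access)
            if (row != null) return row;
        }
        return null;  // not in any DiskRowSet (the MemRowSet would be checked separately)
    }
}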

14 DiskRowSet compaction
A periodic background task:
Removes deleted rows
Reduces the number of DiskRowSets with overlapping PK ranges
Does not create bigger DiskRowSets – the 32 MB size of each DRS is preserved
Example: DiskRowSet1 (32 MB, PK {A, G}) and DiskRowSet2 (32 MB, PK {B, E}) are compacted into DiskRowSet1 (32 MB, PK {A, D}) and DiskRowSet2 (32 MB, PK {E, G}).

15 How columns are stored on disk (DiskRowSet)
(Diagram) Within a 32 MB DiskRowSet, every column is stored as a sequence of 256 KB pages with per-page metadata. Each column has a B-tree index that maps row offsets to pages, and a separate PK B-tree index maps primary keys to row offsets. Pages are encoded with a variety of encodings, such as dictionary encoding, bitshuffle, or RLE, and can be compressed with Snappy, LZ4 or zlib.
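A short Java sketch of how these per-column choices are exposed when defining a schema (assuming the Kudu Java client; the column names are placeholders) – each column can get its own encoding and compression algorithm:

List<ColumnSchema> cols = new ArrayList<>();
cols.add(new ColumnSchema.ColumnSchemaBuilder("runnumber", Type.INT64)
    .key(true)
    .encoding(ColumnSchema.Encoding.BIT_SHUFFLE)            // bitshuffle works well for integers
    .compressionAlgorithm(ColumnSchema.CompressionAlgorithm.LZ4)
    .build());
cols.add(new ColumnSchema.ColumnSchemaBuilder("project", Type.STRING)
    .encoding(ColumnSchema.Encoding.DICT_ENCODING)          // dictionary encoding for low-cardinality strings
    .compressionAlgorithm(ColumnSchema.CompressionAlgorithm.SNAPPY)
    .build());
Schema schema = new Schema(cols);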

16 Kudu deployment

17 3 options for deployments
Build from source
Using RPMs: 1 core RPM, 2 service RPMs (master and tablet server), one shared config file
Using Cloudera Manager: click, click, click, done

18 Interfacing with Kudu

19 Table access and manipulations
Operations on tables (NoSQL): insert, update, delete, scan
Python, C++ and Java APIs
Integrated with Impala & Hive (SQL), MapReduce and Spark
Flume sink (ingestion)

20 Manipulating Kudu tables with SQL (Impala/Hive)
Table creation:
CREATE TABLE `kudu_example` (
  `runnumber` BIGINT,
  `eventnumber` BIGINT,
  `project` STRING,
  `streamname` STRING,
  `prodstep` STRING,
  `datatype` STRING,
  `amitag` STRING,
  `lumiblockn` BIGINT,
  `bunchid` BIGINT
)
DISTRIBUTE BY HASH (runnumber) INTO 64 BUCKETS
TBLPROPERTIES(
  'storage_handler' = 'com.cloudera.kudu.hive.KuduStorageHandler',
  'kudu.table_name' = 'example_table',
  'kudu.master_addresses' = 'kudu-master.cern.ch:7051',
  'kudu.key_columns' = 'runnumber, eventnumber'
);

DMLs:
insert into kudu_example values (1, 30, 'test', ...);
insert into kudu_example select * from data_parquet;
update kudu_example set datatype='test' where runnumber=1;
delete from kudu_example where project='test';

Queries:
select count(*), max(eventnumber) from kudu_example where datatype like '%AOD%' group by runnumber;
select * from kudu_example k, parquet_table p where k.runnumber = p.runnumber;

21 Creating table with Java
import org.kududb.*;

// CREATING A TABLE
String tableName = "my_table";
String KUDU_MASTER_NAME = "master.cern.ch";
KuduClient client = new KuduClient.KuduClientBuilder(KUDU_MASTER_NAME).build();

List<ColumnSchema> columns = new ArrayList<>();
columns.add(new ColumnSchema.ColumnSchemaBuilder("runnumber", Type.INT64)
    .key(true)
    .encoding(ColumnSchema.Encoding.BIT_SHUFFLE)
    .nullable(false)
    .compressionAlgorithm(ColumnSchema.CompressionAlgorithm.SNAPPY)
    .build());
columns.add(new ColumnSchema.ColumnSchemaBuilder("eventnumber", Type.INT64)
    .key(true)
    .encoding(ColumnSchema.Encoding.BIT_SHUFFLE)
    .nullable(false)
    .compressionAlgorithm(ColumnSchema.CompressionAlgorithm.SNAPPY)
    .build());
// ... remaining columns ...
Schema schema = new Schema(columns);

List<String> partColumns = new ArrayList<>();
partColumns.add("runnumber");
partColumns.add("eventnumber");

CreateTableOptions options = new CreateTableOptions()
    .addHashPartitions(partColumns, 64)
    .setNumReplicas(3);
client.createTable(tableName, schema, options);

22 Inserting rows with Java
// INSERTING
KuduTable table = client.openTable(tableName);
KuduSession session = client.newSession();
// (for batched uploads, set: session.setFlushMode(SessionConfiguration.FlushMode.MANUAL_FLUSH);)

Insert insert = table.newInsert();
PartialRow row = insert.getRow();
row.addLong(0, 1);          // first column (runnumber)
row.addString(2, "test");   // third column
// ... fill the remaining columns ...
session.apply(insert);      // in manual-flush mode this buffers the row on the client side (for batch upload)
session.flush();            // sends the buffered data to Kudu
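Updates and deletes follow the same session pattern; a sketch assuming the same client, table and session objects as above (the column values are placeholders):

// UPDATING – rows are identified by their primary key columns
Update update = table.newUpdate();
PartialRow urow = update.getRow();
urow.addLong("runnumber", 1);
urow.addLong("eventnumber", 30);
urow.addString("datatype", "test");   // new value for a non-key column
session.apply(update);

// DELETING – only the primary key columns are needed
Delete delete = table.newDelete();
PartialRow drow = delete.getRow();
drow.addLong("runnumber", 1);
drow.addLong("eventnumber", 30);
session.apply(delete);

session.flush();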

23 Scanner in Java
// configuring the column projection
List<String> projectColumns = new ArrayList<>();
projectColumns.add("runnumber");
projectColumns.add("dataType");

// setting a scan range on the primary key
Schema s = table.getSchema();
PartialRow start = s.newPartialRow();
start.addLong("runnumber", 8);
PartialRow end = s.newPartialRow();
end.addLong("runnumber", 10);

KuduScanner scanner = client.newScannerBuilder(table)
    .lowerBound(start)
    .exclusiveUpperBound(end)
    .setProjectedColumnNames(projectColumns)
    .build();

while (scanner.hasMoreRows()) {
    RowResultIterator results = scanner.nextRows();
    while (results.hasNext()) {
        RowResult result = results.next();
        System.out.println(result.getString(1)); // getting the 2nd projected column
    }
}

24 Spark with Kudu
wget <kudu-spark jar URL>                      # URL omitted in the original slide
spark-shell --jars kudu-spark_<version>.jar

import org.apache.kudu.spark.kudu._

// Read a table from Kudu
val df = sqlContext.read.options(
  Map("kudu.master" -> "kudu_master.cern.ch:7051",
      "kudu.table"  -> "kudu_table")).kudu

// Query using the DataFrame API...
df.select(df("runnumber"), df("eventnumber"), df("db0"))
  .filter($"runnumber" === 169864)
  .filter($"eventnumber" === 1)
  .show()

// ...or register a temporary table and use SQL
df.registerTempTable("kudu_table")
sqlContext.sql("select id from kudu_table where id >= 5").show()

// Create a new Kudu table from a dataframe schema
// NB: No rows from the dataframe are inserted into the table
// (kuduContext: a KuduContext for the same Kudu master; its construction was not shown in the original slide)
kuduContext.createTable("test_table", df.schema, Seq("key"),
  new CreateTableOptions().setNumReplicas(1))

// Insert data
kuduContext.insertRows(df, "test_table")

25 Kudu Security To be done!

26 Performance (based on ATLAS EventIndex case)

27 Average row length
(Chart) Very good compression ratio – about the same as Parquet.
Each row consists of 56 attributes; most of them are strings, plus a few integers and floats.

28 Insertion rates (per machine, per partition) with Impala
(Chart) Average ingestion speed: worse than Parquet, but better than HBase.

29 Random lookup with Impala
(Chart) Good random data lookup speed – similar to HBase.

30 Data scan rate per core with a predicate on a non-PK column (using Impala)
(Chart) Quite good data scanning speed – much better than HBase. If natively supported predicate operations are used, it is even faster than Parquet.

31 Remarks about Kudu performance
Ingestion speed depends on:
memory available on the server
latency and throughput of the device that stores the WALs
performance of the follower servers
Data access speed by index depends on:
the size of the memory buffers (hot data are looked up within ~30 ms from memory)
storage latency (cold data are looked up within ~300 ms from HDDs or Ceph)
Data scan speed depends on:
the predicate (whether it can be pushed down to Kudu) – see the sketch below
the number of projected columns
storage throughput
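As a sketch of predicate push-down and column projection in the Java API (assuming the kudu_example table from the earlier slides): the predicate and the projection are evaluated by the tablet servers, so only matching rows and the requested columns are returned to the client.

import java.util.Arrays;

KuduTable table = client.openTable("kudu_example");
Schema schema = table.getSchema();

// Predicate on a non-PK column, pushed down to Kudu
KuduPredicate pred = KuduPredicate.newComparisonPredicate(
    schema.getColumn("datatype"), KuduPredicate.ComparisonOp.EQUAL, "AOD");

KuduScanner scanner = client.newScannerBuilder(table)
    .addPredicate(pred)
    .setProjectedColumnNames(Arrays.asList("runnumber", "eventnumber"))  // project only 2 columns
    .build();

while (scanner.hasMoreRows()) {
    RowResultIterator rows = scanner.nextRows();
    while (rows.hasNext()) {
        RowResult r = rows.next();
        System.out.println(r.getLong("runnumber") + " " + r.getLong("eventnumber"));
    }
}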

32 Kudu monitoring

33 Cloudera Manager
A lot of metrics are published through the servers' HTTP endpoints
All of them are collected by CM agents and can be plotted
Predefined CM dashboards: monitoring of Kudu processes, workload plots
CM can also be used for Kudu configuration

34 CM – Kudu host status

35 CM - Workload plots

36 CM - Resource utilisation

37 Observations & Conclusions

38 What is nice about Kudu The first one in Big Data open source world trying to combine columnar store + indexing Simple to deploy It works (almost) without problems It scales (this depends how the schema is designed) Writing, Accessing, Scanning Integrated with Big Data mainstream processing frameworks Spark, Impala, Hive, MapReduce SQL and NoSQL on the same data Gives more flexibility in optimizing schema design comparing to HBase (to levels of partitioning) Cloudera is pushing to deliver production-like quality of the software ASAP

39 What is bad about Kudu? No security (it should be added in next releases) authentication (who connected) authorization (ACLs) Raft consensus not always works as it should Too frequent tablet leader changes (sometime leader cannot be elected at all) Period without leader is quite long (sometimes never ends) This freezes updates on tables Handling disk failures you have to erase/reinitialize entire server Only one index per table No nested types (but there is a binary type) Cannot control tablet placement on servers

40 When can Kudu be useful?
When you have structured 'big data', like in an RDBMS, without complex types
When sequential and random data access are required simultaneously and have to scale: data extraction and analytics at the same time, time series
When low ingestion latency is needed and a lambda architecture is too expensive

41 Learn more
Main page: https://kudu.apache.org/
Video:
Whitepaper:
KUDU project:
Some Java code examples:
Get the Cloudera Quickstart VM and test it

