CS 347: Parallel and Distributed Data Management
Notes X: BigTable, HBase, Cassandra
Sources
– HBase: The Definitive Guide, Lars George, O'Reilly Publishers, 2011.
– Cassandra: The Definitive Guide, Eben Hewitt, O'Reilly Publishers, 2011.
– BigTable: A Distributed Storage System for Structured Data, F. Chang et al., ACM Transactions on Computer Systems, Vol. 26, No. 2, June 2008.
Lots of Buzz Words!
“Apache Cassandra is an open-source, distributed, decentralized, elastically scalable, highly available, fault-tolerant, tunably consistent, column-oriented database that bases its distribution design on Amazon's Dynamo and its data model on Google's BigTable.”
Clearly, it is buzz-word compliant!!
Basic Idea: Key-Value Store
[figure: Table T, a table of key–value pairs]
Basic Idea: Key-Value Store
Table T: keys are sorted
API (a minimal sketch follows):
– lookup(key) → value
– lookup(key range) → values
– getNext → value
– insert(key, value)
– delete(key)
Each row has a timestamp
Single-row actions are atomic (but not persistent in some systems?)
No multi-key transactions
No query language!
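To make the API concrete, here is a minimal in-memory sketch of such a table in Python. It is an illustration only, not any real system's client: keys are kept sorted, each row carries a timestamp, and single-row operations are "atomic" only because the toy is single-threaded.

    import bisect, time

    class Table:
        def __init__(self):
            self.keys = []          # sorted list of keys
            self.rows = {}          # key -> (value, timestamp)

        def insert(self, key, value):
            if key not in self.rows:
                bisect.insort(self.keys, key)
            self.rows[key] = (value, time.time())

        def lookup(self, key):
            return self.rows.get(key)

        def lookup_range(self, lo, hi):      # lookup(key range) -> values
            i = bisect.bisect_left(self.keys, lo)
            j = bisect.bisect_right(self.keys, hi)
            return [(k, self.rows[k]) for k in self.keys[i:j]]

        def get_next(self, key):             # getNext -> next key/value in sort order
            i = bisect.bisect_right(self.keys, key)
            return (self.keys[i], self.rows[self.keys[i]]) if i < len(self.keys) else None

        def delete(self, key):
            if key in self.rows:
                self.keys.remove(key)
                del self.rows[key]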
Fragmentation (Sharding)
[figure: the table split into tablets across server 1, server 2, server 3]
– use a partition vector to split the key range; each fragment is a tablet
– “auto-sharding”: the partition vector is selected automatically
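A hedged sketch of how a partition vector routes a key to a server; the split points and server names below are made up for illustration.

    import bisect

    # Hypothetical partition vector: tablet i holds keys in [splits[i-1], splits[i]).
    splits  = ["g", "p"]                       # two split points -> three tablets
    servers = ["server1", "server2", "server3"]

    def server_for(key):
        # bisect_right finds which key range (and hence which tablet/server) the key falls into
        return servers[bisect.bisect_right(splits, key)]

    print(server_for("apple"))   # server1
    print(server_for("kiwi"))    # server2
    print(server_for("zebra"))   # server3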
Tablet Replication
[figure: a tablet with a primary copy and backup copies across servers 3, 4, 5]
Cassandra:
– Replication Factor (# copies)
– R/W Rule: One, Quorum, All
– Policy (e.g., Rack Unaware, Rack Aware, ...)
– read all copies (return the fastest reply, do repairs if necessary)
HBase: does not manage replication itself; relies on HDFS
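A small illustrative check of the Cassandra-style tunable-consistency rule: with replication factor N, a write acknowledged by W replicas and a read that contacts R replicas are guaranteed to overlap (so the read sees the latest write) exactly when R + W > N. The code is a toy calculation, not a Cassandra API.

    LEVELS = {"ONE": lambda n: 1, "QUORUM": lambda n: n // 2 + 1, "ALL": lambda n: n}

    def read_sees_latest_write(n, read_level, write_level):
        r, w = LEVELS[read_level](n), LEVELS[write_level](n)
        return r + w > n        # read and write replica sets must intersect

    print(read_sees_latest_write(3, "QUORUM", "QUORUM"))  # True  (2 + 2 > 3)
    print(read_sees_latest_write(3, "ONE", "ONE"))        # False (1 + 1 <= 3): may read stale data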
Need a “Directory”
Table Name, Key → server that stores the key, backup servers
Can be implemented as a special table.

Tablet Internals
[figure: the tablet spans memory and disk]
Design philosophy: only write to disk using sequential I/O.
Tablet Internals
[figure: an in-memory layer on top of immutable disk segments]
– flush memory to disk periodically (minor compaction)
– the tablet is the merge of all layers (files)
– disk segments are immutable
– writes are efficient; reads are only efficient when all data is in memory
– use Bloom filters to speed up reads
– periodically reorganize into a single layer (merge, major compaction)
– deletions are recorded as tombstones
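A toy sketch of the layered tablet, under the assumptions above: writes only touch memory (a real system would also append to a sequential log), reads merge the layers newest-first, deletions leave tombstones, and compactions fold layers together. Real systems differ in many details; this only illustrates the structure.

    TOMBSTONE = object()            # deletion marker, purged at major compaction

    class Tablet:
        def __init__(self):
            self.memtable = {}      # newest layer, mutable, in memory
            self.segments = []      # older immutable layers, newest first

        def put(self, key, value):
            self.memtable[key] = value

        def delete(self, key):
            self.memtable[key] = TOMBSTONE

        def flush(self):            # "minor compaction": freeze memory into a new segment
            self.segments.insert(0, dict(self.memtable))
            self.memtable = {}

        def major_compaction(self): # merge everything into a single layer
            merged = {}
            for layer in reversed([self.memtable] + self.segments):   # oldest first
                merged.update(layer)                                  # newer overwrites older
            self.segments = [{k: v for k, v in merged.items() if v is not TOMBSTONE}]
            self.memtable = {}

        def get(self, key):
            # a Bloom filter per segment would let a real system skip most layers
            for layer in [self.memtable] + self.segments:
                if key in layer:
                    v = layer[key]
                    return None if v is TOMBSTONE else v
            return None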
Column Family
[figure: a table whose columns are grouped into families]

Column Family
– for storage, treat each row as a single “super value”
– the API provides access to sub-values (use family:qualifier to refer to sub-values, e.g., price:euros, price:dollars)
– Cassandra allows a “super-column”: two-level nesting of columns (e.g., column A can have sub-columns X & Y)
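An illustrative sketch of the family:qualifier addressing: one row stored as a nested "super value", with sub-values reached by splitting the column name. The row contents are made up; the price:euros / price:dollars names come from the slide.

    row = {
        "price": {"euros": 12.0, "dollars": 13.1},   # column family "price"
        "info":  {"title": "Widget"},                # column family "info"
    }

    def get(row, column):
        family, qualifier = column.split(":")        # "price:euros" -> ("price", "euros")
        return row[family][qualifier]

    print(get(row, "price:euros"))    # 12.0
    print(get(row, "price:dollars"))  # 13.1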
Vertical Partitions
Can be manually implemented as: [figure: one table per column family]

Vertical Partitions
– a column family is good for sparse data and for column scans; not so good for tuple reads
– are atomic updates to a row still supported?
– API supports actions on the full table, mapped to actions on the column tables
– API supports column “project”
– to decide on a vertical partition, need to know the access patterns
Failure Recovery (BigTable, HBase)
[figure: tablet server with memory and a write-ahead log on GFS or HDFS; master node pings tablet servers; spare tablet server stands by]

Failure Recovery (Cassandra)
No master node; all nodes in the “cluster” are equal (server 1, server 2, server 3).

Failure Recovery (Cassandra)
No master node; all nodes in the “cluster” are equal.
– access any table in the cluster at any server
– that server sends requests to the other servers
CS 347: Parallel and Distributed Data Management
Notes X: MemCacheD

MemCacheD
– general-purpose distributed memory caching system
– open source
What MemCacheD Is
[figure: three data sources, each with its own cache; a client calls put(cache 1, MyName, X) and get_object(cache 1, MyName) → X]

What MemCacheD Is
– each cache is a hash table of (name, value) pairs
– a cache can purge MyName whenever it wants
– no connection between a data source and its cache
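A hedged sketch of the usual cache-aside pattern against a memcached server, using the pymemcache client library (one of several Python clients). The host, port, key name, and the data-source loader are illustrative; on a miss the application itself goes to its data source, since memcached knows nothing about it.

    from pymemcache.client.base import Client

    cache = Client(("cache1.example.com", 11211))   # hypothetical cache host

    def get_object(name):
        value = cache.get(name)                  # hit: served from the cache's hash table
        if value is None:                        # miss (or the cache purged the entry)
            value = load_from_data_source(name)  # the application queries its data source
            cache.set(name, value)               # and re-populates the cache
        return value

    def load_from_data_source(name):
        ...                                      # e.g., a database query; not shown here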
What MemCacheD Could Be (but ain't)
[figure: the three caches acting as one distributed cache; a client calls get_object(X)]

What MemCacheD Could Be (but ain't)
[figure: the distributed cache locates X at whichever cache holds it and returns it]

Persistence?
[figure: client calls get_object(cache 1, MyName) and put(cache 1, MyName, X)]
– each cache is a hash table of (name, value) pairs
– a cache can purge MyName whenever it wants

Persistence?
Back each cache with a DBMS.
Example: MemcacheDB = memcached + BerkeleyDB
CS 347: Parallel and Distributed Data Management
Notes X: ZooKeeper

ZooKeeper
– coordination service for distributed processes
– provides clients with a high-throughput, highly available, memory-only file system
[figure: a client talking to ZooKeeper; the namespace is a tree of znodes rooted at /, with children a, b, c; znode /a/d/e has state]
ZooKeeper Servers
[figure: each client connects to one server; every server keeps a replica of the state]

ZooKeeper Servers
A client's reads are served by the server it is connected to, from its local state replica.

ZooKeeper Servers
A client's writes are propagated and synced to the other servers.
Writes are totally ordered, using the Zab algorithm.
Failures
If your server dies, just connect to a different one!

ZooKeeper Notes
Differences from a file system:
– all nodes can store data
– storage size is limited
API: insert node, read node, read children, delete node, ... (sketched below)
Can set triggers (watches) on nodes
Clients and servers must know all servers
ZooKeeper works as long as a majority of servers are available
Writes are totally ordered; reads are ordered with respect to writes
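A sketch of the znode API using the kazoo client library, one common Python client; the server address is made up and the paths mirror the /a/d/e example from the figure. The watch illustrates the "trigger" idea.

    from kazoo.client import KazooClient

    zk = KazooClient(hosts="zk1.example.com:2181")   # hypothetical ensemble member
    zk.start()

    zk.ensure_path("/a/d")
    zk.create("/a/d/e", b"some state")       # insert node (every node can hold data)
    data, stat = zk.get("/a/d/e")            # read node
    children = zk.get_children("/a")         # read children
    zk.set("/a/d/e", b"new state")           # update node

    def on_change(data, stat):               # a "trigger" (watch) on the node
        print("znode changed:", data)

    zk.DataWatch("/a/d/e", on_change)

    zk.delete("/a/d/e")                      # delete node
    zk.stop()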
ZooKeeper Applications
– metadata/configuration store
– lock server (sketched below)
– leader election (sketched below)
– group membership
– barrier
– queue
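As a sketch of two of the applications above, the kazoo client ships ready-made recipes for locks and leader election built on znodes; the paths and identifiers below are made up, and this only shows the shape of the calls, not how the recipes work internally.

    from kazoo.client import KazooClient

    zk = KazooClient(hosts="zk1.example.com:2181")
    zk.start()

    # Lock server: only one client at a time runs the critical section.
    lock = zk.Lock("/app/locks/resource-42", "client-1")
    with lock:
        pass  # do the protected work here

    # Leader election: run_as_leader() is invoked only on the elected client.
    def run_as_leader():
        print("I am the leader")

    election = zk.Election("/app/election", "client-1")
    election.run(run_as_leader)   # blocks; returns when leadership is given up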
CS 347: Parallel and Distributed Data Management
Notes X: S4

Material based on: [citation on slide]

S4
Platform for processing unbounded data streams:
– general purpose
– distributed
– scalable
– partially fault tolerant (whatever this means!)

Data Stream
Terminology: event (data record), key, attribute
Question: can an event have duplicate attributes for the same key? (I think so...)
The stream is unbounded, generated by, say, user queries, purchase transactions, phone calls, sensor readings, ...
S4 Processing Workflow
[figure: the stream flows through a pipeline of user-specified “processing units”]

Inside a Processing Unit
[figure: a processing unit contains processing elements PE1, PE2, ..., PEn, one per key value (key=a, key=b, ..., key=z)]

Example: given a stream of English quotes, produce a sorted list of the top K most frequent words (see the sketch below).
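A hedged sketch of how this example could be split into S4-style keyed processing elements: one counter PE per distinct word (key = word) emits its running count downstream, and a single merger PE (keyed on a constant) keeps the top K. All framework plumbing (routing, timers, serialization) is omitted, so this only mimics the structure.

    import heapq

    class WordCountPE:                       # one instance per distinct word (key = word)
        def __init__(self, word):
            self.word, self.count = word, 0
        def on_event(self, _event):
            self.count += 1
            return ("count", self.word, self.count)    # would be routed downstream

    class TopKPE:                            # single instance (constant key), keeps top K
        def __init__(self, k):
            self.k, self.counts = k, {}
        def on_event(self, event):
            _, word, count = event
            self.counts[word] = count
            return heapq.nlargest(self.k, self.counts.items(), key=lambda kv: kv[1])

    word_pes, topk, top = {}, TopKPE(k=3), None
    for word in "the quick brown fox jumps over the lazy dog the".split():
        pe = word_pes.setdefault(word, WordCountPE(word))   # PEs created lazily per key
        top = topk.on_event(pe.on_event(word))
    print(top)   # e.g. [('the', 3), ('quick', 1), ('brown', 1)]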
Processing Nodes
[figure: events are routed to processing nodes by hashing the key (hash(key) = 1, ..., hash(key) = m); within a node, PEs are keyed (key=a, key=b, ...)]

Dynamic Creation of PEs
– as a processing node sees new key attributes, it dynamically creates new PEs to handle them
– think of PEs as threads

Another View of a Processing Node
[figure]
Failures
– the communication layer detects node failures and provides failover to standby nodes
– what happens to events in transit during a failure? (My guess: events are lost!)

How do we do DB operations on top of S4?
Selects and projects are easy! What about joins?
What is a Stream Join?
For a true join of streams R and S, we would need to store all inputs forever! Not practical...
Instead, define a window join:
– at time t a new R tuple arrives
– it only joins with the previous w S tuples

One Idea for Window Join
“key” is the join attribute. Code for the PE:
  for each event e:
    if e.rel = R [ store e in Rset (last w)
                   for s in Sset: output join(e, s) ]
    else ...
One Idea for Window Join
Same PE code as above; “key” is the join attribute.
Is this right??? (It enforces the window on a per-key-value basis.)
Maybe add sequence numbers to events to enforce the correct window? (See the sketch below.)
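A hedged sketch of the per-join-key PE the slide describes, made concrete in Python. Events are assumed to be (relation, sequence number, payload) triples; this is an assumption, not part of the slide. As the slide warns, the window here is the last w tuples of each relation *seen by this PE*, i.e., enforced per key value; the sequence number is carried along because, per the slide's suggestion, it is what a tighter implementation could use to enforce a global window.

    from collections import deque

    class WindowJoinPE:                     # one instance per join-key value
        def __init__(self, w):
            self.rset = deque(maxlen=w)     # last w R tuples seen for this key
            self.sset = deque(maxlen=w)     # last w S tuples seen for this key

        def on_event(self, event):
            rel, seq, payload = event       # e.g. ("R", 17, {...}); seq unused here
            mine, other = (self.rset, self.sset) if rel == "R" else (self.sset, self.rset)
            mine.append((seq, payload))
            return [(payload, p) for (_s, p) in other]   # emit the joined pairs

    pe = WindowJoinPE(w=2)
    pe.on_event(("R", 1, "r1"))
    pe.on_event(("S", 2, "s1"))
    print(pe.on_event(("R", 3, "r2")))      # [('r2', 's1')]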
Another Idea for Window Join
All R & S events carry the same key (“key = fake”); say the join key is attribute C. Code for the PE:
  for each event e:
    if e.rel = R [ store e in Rset (last w)
                   for s in Sset: if e.C = s.C then output join(e, s) ]
    else if e.rel = S ...

Another Idea for Window Join
Same code as above, but the entire join is done in one PE: no parallelism.

Do You Have a Better Idea for Window Join?
Final Comment: Managing the State of a PE
– Who manages state? S4: the user does. Mupet: the system does.
– Is the state persistent?
CS 347: Parallel and Distributed Data Management
Notes X: Hyracks

Hyracks
– generalization of map-reduce
– infrastructure for “big data” processing
Material here based on: [citation on slide]; appeared in ICDE 2011.

A Hyracks data object:
– records partitioned across N sites (site 1, site 2, site 3 in the figure)
– a simple record schema is available (more than just key-value)
Operations
[figure: an operator transforms input partitions into output partitions according to a distribution rule]

Operations (parallel execution)
[figure: operator instances op 1, op 2, op 3 running in parallel]

Example: Hyracks Specification
[figure]

Example: Hyracks Specification
[figure, annotated: the initial partitions; input and output are each a set of partitions; a mapping to new partitions; 2 replicated partitions]

Notes
Job specification can be done manually or automatically.
Example: Activity Node Graph
[figure]

Example: Activity Node Graph
[figure, annotated: activities connected by sequencing constraints]

Example: Parallel Instantiation
[figure]

Example: Parallel Instantiation
[figure, annotated: a stage starts after its input stages finish]

System Architecture
[figure]
Library of Operators:
– file readers/writers
– mappers
– sorters
– joiners (various types)
– aggregators
– can add more!

Library of Connectors:
– N:M hash partitioner
– N:M hash-partitioning merger (input sorted)
– N:M range partitioner (with partition vector)
– N:M replicator
– 1:1
– can add more!
Hyracks Fault Tolerance: Work in Progress?
[figure]

Hyracks Fault Tolerance: Work in Progress?
One approach: save partial results (operator outputs) to files in persistent storage; after a failure, redo all the work needed to reconstruct the missing data.

Hyracks Fault Tolerance: Work in Progress?
Can we do better? Maybe: each process retains its previous results until they are no longer needed?
[figure: process Pi's output r1, r2, ..., r8; some results have already “made their way” to the final result, the rest are the current output]
CS 347: Parallel and Distributed Data Management
Notes X: Pregel

Material based on: [citation on slide], in SIGMOD 2010.
Note there is an open-source version of Pregel called Giraph.

Pregel
A computational model/infrastructure for processing large graphs.
Prototypical example: PageRank
[figure: vertex x with in-neighbors a and b]
PR[i+1, x] = f(PR[i, a]/n_a, PR[i, b]/n_b)
Pregel
Synchronous computation in iterations. In one iteration, each node:
– gets messages from its neighbors
– computes
– sends data to its neighbors
PR[i+1, x] = f(PR[i, a]/n_a, PR[i, b]/n_b)

Pregel vs. Map-Reduce/S4/Hyracks/...
In Map-Reduce, S4, Hyracks, ..., the workflow is separate from the data.
In Pregel, the data (the graph) drives the data flow.

Pregel Motivation
– many applications require graph processing
– Map-Reduce and other workflow systems are not a good fit for graph processing
– need to run graph algorithms on many processors
Example of Graph Computation
[figure]

Termination
– after each iteration, each vertex votes to halt or not
– if all vertexes vote to halt, the computation terminates

Vertex Compute (simplified)
Available data for iteration i:
– InputMessages: { [from, value] }
– OutputMessages: { [to, value] }
– OutEdges: { [to, value] }
– MyState: value
OutEdges and MyState are remembered for the next iteration.
Max Computation
  change := false
  for [f, w] in InputMessages do
    if w > MyState.value then
      [ MyState.value := w
        change := true ]
  if (superstep = 1) OR change then
    for [t, w] in OutEdges do
      add [t, MyState.value] to OutputMessages
  else
    vote to halt
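A minimal single-process rendering of this pseudocode in Python, as a sketch only: the run() driver below just executes synchronous supersteps and re-activates halted vertices that receive messages; it is not Pregel's actual API, and the graph at the end is a made-up example.

    class MaxVertex:
        def __init__(self, vid, value, out_edges):
            self.vid, self.value, self.out_edges = vid, value, out_edges
            self.halted = False

        def compute(self, superstep, in_messages):
            change = False
            for w in in_messages:
                if w > self.value:
                    self.value, change = w, True
            if superstep == 1 or change:
                return [(t, self.value) for t in self.out_edges]
            self.halted = True              # vote to halt
            return []

    def run(vertices):
        superstep, inbox = 1, {vid: [] for vid in vertices}
        while not all(v.halted for v in vertices.values()):
            outbox = {vid: [] for vid in vertices}
            for v in vertices.values():
                if not v.halted or inbox[v.vid]:
                    v.halted = False        # a message re-activates a halted vertex
                    for (t, w) in v.compute(superstep, inbox[v.vid]):
                        outbox[t].append(w)
            inbox, superstep = outbox, superstep + 1

    # Ring a -> b -> c -> a; every vertex ends with the maximum value, 9.
    g = {"a": MaxVertex("a", 3, ["b"]), "b": MaxVertex("b", 9, ["c"]), "c": MaxVertex("c", 1, ["a"])}
    run(g)
    print({v.vid: v.value for v in g.values()})   # {'a': 9, 'b': 9, 'c': 9}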
Page Rank Example
[figure]

Page Rank Example
[figure, annotated: the iteration count; iterating through InputMessages; MyState.value; a shorthand for sending the same message to all out-edges]

Single-Source Shortest Paths
[figure] (sketched below)
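A sketch of a single-source shortest-paths vertex in the same style as the max-computation example, reusing its run() driver: the vertex value is the best known distance from the source (infinity initially, 0 at the source), out_edges maps each neighbor to an edge weight, and a vertex only sends messages when its distance is finite and has just improved (or on the first superstep). The example graph is made up.

    import math

    class SSSPVertex:
        def __init__(self, vid, out_edges, is_source=False):
            self.vid, self.out_edges = vid, out_edges       # {neighbor: weight}
            self.value = 0 if is_source else math.inf
            self.halted = False

        def compute(self, superstep, in_messages):
            best = min(in_messages, default=math.inf)
            changed = best < self.value
            if changed:
                self.value = best
            if (superstep == 1 or changed) and self.value < math.inf:
                return [(t, self.value + w) for t, w in self.out_edges.items()]
            self.halted = True                              # vote to halt
            return []

    # Edges: a -> b (2), a -> c (5), b -> c (1); run() is the driver from the max sketch above.
    g = {"a": SSSPVertex("a", {"b": 2, "c": 5}, is_source=True),
         "b": SSSPVertex("b", {"c": 1}),
         "c": SSSPVertex("c", {})}
    run(g)
    print({v.vid: v.value for v in g.values()})   # {'a': 0, 'b': 2, 'c': 3}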
Architecture
[figure: a master and workers A, B, C; input data 1 and input data 2; the graph has nodes a, b, c, d, ...; a sample input record is [a, value]]

Architecture
Partition the graph and assign partitions to workers (worker A: vertexes a, b, c; worker B: vertexes d, e; worker C: vertexes f, g, h).

Architecture
Read the input data; worker A forwards input values to the appropriate workers.

Architecture
Run superstep 1.

Architecture
At the end of superstep 1, send messages; check whether to halt.
Architecture
Run superstep 2.

Architecture
Checkpoint.

Architecture
Checkpoint: write to stable storage: MyState, OutEdges, InputMessages (or OutputMessages).

Architecture
If a worker dies, find a replacement and restart from the latest checkpoint.

Architecture
[figure: the system back in normal operation with workers A, B, C]
Interesting Challenge
How best to partition the graph for efficiency?
[figure: a sample graph with vertexes a, b, d, e, f, g, x]

CS 347: Parallel and Distributed Data Management
Notes X: Kestrel
Kestrel
– a Kestrel server handles a set of reliable, ordered message queues
– a server does not communicate with other servers (advertised as good for scalability!)
[figure: clients talking to one server, which holds the queues]

Kestrel
– a Kestrel server handles a set of reliable, ordered message queues
– a server does not communicate with other servers (advertised as good for scalability!)
[figure: a client issues get q and put q against the server's queues; operations are recorded in a log]
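A toy sketch of a single Kestrel-like server: a set of named, ordered, in-memory queues, each backed by an append-only journal so items survive a restart. This only illustrates the idea; it is not Kestrel's wire protocol (which is memcache-style) or its journal format, and the file path is made up.

    import json
    from collections import defaultdict, deque

    class QueueServer:
        def __init__(self, journal_path):
            self.queues = defaultdict(deque)
            self.journal = open(journal_path, "a+")
            self._replay()

        def _replay(self):                       # rebuild in-memory queues from the log
            self.journal.seek(0)
            for line in self.journal:
                op, q, item = json.loads(line)
                if op == "put":
                    self.queues[q].append(item)
                else:
                    self.queues[q].popleft()

        def _log(self, op, q, item=None):
            self.journal.write(json.dumps([op, q, item]) + "\n")
            self.journal.flush()                 # persist before acknowledging

        def put(self, q, item):
            self._log("put", q, item)
            self.queues[q].append(item)

        def get(self, q):
            if not self.queues[q]:
                return None
            self._log("get", q)
            return self.queues[q].popleft()

    server = QueueServer("/tmp/kestrel_toy.journal")
    server.put("work", "job-1")
    print(server.get("work"))                    # job-1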