1 Parallel Databases COMP3211 Advanced Databases Dr Nicholas Gibbins - nmg@ecs.soton.ac.uk 2014-2015

2 Overview
The I/O bottleneck
Parallel architectures
Parallel query processing
–Inter-operator parallelism
–Intra-operator parallelism
–Bushy parallelism
Concurrency control
Reliability

3 The I/O Bottleneck

4 The Memory Hierarchy, Revisited
Type       | Capacity            | Latency
Registers  | 10^1 bytes          | 1 cycle
L1         | 10^4 bytes          | <5 cycles
L2         | 10^5 bytes          | 5-10 cycles
RAM        | 10^9-10^10 bytes    | 20-30 cycles (10^-8 s)
Hard disk  | 10^11-10^12 bytes   | 10^6 cycles (10^-3 s)

5 The I/O Bottleneck
Access time to secondary storage (hard disks) dominates the performance of DBMSs
Two approaches to addressing this:
–Main memory databases (expensive!)
–Parallel databases (cheaper!)
Parallel databases increase I/O bandwidth by spreading data across a number of disks

6 Definitions
Parallelism
–An arrangement or state that permits several operations or tasks to be performed simultaneously rather than consecutively
Parallel databases
–have the ability to split processing of data and access to data across multiple processors and multiple disks

7 Why Parallel Databases?
Hardware trends
Reduced elapsed time for queries
Increased transaction throughput
Increased scalability
Better price/performance
Improved application availability
Access to more data
In short: better performance

8 Parallel Architectures

9 Shared Memory Architecture
Tightly coupled: Symmetric Multiprocessor (SMP)
P = processor, M = memory
(Diagram: several processors P connected to a single global memory)

10 Software – Shared Memory
Less complex database software
Limited scalability
Single buffer
Single database storage
(Diagram: several processors P connected to a single global memory)

11 Shared Disc Architecture
Loosely coupled
Distributed memory
(Diagram: processors P, each with its own memory M, all connected to a shared disc S)

12 Software – Shared Disc
Avoids the memory bottleneck
Same page may be in more than one buffer at once – can lead to incoherence
Needs a global locking mechanism
Single logical database storage
Each processor has its own database buffer
(Diagram: processors P, each with its own memory M, all connected to a shared disc S)

13 Shared Nothing Architecture
Massively parallel
Loosely coupled
High-speed interconnect between processors
(Diagram: processors P, each with its own memory M and disc, linked by an interconnect)

14 Software – Shared Nothing
Each processor owns part of the data
Each processor has its own database buffer
One page is only in one local buffer – no buffer incoherence
Needs distributed deadlock detection
Needs a multiphase commit protocol
Needs to break SQL requests into multiple sub-requests

15 Hardware vs. Software Architecture
It is possible to use one software strategy on a different hardware arrangement
It is also possible to simulate one hardware configuration on another
–Virtual Shared Disk (VSD) makes an IBM SP shared nothing system look like a shared disc setup (for Oracle)
From this point on, we deal only with shared nothing

16 Shared Nothing Challenges
Partitioning the data
Keeping the partitioned data balanced
Splitting up queries to get the work done
Avoiding distributed deadlock
Concurrency control
Dealing with node failure

17 Parallel Query Processing

18 Dividing up the Work
(Diagram: the application talks to a coordinator process, which distributes work to worker processes)

19 Database Software on Each Node
(Diagram: each node runs the DBMS with worker processes W1 and W2; coordinator processes C1 and C2 serve applications App1 and App2)

20 Inter-Query Parallelism
Improves throughput
Different queries/transactions execute on different processors
–(largely equivalent to material in lectures on concurrency)

21 Intra-Query Parallelism
Improves response times (lower latency)
Intra-operator (horizontal) parallelism
–Operators are decomposed into independent operator instances, which perform the same operation on different subsets of the data
Inter-operator (vertical) parallelism
–Operations are overlapped
–Data is pipelined from one stage to the next without materialisation
Bushy (independent) parallelism
–Subtrees in the query plan are executed concurrently

22 Intra-Operator Parallelism

23 (Diagram: a SQL query is split into subset queries, each sent to a different processor)

24 Partitioning
Decomposition of operators relies on data being partitioned across the servers that comprise the parallel database
–Access data in parallel to mitigate the I/O bottleneck
Partitions should aim to spread the I/O load evenly across servers
The choice of partitioning affords different parallel query processing approaches:
–Range partitioning
–Hash partitioning
–Schema partitioning

25 Range Partitioning
(Diagram: rows distributed to three discs by key range: A-H, I-P, Q-Z)

26 Hash Partitioning
(Diagram: rows of a table distributed across discs by a hash function)
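A minimal Python sketch (not from the slides) of how range and hash partitioning might map rows to nodes; the ranges, node count and key values are illustrative assumptions.

```python
# Illustrative sketch of range and hash partitioning: each function maps a row
# key to the node that should store it. A real system would use a stable,
# documented hash function rather than Python's built-in hash().

NUM_NODES = 3

# Range partitioning: assign by the leading letter of a key column,
# mirroring the A-H / I-P / Q-Z split on the range partitioning slide.
RANGES = [("A", "H"), ("I", "P"), ("Q", "Z")]

def range_partition(name: str) -> int:
    first = name[0].upper()
    for node, (lo, hi) in enumerate(RANGES):
        if lo <= first <= hi:
            return node
    raise ValueError(f"no range covers {name!r}")

# Hash partitioning: spread rows evenly by hashing the key.
def hash_partition(key) -> int:
    return hash(key) % NUM_NODES

print(range_partition("Ivanova"))   # -> 1 (the I-P node)
print(hash_partition(42))           # -> some node in 0..2
```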

27 Schema Partitioning
(Diagram: different tables placed on different discs – Table 1 on one, Table 2 on another)

28 Rebalancing Data
Data in proper balance
Data grows, performance drops
Add new nodes and discs
Redistribute data to the new nodes
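A small sketch, under the assumption of simple modulo hash partitioning, of how redistribution could be computed when a node is added; the names and the modulo scheme are illustrative, not the specific rebalancing method described in the lecture.

```python
# Illustrative sketch: when a node is added, rows whose hash now maps to a
# different node must be moved to the new layout.

def node_for(key, num_nodes):
    return hash(key) % num_nodes

def rows_to_move(keys, old_nodes, new_nodes):
    """Return {key: (old_node, new_node)} for keys that must migrate."""
    moves = {}
    for k in keys:
        old, new = node_for(k, old_nodes), node_for(k, new_nodes)
        if old != new:
            moves[k] = (old, new)
    return moves

# Growing from 3 to 4 nodes: only the keys whose assignment changes are moved.
print(rows_to_move(range(20), old_nodes=3, new_nodes=4))
```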

29 Intra-Operator Parallelism
Example query:
–SELECT c1, c2 FROM t WHERE c1 > 5.5
Assumptions:
–100,000 rows
–Predicates eliminate 90% of the rows
Considerations for query plans:
–Data shipping
–Query shipping

30 Data Shipping
(Plan: π c1,c2 (σ c1>5.5 (t1 ∪ t2 ∪ t3 ∪ t4)) – all partitions are shipped to one site, which applies the selection and projection)

31 Data Shipping
(Diagram: each worker ships its entire 25,000-tuple partition across the network; the coordinator produces the 10,000-tuple (c1,c2) result)

32 Query Shipping
(Plan: ∪ (π c1,c2 (σ c1>5.5 (t1)), …, π c1,c2 (σ c1>5.5 (t4))) – the selection and projection are pushed down to each partition, and the results are unioned)

33 Query Shipping
(Diagram: each worker ships only its 2,500 qualifying (c1,c2) tuples across the network; the coordinator unions them into the 10,000-tuple result)

34 Query Shipping Benefits
Database operations are performed where the data are, as far as possible
Network traffic is minimised
For basic database operators, code developed for serial implementations can be reused
In practice, a mixture of query shipping and data shipping has to be employed
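A sketch of query shipping for the example query SELECT c1, c2 FROM t WHERE c1 > 5.5; the partition contents and worker/coordinator functions are invented for illustration.

```python
# Sketch of query shipping: each "worker" holds one partition of t and applies
# selection and projection locally, so only qualifying (c1, c2) pairs cross the
# network to the coordinator, which unions the sub-results.

partitions = [
    [{"c1": 1.0, "c2": "a", "c3": 10}, {"c1": 7.2, "c2": "b", "c3": 11}],
    [{"c1": 6.1, "c2": "c", "c3": 12}, {"c1": 2.5, "c2": "d", "c3": 13}],
]

def worker(partition):
    # selection (c1 > 5.5) and projection (c1, c2) run where the data lives
    return [(row["c1"], row["c2"]) for row in partition if row["c1"] > 5.5]

def coordinator(partitions):
    # union of the shipped sub-results
    result = []
    for p in partitions:
        result.extend(worker(p))
    return result

print(coordinator(partitions))   # [(7.2, 'b'), (6.1, 'c')]
```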

35 Inter-Operator Parallelism

36 Allows operators with a producer-consumer dependency to be executed concurrently
–Results produced by the producer are pipelined directly to the consumer
–The consumer can start before the producer has produced all of its results
–No need to materialise intermediate relations on disk (although available buffer memory is a constraint)
–Best suited to single-pass operators
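A minimal sketch of this producer-consumer pipelining using Python generators; the operators and table contents are illustrative, but the key point matches the slide: the consumer pulls tuples one at a time, so the intermediate relation is never materialised.

```python
# Pipelined (inter-operator) parallelism sketched with generators: the
# selection consumes scan output tuple-by-tuple and the projection consumes
# the selection's output, all without building intermediate relations.

def scan(table):
    for row in table:
        yield row                      # produce one tuple at a time

def select(rows, predicate):
    for row in rows:
        if predicate(row):
            yield row                  # consumer starts before the scan finishes

def project(rows, columns):
    for row in rows:
        yield {c: row[c] for c in columns}

table = [{"c1": x, "c2": x * 2} for x in range(10)]
pipeline = project(select(scan(table), lambda r: r["c1"] > 5), ["c2"])
print(list(pipeline))
```

Note that a sort could not sit in the middle of such a pipeline without buffering, which is why the slide restricts pipelining to single-pass operators.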

37 Inter-Operator Parallelism
(Diagram: Scan, Join and Sort stages overlapped in time, each pipelining its output to the next)

38 Intra- + Inter-Operator Parallelism
(Diagram: several Scan–Join–Sort pipelines running in parallel, with their stages overlapped in time)

39 The Volcano Architecture
Basic operators as usual:
–scan, join, sort, aggregate (sum, count, average, etc.)
The Exchange operator
–Inserted between the steps of a query to:
–Pipeline results
–Direct streams of data to the next step(s), redistributing as necessary
Provides a mechanism to support both vertical and horizontal parallelism
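A minimal sketch of an exchange-style operator that redistributes a tuple stream between pipeline stages. The queue-based structure, class name and partitioning function are assumptions for illustration, not the Volcano implementation itself.

```python
# Sketch of an exchange operator: it sits between two operators, receives
# tuples from producers and routes each one to the input queue of the consumer
# chosen by a partitioning function.

from queue import Queue

class Exchange:
    def __init__(self, num_consumers, partition_fn):
        self.queues = [Queue() for _ in range(num_consumers)]
        self.partition_fn = partition_fn

    def send(self, row):
        # route the tuple to the consumer chosen by the partitioning function
        self.queues[self.partition_fn(row)].put(row)

    def close(self):
        for q in self.queues:
            q.put(None)                 # end-of-stream marker

    def receive(self, consumer_id):
        while (row := self.queues[consumer_id].get()) is not None:
            yield row

# Route order tuples to two downstream join instances by hashing the customer key.
ex = Exchange(2, lambda row: row["custkey"] % 2)
for row in [{"custkey": 1}, {"custkey": 2}, {"custkey": 3}]:
    ex.send(row)
ex.close()
print(list(ex.receive(0)), list(ex.receive(1)))
```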

40 Exchange Operators
Example query:
–SELECT county, SUM(order_item)
 FROM customer, order
 WHERE order.customer_id = customer.customer_id
 GROUP BY county
 ORDER BY SUM(order_item)

41 Exchange Operators
(Plan: SCAN Customer and SCAN Order feed a HASH JOIN, followed by GROUP and then SORT)

42 Exchange Operators
(Plan fragment: an EXCHANGE operator inserted between SCAN Customer and the HASH JOIN)

43 Exchange Operators
(Plan fragment: EXCHANGE operators inserted between SCAN Customer, SCAN Order and the HASH JOIN)

44 (Full plan: EXCHANGE operators inserted between SCAN Customer, SCAN Order, HASH JOIN, GROUP and SORT)

45 Bushy Parallelism

46 Bushy Parallelism
Execute subtrees of the query plan concurrently
(Diagram: a plan joining R, S, T and U, with σ and π operators, whose independent subtrees run concurrently)

47 Parallel Query Processing

48 Some Parallel Queries
Enquiry
Collocated Join
Directed Join
Broadcast Join
Repartitioned Join
These combine aspects of intra-operator and bushy parallelism

49 Orders Database
CUSTOMER (CUSTKEY, C_NAME, …, C_NATION, …)
ORDER (ORDERKEY, DATE, …, CUSTKEY, …, SUPPKEY, …)
SUPPLIER (SUPPKEY, S_NAME, …, S_NATION, …)

50 Enquiry/Query
"How many customers live in the UK?"
A SCAN and COUNT slave task runs on each partition of the CUSTOMER table
Subcounts are returned to the coordinator, which SUMs them
The result is returned to the application
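A small sketch of this parallel enquiry; the partition contents are invented, but the shape matches the slide: each worker counts its own partition and the coordinator sums the sub-counts.

```python
# Sketch of the parallel enquiry "How many customers live in the UK?":
# SCAN + COUNT runs per partition, SUM runs at the coordinator.

partitions = [
    [{"C_NAME": "Ada", "C_NATION": "UK"}, {"C_NAME": "Bob", "C_NATION": "FR"}],
    [{"C_NAME": "Cho", "C_NATION": "UK"}, {"C_NAME": "Dee", "C_NATION": "UK"}],
]

def worker_count(partition):
    # local scan and count on one partition of CUSTOMER
    return sum(1 for row in partition if row["C_NATION"] == "UK")

def coordinator(partitions):
    # SUM of the sub-counts returned by the slave tasks
    return sum(worker_count(p) for p in partitions)

print(coordinator(partitions))   # 3
```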

51 Collocated Join
"Which customers placed orders in July?"
Requires a JOIN of CUSTOMER and ORDER
Both tables are partitioned on CUSTKEY (the same key), so corresponding entries are on the same node
(Plan: on each node, SCAN and JOIN CUSTOMER and ORDER; UNION the results and return them to the application)

52 Directed Join
"Which customers placed orders in July?" (tables have different partitioning keys)
ORDER is partitioned on ORDERKEY, CUSTOMER is partitioned on CUSTKEY
Retrieve rows from ORDER, then use ORDER.CUSTKEY to direct each row to the node holding the matching CUSTOMER rows
(Plan: slave task 1 SCANs ORDER and directs rows to slave task 2, which SCANs CUSTOMER and performs the JOIN; the coordinator UNIONs the results and returns them to the application)

53 Broadcast Join
"Which customers and suppliers are in the same country?"
SUPPLIER is partitioned on SUPPKEY, CUSTOMER on CUSTKEY; the join is required on *_NATION
Send (BROADCAST) all SUPPLIER rows to each CUSTOMER node
(Plan: slave task 1 SCANs SUPPLIER and broadcasts it to slave task 2, which SCANs CUSTOMER and performs the JOIN; the coordinator UNIONs the results and returns them to the application)

54 Repartitioned Join
"Which customers and suppliers are in the same country?"
SUPPLIER is partitioned on SUPPKEY, CUSTOMER on CUSTKEY; the join is required on *_NATION
Repartition both tables on *_NATION to localise and minimise the join effort
(Plan: slave tasks 1 and 2 SCAN and repartition CUSTOMER and SUPPLIER; slave task 3 performs the JOIN on the repartitioned data; the coordinator UNIONs the results and returns them to the application)
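A sketch of the repartitioned join idea; the table contents, node count and modulo hashing are illustrative assumptions, but the structure follows the slide: both tables are re-hashed on the join column so matching rows meet on the same node, where a local join runs.

```python
# Sketch of a repartitioned join for "which customers and suppliers are in the
# same country?": re-hash both tables on the nation column, then join locally.

from collections import defaultdict

NUM_NODES = 2

def repartition(rows, key):
    buckets = defaultdict(list)
    for row in rows:
        buckets[hash(row[key]) % NUM_NODES].append(row)
    return buckets

customers = [{"C_NAME": "Ada", "C_NATION": "UK"}, {"C_NAME": "Bob", "C_NATION": "FR"}]
suppliers = [{"S_NAME": "Acme", "S_NATION": "UK"}, {"S_NAME": "Bolt", "S_NATION": "DE"}]

cust_parts = repartition(customers, "C_NATION")
supp_parts = repartition(suppliers, "S_NATION")

result = []
for node in range(NUM_NODES):                       # local join on each node
    for c in cust_parts.get(node, []):
        for s in supp_parts.get(node, []):
            if c["C_NATION"] == s["S_NATION"]:
                result.append((c["C_NAME"], s["S_NAME"]))
print(result)   # [('Ada', 'Acme')]
```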

55 Concurrency Control

56 Concurrency and Parallelism
A single transaction may update data in several different places
Multiple transactions may be using the same (distributed) tables simultaneously
One or several nodes could fail
Requires concurrency control and recovery across multiple nodes for:
–Locking and deadlock detection
–Two-phase commit to ensure 'all or nothing'

57 Locking and Deadlocks
With a Shared Nothing architecture, each node is responsible for locking its own data
There is no global locking mechanism
However:
–T1 locks item A on node 1 and wants item B on node 2
–T2 locks item B on node 2 and wants item A on node 1
–Distributed deadlock

58 Resolving Deadlocks
One approach: timeouts
Time out T2 after its wait exceeds a certain interval
–The interval may need a random element to avoid 'chatter', i.e. both transactions giving up at the same time and then trying again
Roll back T2 to let T1 proceed
Restart T2, which can now complete
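A small sketch of the random element mentioned above, using a local lock to stand in for a remote one; the timeout values and retry limit are illustrative assumptions.

```python
# Sketch of timeout-based deadlock handling with a randomised interval, so two
# transactions do not repeatedly time out and retry in lock-step ("chatter").

import random
import threading
import time

BASE_TIMEOUT = 1.0   # seconds to wait for a lock before giving up

def acquire_with_timeout(lock: threading.Lock, max_retries: int = 5) -> bool:
    for attempt in range(max_retries):
        # random element in the wait interval avoids synchronised retries
        timeout = BASE_TIMEOUT + random.uniform(0.0, 0.5)
        if lock.acquire(timeout=timeout):
            return True
        # timed out: a DBMS would roll back the transaction here,
        # then back off for a randomised interval before restarting it
        time.sleep(random.uniform(0.0, 0.2) * (attempt + 1))
    return False

lock = threading.Lock()
print(acquire_with_timeout(lock))   # True: the lock is free in this toy example
```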

59 Resolving Deadlocks
A more sophisticated approach (DB2):
Each node maintains a local 'wait-for' graph
A distributed deadlock detector (DDD) runs at the catalogue node for each database
Periodically, all nodes send their graphs to the DDD
The DDD records all locks found in a wait state
A transaction becomes a candidate for termination if it is found in the same lock wait state on two successive iterations
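A sketch of the core of such a detector: merging the nodes' local wait-for graphs and checking the merged graph for a cycle. The two-successive-iterations rule is omitted; the function names and graph encoding are assumptions for illustration.

```python
# Sketch of a global deadlock check over merged local wait-for graphs.

def merge(graphs):
    merged = {}
    for g in graphs:
        for txn, waits_for in g.items():
            merged.setdefault(txn, set()).update(waits_for)
    return merged

def find_cycle(graph):
    visited, in_stack = set(), set()

    def dfs(txn, path):
        visited.add(txn)
        in_stack.add(txn)
        for nxt in graph.get(txn, ()):
            if nxt in in_stack:
                return path + [nxt]          # cycle found
            if nxt not in visited:
                cycle = dfs(nxt, path + [nxt])
                if cycle:
                    return cycle
        in_stack.discard(txn)
        return None

    for txn in graph:
        if txn not in visited:
            cycle = dfs(txn, [txn])
            if cycle:
                return cycle
    return None

# Local graphs for the slide's example: T2 waits for T1 on node 1,
# T1 waits for T2 on node 2 -- a distributed deadlock.
node1 = {"T2": {"T1"}}
node2 = {"T1": {"T2"}}
print(find_cycle(merge([node1, node2])))   # e.g. ['T2', 'T1', 'T2']
```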

60 Reliability

61 We wish to preserve the ACID properties for parallelised transactions
–Isolation is taken care of by the 2PL protocol
–Isolation implies Consistency
–Durability can be taken care of node-by-node, with proper logging and recovery routines
–Atomicity is the hard part: we need to commit all parts of a transaction, or abort all parts
The two-phase commit protocol (2PC) is used to ensure that Atomicity is preserved

62 Two-Phase Commit (2PC)
Distinguish between:
–The global transaction
–The local transactions into which the global transaction is decomposed
The global transaction is managed by a single site, known as the coordinator
The local transactions may be executed on separate sites, known as the participants

63 Phase 1: Voting
The coordinator sends a "prepare T" message to all participants
Participants respond with either "vote-commit T" or "vote-abort T"
The coordinator waits for all participants to respond within a timeout period

64 Phase 2: Decision
If all participants return "vote-commit T", send "commit T" to all participants and wait for acknowledgements within a timeout period
If any participant returns "vote-abort T", send "abort T" to all participants and wait for acknowledgements within a timeout period
When all acknowledgements have been received, the transaction is complete
If a site does not acknowledge, resend the global decision until it is acknowledged
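A minimal sketch of the two phases from the coordinator's point of view; message passing is simulated with direct method calls, and logging, timeouts and retransmission (covered on the later slides) are left out.

```python
# Sketch of the 2PC decision logic: phase 1 collects votes, phase 2 sends the
# global decision and collects acknowledgements.

class Participant:
    def __init__(self, name, will_commit=True):
        self.name = name
        self.will_commit = will_commit

    def prepare(self, txn):
        # Phase 1: vote on the local transaction
        return "vote-commit" if self.will_commit else "vote-abort"

    def decide(self, txn, decision):
        # Phase 2: apply the global decision and acknowledge
        print(f"{self.name}: {decision} {txn}")
        return "ack"

def coordinator(txn, participants):
    votes = [p.prepare(txn) for p in participants]              # phase 1: voting
    decision = "commit" if all(v == "vote-commit" for v in votes) else "abort"
    acks = [p.decide(txn, decision) for p in participants]      # phase 2: decision
    assert all(a == "ack" for a in acks)
    return decision

print(coordinator("T1", [Participant("P1"), Participant("P2")]))          # commit
print(coordinator("T2", [Participant("P1"), Participant("P2", False)]))   # abort
```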

65 Normal Operation
(Message sequence: coordinator C sends "prepare T" to participant P; P replies "vote-commit T"; once "vote-commit T" has been received from all participants, C sends "commit T"; P replies "ack")

66 Logging
(The same message sequence as Normal Operation, showing where the coordinator and participants write log records at each step)

67 Aborted Transaction
(Message sequence: C sends "prepare T"; P replies "vote-commit T"; because "vote-abort T" was received from at least one other participant, C sends "abort T"; P replies "ack")

68 Aborted Transaction
(Message sequence: C sends "prepare T"; one participant replies "vote-abort T"; C sends "abort T" to the remaining participants, which reply "ack")

69 State Transitions
(Commit case: the coordinator moves INITIAL → WAIT on sending "prepare T", then WAIT → COMMIT on receiving "vote-commit T" from all participants; the participant moves INITIAL → READY on voting commit, then READY → COMMIT on receiving "commit T")

70 State Transitions
(Abort case: the coordinator moves INITIAL → WAIT → ABORT when "vote-abort T" is received from at least one participant; a participant that voted commit moves INITIAL → READY → ABORT on receiving "abort T")

71 State Transitions
(Abort case: a participant that votes abort moves INITIAL → ABORT directly; the coordinator moves INITIAL → WAIT → ABORT)

72 Coordinator State Diagram
(States: INITIAL → WAIT on sending "prepare T"; WAIT → COMMIT on receiving "vote-commit T" from all participants and sending "commit T"; WAIT → ABORT on receiving "vote-abort T" and sending "abort T")

73 Participant State Diagram
(States: INITIAL → READY on receiving "prepare T" and sending "vote-commit T"; INITIAL → ABORT on receiving "prepare T" and sending "vote-abort T"; READY → COMMIT on receiving "commit T" and sending "ack"; READY → ABORT on receiving "abort T" and sending "ack")

74 Dealing with Failures
If the coordinator or a participant fails during the commit, two things happen:
–The other sites will time out while waiting for the next message from the failed site and invoke a termination protocol
–When the failed site restarts, it tries to work out the state of the commit by invoking a recovery protocol
The behaviour of the sites under these protocols depends on the state they were in when the site failed

75 Termination Protocol: Coordinator
Timeout in WAIT
–The coordinator is waiting for participants to vote on whether they're going to commit or abort
–A missing vote means that the coordinator cannot commit the global transaction
–The coordinator may abort the global transaction
Timeout in COMMIT/ABORT
–The coordinator is waiting for participants to acknowledge successful commit or abort
–The coordinator resends the global decision to participants who have not acknowledged

76 Termination Protocol: Participant
Timeout in INITIAL
–The participant is waiting for a "prepare T"
–It may unilaterally abort the transaction after a timeout
–If "prepare T" arrives after the unilateral abort, either:
–resend the "vote-abort T" message, or
–ignore it (the coordinator then times out in WAIT)
Timeout in READY
–The participant is waiting for the instruction to commit or abort – it is blocked without further information
–The participant can contact other participants to find one that knows the decision – the cooperative termination protocol

77 Recovery Protocol: Coordinator
Failure in INITIAL
–The commit has not yet begun; restart the commit procedure
Failure in WAIT
–The coordinator has sent "prepare T", but has not yet received all vote-commit/vote-abort messages from the participants
–Recovery restarts the commit procedure by resending "prepare T"
Failure in COMMIT/ABORT
–If the coordinator has received all "ack" messages, the commit completed successfully
–Otherwise, invoke the termination protocol

78 Recovery Protocol: Participant
Failure in INITIAL
–The participant has not yet voted
–The coordinator cannot have reached a decision
–The participant should unilaterally abort by sending "vote-abort T"
Failure in READY
–The participant has voted, but doesn't know what the global decision was
–Use the cooperative termination protocol
Failure in COMMIT/ABORT
–Resend the "ack" message

79 Parallel Utilities

80 Ancillary operations can also exploit the parallel hardware:
–Parallel data loading/import/export
–Parallel index creation
–Parallel rebalancing
–Parallel backup
–Parallel recovery

