Static Dictionaries

A static dictionary is a collection of items; each item is a (key, element) pair, and the pairs have distinct keys. The only operations are initialize/create and get (search). Each key has an estimated access frequency (probability). We consider binary search tree implementations only.
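As a sketch of the interface (hash-backed here purely to show the two operations; the rest of these slides study binary-search-tree implementations):

```python
# Minimal static-dictionary sketch: built once, then lookups only.
# Backed by a Python dict just to illustrate the interface.
class StaticDictionary:
    def __init__(self, pairs):        # initialize/create
        self._items = dict(pairs)     # keys are assumed distinct

    def get(self, key):               # get (search)
        return self._items.get(key)

sd = StaticDictionary([("a", 1), ("b", 2), ("c", 3)])
print(sd.get("b"))   # 2
```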

Example

Key | Probability
 a  |    0.8
 b  |    0.1
 c  |    0.1

with a < b < c. The cost of a search equals the number of key comparisons. For the skewed tree with root a, right child b, and right grandchild c:
Cost = 0.8 * 1 + 0.1 * 2 + 0.1 * 3 = 1.3.
For the balanced tree with root b and children a and c:
Cost = 0.8 * 2 + 0.1 * 1 + 0.1 * 2 = 1.9.
So the balanced tree is not the cheaper one here.
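The two costs can be checked in a few lines of Python; the level lists are read off the two trees above (root at level 1):

```python
# Expected comparisons for a successful search:
# each key contributes probability * level.
def search_cost(probs, levels):
    return sum(p * l for p, l in zip(probs, levels))

probs = [0.8, 0.1, 0.1]                          # keys a < b < c
print(round(search_cost(probs, [1, 2, 3]), 2))   # skewed tree (root a): 1.3
print(round(search_cost(probs, [2, 1, 2]), 2))   # balanced tree (root b): 1.9
```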

Search Types

Successful: the search key is in the dictionary; the search terminates at an internal node.
Unsuccessful: the search key is not in the dictionary; the search terminates at an external/failure node.

Internal And External Nodes

A binary tree with n internal nodes has n + 1 external nodes. Let s1, s2, …, sn be the internal nodes, in inorder; then key(s1) < key(s2) < … < key(sn). Define key(s0) = -infinity and key(s(n+1)) = +infinity. Let f0, f1, …, fn be the external nodes, in inorder. External node fi is reached iff key(si) < search key < key(s(i+1)).

Cost Of Binary Search Tree

Let pi = probability that the search key equals key(si). Let qi = probability that key(si) < search key < key(s(i+1)). The ps and qs sum to 1.

cost of tree = Σ(0 <= i <= n) qi * (level(fi) - 1) + Σ(1 <= i <= n) pi * level(si)

Cost = weighted path length. Note that Huffman trees minimize the weighted external path length; here we want to minimize the sum of the weighted external path length and the weighted internal path length.
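A direct translation of the cost formula; the p and q values below are made up for illustration:

```python
# Weighted path length: internal node s_i contributes p_i * level(s_i);
# external node f_u contributes q_u * (level(f_u) - 1).
def tree_cost(p, p_levels, q, q_levels):
    internal = sum(pi * li for pi, li in zip(p, p_levels))
    external = sum(qu * (lu - 1) for qu, lu in zip(q, q_levels))
    return internal + external

# Skewed 3-key tree (a, then b, then c): internal levels 1, 2, 3;
# external nodes f0..f3 sit at levels 2, 3, 4, 4.
p = [0.2, 0.1, 0.1]            # hypothetical key probabilities
q = [0.2, 0.2, 0.1, 0.1]       # hypothetical gap probabilities (all sum to 1)
print(round(tree_cost(p, [1, 2, 3], q, [2, 3, 4, 4]), 2))   # 1.9
```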

Brute Force Algorithm

Generate all binary search trees with n internal nodes, compute the weighted path length of each, and select a tree with minimum weighted path length. The number of trees to examine is O(4^n / n^1.5) (the nth Catalan number grows at this rate), so the brute force approach is impractical for large n.
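The count is the nth Catalan number; a quick check of how fast it grows:

```python
from math import comb

# Number of distinct binary tree shapes on n internal nodes:
# the nth Catalan number C(2n, n) / (n + 1), which is Θ(4^n / n^1.5).
def num_bsts(n):
    return comb(2 * n, n) // (n + 1)

print(num_bsts(3))    # 5
print(num_bsts(20))   # 6564120420 -- far too many to enumerate
```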

Dynamic Programming

Keys are a1 < a2 < … < an. Let Tij = least-cost tree for a(i+1), a(i+2), …, aj; so T0n = least-cost tree for a1, a2, …, an. For example, T27 contains the internal nodes a3, a4, a5, a6, a7 and the external nodes f2, f3, f4, f5, f6, f7. In general, Tij includes p(i+1), p(i+2), …, pj and qi, q(i+1), …, qj.

Terminology

Tij = least-cost tree for a(i+1), a(i+2), …, aj.
cij = cost of Tij = Σ(i <= u <= j) qu * (level(fu) - 1) + Σ(i < u <= j) pu * level(su).
rij = root of Tij.
wij = weight of Tij = sum of ps and qs in Tij = p(i+1) + p(i+2) + … + pj + qi + q(i+1) + … + qj.

i = j

Tij includes p(i+1), …, pj and qi, …, qj, so Tii includes qi only: Tii is the single external node fi.
cii = cost of Tii = 0.
rii = root of Tii = 0 (there is no internal node).
wii = weight of Tii = sum of ps and qs in Tii = qi.

i < j

Tij = least-cost tree for a(i+1), …, aj; it includes p(i+1), …, pj and qi, …, qj. Let ak, i < k <= j, be the key in the root of Tij, and let L and R be its left and right subtrees. L includes p(i+1), p(i+2), …, p(k-1) and qi, q(i+1), …, q(k-1). R includes p(k+1), p(k+2), …, pj and qk, q(k+1), …, qj.

cost(L)

L includes p(i+1), p(i+2), …, p(k-1) and qi, q(i+1), …, q(k-1). cost(L) = weighted path length of L when L is viewed as a stand-alone binary search tree.

Contribution To cij

Recall cij = Σ(i <= u <= j) qu * (level(fu) - 1) + Σ(i < u <= j) pu * level(su). When L is viewed as a subtree of Tij, the level of each of its nodes is 1 more than when L is viewed as a stand-alone tree. So the contribution of L to cij is cost(L) + w(i,k-1).

cij

The contribution of L to cij is cost(L) + w(i,k-1); similarly, the contribution of R to cij is cost(R) + wkj. The root ak is at level 1 and contributes pk. So
cij = cost(L) + w(i,k-1) + cost(R) + wkj + pk = cost(L) + cost(R) + wij,
since wij = w(i,k-1) + pk + wkj.

cij

cij = cost(L) + cost(R) + wij. Since Tij has least cost, L and R must themselves be least-cost trees: cost(L) = c(i,k-1) and cost(R) = ckj. So cij = c(i,k-1) + ckj + wij. We don't know k, so we minimize over all candidate roots:
cij = min(i < k <= j) { c(i,k-1) + ckj } + wij

rij

cij = min(i < k <= j) { c(i,k-1) + ckj } + wij, and rij is the k that minimizes the right-hand side.

Computation Of c0n And r0n

Start with cii = 0, rii = 0, wii = qi for 0 <= i <= n (zero-key trees). Use cij = min(i < k <= j) { c(i,k-1) + ckj } + wij to compute c(i,i+1) and r(i,i+1) for 0 <= i <= n - 1 (one-key trees). Then use the same equation to compute c(i,i+2), r(i,i+2) for 0 <= i <= n - 2 (two-key trees), then c(i,i+3), r(i,i+3) for 0 <= i <= n - 3 (three-key trees), and so on, until c0n and r0n (the n-key tree) have been computed.

Computation Of c0n And r0n

[Figure: the cij and rij values, i <= j, form a triangular table; its diagonals (1-key, 2-key, 3-key, 4-key, … trees) are filled in order.]
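Putting the recurrence together, an O(n^3) sketch in Python (indexing follows the slides: p[1..n] are key probabilities, q[0..n] are gap probabilities, and p[0] is a dummy):

```python
# Optimal BST dynamic program: fill c, r, w diagonal by diagonal.
def optimal_bst(p, q):
    n = len(p) - 1
    c = [[0.0] * (n + 1) for _ in range(n + 1)]   # c[i][j] = cost of T(i,j)
    r = [[0] * (n + 1) for _ in range(n + 1)]     # r[i][j] = root of T(i,j)
    w = [[0.0] * (n + 1) for _ in range(n + 1)]   # w[i][j] = weight of T(i,j)
    for i in range(n + 1):                        # zero-key trees
        w[i][i] = q[i]
    for d in range(1, n + 1):                     # trees with d keys
        for i in range(n - d + 1):
            j = i + d
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            # choose the root a_k, i < k <= j, minimizing c[i][k-1] + c[k][j]
            k = min(range(i + 1, j + 1), key=lambda k: c[i][k - 1] + c[k][j])
            c[i][j] = c[i][k - 1] + c[k][j] + w[i][j]
            r[i][j] = k
    return c, r

# Three keys a < b < c with p = 0.8, 0.1, 0.1 and all q = 0.
c, r = optimal_bst([0, 0.8, 0.1, 0.1], [0, 0, 0, 0])
print(round(c[0][3], 2), r[0][3])   # 1.3 1 -- the skewed tree wins
```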

Complexity

cij = min(i < k <= j) { c(i,k-1) + ckj } + wij takes O(n) time to compute for one (i, j) pair, and there are O(n^2) cij values, so the total time is O(n^3). Because the optimal root is monotone in i and j (r(i,j-1) <= rij <= r(i+1,j)), the time may be reduced to O(n^2) by restricting the search for the root:
cij = min(r(i,j-1) <= k <= r(i+1,j)) { c(i,k-1) + ckj } + wij
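A sketch of the O(n^2) variant. One assumption: the diagonal entries rii are initialized to i rather than 0, so the root bounds are valid even for one-key trees (the monotonicity bound itself is Knuth's).

```python
import math

# O(n^2) variant: restrict root candidates to r[i][j-1] <= k <= r[i+1][j].
def optimal_bst_fast(p, q):
    n = len(p) - 1
    c = [[0.0] * (n + 1) for _ in range(n + 1)]
    r = [[0] * (n + 1) for _ in range(n + 1)]
    w = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        w[i][i] = q[i]
        r[i][i] = i                 # sentinel so the bounds work for d = 1
    for d in range(1, n + 1):
        for i in range(n - d + 1):
            j = i + d
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            best, best_k = math.inf, -1
            for k in range(max(i + 1, r[i][j - 1]), r[i + 1][j] + 1):
                v = c[i][k - 1] + c[k][j]
                if v < best:
                    best, best_k = v, k
            c[i][j] = best + w[i][j]
            r[i][j] = best_k
    return c, r

c, r = optimal_bst_fast([0, 0.8, 0.1, 0.1], [0, 0, 0, 0])
print(round(c[0][3], 2), r[0][3])   # 1.3 1 -- same answer, fewer k's tried
```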

Construct T0n

The root of T0n is r0n. Suppose r0n = 10. Then the left subtree of the root is T(0,9) and the right subtree is T(10,n). Construct T(0,9) and T(10,n) recursively from the rij values. Since each of the n internal nodes is placed in O(1) time, the total time is O(n).
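The construction can be sketched as a short recursion over the rij table. The table below is a hypothetical one matching a 3-key instance whose optimal tree is the skewed chain a1-a2-a3:

```python
# Rebuild the optimal tree from the root table r.
# Returns nested tuples (key_index, left, right); None marks an external node.
def build(r, i, j):
    if i == j:                      # T(i,i) is a single external node
        return None
    k = r[i][j]                     # root of T(i,j)
    return (k, build(r, i, k - 1), build(r, k, j))

# Hypothetical root table: root of T(0,3) is a1, root of T(1,3) is a2, etc.
r = [[0, 1, 1, 1],
     [0, 0, 2, 2],
     [0, 0, 0, 3],
     [0, 0, 0, 0]]
print(build(r, 0, 3))   # (1, None, (2, None, (3, None, None)))
```

Each internal node is created once with constant work, so the recursion runs in O(n) time.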