
1  Greedy Algorithms

2 Greedy Algorithm - Makes the locally optimal choice at each stage. - Used for optimization problems. If each local optimum is part of the global optimum, the greedy strategy yields the global optimum.

3 Greedy Algorithm vs Dynamic Programming

4 Knapsack Problem - n items - a thief’s knapsack of capacity W

5 Knapsack Problem - 0-1 knapsack problem: each item must be either taken or left behind. - Fractional knapsack problem: the thief can take fractions of items.

6 Knapsack Problem - n = 4, W = 50. Item values: $60, $100, $120, $135; value per unit: $6, $5, $4, $3 (so the item sizes are 10, 20, 30, and 45 units).

7 Fractional Knapsack Problem - Greedy algorithm: take the greatest value per unit first. With W = 50, this fills the knapsack for a total of $240.
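The greedy step above can be sketched in code. Item sizes are not listed explicitly on the slide; the weights 10, 20, 30, 45 below are derived from each value divided by its value per unit.

```python
# Greedy fractional knapsack: take items in decreasing value-per-unit order.
# Data follows the slide: values $60/$100/$120/$135 at $6/$5/$4/$3 per unit,
# so the derived weights are 10, 20, 30, 45 units; capacity W = 50.

def fractional_knapsack(items, capacity):
    """items: list of (value, weight). Returns the maximum total value."""
    total = 0.0
    # Sort by value per unit, best first.
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)            # whole item, or a fraction
        total += value * (take / weight)
        capacity -= take
    return total

items = [(60, 10), (100, 20), (120, 30), (135, 45)]
print(fractional_knapsack(items, 50))  # 240.0: items 1 and 2 whole, 20/30 of item 3
```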

8 0-1 Knapsack Problem - With W = 50: greedy by value per unit gives $160; greedy by highest value gives $135; the optimum is $220.

9 0-1 Knapsack Problem - It is difficult to reach the optimal solution ($220, vs. $160 or $135) with a greedy strategy. Dynamic programming is needed.
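A standard dynamic-programming sketch for the 0-1 case, using the same derived weights as before (10, 20, 30, 45 units):

```python
# 0-1 knapsack by dynamic programming.
# dp[w] = best value achievable with capacity w using the items seen so far.

def knapsack_01(items, capacity):
    dp = [0] * (capacity + 1)
    for value, weight in items:
        # Iterate capacities downward so each item is used at most once.
        for w in range(capacity, weight - 1, -1):
            dp[w] = max(dp[w], dp[w - weight] + value)
    return dp[capacity]

items = [(60, 10), (100, 20), (120, 30), (135, 45)]
print(knapsack_01(items, 50))  # 220: the $100 and $120 items (weights 20 + 30)
```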

10 Optimal Substructure vs Subproblem Solution - T_{i,j}: the solution of a subproblem (e.g. T_{1,5}, T_{1,6}, T_{2,5}, T_{2,6} in the figure). A subproblem solution? A local optimum?

11 Greedy Algorithm vs Dynamic Programming
Dynamic Programming: computes all subproblems; always finds the optimal solution; less efficient.
Greedy Algorithm: finds a local optimum; may not find the optimal solution; more efficient.

12 Optimal Substructure vs Subproblem Solution - Subproblem solution: built from all subproblem solutions; usually bottom-up. - Optimal substructure: with only a constant number of parameters, without subproblems or future choices; usually top-down.

13 Huffman Codes - A lossless data compression algorithm. - It uses variable-length codes.

14 Variable-Length Code - Six characters: a, b, c, d, e, f. How can we represent them with binary strings? Fixed-length: a=000, b=001, c=010, d=011, e=100, f=101.

15 Variable-Length Code - Six characters: a, b, c, d, e, f. Variable-length: a=0, b=1, c=00, d=01, e=10, f=11. What does 0010 mean? 0|0|1|0 = aaba, 0|01|0 = ada, 00|10 = ce, …
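The ambiguity on this slide can be checked mechanically: under a code that is not prefix-free, one bit string admits several decodings. A small sketch enumerating every parse of "0010" with the slide's code:

```python
# The slide's non-prefix code: a bit string can decode in several ways.
# This enumerates every decoding of "0010".

NONPREFIX = {"a": "0", "b": "1", "c": "00", "d": "01", "e": "10", "f": "11"}

def decodings(bits):
    if not bits:
        return [""]
    out = []
    for ch, cw in NONPREFIX.items():
        if bits.startswith(cw):
            out.extend(ch + rest for rest in decodings(bits[len(cw):]))
    return out

# The slide's parses aaba, ada, ce all appear (plus aae and cba).
print(sorted(decodings("0010")))
```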

16 Prefix Code - Six characters: a, b, c, d, e, f. Variable-length: a=0, b=101, c=100, d=111, e=1101, f=1100. No codeword is a prefix of another codeword.

17 Prefix Code - Six characters: a, b, c, d, e, f. Variable-length: a=0, b=101, c=100, d=111, e=1101, f=1100. No codeword is a prefix of another codeword.

18 Variable-Length Code
char:            a    b    c    d    e     f     Total
Frequency:       45   13   12   16   9     5     100
Fixed-length:    000  001  010  011  100   101   300 bits
Variable-length: 0    101  100  111  1101  1100  224 bits
Is it the optimal way?

19 Variable-Length Code
char:            a    b     c     d    e     f     Total
Frequency:       45   13    12    16   9     5     100
Fixed-length:    000  001   010   011  100   101   300 bits
Variable-length: 0    101   100   111  1101  1100  224 bits
Alternative?     0    1100  1101  10   1110  1111  231 bits
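The totals in the table are just frequency-weighted codeword lengths. A sketch that recomputes the fixed- and variable-length rows:

```python
# Cost of a code = sum over characters of frequency x codeword length.
# Frequencies and codes from the slide's table.

FREQ = {"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}

def total_bits(code):
    return sum(FREQ[ch] * len(cw) for ch, cw in code.items())

fixed = {"a": "000", "b": "001", "c": "010", "d": "011", "e": "100", "f": "101"}
var   = {"a": "0", "b": "101", "c": "100", "d": "111", "e": "1101", "f": "1100"}

print(total_bits(fixed), total_bits(var))  # 300 224
```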

20 Huffman Tree (fixed-length code)
char  code  freq
a     000   45
b     001   13
c     010   12
d     011   16
e     100   9
f     101   5

21 Huffman Tree (variable-length code)
char  code  freq
a     0     45
b     101   13
c     100   12
d     111   16
e     1101  9
f     1100  5

22 Huffman’s Algorithm - Every non-leaf node has two children. Observation 1: the longest code length is shared by at least 2 characters.

23 Huffman’s Algorithm - Observation 1: the longest code length is shared by at least 2 characters. Observation 2: the 2 longest codes belong to the two least frequent characters.

24 Huffman’s Algorithm - Observation 1: the longest code length is shared by at least 2 characters. Observation 2: the 2 longest codes belong to the two least frequent characters. Observation 3: a non-leaf node can be handled like a leaf node (trees T and T′).

25 Huffman’s Algorithm - Observation 3: a non-leaf node is handled like a leaf node. Total length in T = total length in T′ + 16 × 1 + (9 + 5) × 2.

26 Huffman’s Algorithm - Observation 1: the longest code length is shared by at least 2 characters. Observation 2: the 2 longest codes belong to the two least frequent characters. Observation 3: a non-leaf node can be handled like a leaf node (trees T and T′).

27 Huffman’s Algorithm - Merging the two least frequent nodes.

28–31 Huffman’s Algorithm - Merging the two least frequent nodes, one step at a time (the tree is built up in the figures).

32 Huffman’s Algorithm - Merging the two least frequent nodes yields the final codes:
char  code
a     0
b     101
c     100
d     111
e     1101
f     1100
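The merging procedure on these slides can be sketched with a min-heap: repeatedly pop the two least frequent nodes and push their merge back. Exact codewords depend on tie-breaking, but the total cost matches the slide's 224 bits.

```python
# Huffman's algorithm: repeatedly merge the two least frequent nodes.
import heapq
from itertools import count

def huffman(freqs):
    tick = count()  # tie-breaker so heapq never has to compare dicts
    heap = [(f, next(tick), {ch: ""}) for ch, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # two least frequent nodes
        f2, _, right = heapq.heappop(heap)
        merged = {ch: "0" + cw for ch, cw in left.items()}
        merged.update({ch: "1" + cw for ch, cw in right.items()})
        heapq.heappush(heap, (f1 + f2, next(tick), merged))
    return heap[0][2]

freqs = {"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}
code = huffman(freqs)
cost = sum(freqs[ch] * len(cw) for ch, cw in code.items())
print(cost)  # 224, matching the slide's variable-length code
```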

33 Greedy Algorithm - Optimization algorithms: a greedy algorithm finds a proper local optimum. - Not as powerful as dynamic programming, but simpler. - Examples: the (fractional) knapsack problem and Huffman codes.

34  Graphs

35 What is a graph? G = (V, E), with V = {1, 2, 3, 4, 5} and E = { {1,2}, {1,3}, {2,3}, {2,4}, {2,5}, {3,4} }.

36 Directed and Undirected - Directed graph vs. undirected graph.

37 Representations of Graphs - Adjacency list of a directed graph on vertices 1–5.

38 Representations of Graphs - Adjacency matrix of a directed graph on vertices 1–5.

39 Representations of Graphs
                   Adjacency List     Adjacency Matrix
Space:             |V| + |E|          |V|^2
Finding all edges: |V| + |E|          |V|^2
Finding one edge:  number of edges    1
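Both representations can be built for the undirected graph defined on the earlier slide (V = {1..5}, E = { {1,2}, {1,3}, {2,3}, {2,4}, {2,5}, {3,4} }); a minimal sketch:

```python
# Adjacency list vs adjacency matrix for the slide's undirected graph.

V = [1, 2, 3, 4, 5]
E = [(1, 2), (1, 3), (2, 3), (2, 4), (2, 5), (3, 4)]

# Adjacency list: O(|V| + |E|) space.
adj = {v: [] for v in V}
for u, w in E:
    adj[u].append(w)
    adj[w].append(u)   # undirected: store both directions

# Adjacency matrix: O(|V|^2) space, O(1) edge lookup.
n = len(V)
mat = [[0] * (n + 1) for _ in range(n + 1)]  # 1-indexed for readability
for u, w in E:
    mat[u][w] = mat[w][u] = 1

print(adj[2])                # [1, 3, 4, 5]
print(mat[1][2], mat[1][4])  # 1 0
```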

40 Adjacency Matrix of Graphs - Adjacency matrix of a directed graph on vertices 1–5.

41 Adjacency Matrix of Graphs - A and its transpose A^T (matrix entries shown in the slide figure).

42 Weighted Graphs - Edge weights from the figure: 1.7, 0.4, 2.0, -0.3, 3.1, -0.2, 3.6, -2.1.
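A weighted graph stores a weight per edge; one common sketch keys a dict by edge. The slide only lists the weights, so the vertex pairs below are illustrative assumptions:

```python
# Weighted undirected graph as an edge-keyed dict. The weights come from the
# slide; which endpoints carry which weight is an assumption for illustration.

weights = {
    (1, 2): 1.7, (1, 3): 0.4, (2, 3): 2.0, (2, 4): -0.3,
    (2, 5): 3.1, (3, 4): -0.2, (4, 5): 3.6, (3, 5): -2.1,
}

def weight(u, v):
    # Undirected: look the edge up in either orientation.
    return weights.get((u, v), weights.get((v, u)))

print(weight(3, 2))  # 2.0
```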

