Decrease and Conquer.


Basics
A decrease-and-conquer algorithm works as follows:
- Establish the relationship between a solution to a given instance of a problem and a solution to a smaller instance of the same problem.
- Exploit this relationship either top down (recursively) or bottom up (without recursion).

Decrease & Conquer Variations
- Decrease by a constant: the size of an instance is reduced by the same constant (usually one) at each iteration of the algorithm.
- Decrease by a constant factor: the size of an instance is reduced by the same constant factor (usually two) at each iteration of the algorithm.
- Variable-size decrease: the size-reduction pattern varies from one iteration to another.

Decrease by a Constant

Insertion Sort
Insertion sort is based on the decrease-by-one-and-conquer approach: provided that the smaller array A[1..n-1] is already sorted, sort the original array A[1..n] by finding an appropriate position for the element A[n] among the sorted n-1 elements and inserting it there. Right-to-left scan: scan the sorted subarray from right to left until the first element smaller than or equal to A[n] is encountered, then insert A[n] right after that element.

Algorithm InsertionSort(A[1..n])
for i ← 2 to n do
    v ← A[i]
    j ← i - 1
    while j > 0 and A[j] > v do
        A[j + 1] ← A[j]
        j ← j - 1
    A[j + 1] ← v
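The pseudocode above can be sketched in Python (0-indexed, unlike the slide's 1-indexed array):

```python
def insertion_sort(a):
    """Sort list a in place using the decrease-by-one approach."""
    for i in range(1, len(a)):
        v = a[i]            # element to insert into the sorted prefix a[0..i-1]
        j = i - 1
        # right-to-left scan: shift elements greater than v one slot right
        while j >= 0 and a[j] > v:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = v        # insert v right after the first element <= v
    return a
```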

Efficiency
Input size: n. Basic operation: the key comparison A[j] > v.
The worst case occurs when the input is an array of strictly decreasing values; element A[i] is then compared with all i - 1 sorted elements:
W(n) = Σ_{i=2}^{n} (i - 1) = n(n - 1)/2 ∈ Θ(n²)
Best case (already sorted input, one comparison per iteration):
B(n) = Σ_{i=2}^{n} 1 = n - 1 ∈ Θ(n)

Average Case
For a given i there are i slots where v can be inserted, and each slot is equally likely, with probability 1/i. The number of comparisons for each slot:

slot     comparisons
i        1
i - 1    2
i - 2    3
…        …
1        i - 1

The number of comparisons for slot 1 is i - 1 (not i) because there the scan stops when the while condition j > 0 becomes false, so the basic operation is not evaluated an extra time.

The average number of comparisons needed to insert v is therefore
1·(1/i) + 2·(1/i) + … + (i - 1)·(1/i) + (i - 1)·(1/i)
= (1/i)(1 + 2 + … + (i - 1)) + (i - 1)/i
= (1/i)(i(i - 1)/2) + (i - 1)/i
= (i - 1)/2 + 1 - 1/i
= (i + 1)/2 - 1/i                        … (eq. 1)
This is the number of comparisons in each iteration of the for loop, so
A(n) = Σ_{i=2}^{n} ((i + 1)/2 - 1/i) = (1/2) Σ_{i=2}^{n} (i + 1) - Σ_{i=2}^{n} 1/i
Using Σ_{i=2}^{n} (i + 1) = 3 + 4 + … + (n + 1) = (n - 1)(n + 4)/2 and Σ_{i=2}^{n} 1/i ≈ ln n:
A(n) = (n - 1)(n + 4)/4 - ln n ≈ n²/4 ∈ Θ(n²)
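The worst- and best-case counts above can be checked with a small instrumented sketch that counts executions of the basic operation A[j] > v:

```python
def insertion_sort_comparisons(a):
    """Return the number of key comparisons made by insertion sort on input a."""
    a = list(a)
    count = 0
    for i in range(1, len(a)):
        v, j = a[i], i - 1
        while j >= 0:
            count += 1            # one execution of the basic operation
            if a[j] <= v:
                break             # found the insertion point
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = v
    return count

# strictly decreasing input gives W(n) = n(n-1)/2; sorted input gives B(n) = n-1
worst = insertion_sort_comparisons(range(10, 0, -1))
best = insertion_sort_comparisons(range(1, 11))
```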

Depth-First Search
Given a graph, we are often interested in visiting its vertices in some organized way. Depth-first search (DFS) starts at some arbitrary unvisited vertex. When a vertex is visited, a flag is set to indicate that it has been visited. At each vertex v, DFS recursively visits an unvisited neighbour of v; if v has no unvisited neighbours, the recursion backs up. It is convenient to view DFS as using a stack: a vertex is pushed on the stack when it is visited for the first time and popped when it becomes a dead end.

DFS
set DFSnumber of all vertices to -1
count ← 0
for each vertex v with DFSnumber[v] = -1 do
    dfs(v, count)

dfs(v, count):            // count is shared across calls (passed by reference)
    DFSnumber[v] ← count; count ← count + 1
    for each w adjacent to v do
        if DFSnumber[w] = -1
            dfs(w, count)
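A minimal Python sketch of the same scheme, assuming the graph is given as a dict of adjacency lists (a representation chosen here for illustration). It records both orderings discussed below: preorder (push order) and postorder (pop order):

```python
def dfs_all(adj):
    """Recursive DFS over a graph {vertex: [neighbours]}.
    Returns (preorder, postorder): push order and pop order of vertices."""
    visited = set()
    preorder, postorder = [], []

    def dfs(v):
        visited.add(v)
        preorder.append(v)          # vertex "pushed": first encountered
        for w in adj[v]:
            if w not in visited:
                dfs(w)
        postorder.append(v)         # vertex "popped": became a dead end

    for v in adj:                   # restart from every unvisited vertex
        if v not in visited:
            dfs(v)
    return preorder, postorder
```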

Stack ordering example (from the slide's figure): each vertex is labelled with its push order and pop order, i.e. a 1/6, c 2/5, f 4/4, d 3/1, b 5/3, e 6/2.

DFS
Every vertex is traversed once, and for each vertex all its neighbours are processed. With an adjacency matrix this takes Θ(n²) operations. DFS yields two distinct orderings of the vertices:
- preorder: as vertices are first encountered (pushed onto the stack)
- postorder: as vertices become dead ends (popped off the stack)
Applications: checking connectivity and finding connected components, checking acyclicity, searching the state space of problems for a solution (AI).

Breadth-First Search
BFS explores the graph by visiting all neighbours of the last visited vertex before moving further, similar to a level-by-level tree traversal. Instead of a stack, breadth-first search uses a queue. Applications: same as DFS, but BFS can also find paths from a vertex to all other vertices with the smallest number of edges.

BFS
BFS(G)
count ← 0
mark each vertex with 0
for each vertex v ∈ V do
    if v is marked with 0
        bfs(v)

bfs(v):
    count ← count + 1; mark v with count
    initialize queue with v
    while queue is not empty do
        a ← front of queue
        for each vertex w adjacent to a do
            if w is marked with 0
                count ← count + 1; mark w with count
                add w to the end of the queue
        remove a from the front of the queue
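A Python sketch of this pseudocode, again assuming a dict-of-adjacency-lists graph; vertices are numbered in the order they enter the queue, as on the slide:

```python
from collections import deque

def bfs_all(adj):
    """BFS over all components of a graph {vertex: [neighbours]}.
    Returns the vertices in the order they are numbered (enqueued)."""
    number = {v: 0 for v in adj}    # 0 means unvisited
    count = 0
    order = []
    for s in adj:                   # restart from every unvisited vertex
        if number[s] == 0:
            count += 1
            number[s] = count
            queue = deque([s])
            while queue:
                a = queue.popleft()
                order.append(a)
                for w in adj[a]:
                    if number[w] == 0:
                        count += 1
                        number[w] = count
                        queue.append(w)
    return order
```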

(Figure from the slide: an example graph on vertices a–f and its BFS traversal.)

Efficiency
Every vertex is traversed once, and for each vertex all its neighbours are processed: with an adjacency matrix this is Θ(n²) operations. BFS yields a single ordering of the vertices (the order in which vertices are added to the queue is the same as the order in which they are removed). Applications: checking connectivity, cycle detection, shortest paths in an unweighted graph.

DFS vs BFS
Data structures: stack vs. queue. Implementation: recursion (an implicit stack) vs. explicit queue manipulation. Complexity: the same.

Decrease by a Constant Factor

The Fake-Coin Problem
The fake-coin problem is to detect a single fake coin among a set of coins using a balance scale. Supposing the fake coin is lighter, we can repeatedly weigh one half of the set against the other half and eliminate the heavier half (if the two halves balance, a leftover coin is the fake). This halves the number of candidate coins at each weighing, so only about log₂ n weighings are needed.
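A sketch of the halving strategy in Python, simulating the balance scale by comparing sums of coin weights (the function name and list representation are illustrative):

```python
def find_fake(coins):
    """Return (index of the single lighter fake coin, number of weighings).
    Each weighing compares one half of the candidates against the other."""
    low, high = 0, len(coins) - 1
    weighings = 0
    while low < high:
        n = high - low + 1
        half = n // 2
        left = sum(coins[low:low + half])
        right = sum(coins[low + half:low + 2 * half])
        weighings += 1
        if left < right:
            high = low + half - 1                  # fake is in the left half
        elif right < left:
            low, high = low + half, low + 2 * half - 1
        else:
            low = high = low + 2 * half            # halves balance: leftover coin
    return low, weighings
```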

Russian Peasant Multiplication
To multiply two positive integers n and m by this method:
n · m = m                    if n = 1
n · m = (n/2) · 2m           if n is even
n · m = ⌊n/2⌋ · 2m + m       if n is odd

n     m
50    65
25    130     + 130   (n odd)
12    260
6     520
3     1040    + 1040  (n odd)
1     2080    + 2080
n · m = 2080 + (130 + 1040) = 3250
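The halving/doubling table above can be sketched directly:

```python
def russian_peasant(n, m):
    """Multiply positive integers by halving n and doubling m.
    Whenever n is odd, the current m is added to the result (the marked rows)."""
    result = 0
    while n >= 1:
        if n % 2 == 1:       # odd row: its m survives in the sum
            result += m
        n //= 2
        m *= 2
    return result
```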

Decrease by a Variable Size

Interpolation Search
Interpolation search (sometimes referred to as extrapolation search) is an algorithm for searching for a given key value in an indexed array that has been ordered by the values of the key. Everyday analogues: looking up a name in a telephone directory, or finding a word in a dictionary.

Interpolation Search (contd.)
It works best for uniformly distributed data. The calculation of mid is modified to
mid = low + ⌊(x - s[low]) · (high - low) / (s[high] - s[low])⌋
Average case: Θ(lg lg n); worst case: O(n).
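A minimal Python sketch of the mid calculation above (with a guard against division by zero when all remaining keys are equal, an edge case the slide's formula leaves implicit):

```python
def interpolation_search(s, x):
    """Return an index of x in the sorted list s, or -1 if absent.
    mid is interpolated from the key values rather than taken as the midpoint."""
    low, high = 0, len(s) - 1
    while low <= high and s[low] <= x <= s[high]:
        if s[high] == s[low]:       # all remaining keys equal: avoid dividing by 0
            mid = low
        else:
            mid = low + (x - s[low]) * (high - low) // (s[high] - s[low])
        if s[mid] == x:
            return mid
        if s[mid] < x:
            low = mid + 1
        else:
            high = mid - 1
    return -1
```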

Robust Interpolation Search
gap = ⌊(high - low)^(1/2)⌋
mid = low + ⌊(x - s[low]) · (high - low) / (s[high] - s[low])⌋
mid = min(high - gap, max(mid, low + gap))
The index used for the comparison is thus at least gap positions away from both low and high.

Finding the kth-Smallest Element
function selection(low, high, k)
    if low = high
        return s[low]
    else
        partition(low, high, pp)
        if k = pp
            return s[pp]
        else if k < pp
            return selection(low, pp - 1, k)
        else
            return selection(pp + 1, high, k)

procedure partition(low, high, pivotpoint)
    pivotitem ← s[low]
    j ← low
    for i ← low + 1 to high
        if s[i] < pivotitem
            j ← j + 1
            exchange s[i] and s[j]
    pivotpoint ← j
    exchange s[low] and s[pivotpoint]
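The two routines above combine into quickselect; a Python sketch (0-indexed, so k is 0-based here, unlike the slide's 1-based k):

```python
def partition(s, low, high):
    """Partition s[low..high] around pivot s[low]; return the pivot's final index."""
    pivotitem = s[low]
    j = low
    for i in range(low + 1, high + 1):
        if s[i] < pivotitem:
            j += 1
            s[i], s[j] = s[j], s[i]
    s[low], s[j] = s[j], s[low]
    return j

def selection(s, low, high, k):
    """Return the k-th smallest element (0-based k) of s[low..high]."""
    if low == high:
        return s[low]
    pp = partition(s, low, high)
    if k == pp:
        return s[pp]
    elif k < pp:
        return selection(s, low, pp - 1, k)
    else:
        return selection(s, pp + 1, high, k)
```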

Worst Case
The worst case occurs when the partition is always maximally unbalanced (e.g. an already sorted array):
W(n) = W(n - 1) + n - 1
     = W(n - 2) + (n - 2) + (n - 1)
     = W(n - 3) + (n - 3) + (n - 2) + (n - 1)
     …
     = 1 + 2 + … + (n - 1) = n(n - 1)/2 ∈ Θ(n²)

Average Case
We assume that every outcome (pivot point pp and target index k) is equally likely.

Input size of recursive call    Number of outcomes yielding that size
0                               n          (pp = k: no recursive call)
1                               2          (pp = 2, k = 1; pp = n - 1, k = n)
2                               4          (pp = 3, k ∈ {1, 2}; pp = n - 2, k ∈ {n - 1, n})
3                               6
i                               2i
n - 1                           2(n - 1)

A(n) = (sum over input sizes of [number of outcomes × cost of that input size]) / (total number of outcomes) + cost of partition
     = [n·A(0) + 2A(1) + 4A(2) + … + 2(n - 1)A(n - 1)] / [n + 2(1 + 2 + … + (n - 1))] + (n - 1)
With A(0) = 0 and n + 2·n(n - 1)/2 = n²:
A(n) = 2[A(1) + 2A(2) + … + (n - 1)A(n - 1)] / n² + (n - 1)
n²A(n) = 2[A(1) + 2A(2) + … + (n - 1)A(n - 1)] + n²(n - 1)                … (eq. 1)
Replacing n with n - 1:
(n - 1)²A(n - 1) = 2[A(1) + 2A(2) + … + (n - 2)A(n - 2)] + (n - 1)²(n - 2) … (eq. 2)
Subtracting eq. 2 from eq. 1:
n²A(n) - (n - 1)²A(n - 1) = 2(n - 1)A(n - 1) + (n - 1)[n² - (n - 1)(n - 2)]
n²A(n) = (n - 1)²A(n - 1) + 2(n - 1)A(n - 1) + (n - 1)(n² - n² + 3n - 2)
       = (n² - 1)A(n - 1) + (n - 1)(3n - 2)
A(n) = (n² - 1)A(n - 1)/n² + (n - 1)(3n - 2)/n²
For large n:
A(n) ≈ A(n - 1) + 3 ≈ A(n - 2) + 3 + 3 ≈ A(n - i) + 3i
Taking A(0) = 0 and i = n gives A(n) ≈ 3n ∈ Θ(n).

Selection Through the Median
Algorithm select(n, S, k)
    return selection(n, S, 1, n, k)

Algorithm selection(n, S, low, high, k)
    if high = low
        return S[low]
    else
        partition(n, S, low, high, pivotpoint)
        if k = pivotpoint
            return S[pivotpoint]
        else if k < pivotpoint
            return selection(n, S, low, pivotpoint - 1, k)
        else
            return selection(n, S, pivotpoint + 1, high, k)

Algorithm partition(n, S, low, high, pivotpoint)
    arraysize ← high - low + 1
    r ← ⌈arraysize / 5⌉
    for i ← 1 to r
        first ← low + 5i - 5
        last ← min(low + 5i - 1, high)
        T[i] ← median of S[first] through S[last]
    pivotitem ← select(r, T, ⌈r/2⌉)        // the median of the medians
    j ← low
    for i ← low to high
        if S[i] = pivotitem
            exchange S[i] and S[j]; mark ← j; j ← j + 1
        else if S[i] < pivotitem
            exchange S[i] and S[j]; j ← j + 1
    pivotpoint ← j - 1
    exchange S[mark] and S[pivotpoint]

Arrange the n elements into ⌊n/5⌋ groups of 5 elements each, ignoring the (at most four) extra elements: constant time to compute each bucket, linear time to fill them. Find the median of each group; this gives a list M of ⌊n/5⌋ medians (time Θ(n), whether we call this same selection algorithm recursively or hard-code the 5-element median). Find the median of M and return it as the partition element (a recursive call to select with M as the input set).
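A compact Python sketch of this median-of-medians scheme. For simplicity it partitions into new lists instead of working in place, and it keeps the at most four extra elements in a smaller final group (a slight variation on the slide, which ignores them); k is 0-based:

```python
def mm_select(s, k):
    """Return the k-th smallest element (0-based) of list s,
    choosing the pivot by the median-of-medians rule (worst-case linear)."""
    if len(s) <= 5:
        return sorted(s)[k]
    # medians of groups of 5 (extras form a smaller last group)
    medians = [sorted(s[i:i + 5])[len(s[i:i + 5]) // 2]
               for i in range(0, len(s), 5)]
    pivot = mm_select(medians, len(medians) // 2)   # median of the medians
    less = [x for x in s if x < pivot]
    equal = [x for x in s if x == pivot]
    if k < len(less):
        return mm_select(less, k)
    if k < len(less) + len(equal):
        return pivot
    greater = [x for x in s if x > pivot]
    return mm_select(greater, k - len(less) - len(equal))
```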

Number of comparisons required to partition the array: n (that is, high - low + 1). Number of comparisons required to find the median of 5 elements: 6 per group (4 comparisons to find the median element and 2 more to put the elements to its left and right in order), so computing all the group medians costs 6n/5 comparisons.

The recursive call to select from partition (to find the median of the medians) is on n/5 elements. The recursive call from selection to itself is on at most 7n/10 elements: if x is less than the median of medians, half of the medians, i.e. (1/2)(n/5) = n/10 of them, are greater than x; in each such group at least 3 elements (the median and the two above it) are greater than x and can be discarded. So at least 3n/10 elements are discarded and at most 7n/10 remain.

Thus the recurrence is
W(n) = W(n/5) + W(7n/10) + 6n/5 + n
and W(n) is linear in n. Guess W(n) ≤ cn:
W(n/5) + W(7n/10) + 6n/5 + n ≤ cn/5 + 7cn/10 + 11n/5 = 9cn/10 + 11n/5
Requiring 9cn/10 + 11n/5 ≤ cn gives 22n ≤ 10cn - 9cn = cn, i.e. c ≥ 22, so W(n) ≤ 22n. (The same bound can be obtained from a recurrence tree.)

Euclid’s Algorithm
Euclid’s algorithm is based on repeated application of the equality gcd(m, n) = gcd(n, m mod n); the instance size, measured by the second number, decreases with each iteration.
Example: gcd(60, 24) = gcd(24, 12) = gcd(12, 0) = 12
One can prove that the size decreases at least by half after two consecutive iterations.
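A minimal sketch that also counts iterations, to make the decrease pattern visible:

```python
def gcd(m, n):
    """Euclid's algorithm: repeatedly apply gcd(m, n) = gcd(n, m mod n).
    Returns (gcd, number of iterations)."""
    steps = 0
    while n != 0:
        m, n = n, m % n
        steps += 1
    return m, steps
```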

For m > n, the second argument is replaced by m mod n at each iteration. There are two cases:
Case 1: if n ≤ m/2, then m mod n < n ≤ m/2.
Case 2: if n > m/2, then m mod n = m - n < m/2.
Either way m mod n < m/2, so after two consecutive iterations the size is at least halved, and the number of iterations is in Θ(lg n).