"Nothing is particularly hard if you divide it into small jobs." (Henry Ford)



Divide and conquer is recursive in structure:
- Divide the problem into sub-problems that are similar to the original but smaller in size.
- Conquer the sub-problems by solving them recursively; if they are small enough, just solve them in a straightforward manner.
- Combine the solutions to create a solution to the original problem.

Mergesort: divide the array into two halves, recursively sort the left and right halves, then merge the two halves. To sort an array A[p..r]:
- Divide: split the n-element sequence to be sorted into two subsequences of n/2 elements each.
- Conquer: sort the subsequences recursively using merge sort; when the size of a sequence is 1, there is nothing more to do.
- Combine: merge the two sorted subsequences to produce the sorted answer.

- Divide it in two at the midpoint
- Conquer each side in turn (by recursively sorting)
- Merge the two halves together

void Mergesort(int A[], int first, int last) {
    if (first < last) {                 // check for base case
        int mid = (first + last) / 2;   // divide at the midpoint
        Mergesort(A, first, mid);       // conquer the left half
        Mergesort(A, mid + 1, last);    // conquer the right half
        Merge(A, first, mid, last);     // combine
    }
}

Merge Sort – Example (figure: an original sequence is split, sorted, and merged into the sorted sequence)

Merge Function (the slides animate indices i, j, and k walking over the auxiliary arrays B and C and the target range A[first..last]):

void Merge(int A[], int first, int mid, int last) {
    int n1 = mid - first + 1;           // size of the left sub-array
    int n2 = last - mid;                // size of the right sub-array
    int B[n1], C[n2];                   // auxiliary arrays (C99 VLAs)
    for (int i = 0; i < n1; i++)        // move the left half into B
        B[i] = A[first + i];
    for (int j = 0; j < n2; j++)        // move the right half into C
        C[j] = A[mid + 1 + j];
    int i = 0, j = 0, k = first;
    while (i < n1 && j < n2) {          // compare the two front elements and
        if (B[i] <= C[j])               // move the smaller one back into A:
            A[k] = B[i++];              // constant cost per element
        else
            A[k] = C[j++];
        k++;
    }
    while (i < n1) { A[k] = B[i]; k++; i++; }   // copy the remaining elements
    while (j < n2) { A[k] = C[j]; k++; j++; }   // back into the original array
}

Analysis of the Merge function, piece by piece:
- declaring the auxiliary arrays of size n1 and n2: O(1)
- moving the elements into the auxiliary arrays: O(n)
- comparing two elements and moving one of them into the original array: constant cost per element, O(n) over the whole loop
- copying the remaining elements back into the original array: O(n)
So merging n elements costs O(n) overall.

So far we have seen that it takes:
- O(n) time to merge two subarrays of size n/2
- O(n) time to merge four subarrays of size n/4 into two subarrays of size n/2
- O(n) time to merge eight subarrays of size n/8 into four subarrays of size n/4
- etc.
How many levels deep do we have to proceed? How many times can we divide an array of size n into two halves? O(log n) times.

So if our recursion goes log n levels deep, and we do O(n) work at each level, our total time is log n * O(n), or in other words, O(n log n). For large arrays, this is much better than Bubble sort, Selection sort, or Insertion sort, all of which are O(n²). Mergesort is not in place, however: it requires a "workspace" array as large as our original array (O(n) extra space).

void Mergesort(int A[], int first, int last) {
    if (first < last) {                 // check for base case
        int mid = (first + last) / 2;   // divide
        Mergesort(A, first, mid);       // conquer
        Mergesort(A, mid + 1, last);    // conquer
        Merge(A, first, mid, last);     // combine
    }
}

Running time T(n) of Merge Sort:
- computing the middle takes Θ(1)
- solving the 2 sub-problems takes 2T(n/2)
- merging n elements takes Θ(n)
Total:
T(n) = Θ(1) if n = 1
T(n) = 2T(n/2) + Θ(n) if n > 1
⇒ T(n) = Θ(n lg n)

Running time of Merge Sort:
T(n) = Θ(1) if n = 1
T(n) = 2T(n/2) + Θ(n) if n > 1
Rewrite the recurrence as
T(n) = c if n = 1
T(n) = 2T(n/2) + cn if n > 1
where c > 0 covers both the running time of the base case and the time per array element for the divide and combine steps.

For the original problem, we have a cost of cn, plus two subproblems, each of size n/2 and running time T(n/2). Each of the size-n/2 problems in turn has a cost of cn/2 plus two subproblems, each costing T(n/4). (In the recursion tree, cn at the root is the cost of divide and merge; the T(n/2) and T(n/4) nodes are the costs of sorting the subproblems.)

Continue expanding until the problem size reduces to 1. The levels of the tree cost cn, then cn/2 + cn/2, then cn/4 + cn/4 + cn/4 + cn/4, down to n leaves of cost c each. Each level has total cost cn: each time we go down one level, the number of subproblems doubles, but the cost per subproblem halves, so the cost per level remains the same. There are lg n + 1 levels, and the height is lg n (assuming n is a power of 2; this can be proved by induction).
Total cost = sum of costs at each level = (lg n + 1)cn = cn lg n + cn = Θ(n lg n).
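The level-by-level argument collapses into a single sum:

```latex
T(n) = \sum_{i=0}^{\lg n} cn = cn(\lg n + 1) = cn \lg n + cn = \Theta(n \lg n)
```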

Quicksort: partition the array into items that are "small" and items that are "large", then recursively sort the two sets.
- Divide: partition the array into left and right sub-arrays. Choose an element of the array, called the pivot; the elements in the left sub-array are all less than the pivot, and the elements in the right sub-array are all greater than or equal to the pivot.
- Conquer: recursively sort the left and right sub-arrays.
- Combine: trivial. The arrays are sorted in place, so no additional work is required to combine them; the entire array is now sorted.

A key step in the Quicksort algorithm is partitioning the array. We choose some (any) number p in the array to use as a pivot, and partition the array into three parts: the numbers less than p, the pivot p itself, and the numbers greater than or equal to p.

(Figure: select a pivot value from S; partition S into S1 and S2; QuickSort(S1) and QuickSort(S2); S is now sorted.)

void quicksort(int array[], int left, int right) {
    if (left < right) {                          // check for base case
        int p = partition(array, left, right);   // divide
        quicksort(array, left, p - 1);           // conquer the left part
        quicksort(array, p + 1, right);          // conquer the right part
    }                                            // combine: nothing to do
}

int partition(int a[], int left, int right) {
    int p = a[left], l = left + 1, r = right;
    while (l < r) {
        while (l < right && a[l] < p) l++;   // advance past elements < pivot
        while (r > left && a[r] >= p) r--;   // retreat past elements >= pivot
        if (l < r) swap(a[l], a[r]);
    }
    if (a[r] < p) {      // move the pivot into its final position
        a[left] = a[r];
        a[r] = p;
        return r;
    }
    return left;         // pivot already in place (e.g. a two-element sub-array
}                        // that is already in order, where the loop never runs)

We are given an array of n integers to sort (the concrete values appear in the original slides):

There are a number of ways to pick the pivot element. In this example, we will use the first element in the array:

Given a pivot, partition the elements of the array such that the resulting array consists of:
1. One sub-array that contains elements < pivot
2. Another sub-array that contains elements >= pivot
The sub-arrays are stored in the original data array. Partitioning loops through the array, swapping elements that are below/above the pivot.

pivot_index = 0; the indices l and r scan positions [0]..[8]. The loop being traced is:
1. while (l < r)
2. { while (l < right && a[l] < p) l++;
3.   while (r > left && a[r] >= p) r--;
4.   if (l < r) swap(a[l], a[r]) }
followed by swapping the pivot into position r and returning r.

(The slides animate this trace step by step; the concrete array values were shown as images. l advances over elements smaller than the pivot, r retreats over elements greater than or equal to it, and out-of-place pairs are swapped until l and r meet. In the example the pivot ends at index 4, with elements < data[pivot] to its left and elements >= data[pivot] to its right.)

Example of partitioning: choose the pivot; then repeatedly search and swap until the scans cross (left > right); finally swap the pivot into place. (The concrete values are shown in the original slides.)

The partition function makes a single pass over the array: between them, l and r visit each position at most once, so partitioning n elements costs O(n).

void quicksort(int array[], int left, int right) {
    if (left < right) {                          // check for base case: Θ(1)
        int p = partition(array, left, right);   // divide: Θ(n)
        quicksort(array, left, p - 1);           // conquer
        quicksort(array, p + 1, right);          // conquer
    }                                            // combine: nothing to do, cost 0
}

Suppose each partition operation divides the array almost exactly in half. Then the depth of the recursion is log₂ n, and at each level of the recursion, all the partitions at that level together do work that is linear in n. O(log₂ n) levels * O(n) per level = O(n log₂ n). Hence in the average case, quicksort has time complexity O(n log₂ n).

If partition splits the array evenly, each recursive call to quicksort costs T(n/2), so:
T(n) = Θ(1) if n = 1
T(n) = 2T(n/2) + Θ(n) if n > 1
⇒ T(n) = Θ(n lg n)

Best-case partitioning: partitioning produces two regions of size n/2 (q = n/2). Recurrence: T(n) = 2T(n/2) + Θ(n), so T(n) = Θ(n lg n) by the Master theorem.
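The best-case recurrence can be checked against case 2 of the Master theorem:

```latex
T(n) = 2\,T(n/2) + \Theta(n): \quad a = 2,\; b = 2,\; f(n) = \Theta(n) = \Theta\!\left(n^{\log_2 2}\right)
\;\Rightarrow\; T(n) = \Theta\!\left(n^{\log_2 2}\log n\right) = \Theta(n \log n)
```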

In the worst case, partitioning always divides the size-n array into these three parts: a length-one part containing the pivot itself, a length-zero part, and a length n-1 part containing everything else. We don't recurse on the zero-length part, but recursing on the length n-1 part requires (in the worst case) recursing to depth n.

Worst-case partitioning: one region has zero elements and the other has n - 1 elements (maximally unbalanced, q = 1). Recurrence:
T(1) = Θ(1)
T(n) = T(n - 1) + T(0) + Θ(n) = T(n - 1) + Θ(n)
Expanding: T(n) = n + (n - 1) + (n - 2) + ... + 1 = Θ(n²)
When does the worst case happen?

With maximally unbalanced partitions, one recursive call is on an empty range, so T(n-1) + T(0) = T(n-1):
T(n) = Θ(1) if n = 1
T(n) = T(n - 1) + Θ(n) if n > 1
⇒ T(n) = Θ(n²)

Assume the first element is chosen as pivot, and assume we get an array that is already in order (pivot_index = 0; the concrete values appear in the original slides).

The partition loop, in this slide deck's variable names:
1. While data[too_big_index] <= data[pivot], ++too_big_index
2. While data[too_small_index] > data[pivot], --too_small_index
3. If too_big_index < too_small_index, swap data[too_big_index] and data[too_small_index]
4. While too_small_index > too_big_index, go to 1
5. Swap data[too_small_index] and data[pivot_index]

(The slides animate this trace step by step.) On an already-sorted array with pivot_index = 0, too_big_index never finds an element to move and too_small_index walks all the way back to the pivot, so the "<= data[pivot]" part is just the pivot itself and the "> data[pivot]" part holds the remaining n - 1 elements: the maximally unbalanced split.

Even a 9-to-1 proportional split still yields Θ(n log n): Q(n) = Q(9n/10) + Q(n/10) + n.
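Why the 9-to-1 split changes nothing asymptotically: the recursion tree's longest root-to-leaf path has about log base 10/9 of n levels, and the total cost at every level is at most cn. A sketch:

```latex
Q(n) = Q\!\left(\tfrac{9n}{10}\right) + Q\!\left(\tfrac{n}{10}\right) + cn,
\qquad \text{depth} \le \log_{10/9} n,
\qquad \text{cost per level} \le cn
\;\Rightarrow\; Q(n) = O\!\left(n \log_{10/9} n\right) = O(n \log n)
```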

 Assume  Each of the sizes for S1 is equally likely  This assumption is valid for our pivoting (median-of-three) strategy  On average, the running time is O(N log N)

Assume that the keys are random and uniformly distributed.
- Best case and average case running time: O(n log n)
- Worst case running time? Recursion: (1) partition splits the array into two sub-arrays, one of size 0 and the other of size n-1; (2) quicksort each sub-array. Depth of the recursion tree? O(n). Number of accesses per partition? O(n).

Assume that the keys are random and uniformly distributed.
- Best case running time: O(n log n)
- Average case running time: O(n log n)
- Worst case running time: O(n²)!
What can we do to avoid the worst case?

Choose the pivot as the median of three: pick the median value of three elements from the data array, data[0], data[n/2], and data[n-1], and use this median value as the pivot. Example: the median of 0, 6, and 8 is 6, so the pivot is 6.

One implementation (there are others): median3 finds the pivot by sorting the leftmost, middle, and rightmost elements and taking their median, then the pivot is swapped with the first element. An alternative is to choose the pivot randomly (but this needs a random number generator, which is "expensive"); another alternative is to choose the first element (but this can be very bad. Why?).

- Best case: split in the middle: Θ(n log n)
- Worst case: sorted array! Θ(n²)
- Average case: random arrays: Θ(n log n)
- Memory requirement? In-place sorting algorithm.
- Considered the method of choice for internal sorting of large files (n ≥ 10000).

- Best case: split in the middle: Θ(n log n)
- Worst case: sorted array! Θ(n²)
- Average case: random arrays: Θ(n log n)
- Considered the method of choice for internal sorting of large files (n ≥ 10000).
- Improvements: better pivot selection (median-of-three partitioning avoids the worst case on sorted files); switching to insertion sort on small subfiles; elimination of recursion. Combined, these give a 20-25% improvement.

For very small arrays, quicksort does not perform as well as insertion sort; how small depends on many factors, such as the time spent making a recursive call, the compiler, etc. Do not use quicksort recursively for small arrays; instead, use a sorting algorithm that is efficient for small arrays, such as insertion sort.

- Not stable, because of long-distance swapping.
- No iterative version (without using a stack).
- Pure quicksort is not good for small arrays.
- "In-place", but uses auxiliary storage because of the recursive calls (O(log n) stack space).
- O(n log n) average-case performance, but O(n²) worst-case performance.