SORTING Sorting is the process of arranging elements in some logical order. Sorting is classified into the following categories: External sorting: deals with sorting data stored in data files. This method is used when the volume of data is very large and cannot be held in the computer's main memory. Internal sorting: deals with sorting data held in the main memory of the computer.

SORTING METHODS Bubble sort, Selection sort, Insertion sort, Bucket sort, Merge sort, Quick sort, Heap sort, Tree sort, Shell sort

BUBBLE SORT It requires n-1 passes to sort an array. In each pass, every element a[i] is compared with a[i+1], for i = 0 to (n-k-1), where k is the pass number, and if they are out of order, i.e. if a[i] > a[i+1], they are swapped. This causes the largest element to move up, or "bubble up". Thus, after the end of the first pass the largest element in the array is placed in the nth position, and on each successive pass the next largest element is placed at position (n-1), (n-2), ..., 2 respectively.

BUBBLE SORT Pass 1. Step 1. If a[0] > a[1] then swap a[0] and a[1]. ... Step n-1. If a[n-2] > a[n-1] then swap a[n-2] and a[n-1]. Pass 2. Step 1. If a[0] > a[1] then swap a[0] and a[1]. ... Step n-2. If a[n-3] > a[n-2] then swap a[n-3] and a[n-2].

BUBBLE SORT Pass k. Step 1. If a[0] > a[1] then swap a[0] and a[1]. ... Step n-k. If a[n-k-1] > a[n-k] then swap a[n-k-1] and a[n-k]. Pass n-1. Step 1. If a[0] > a[1] then swap a[0] and a[1].

BUBBLE SORT Example. Given array: 12 40 3 2 15. Pass 1: 12 40 3 2 15 → 12 3 40 2 15 → 12 3 2 40 15 → 12 3 2 15 40

BUBBLE SORT Pass 2: 3 12 2 15 40 → 3 2 12 15 40 → 3 2 12 15 40

BUBBLE SORT Pass 3: 2 3 12 15 40 → 2 3 12 15 40. Pass 4: 2 3 12 15 40 (sorted)

ALGORITHM
for (int x = 0; x < n - 1; x++) {            /* pass number */
    for (int y = 0; y < n - 1 - x; y++) {    /* unsorted part shrinks each pass */
        if (array[y] > array[y + 1]) {
            int temp = array[y + 1];
            array[y + 1] = array[y];
            array[y] = temp;
        }
    }
}
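For a quick check, here is a minimal self-contained C program (the driver is an illustrative addition, not part of the original slides) that runs the loop above on the example array 12 40 3 2 15 from the earlier slides:

#include <stdio.h>

int main(void)
{
    int array[] = {12, 40, 3, 2, 15};
    int n = sizeof array / sizeof array[0];

    /* the bubble sort loop from the slide */
    for (int x = 0; x < n - 1; x++) {
        for (int y = 0; y < n - 1 - x; y++) {
            if (array[y] > array[y + 1]) {
                int temp = array[y + 1];
                array[y + 1] = array[y];
                array[y] = temp;
            }
        }
    }

    for (int i = 0; i < n; i++)
        printf("%d ", array[i]);   /* prints: 2 3 12 15 40 */
    printf("\n");
    return 0;
}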

ANALYSIS OF BUBBLE SORT The first pass requires n-1 comparisons, the second pass requires n-2 comparisons, the kth pass requires n-k comparisons, and the last pass requires only one comparison. Therefore the total number of comparisons is: f(n) = (n-1) + (n-2) + ... + (n-k) + ... + 3 + 2 + 1 = n(n-1)/2 = O(n²)

SELECTION SORT The selection sort also requires (n-1) passes to sort an array. In the first pass, find the smallest of the elements a[0], a[1], a[2], ..., a[n-1] and swap it with the first element, i.e. a[0]. In the second pass, find the smallest of the elements a[1], a[2], a[3], ..., a[n-1] and swap it with a[1], and so on.

SELECTION SORT Pass 1. Find the location loc of the smallest element in the entire array a[0], a[1], a[2], ..., a[n-1]. Interchange a[0] and a[loc]. Then a[0] is trivially sorted. Pass 2. Find the location loc of the smallest element in the subarray a[1], a[2], ..., a[n-1]. Interchange a[1] and a[loc]. Then a[0], a[1] are sorted. Pass k. Find the location loc of the smallest element in the subarray a[k-1], a[k], ..., a[n-1]. Interchange a[k-1] and a[loc]. Then a[0], a[1], ..., a[k-1] are sorted. Pass n-1. Find the location loc of the smaller of the elements a[n-2], a[n-1]. Interchange a[n-2] and a[loc]. Then a[0], a[1], ..., a[n-1] are sorted.

EXAMPLE Given array: 20 35 40 100 3 10 15 (a[0] ... a[6]). Pass 1: 20 35 40 100 3 10 15, loc = 4 → interchange a[0] and a[4], i.e. 20 and 3

SELECTION SORT Pass 2: 3 35 40 100 20 10 15, loc = 5 → interchange a[1] and a[5], i.e. 35 and 10. Pass 3: 3 10 40 100 20 35 15, loc = 6 → interchange a[2] and a[6], i.e. 40 and 15. Pass 4: 3 10 15 100 20 35 40, loc = 4 → interchange a[3] and a[4], i.e. 100 and 20

SELECTION SORT Pass 5: 3 10 15 20 100 35 40, loc = 5 → interchange a[4] and a[5], i.e. 100 and 35. Pass 6: 3 10 15 20 35 100 40, loc = 6 → interchange a[5] and a[6], i.e. 100 and 40. Sorted array: 3 10 15 20 35 40 100

ALGORITHM Smallestelement(a, n, k, loc) Here a is a linear array of size n. This sub-algorithm finds the location loc of the smallest element among a[k-1], a[k], a[k+1], ..., a[n-1]. A temporary variable small holds the current smallest element, and j is used as the loop control variable. Begin set small = a[k-1] set loc = k-1 for j = k to (n-1) by 1 do if (a[j] < small) then set small = a[j] set loc = j endif endfor end

ALGORITHM Selectionsort(a, n) Here a is a linear array with n elements in memory. This algorithm sorts the elements into ascending order. It uses a temporary variable temp to facilitate the exchange of two values, and variable i is used as the loop control variable. Begin for i = 1 to (n-1) by 1 do call Smallestelement(a, n, i, loc) set temp = a[i-1] set a[i-1] = a[loc] set a[loc] = temp endfor end
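For reference, a minimal C sketch of the two routines above (the function names smallest_element and selection_sort, the 0-based indices, and the value-returning style are our adaptations of the pseudocode):

/* Return the index of the smallest element in a[k..n-1] */
int smallest_element(int a[], int n, int k)
{
    int loc = k;
    for (int j = k + 1; j < n; j++)
        if (a[j] < a[loc])
            loc = j;
    return loc;
}

/* Selection sort: n-1 passes, each placing the next smallest element */
void selection_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        int loc = smallest_element(a, n, i);
        int temp = a[i];
        a[i] = a[loc];
        a[loc] = temp;
    }
}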

ANALYSIS OF SELECTION SORT The first pass requires n-1 comparisons to find the location loc of the smallest element, the second pass requires n-2 comparisons, the kth pass requires n-k comparisons, and the last pass requires only one comparison. Therefore the total number of comparisons is: f(n) = (n-1) + (n-2) + ... + (n-k) + ... + 3 + 2 + 1 = n(n-1)/2 = O(n²)

INSERTION SORT This algorithm is very popular with bridge players when they sort their cards. In this procedure, we pick up a particular value and insert it at the appropriate place in the sorted sublist. This algorithm also requires n-1 passes.

INSERTION SORT Pass 1: a[1] is inserted either before or after a[0] so that a[0] and a[1] are sorted. Pass 2: a[2] is inserted either before a[0], between a[0] and a[1], or after a[1] so that the elements a[0], a[1], a[2] are sorted. Pass 3: a[3] is inserted either before a[0], between a[0] and a[1], between a[1] and a[2], or after a[2] so that the elements a[0], a[1], a[2], a[3] are sorted. Pass k: a[k] is inserted in its proper place in the sorted subarray a[0], a[1], a[2], ..., a[k-1] so that the elements a[0], a[1], a[2], ..., a[k] are sorted. Pass n-1: a[n-1] is inserted in its proper place in the sorted subarray a[0], a[1], a[2], ..., a[n-2] so that the elements a[0], a[1], a[2], ..., a[n-1] are sorted.

EXAMPLE Given array: 35 20 40 100 3 10 15 (a[0] ... a[6]). Pass 1: 35 20 40 100 3 10 15. Since a[1] < a[0], insert element a[1] before a[0].

INSERTION SORT Pass 2: 20 35 40 100 3 10 15. Since a[2] > a[1], no action is performed. Pass 3: 20 35 40 100 3 10 15. Since a[3] > a[2], no action is performed. Pass 4: 20 35 40 100 3 10 15. Since a[4] is less than a[3], a[2], a[1] as well as a[0], insert a[4] before a[0].

INSERTION SORT Pass 5: 3 20 35 40 100 10 15. Since a[5] is less than a[4], a[3], a[2] as well as a[1], insert a[5] before a[1]. Pass 6: 3 10 20 35 40 100 15. Since a[6] is less than a[5], a[4], a[3] as well as a[2], insert a[6] before a[2]. Sorted array: 3 10 15 20 35 40 100

ALGORITHM insertionsort(a, n) Here a is a linear array with n elements in memory. This algorithm sorts the elements into ascending order. It uses a temporary variable temp to hold the value being inserted, and variables j and k are used as loop control variables. Begin for k = 1 to (n-1) by 1 do set temp = a[k] set j = k-1 while ((j >= 0) and (temp < a[j])) do set a[j+1] = a[j] set j = j-1 endwhile set a[j+1] = temp endfor end
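For reference, a minimal C sketch of this algorithm (the function name insertion_sort is ours; the j >= 0 test is placed before the array access so that a[-1] is never read):

/* Insertion sort: insert a[k] into the already sorted prefix a[0..k-1] */
void insertion_sort(int a[], int n)
{
    for (int k = 1; k < n; k++) {
        int temp = a[k];
        int j = k - 1;
        while (j >= 0 && a[j] > temp) {   /* shift larger elements one place right */
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = temp;                  /* drop a[k] into its proper place */
    }
}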

ANALYSIS OF INSERTION SORT The worst case occurs when the elements of the input array are in descending order. The first pass requires 1 comparison, the second pass requires 2 comparisons, the kth pass requires k comparisons, and the last pass requires (n-1) comparisons. Therefore the total number of comparisons is: f(n) = 1 + 2 + 3 + ... + k + ... + (n-3) + (n-2) + (n-1) = n(n-1)/2 = O(n²)

Bucket/Radix sort This is the method most people use when sorting a list of names into alphabetical order. The procedure is: First, the names are grouped according to their first letter, so the names are arranged into 26 classes, one for each letter of the alphabet. The first class consists of the names that begin with the letter A, the second class consists of the names that begin with the letter B, and so on. Next, the names are grouped according to their second letter. After this step, the list of names is sorted on the first two letters. This process is continued for a number of passes equal to the number of letters in the longest name.

Bucket/Radix sort Since there are 26 letters in the alphabet, we make use of 26 buckets, one for each letter. After grouping the names according to the current letter, we collect them in the order of the buckets. This new list becomes the input for the next pass, i.e. to separate the names on the next letter from the left. To sort decimal numbers, where the base (radix) is 10, we need 10 buckets, numbered 0-9. Unlike sorting names, decimal numbers are sorted from right to left, i.e. first on the units digit, then on the tens digit, and so on.

example Input: 321, 150, 235, 65, 573, 789, 928, 542. We need 10 buckets, numbered 0-9.

example Pass 1 (units digit). Input: 321, 150, 235, 065, 573, 789, 928, 542. Buckets: 0: 150; 1: 321; 2: 542; 3: 573; 5: 235, 065; 8: 928; 9: 789. Collected: 150, 321, 542, 573, 235, 065, 928, 789.

Pass 2 (tens digit). Input: 150, 321, 542, 573, 235, 065, 928, 789. Buckets: 2: 321, 928; 3: 235; 4: 542; 5: 150; 6: 065; 7: 573; 8: 789. Collected: 321, 928, 235, 542, 150, 065, 573, 789.

Pass 3 (hundreds digit). Input: 321, 928, 235, 542, 150, 065, 573, 789. Buckets: 0: 065; 1: 150; 2: 235; 3: 321; 5: 542, 573; 7: 789; 9: 928. Collected: 065, 150, 235, 321, 542, 573, 789, 928.

Bucket/Radix sort After pass three, when the numbers are collected, they are in the following order: 65, 150, 235, 321, 542, 573, 789, 928. Thus the numbers are sorted.

Algorithm Bucketsort(a, n) Here a is a linear array of integers with n elements. The variable digitcount stores the number of digits in the largest number, in order to control the number of passes to be performed. Begin find the largest number in the array set digitcount = number of digits of the largest number for pass = 1 to digitcount by 1 do initialize buckets for i = 0 to (n-1) by 1 do set digit = digit number pass of a[i] put a[i] in bucket number digit increment the bucket count for bucket number digit endfor collect all the numbers from the buckets in order endfor end

void bucket(int a[], int n)
{
    int bucket[10][20], buckcount[10];
    int i, j, k, r, digitcount = 0, divisor = 1, largest, passno;

    largest = a[0];
    for (i = 1; i < n; i++)              /* find the largest number */
        if (a[i] > largest)
            largest = a[i];

    while (largest > 0) {                /* count the digits of the largest number */
        digitcount++;
        largest /= 10;
    }

    for (passno = 0; passno < digitcount; passno++) {
        for (k = 0; k < 10; k++)         /* initialize bucket counts */
            buckcount[k] = 0;

        for (i = 0; i < n; i++) {        /* distribute by the current digit */
            r = (a[i] / divisor) % 10;
            bucket[r][buckcount[r]++] = a[i];
        }

        i = 0;                           /* collect elements from the buckets in order */
        for (k = 0; k < 10; k++)
            for (j = 0; j < buckcount[k]; j++)
                a[i++] = bucket[k][j];

        divisor *= 10;
    }
}
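A small driver (an illustrative addition, not from the slides) that runs the bucket() routine above on the example numbers used earlier:

#include <stdio.h>

void bucket(int a[], int n);   /* defined on the previous slide */

int main(void)
{
    int a[] = {321, 150, 235, 65, 573, 789, 928, 542};
    int n = sizeof a / sizeof a[0];
    bucket(a, n);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);   /* prints: 65 150 235 321 542 573 789 928 */
    printf("\n");
    return 0;
}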

Analysis of bucket sort Suppose the number of digits in the largest element of the given array is s; then the number of passes to be performed is s. The number of comparisons f(n) needed to sort the given array satisfies f(n) <= n*s*10, where 10 is the base of the decimal numbers. Although s is independent of n, if s = n then in the worst case f(n) = O(n²). On the other hand, if s = log10 n, then f(n) = O(n log10 n). Thus we conclude that bucket sort performs well only when the number of digits in the elements is very small.

Merge Sort

Divide and Conquer Divide-and-conquer method for algorithm design: Divide: If the input size is too large to deal with in a straightforward manner, divide the problem into two or more disjoint subproblems Conquer: Use divide and conquer recursively to solve the subproblems Combine: Take the solutions to the subproblems and “merge” these solutions into a solution for the original problem

Merge Sort Algorithm Divide: If S has at least two elements (nothing needs to be done if S has zero or one element), remove all the elements from S and put them into two sequences, S1 and S2, each containing about half of the elements of S (i.e. S1 contains the first ⌈n/2⌉ elements and S2 contains the remaining ⌊n/2⌋ elements). Conquer: Sort sequences S1 and S2 using Merge Sort. Combine: Put the elements back into S by merging the sorted sequences S1 and S2 into one sorted sequence.

Merge Sort: Algorithm
Merge-Sort(A, p, r)
  if p < r then
    q ← ⌊(p+r)/2⌋
    Merge-Sort(A, p, q)
    Merge-Sort(A, q+1, r)
    Merge(A, p, q, r)
Merge(A, p, q, r): Take the smaller of the two topmost elements of sequences A[p..q] and A[q+1..r] and put it into the resulting sequence. Repeat this until both sequences are empty. Copy the resulting sequence into A[p..r].
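For reference, a minimal C sketch of this scheme (the names merge_sort and merge and the use of a temporary buffer are our choices; the pseudocode above leaves the details of the merge step open):

#include <stdlib.h>
#include <string.h>

/* Merge the sorted halves a[p..q] and a[q+1..r] back into a[p..r] */
static void merge(int a[], int p, int q, int r)
{
    int len = r - p + 1;
    int *tmp = malloc(len * sizeof *tmp);
    int i = p, j = q + 1, k = 0;
    while (i <= q && j <= r)                  /* take the smaller front element */
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= q) tmp[k++] = a[i++];         /* copy any leftovers */
    while (j <= r) tmp[k++] = a[j++];
    memcpy(a + p, tmp, len * sizeof *tmp);    /* copy the result back into a[p..r] */
    free(tmp);
}

/* Sort a[p..r] by recursively sorting the two halves and merging them */
void merge_sort(int a[], int p, int r)
{
    if (p < r) {
        int q = (p + r) / 2;
        merge_sort(a, p, q);
        merge_sort(a, q + 1, r);
        merge(a, p, q, r);
    }
}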

MergeSort (Example) - slides 1 through 22 walk step by step through the recursive splitting of an example array and the merging of the sorted halves (the accompanying diagrams are not reproduced in this transcript).

Merge Sort Revisited To sort n numbers: if n = 1, done! Otherwise recursively sort 2 lists of ⌊n/2⌋ and ⌈n/2⌉ numbers, then merge the 2 sorted lists in Θ(n) time. General strategy: break the problem into similar (smaller) subproblems, recursively solve the subproblems, and combine the solutions into an answer.

Analysis of merge Sort The major work is done in the merge procedure, which is an O(n) operation. The merge procedure is called from the merge sort procedure after the array has been divided into two halves and each half has been sorted. In the next level of recursive calls, one for the left half and one for the right half, the array is divided into four segments. At each level the number of segments doubles, so the total number of levels is log2 n; hence the total number of comparisons is f(n) = n * log2 n = O(n log2 n).

Quick-Sort Another divide-and-conquer sorting algorithm. To understand quick-sort, let's look at a high-level description of the algorithm. 1) Divide: If the sequence S has 2 or more elements, select an element x from S to be your pivot. Any arbitrary element, like the last, will do. Remove all the elements of S and divide them into 3 sequences: L holds S's elements less than x, E holds S's elements equal to x, and G holds S's elements greater than x. 2) Recurse: Recursively sort L and G. 3) Conquer: Finally, to put the elements back into S in order, first insert the elements of L, then those of E, and then those of G.

Idea of Quick Sort 1) Select: pick an element x as the pivot. 2) Divide: rearrange the elements so that x goes to its final position. 3) Recurse and Conquer: recursively sort the two remaining parts.

In-Place Quick-Sort Divide step: l scans the sequence from the left, and r from the right. A swap is performed when l is at an element larger than the pivot and r is at one smaller than the pivot.

In Place Quick Sort (cont’d) A final swap with the pivot completes the divide step

Algorithm quicksort(a, l, r)
Begin
  if (l < r) then
    splitarray(a, l, r, loc)   // partition the array; loc is the pivot's final position
    quicksort(a, l, loc-1)     // recursively sort the left subarray
    quicksort(a, loc+1, r)     // recursively sort the right subarray
  endif
end
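For reference, a minimal C sketch of this scheme using the last element as the pivot (the slides do not fix a particular partitioning method, so splitarray is implemented here as a Lomuto-style partition that returns loc instead of passing it by reference):

/* Partition a[l..r] around the pivot a[r]; return the pivot's final index */
static int splitarray(int a[], int l, int r)
{
    int pivot = a[r];
    int i = l - 1;
    for (int j = l; j < r; j++) {
        if (a[j] <= pivot) {                          /* move smaller elements to the left */
            i++;
            int t = a[i]; a[i] = a[j]; a[j] = t;
        }
    }
    int t = a[i + 1]; a[i + 1] = a[r]; a[r] = t;      /* place the pivot at its final position */
    return i + 1;
}

void quicksort(int a[], int l, int r)
{
    if (l < r) {
        int loc = splitarray(a, l, r);    /* pivot ends up at index loc */
        quicksort(a, l, loc - 1);         /* sort the left subarray */
        quicksort(a, loc + 1, r);         /* sort the right subarray */
    }
}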

Analysis of the quick sort Finding the location of the element that splits the array into two sections is an O(n) operation, because every element in the array is compared to the dividing element. After the division, each section is examined separately. If the array is split approximately in half (which is not always the case), there will be about log2 n levels of splits. Therefore the total number of comparisons is f(n) = n * log2 n = O(n log2 n).

Tree Sort The tree sort method, in order to sort an array of size n in ascending order, works in the following phases: 1. Build a binary search tree using n calls to the insert operation. 2. Print the elements using an inorder traversal.

Tree Sort Array: 50, 60, 40, 45, 31, 75, 53, 65. Solution: When these elements are inserted into a binary search tree one by one, the final binary search tree looks as shown on the next slide.

Given Array: 50, 60, 40, 45, 31, 75, 53, 65. The resulting binary search tree has 50 at the root; 40 and 60 as its left and right children; 31 and 45 as the children of 40; 53 and 75 as the children of 60; and 65 as the left child of 75. Inorder traversal produces the following listing of the elements: 31, 40, 45, 50, 53, 60, 65, 75
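For reference, a minimal C sketch of the two phases of tree sort applied to this example (the node structure and the insert and inorder functions are our own minimal versions, not taken from the slides):

#include <stdio.h>
#include <stdlib.h>

struct node { int key; struct node *left, *right; };

/* Phase 1: insert a key into the binary search tree */
static struct node *insert(struct node *root, int key)
{
    if (root == NULL) {
        struct node *n = malloc(sizeof *n);
        n->key = key;
        n->left = n->right = NULL;
        return n;
    }
    if (key < root->key)
        root->left = insert(root->left, key);
    else
        root->right = insert(root->right, key);
    return root;
}

/* Phase 2: inorder traversal prints the keys in ascending order */
static void inorder(const struct node *root)
{
    if (root != NULL) {
        inorder(root->left);
        printf("%d ", root->key);
        inorder(root->right);
    }
}

int main(void)
{
    int a[] = {50, 60, 40, 45, 31, 75, 53, 65};
    struct node *root = NULL;
    for (int i = 0; i < 8; i++)
        root = insert(root, a[i]);
    inorder(root);    /* prints: 31 40 45 50 53 60 65 75 */
    printf("\n");
    return 0;
}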

Tree Sort v/s Quick Sort In tree sort, the first item is inserted into the root of the tree and all subsequent elements are partitioned to the left or right depending on their relation to the first element. This is analogous to quick sort if the first element of the array is used as the pivot element for partitioning. Further, in tree sort the second element becomes the root of a subtree and acts as the pivot element that partitions all subsequent elements in that subtree. This is analogous to the partitioning of one of the sublists in quick sort.

Analysis of Tree Sort All the comparisons for tree sort are done during the insert() calls. The insert() function does the same number of comparisons as quick sort. Therefore, tree sort has the same running time as quick sort, i.e. the average case complexity is O(n log n) and the worst case complexity is O(n²).

Tree Sort It does not require all the elements to be present in the array at the beginning of the sort; elements can be added gradually as they become available. It also works on a linked structure, which allows easier insertions and deletions than a contiguous list.

Shell sort Invented by Donald Shell in 1959, shell sort is the most efficient of the O(n²) class of sorting algorithms; it is also the most complex of them. The algorithm is similar to bubble sort in the sense that it also moves elements by exchanges. It begins by comparing elements that are a distance d apart. This way, elements that are far from their final position move toward it more rapidly than in a simple bubble sort. In each pass, the value of d is reduced roughly by half, i.e. d_{i+1} = (d_i + 1)/2. In each pass, each element is compared with the element located d positions away from it, and an exchange is made if required. The next iteration starts with the new value of d. The algorithm terminates when d = 1.
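For reference, a minimal C sketch of shell sort with the gap reduced as d = (d+1)/2 (the function name shell_sort is ours; where the slides describe a single exchange sweep per distance, the sketch below does a gapped insertion pass for each d, which is the usual way to guarantee that the final d = 1 pass completes the sort):

/* Shell sort: for each gap d, do an insertion-style pass over elements d apart */
void shell_sort(int a[], int n)
{
    if (n < 2)
        return;
    int d = n / 2;                      /* starting distance */
    for (;;) {
        for (int i = d; i < n; i++) {   /* gapped insertion pass */
            int temp = a[i];
            int j = i - d;
            while (j >= 0 && a[j] > temp) {
                a[j + d] = a[j];        /* shift larger elements d places right */
                j -= d;
            }
            a[j + d] = temp;
        }
        if (d == 1)
            break;                      /* the d = 1 pass completes the sort */
        d = (d + 1) / 2;                /* reduce the distance */
    }
}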

Example Given array: 12, 9, -10, 22, 2, 35, 40. Starting value of d = n/2 = 7/2 = 3. Pass 1 (d = 3) compares elements 3 positions apart: 12 9 -10 22 2 35 40 → 12 2 -10 22 9 35 40

Example Pass 2: new distance d = (d_1 + 1)/2 = (3+1)/2 = 2. Array before this pass: 12 2 -10 22 9 35 40. After pass 2: -10 2 9 22 12 35 40

Example Pass 3: new distance d = (d_2 + 1)/2 = (2+1)/2 = 1. Array before this pass: -10 2 9 22 12 35 40. After pass 3 the array is sorted: -10 2 9 12 22 35 40

Analysis of shell sort It is difficult to predict the complexity of shell sort, because it is hard to show the effect of one pass on the next. One thing is clear: if the new distance d is computed using the formula above, the number of passes will be approximately log2 n, since the pass with d = 1 completes the sort. Empirical studies have shown that the worst case complexity of shell sort is O(n²).