
Sorting Algorithms in Java

Sorting data means arranging it in a certain order, often in an array-like data structure. You can use various ordering criteria, common ones being sorting numbers from least to greatest or vice versa, or sorting strings lexicographically. You can even define your own criteria, and we'll go into practical ways of doing that by the end of this article.
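As a quick preview of defining your own criteria, here's a minimal sketch using the standard library's Arrays.sort with a Comparator; the sample data is arbitrary and just for illustration:

import java.util.Arrays;
import java.util.Comparator;

public class CustomCriteriaDemo {
    public static void main(String[] args) {
        String[] words = {"banana", "fig", "cherry", "date"};

        // sort by string length instead of the default lexicographic order
        Arrays.sort(words, Comparator.comparingInt(String::length));

        System.out.println(Arrays.toString(words)); // [fig, date, banana, cherry]
    }
}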

If you’re interested in how sorting works, we’ll cover various algorithms, from inefficient but intuitive solutions, to efficient algorithms which are actually implemented in Java and other languages.

There are various sorting algorithms, and they’re not all equally efficient. We’ll be analyzing their time complexity in order to compare them and see which ones perform the best.

The list of algorithms you'll learn here is by no means exhaustive, but we have compiled some of the most common and most efficient ones to help you get started.

Note: This article will not be dealing with concurrent sorting, since it’s meant for beginners.

Bubble Sort

Explanation

Bubble sort works by swapping adjacent elements if they’re not in the desired order. This process repeats from the beginning of the array until all elements are in order.

We know that all elements are in order when we manage to do a whole pass without swapping at all; then every element we compared was in the desired order with its adjacent elements, and by extension, so is the whole array.

Here are the steps for sorting an array of numbers from least to greatest:

  • 4 2 1 5 3: The first two elements are in the wrong order, so we swap them.
  • 2 4 1 5 3: The second two elements are in the wrong order too, so we swap.
  • 2 1 4 5 3: These two are in the right order, 4 < 5, so we leave them alone.
  • 2 1 4 5 3: Another swap.
  • 2 1 4 3 5: Here’s the resulting array after one iteration.

Because at least one swap occurred during the first pass (there were actually three), we need to go through the whole array again and repeat the same process.

By repeating this process until no more swaps are made, we'll have a sorted array.

The reason this algorithm is called Bubble Sort is because the numbers kind of "bubble up" to the "surface." If you go through our example again, following a particular number (4 is a great example), you'll see it slowly moving to the right during the process.

All numbers move to their respective places bit by bit, left to right, like bubbles slowly rising from a body of water.

If you’d like to read a detailed, dedicated article for Bubble Sort, we’ve got you covered!

Implementation

We're going to implement Bubble Sort much the same way we've laid it out in words. Our function enters a while loop in which it goes through the entire array, swapping as needed.

We assume the array is sorted, but if we’re proven wrong while sorting (if a swap happens), we go through another iteration. The while-loop then keeps going until we manage to pass through the entire array without swapping:

public static void bubbleSort(int[] a) {
    boolean sorted = false;
    int temp;
    while (!sorted) {
        sorted = true;
        for (int i = 0; i < a.length - 1; i++) {
            if (a[i] > a[i + 1]) {
                // swap adjacent elements that are out of order
                temp = a[i];
                a[i] = a[i + 1];
                a[i + 1] = temp;
                sorted = false;
            }
        }
    }
}
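Here's a quick usage sketch, assuming bubbleSort is defined in the same class; the sample array is the one from our walkthrough:

import java.util.Arrays;

public class BubbleSortDemo {
    public static void main(String[] args) {
        int[] numbers = {4, 2, 1, 5, 3};
        bubbleSort(numbers);
        System.out.println(Arrays.toString(numbers)); // [1, 2, 3, 4, 5]
    }
}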

When using this algorithm we have to be careful how we state our swap condition.


For example, if I had used a[i] >= a[i+1], this could have ended up in an infinite loop, because for equal elements this relation would always be true, and hence the loop would always swap them.
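To see why, consider this small sketch, again assuming the bubbleSort above; with the strict > comparison it terminates, while >= would swap the equal neighbors on every pass and reset sorted to false forever:

int[] withDuplicates = {3, 3, 1};
// with a[i] > a[i+1]: 3 > 3 is false, so equal neighbors are never swapped,
// and `sorted` can eventually remain true
// with a[i] >= a[i+1]: 3 >= 3 is true, so this pair would be swapped on every
// pass, and the while loop would never terminate
bubbleSort(withDuplicates); // terminates with [1, 3, 3]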

Time Complexity

To figure out the time complexity of Bubble Sort, we need to look at the worst possible scenario. What's the maximum number of times we need to pass through the whole array before we've sorted it? Consider the following example: 5 4 3 2 1.

In the first iteration, 5 will "bubble up to the surface," but the rest of the elements would stay in descending order. We would have to do one iteration for each element except 1, and then another iteration to check that everything is in order, so a total of 5 iterations.

Expand this to any array of n elements, and that means you need to do n iterations. Looking at the code, that would mean that our while loop can run a maximum of n times.

Each of those n times we’re iterating through the whole array (for-loop in the code), meaning the worst case time complexity would be O(n^2).

Note: The time complexity would always be O(n^2) if it weren’t for the sorted boolean check, which terminates the algorithm if there aren’t any swaps within the inner loop — which means that the array is sorted.
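A quick way to check this empirically is a copy of bubbleSort instrumented with a pass counter; the counter is our addition for demonstration, not part of the algorithm:

import java.util.Arrays;

public class BubblePassCount {
    public static void main(String[] args) {
        int[] a = {5, 4, 3, 2, 1}; // worst case: fully descending
        int passes = 0;
        boolean sorted = false;
        while (!sorted) {
            sorted = true;
            passes++;
            for (int i = 0; i < a.length - 1; i++) {
                if (a[i] > a[i + 1]) {
                    int temp = a[i];
                    a[i] = a[i + 1];
                    a[i + 1] = temp;
                    sorted = false;
                }
            }
        }
        // prints [1, 2, 3, 4, 5] after 5 passes
        System.out.println(Arrays.toString(a) + " after " + passes + " passes");
    }
}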

Insertion Sort

Explanation

The idea behind Insertion Sort is dividing the array into the sorted and unsorted subarrays.

The sorted part is of length 1 at the beginning and corresponds to the first (left-most) element in the array. We iterate through the array and during each iteration, we expand the sorted portion of the array by one element.

Upon expanding, we place the new element into its proper place within the sorted subarray. We do this by shifting all of the elements to the right until we encounter the first element we don’t have to shift.

For example, say the first four elements of the following array (3 5 7 8) are already sorted in ascending order; then the following happens:

  • 3 5 7 8 4 2 1 9 6: We take 4 and remember that that’s what we need to insert. Since 8 > 4, we shift.
  • 3 5 7 x 8 2 1 9 6: Where the value of x is not of crucial importance, since it will be overwritten immediately (either by 4 if it’s its appropriate place or by 7 if we shift). Since 7 > 4, we shift.
  • 3 5 x 7 8 2 1 9 6
  • 3 x 5 7 8 2 1 9 6
  • 3 4 5 7 8 2 1 9 6

After this process, the sorted portion has been expanded by one element: we now have five sorted elements rather than four. Each iteration does this, and by the end we'll have the whole array sorted.

If you’d like to read a detailed, dedicated article for Insertion Sort, we’ve got you covered!

Implementation

public static void insertionSort(int[] array) {
    for (int i = 1; i < array.length; i++) {
        int current = array[i];
        int j = i - 1;
        while (j >= 0 && current < array[j]) {
            array[j + 1] = array[j];
            j--;
        }
        // at this point we've exited, so j is either -1
        // or it's at the first element where current >= array[j]
        array[j + 1] = current;
    }
}
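A brief usage sketch on the array from the explanation, assuming insertionSort is defined in the same class:

import java.util.Arrays;

public class InsertionSortDemo {
    public static void main(String[] args) {
        int[] numbers = {3, 5, 7, 8, 4, 2, 1, 9, 6};
        insertionSort(numbers);
        System.out.println(Arrays.toString(numbers)); // [1, 2, 3, 4, 5, 6, 7, 8, 9]
    }
}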

Time Complexity

Again, we have to look at the worst case scenario for our algorithm, and it will again be the example where the whole array is descending.


This is because in each iteration, we'll have to shift the whole sorted subarray by one, which is O(n). We have to do this for each element of the array, which means the whole process is bounded by O(n^2).

Selection Sort

Explanation

Selection Sort also divides the array into a sorted and an unsorted subarray. This time, though, the sorted subarray is formed by inserting the minimum element of the unsorted subarray at the end of the sorted subarray, by swapping.

Implementation

In each iteration, we assume that the first unsorted element is the minimum and iterate through the rest to see if there’s a smaller element.

Once we find the current minimum of the unsorted part of the array, we swap it with the first element and consider it a part of the sorted array:

public static void selectionSort(int[] array) {
    for (int i = 0; i < array.length; i++) {
        int min = array[i];
        int minId = i;
        for (int j = i + 1; j < array.length; j++) {
            if (array[j] < min) {
                min = array[j];
                minId = j;
            }
        }
        // swapping
        int temp = array[i];
        array[i] = min;
        array[minId] = temp;
    }
}

Time Complexity

Finding the minimum is O(n) in the length of the array, because we have to check all of the unsorted elements. We have to find the minimum for each element of the array, so the total number of comparisons is (n-1) + (n-2) + ... + 1 = n(n-1)/2, making the whole process bounded by O(n^2).

Merge Sort

Explanation

Merge Sort uses recursion to solve the problem of sorting more efficiently than algorithms previously presented, and in particular it uses a divide and conquer approach.

Using both of these concepts, we’ll break the whole array down into two subarrays and then:

  1. Sort the left half of the array (recursively)
  2. Sort the right half of the array (recursively)
  3. Merge the solutions

This tree is meant to represent how the recursive calls work. The arrays marked with the down arrow are the ones we call the function for, while the ones marked with the up arrow are the ones we merge on the way back up. So you follow the down arrow to the bottom of the tree, and then go back up and merge.

In our example, we have the array 3 5 4 2 1 , so we divide it into 3 5 4 and 2 1 . To sort them, we further divide them into their components. Once we've reached the bottom, we start merging up and sorting them as we go.
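Sketched as text, the splits and merges for this example look roughly like this:

Splitting (top-down):
[3 5 4 2 1] -> [3 5 4] [2 1] -> [3 5] [4] [2] [1] -> [3] [5] [4] [2] [1]

Merging (bottom-up):
[3] + [5] -> [3 5]
[3 5] + [4] -> [3 4 5]
[2] + [1] -> [1 2]
[3 4 5] + [1 2] -> [1 2 3 4 5]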

If you’d like to read a detailed, dedicated article for Merge Sort, we’ve got you covered!

Implementation

The core function works pretty much as laid out in the explanation. We just pass the indexes left and right, which mark the left-most and right-most elements of the subarray we want to sort. Initially, these should be 0 and array.length-1, depending on the implementation.

The base of our recursion ensures we exit when we’ve finished, or when right and left meet each other. We find a midpoint mid , and sort subarrays left and right of it recursively, ultimately merging our solutions.

If you remember our tree graphic, you might wonder why we don't create two new smaller arrays and pass them on instead. This is because on really long arrays, this would cause huge memory consumption for something that's essentially unnecessary.


Merge Sort already doesn’t work in-place because of the merge step, and this would only serve to worsen its memory efficiency. The logic of our tree of recursion otherwise stays the same, though, we just have to follow the indexes we’re using:

public static void mergeSort(int[] array, int left, int right) {
    // base case: subarrays of length 0 or 1 are already sorted
    if (right <= left) return;

    int mid = (left + right) / 2;
    mergeSort(array, left, mid);
    mergeSort(array, mid + 1, right);
    merge(array, left, mid, right);
}
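The initial call then covers the whole array; a small usage sketch, assuming mergeSort and merge live in the same class:

public static void main(String[] args) {
    int[] numbers = {3, 5, 4, 2, 1};
    mergeSort(numbers, 0, numbers.length - 1);
    System.out.println(java.util.Arrays.toString(numbers)); // [1, 2, 3, 4, 5]
}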

To merge the sorted subarrays into one, we’ll need to calculate the length of each and make temporary arrays to copy them into, so we could freely change our main array.

After copying, we go through the resulting array and assign it the current minimum. Because our subarrays are sorted, we just choose the lesser of the two elements that haven't been chosen so far, and move the iterator for that subarray forward:

static void merge(int[] array, int left, int mid, int right) {
    // calculating lengths
    int lengthLeft = mid - left + 1;
    int lengthRight = right - mid;

    // creating temporary subarrays
    int[] leftArray = new int[lengthLeft];
    int[] rightArray = new int[lengthRight];

    // copying our sorted subarrays into temporaries
    for (int i = 0; i < lengthLeft; i++)
        leftArray[i] = array[left + i];
    for (int i = 0; i < lengthRight; i++)
        rightArray[i] = array[mid + i + 1];

    // iterators containing the current index of each temp subarray
    int leftIndex = 0;
    int rightIndex = 0;

    // copying from leftArray and rightArray back into array
    for (int i = left; i < right + 1; i++) {
        // if there are still uncopied elements in both arrays, copy the minimum of the two
        if (leftIndex < lengthLeft && rightIndex < lengthRight) {
            if (leftArray[leftIndex] < rightArray[rightIndex]) {
                array[i] = leftArray[leftIndex];
                leftIndex++;
            } else {
                array[i] = rightArray[rightIndex];
                rightIndex++;
            }
        }
        // if all the elements have been copied from rightArray, copy the rest of leftArray
        else if (leftIndex < lengthLeft) {
            array[i] = leftArray[leftIndex];
            leftIndex++;
        }
        // if all the elements have been copied from leftArray, copy the rest of rightArray
        else if (rightIndex < lengthRight) {
            array[i] = rightArray[rightIndex];
            rightIndex++;
        }
    }
}

Time Complexity

If we want to derive the complexity of recursive algorithms, we’re going to have to get a little bit mathy.

The Master Theorem is used to figure out time complexity of recursive algorithms. For non-recursive algorithms, we could usually write the precise time complexity as some sort of an equation, and then we use Big-O Notation to sort them into classes of similarly-behaving algorithms.

The problem with recursive algorithms is that the same equation would look something like this:

T(n) = a * T(n/b) + cn^k

The equation itself is recursive! In this equation, a tells us how many times we call the recursion, and b tells us into how many parts our problem is divided. In this case that may seem like an unimportant distinction because they're equal for Merge Sort, but for some problems they may not be.

The rest of the equation is the complexity of merging all of those solutions into one at the end. The Master Theorem solves this equation for us:

T(n) = O(n^k)           if a < b^k
T(n) = O(n^k log n)     if a = b^k
T(n) = O(n^(log_b a))   if a > b^k

If T(n) is the runtime of the algorithm when sorting an array of length n, then Merge Sort runs twice for arrays that are half the length of the original array.

So we have a=2 and b=2. The merge step takes O(n) time, so k=1. This means the equation for Merge Sort would look as follows:

T(n) = 2T(n/2) + cn

Since a = b^k (that is, 2 = 2^1), the second case applies, and Merge Sort has a time complexity of O(n log n).

