Understanding why selection sort has quadratic time complexity
Selection Sort is a simple sorting algorithm that repeatedly finds the minimum (or maximum) element from the unsorted portion and places it at the beginning (or end). The algorithm maintains two subarrays: one sorted and one unsorted.
How It Works:
Think of it like finding the smallest toy in a box, putting it first, then finding the next smallest from the remaining toys, and so on. Each pass performs three steps:
1. Scan through the unsorted portion of the array to find the minimum element. This requires comparing each element with the current minimum.
2. Swap the minimum element with the first element of the unsorted portion. This places the minimum in its correct sorted position.
3. Move the boundary between the sorted and unsorted portions one position to the right, and repeat for the remaining unsorted elements.
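To watch these steps in action, here is a minimal sketch (the example array and the `selectionSortTrace` name are illustrative) that logs the array after every pass, showing the sorted portion growing from the left:

```javascript
// Trace selection sort pass by pass, printing the array after each pass
function selectionSortTrace(arr) {
  const n = arr.length;
  for (let i = 0; i < n - 1; i++) {
    // Step 1: scan the unsorted portion for the minimum
    let minIndex = i;
    for (let j = i + 1; j < n; j++) {
      if (arr[j] < arr[minIndex]) minIndex = j;
    }
    // Step 2: swap the minimum into the first unsorted position
    if (minIndex !== i) {
      [arr[i], arr[minIndex]] = [arr[minIndex], arr[i]];
    }
    // Step 3: the boundary advances; log the current state
    console.log(`After pass ${i + 1}: [${arr.join(", ")}]`);
  }
  return arr;
}

selectionSortTrace([64, 25, 12, 22, 11]);
// After pass 1: [11, 25, 12, 22, 64]
// After pass 2: [11, 12, 25, 22, 64]
// After pass 3: [11, 12, 22, 25, 64]
// After pass 4: [11, 12, 22, 25, 64]
```

Note that pass 4 changes nothing: the minimum of the remaining portion is already in place, so no swap occurs.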
Code Example:
```javascript
// Selection Sort - O(n²) in all cases
function selectionSort(arr) {
  const n = arr.length;
  // One by one, move the boundary of the unsorted subarray
  for (let i = 0; i < n - 1; i++) {
    // Find the minimum element in the unsorted portion
    let minIndex = i;
    for (let j = i + 1; j < n; j++) {
      if (arr[j] < arr[minIndex]) {
        minIndex = j;
      }
    }
    // Swap the minimum with the first unsorted element
    if (minIndex !== i) {
      [arr[i], arr[minIndex]] = [arr[minIndex], arr[i]];
    }
  }
  return arr;
}
```

For an array of n elements, selection sort uses two nested loops:
• Outer loop: n-1 passes (one less than array size)
• Inner loop: compare n-i-1 elements to find minimum in pass i
• Total comparisons: (n-1) + (n-2) + ... + 1 = n(n-1)/2
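The sum above can be checked empirically. This sketch (the comparison counter is an addition for illustration) instruments the inner loop and checks the count against n(n-1)/2:

```javascript
// Count the comparisons selection sort makes and check against n(n-1)/2
function countComparisons(arr) {
  const n = arr.length;
  let comparisons = 0;
  for (let i = 0; i < n - 1; i++) {
    let minIndex = i;
    for (let j = i + 1; j < n; j++) {
      comparisons++; // exactly one comparison per inner-loop iteration
      if (arr[j] < arr[minIndex]) minIndex = j;
    }
    [arr[i], arr[minIndex]] = [arr[minIndex], arr[i]];
  }
  return comparisons;
}

const n = 6;
console.log(countComparisons([5, 3, 8, 1, 9, 2])); // 15
console.log((n * (n - 1)) / 2);                    // 15 = 5 + 4 + 3 + 2 + 1
```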
Unlike insertion sort, selection sort always performs the same number of comparisons regardless of input order: a sorted array, a reversed array, and a random array all take exactly n(n-1)/2 comparisons.
Selection sort makes at most n-1 swaps, since each pass performs at most one swap.
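The swap bound can be verified the same way. This sketch (the swap counter is an addition for illustration) counts only the swaps that actually happen:

```javascript
// Count actual swaps: never more than n - 1, and often far fewer
function countSwaps(arr) {
  const n = arr.length;
  let swaps = 0;
  for (let i = 0; i < n - 1; i++) {
    let minIndex = i;
    for (let j = i + 1; j < n; j++) {
      if (arr[j] < arr[minIndex]) minIndex = j;
    }
    // Only swap (and count) when the minimum is out of place
    if (minIndex !== i) {
      [arr[i], arr[minIndex]] = [arr[minIndex], arr[i]];
      swaps++;
    }
  }
  return swaps;
}

console.log(countSwaps([5, 4, 3, 2, 1])); // 2 - even a reversed array needs few swaps
console.log(countSwaps([1, 2, 3, 4, 5])); // 0 - sorted input needs none
```

Contrast this with the comparison count: the input order changes how many swaps occur, but never how many comparisons.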
Selection sort is simple but always O(n²), even for sorted data. Unlike insertion sort, it doesn't benefit from nearly sorted input. It's mainly used for educational purposes or when writes are extremely expensive.
| Algorithm | Best | Average | Worst | Swaps | Stable |
|---|---|---|---|---|---|
| Selection Sort | O(n²) | O(n²) | O(n²) | O(n) | No |
| Insertion Sort | O(n) | O(n²) | O(n²) | O(n²) | Yes |
| Bubble Sort | O(n) | O(n²) | O(n²) | O(n²) | Yes |
| Quick Sort | O(n log n) | O(n log n) | O(n²) | O(n log n) | No |
Selection sort's minimal swaps (O(n)) make it useful when writes to memory are expensive, such as with flash memory or EEPROM where each write operation has a cost. However, this is extremely rare in modern applications.
Selection sort is commonly taught in computer science courses because it demonstrates the concept of finding minimum/maximum and the idea of maintaining sorted and unsorted portions. Its simplicity makes it easy to understand.
Learning selection sort helps understand why O(n²) algorithms are inefficient. Its consistent O(n²) performance (unlike insertion sort) clearly demonstrates quadratic growth.
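Quadratic growth has a simple signature: doubling the input size roughly quadruples the work. Because selection sort's comparison count is exactly n(n-1)/2, this can be shown directly from the formula (the `comparisonsFor` helper is illustrative):

```javascript
// Doubling n roughly quadruples the comparison count - the hallmark of O(n²)
function comparisonsFor(n) {
  return (n * (n - 1)) / 2; // selection sort's exact comparison count
}

for (const size of [1000, 2000, 4000]) {
  console.log(`n = ${size}: ${comparisonsFor(size)} comparisons`);
}
// The ratio between successive counts approaches 4 as n grows
console.log(comparisonsFor(2000) / comparisonsFor(1000)); // ≈ 4.002
```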
When you need a sorting algorithm that's easy to implement from scratch and correctness is more important than performance (e.g., in embedded systems with very small arrays), selection sort might be considered, though insertion sort is usually better.
Never use selection sort for large arrays or in production code! For any real application, use built-in sort functions (which use O(n log n) algorithms) or implement efficient sorting algorithms. Selection sort is simply too slow.
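In JavaScript, the built-in `Array.prototype.sort` with a numeric comparator is the practical choice:

```javascript
// Production code: use the built-in sort instead of selection sort
const numbers = [64, 25, 12, 22, 11];
// The comparator is required for numbers: without it, sort compares
// elements as strings (so [11, 2] would sort as [11, 2], not [2, 11])
numbers.sort((a, b) => a - b);
console.log(numbers); // [11, 12, 22, 25, 64]
```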
• Always performs n(n-1)/2 comparisons regardless of input order - no best-case optimization
• Makes at most n-1 swaps - useful when writes are expensive, though this is rare
• Unlike insertion sort, doesn't benefit from nearly sorted data - always O(n²)
• For any real application, use quicksort, merge sort, heap sort, or built-in sort functions