Merge Sort
Split the array in half, sort each half recursively, then merge the sorted halves back into one sorted whole. There are log₂(n) levels of splitting, with O(n) merge work per level, for a guaranteed O(n log n) total.
Divide, conquer, merge
```python
def merge_sort(arr):
    if len(arr) <= 1:            # size 0 or 1 is already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])   # sort each half recursively
    right = merge_sort(arr[mid:])
    return merge(left, right)      # combine in one linear pass

def merge(a, b):
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:           # <= keeps the sort stable
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]     # append whichever half remains
```
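As a quick check of the merge step, here is the same two-pointer merge run on two sorted halves (repeated inline so the snippet runs on its own; the inputs are illustrative):

```python
def merge(a, b):
    # Same two-pointer merge as in the listing above.
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

print(merge([1, 4, 7], [2, 3, 9]))  # [1, 2, 3, 4, 7, 9]
```

Each element is appended exactly once, which is why the combine step is a single linear pass.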
What you just watched
Merge sort is the canonical divide-and-conquer algorithm. The recursion splits the work into halves until each half is trivially sorted (size 0 or 1). Then the merging climbs back up: combining two sorted halves into one sorted whole takes one linear pass.
Unlike bubble, insertion, or selection sort, merge sort runs in O(n log n) in every case: best, worst, and average are all the same. The price is O(n) auxiliary memory for the merge buffer (the sliced halves above). Quicksort makes the opposite trade: the same O(n log n) on average, smaller constants, and only O(log n) extra space for the recursion stack, but an O(n²) worst case.
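To see the guarantee concretely, one can tally element comparisons made during the merges and check them against the n·log₂(n) bound. A sketch, assuming the algorithm above (the `counter` cell is an illustration device, not part of the original listing):

```python
import math
import random

def merge_sort_counting(arr, counter):
    # Same algorithm as the listing above, with the merge inlined
    # so every element comparison increments counter[0].
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort_counting(arr[:mid], counter)
    right = merge_sort_counting(arr[mid:], counter)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter[0] += 1
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

n = 1024
data = list(range(n))
random.shuffle(data)
cnt = [0]
assert merge_sort_counting(data, cnt) == sorted(data)
# cnt[0] never exceeds n * ceil(log2(n)) = 10240 comparisons
print(cnt[0], n * math.ceil(math.log2(n)))
```

However the input is shuffled, the comparison count stays within the n·⌈log₂(n)⌉ ceiling, which is the "guaranteed" part of the complexity claim above.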