Mostly I had just never personally implemented a quicksort. I first googled "quicksort explained", which brought up this AWESOME video that explains the algorithm extremely clearly. So I implemented that.
But then I found this page which details a bunch of algorithms very tersely. Their quicksort is nothing like the one in the video above. Theirs moves a random `pivot` through the array scanning only to the right, whereas the one above uses the leftmost element as its `pivot`, takes it out of the array, and uses left and right pointers to move elements before and after the pivot, finally inserting the pivot into the gap that's left.
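To make the difference concrete, here's a sketch of both schemes in Python (my choice of language here; the function names are mine, and this is a reconstruction from the descriptions above, not the actual code behind the visualization):

```python
import random

def quicksort_scan_right(a, lo=0, hi=None):
    """Right-scanning scheme: pick a random pivot, park it at the end,
    then sweep left to right moving smaller elements in front of it."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    p = random.randint(lo, hi)          # random pivot index
    a[p], a[hi] = a[hi], a[p]           # park the pivot at the end
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]           # drop the pivot into its final slot
    quicksort_scan_right(a, lo, i - 1)
    quicksort_scan_right(a, i + 1, hi)

def quicksort_lr(a, lo=0, hi=None):
    """Left/right-pointer scheme: lift out the leftmost element as the
    pivot, converge two pointers filling the hole it left, then drop
    the pivot into the remaining gap."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    pivot = a[lo]                       # leftmost element is the pivot
    left, right = lo, hi
    while left < right:
        while left < right and a[right] >= pivot:
            right -= 1
        a[left] = a[right]              # fill the hole from the right
        while left < right and a[left] <= pivot:
            left += 1
        a[right] = a[left]              # fill the hole from the left
    a[left] = pivot                     # pivot lands in the final hole
    quicksort_lr(a, lo, left - 1)
    quicksort_lr(a, left + 1, hi)
```

The right-scanning version is what textbooks usually call the Lomuto partition; the LR one is the "hole" variant of the two-pointer (Hoare-style) partition.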
The LR one appears to be much faster than the right-only one. I also don't quite get the point of choosing a random pivot. I suppose it's to try to avoid the worst case, which I think is an inversely sorted array when you always start from element #1. I'm sure somewhere on the net there's an explanation of why a random pivot is best for that other algorithm. I'm a little worried there's a bug in my implementation of the non-LR quicksort because it takes so long on the "equal" array example (top right).
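For what it's worth, the slowness on the all-equal array may not be a bug: with the right-scanning (Lomuto-style) partition, `a[j] < pivot` is never true when every element is equal, so each partition puts everything on one side and the sort degrades to quadratic time no matter which pivot you pick. A quick comparison-counting sketch (assuming that partition scheme) shows the n·(n−1)/2 growth:

```python
def lomuto_comparisons(a):
    """Count element comparisons made by a Lomuto-style quicksort."""
    a = list(a)
    count = 0
    def sort(lo, hi):
        nonlocal count
        if lo >= hi:
            return
        pivot = a[hi]
        i = lo
        for j in range(lo, hi):
            count += 1
            if a[j] < pivot:        # never true on an all-equal array
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        sort(lo, i - 1)
        sort(i + 1, hi)
    sort(0, len(a) - 1)
    return count

for n in (64, 128, 256):
    print(n, lomuto_comparisons([7] * n))
# → 64 2016, 128 8128, 256 32640  (exactly n*(n-1)/2 each time)
```

Doubling the input quadruples the comparisons, which is the quadratic signature, so the "equal" column being slow is expected behavior for that scheme.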
Anyway, I added a bunch more including the quicksort 3 way partition.
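The 3-way partition is exactly the fix for the all-equal case above: it splits the array into `< pivot | == pivot | > pivot` so runs of equal keys are finished in one pass and never recursed on. A minimal sketch in Python (again my reconstruction, not the visualization's actual code):

```python
import random

def quicksort_3way(a, lo=0, hi=None):
    """Quicksort with 3-way ("Dutch national flag") partitioning."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    pivot = a[random.randint(lo, hi)]
    lt, i, gt = lo, lo, hi              # a[lo:lt] < pivot, a[lt:i] == pivot,
    while i <= gt:                      # a[gt+1:hi+1] > pivot
        if a[i] < pivot:
            a[lt], a[i] = a[i], a[lt]
            lt += 1
            i += 1
        elif a[i] > pivot:
            a[i], a[gt] = a[gt], a[i]
            gt -= 1
        else:
            i += 1
    quicksort_3way(a, lo, lt - 1)       # only the < and > regions recurse
    quicksort_3way(a, gt + 1, hi)
```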
I also added cycle times but they probably make no sense currently. It's
basically 2 cycles for a load, 3 for a store, and 1 for a compare, though a
compare includes a load so it's really 3 cycles. But cache hits are not
currently taken into account, nor are a few array accesses in the algorithms.
Maybe the only things you care about are array accesses and cache misses, in
which case I should remove the various set, cmp, copy, swap stuff, just have
get/put, count cycles, and try to guess cache hits. For example, right now
`array[i] = array[j]` is considered 1 load and 1 store, so that's 5 cycles.
`if array[i] > value` is considered 3 cycles. Etc. The visualization shows 10
cycles' worth per frame.
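That get/put-only accounting could be sketched like this (Python, with the 2/3/1 cycle costs from above; the class and method names are hypothetical, and the cache model is deliberately omitted):

```python
LOAD_CYCLES = 2     # cycles charged per array read (per the model above)
STORE_CYCLES = 3    # cycles charged per array write
CMP_CYCLES = 1      # cycles for the compare itself, on top of its loads

class CountingArray:
    """Wraps a list so every element access is charged cycles."""
    def __init__(self, values):
        self.values = list(values)
        self.cycles = 0
    def get(self, i):
        self.cycles += LOAD_CYCLES
        return self.values[i]
    def put(self, i, v):
        self.cycles += STORE_CYCLES
        self.values[i] = v

a = CountingArray([4, 2])
x, y = a.get(0), a.get(1)   # comparing two array elements: 2 + 2 loads
a.cycles += CMP_CYCLES      # + 1 compare = 5 cycles so far
if x > y:
    a.put(0, y)             # finish the swap with the already-loaded
    a.put(1, x)             # values: 2 stores = 6 more cycles
print(a.cycles)             # → 11
```

Under the same model a swap straight from memory (2 loads + 2 stores) would be 10 cycles, and `if array[i] > value` with `value` already in hand would be the 3 cycles mentioned above.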
Anyway, if you missed the link at the top, the visualization is here.