[PD] To divide a number in random parts
brbrofsvl at gmail.com
Sat Mar 5 05:02:53 CET 2011
This all sounds about right -- I made [list-shellsort] more as a
pedagogical exercise for my students than as a model of speed or
efficiency (I did a "quicksort" as well that didn't end up in
list-abs). I'm pretty sure that whatever gains come from the algorithm
are obliterated by "swapping" two values in a list -- I think it does a
bunch of splits to isolate values at two indices and then recopies the
entire list TWICE in order to replace the two values. [list-swap] is
ridiculously inefficient compared with the array/table analogue -- at
one point I had sped up the whole thing vastly by dumping the list to
a table first, sorting it all in place, and then dumping it back out
to a list -- but then it uses fewer of the list abstractions and is
less of an opportunity to show how the abstractions work "in action."
And it's a good reminder for how slow list manipulations can be.
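To make the cost concrete, here is a minimal Python sketch (my illustration, not the actual [list-swap] patch) of the difference between a swap done by splitting and recopying a list, as described above, and the constant-time array/table swap:

```python
def list_swap_by_copy(lst, i, j):
    """Swap elements i and j by rebuilding the list -- O(n) work per swap,
    roughly what a split-and-recopy [list-swap] abstraction has to do."""
    if i > j:
        i, j = j, i
    # Split into pieces around the two indices...
    head, a, mid, b, tail = lst[:i], lst[i], lst[i + 1:j], lst[j], lst[j + 1:]
    # ...then recopy everything to put the values back in swapped order.
    return head + [b] + mid + [a] + tail

def table_swap(tbl, i, j):
    """Array/table analogue: constant-time in-place swap."""
    tbl[i], tbl[j] = tbl[j], tbl[i]
    return tbl

print(list_swap_by_copy([0, 1, 2, 3, 4], 1, 3))  # [0, 3, 2, 1, 4]
print(table_swap([0, 1, 2, 3, 4], 1, 3))         # [0, 3, 2, 1, 4]
```

Both produce the same result, but the list version touches every element on every swap, which is exactly why a sort built on it gets swamped by copying.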
I remember a year or so ago I made a vanilla [list-s2l] that used a
bunch of makefilename/sprintf shenanigans to slowly tease apart a
symbol into constituent parts at a delimiter:
one of the best exercises in constrained patching I've ever done, and
fun for proving that some things are actually possible in vanilla that
you think wouldn't be, but really stupid in comparison to the
efficiency of something coded in C for actual use. Yet, I suspect that
people go ahead and use [list-sort] all the time.
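For reference, what that kind of symbol-to-list split does is easy to state in plain Python; this is only a sketch of the end result, not of the makefilename/sprintf patching technique described above:

```python
def s2l(symbol, delimiter=" "):
    """Tease a symbol apart into its constituent parts at each delimiter,
    one character at a time (the shape of a vanilla [list-s2l])."""
    parts, current = [], ""
    for ch in symbol:
        if ch == delimiter:
            parts.append(current)  # close off the current part
            current = ""
        else:
            current += ch
    parts.append(current)          # flush the final part
    return parts

print(s2l("a-b-c", "-"))       # ['a', 'b', 'c']
print(s2l("hello world"))      # ['hello', 'world']
```

Doing this with only vanilla objects means emulating the character-by-character loop with control objects, which is where the "constrained patching" fun comes in.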
On Fri, Mar 4, 2011 at 10:29 PM, Mathieu Bouchard <matju at artengine.ca> wrote:
> On Fri, 4 Mar 2011, Matt Barber wrote:
>> Check out [list-sort] for short lists, [list-shellsort] for much longer
>> ones (I don't remember at what point the shellsort starts beating the other
>> one -- maybe if the list has 50 or more entries; but at any rate they do the
>> same thing).
> It vastly depends on the implementation, the interpreter it runs in, and the
> computer that the interpreter runs on.
> However, from a short test of running time, it looks like [list-shellsort]
> runs much slower than its theoretical worst case. It probably
> takes O(n) time to perform operations that usually take O(1), likely
> because it avoids using [tabread] and such.
> Actually, on my old computer, [list-shellsort] sorts an already-sorted list
> of size 1000 in 280 ms, and of size 4000 in 5530 ms. That's a 20-fold
> difference, whereas O(n^2) would be 16-fold, and shellsort's theoretical
> worst-case would be 8-fold, that is, O(n^1.5).
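The growth-rate arithmetic quoted above can be checked directly from the two timings (a quick calculation of mine, using only the numbers in the message):

```python
import math

# List size grows 4x (1000 -> 4000); runtime grows 5530/280 of a factor.
ratio = 5530 / 280
print(round(ratio, 1))   # 19.8 -- the "20-fold" figure

# Implied exponent k where time ~ n^k, i.e. 4^k = ratio:
k = math.log(ratio) / math.log(4)
print(round(k, 2))       # 2.15 -- slightly worse than O(n^2)

# Reference growth factors for a 4x size increase:
print(4 ** 2)            # O(n^2):   16-fold
print(4 ** 1.5)          # O(n^1.5): 8-fold (shellsort's cited bound)
```

So the measured behavior lands just above quadratic, consistent with the claim that O(1) steps are being performed in O(n) time.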
> [#sort] runs vastly faster than that. For the already-sorted list of size
> 1000, it does it in something like 0.22 ms, whereas for size 4000 it does it
> in about 0.90 ms. This means it's over a thousand times faster than
> [list-shellsort].
>> Also, if you're going to be doing something like this a ton in real
>> time with long lists, it might be more productive to do all this
>> manipulation with tables
> Yes, you would be able to get speeds that are consistently about 10-20 times
> slower than [#sort] for any table size, which would be a vast improvement
> over [list-shellsort].