Mathieu Bouchard matju at artengine.ca
Wed Jul 2 13:14:01 CEST 2008

```
On Sat, 28 Jun 2008, Matt Barber wrote:

> a1 = 0.5f * (c - a);
> a3 = 0.5f * (d - a) + 1.5f * (b - c);
> a2 = a - b + a1 - a3;
>
> *out++ =  ((a3 * frac + a2) * frac + a1) * frac + b;
>
> 10 +'s 6 *'s

If you compute twice the value of a1, a2, a3 and only multiply by 0.5 at the
end, the multiplication of (a-b) by 2 can be optimised into a single
addition. In that case you get 11 +'s and 5 *'s, or 12 +'s and 4 *'s if you
also do the same with the multiplication by 3.0. It only matters if the CPU
computes additions faster than multiplications (not sure whether that's
still the case), or if Pd samples were redefined to some unusual type
(floats with far too many bits, and such).

I wonder what the lowest possible number of operations is. Some possible
formulas wouldn't even use the ((a3 * frac + a2) * frac + a1) * frac form,
but I wonder whether they can be any shorter. In any case, you need at
least three multiplications ;)

_ _ __ ___ _____ ________ _____________ _____________________ ...
| Mathieu Bouchard - tél:+1.514.383.3801, Montréal, Québec
```