[PD] efficient approximation of trig functions for hi pass formula (was: could vanilla borrow iemlib's hi pass filter recipe?)

Claude Heiland-Allen claude at mathr.co.uk
Fri Oct 21 17:07:17 CEST 2016


Hi,

On 19/10/16 15:25, Jonathan Wilkes via Pd-list wrote:
> When implemented in C, which approach takes the least amount of time 
> to read, reason about, and fully comprehend?

Agreed. Since changing the filter frequency at message rate is probably
a relatively cold code path, I vote for something like:

/* clamp fc/SR to [0, 0.5] (DC up to Nyquist), then convert to radians */
double w = 2 * M_PI * fmin(fmax(fc / SR, 0), 0.5);
/* feedback coefficient k = (1 - sin(w)) / cos(w) from the quoted recipe */
double k = (1 - sin(w)) / cos(w);

Maybe t_sample would be better than double, but then you'd need C99
type-generic maths or other similar macro tricks (or C++) to call sin or
sinf as appropriate.
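
For example, a minimal sketch of what the type-generic version might
look like (hip_coefficient is a hypothetical name, not anything in Pd's
sources):

#include <math.h>    /* M_PI (POSIX, not ISO C; define it yourself if missing) */
#include <tgmath.h>  /* C99 type-generic macros: sin/cos/fmin/fmax dispatch on argument type */
#include "m_pd.h"    /* t_sample: float by default, double in double-precision builds */

/* hypothetical helper, not actual Pd code: the same coefficient
   calculation as above, but at t_sample precision throughout */
static t_sample hip_coefficient(t_sample fc, t_sample sr)
{
    t_sample w = 2 * (t_sample)M_PI * fmin(fmax(fc / sr, (t_sample)0), (t_sample)0.5);
    return (1 - sin(w)) / cos(w);
}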

>  From: katja <katjavetter at gmail.com>
>  Sent: Wednesday, October 19, 2016 9:06 AM
>  Subject: [PD] efficient approximation of trig functions for hi pass formula (was: could vanilla borrow iemlib's hi pass filter recipe?)
>    
> Changing the thread title to reflect the new approach. Extract of the
> original thread;
> 
> - I suggested using iemlib's hi pass filter recipe to improve
> frequency response of [hip~]
> - Christof Ressi pointed to formula in
> http://www.arpchord.com/pdf/coeffs_first_order_filters_0p1.pdf
> - this formula calculates feedback coefficient k = (1 - sin(a)) /
> cos(a) where a = 2 * pi * fc / SR
> - the filter implementation is y[n] = (x[n] - x[n-1]) * (1 + k) / 2 + k * y[n-1]
> - following convention in d_filter.c (and pd tilde classes in
> general), trig functions should best be approximated
> - Cyrille provided libre office linear regression result for (1-sin(x))/cos(x)
> 
> Thanks for the useful infos and discussion. My 'math coach' suggested
> using odd powers of -(x-pi/2) in an approximation polynomial for
> (1-sin(x))/cos(x).

Yes, that's sensible, due to the odd symmetry about x = pi/2: with
u = x - pi/2, (1 - sin(pi/2 + u)) / cos(pi/2 + u) = (1 - cos(u)) / (-sin(u)),
which is an odd function of u, so only odd powers of (x - pi/2) are needed.

> The best accuracy/performance balance I could get
> is with this 5th degree polynomial:
> 
> (-(x-pi/2))*0.4908 - (x-pi/2)^3*0.04575 - (x-pi/2)^5*0.00541
> 
> Using this approximation in the filter formula, response at cutoff
> frequency is -3 dB with +/-0.06 dB accuracy in the required range 0 <
> x < pi. It can be efficiently implemented in C, analogous to an
> approximation Miller uses in [bp~]. So that is what I'll try next.

I went up to 9th degree polynomial in my graphical analysis below.
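
For reference, here is one way that 5th-degree odd polynomial could be
evaluated in C with Horner's method (a sketch, not katja's actual code).
Substituting u = pi/2 - x turns the three quoted terms into
0.4908*u + 0.04575*u^3 + 0.00541*u^5:

/* sketch: approximate (1 - sin(x)) / cos(x) for 0 < x < pi
   using katja's 5th-degree odd polynomial in u = pi/2 - x */
static double hip_coeff_approx(double x)
{
    double u = 0.5 * 3.14159265358979323846 - x;
    double u2 = u * u;
    return u * (0.4908 + u2 * (0.04575 + u2 * 0.00541));
}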

> Attached patch hip~-models.pd illustrates and compares filter recipes
> using vanilla objects:
> 
> - current implementation, most efficient, accuracy +/- 3 dB
> - implementation with trig functions, least efficient, accuracy +/- 0.01 dB
> - implementation with approximation for trig functions, efficient,
> accuracy +/- 0.06 dB
> 
> A note on efficiency: coefficients in [hip~] are only recalculated
> when cutoff frequency is changed. How important is performance for a
> function rarely called? I'm much aware of the motto 'never optimize
> early', yet I spent much time on finding a fast approximation, for
> several reasons: it's a nice math challenge, instructive for cases
> where performance matters more, and I want to respect Miller's code
> efficiency when proposing a change.

Yes a fun challenge.

Here's a graph showing that polynomial approximations do quite poorly
for this problem, even when increasing the degree:
https://mathr.co.uk/misc/2016-10-21_filter_coefficient_calculation_accuracy.png
(logarithmic frequency scale; if SR is 48kHz it runs from ~11Hz to 24kHz)

I used this Haskell to generate the data file for curve fitting
https://mathr.co.uk/misc/2016-10-21_filter_coefficient_calculation_accuracy.hs
and this GNUPlot to fit the curves and plot the graph:
https://mathr.co.uk/misc/2016-10-21_filter_coefficient_calculation_accuracy.gnuplot

Having said that the polynomial approximations do poorly, they're
probably accurate enough in practice. I suspect more serious problems
would come from using single precision in the recursive feedback path
of the filter; see for example:
https://lists.puredata.info/pipermail/pd-list/2010-08/082104.html
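
A minimal illustration, assuming float samples and the recipe quoted
above (hip_run, xlast, ylast are hypothetical names): keeping the
history in double is cheap and sidesteps most of the accumulation error.

/* hypothetical perform loop for
   y[n] = (x[n] - x[n-1]) * (1 + k) / 2 + k * y[n-1],
   with the filter state held in double precision */
static void hip_run(const float *in, float *out, int n,
    double k, double *xlast, double *ylast)
{
    double g = 0.5 * (1 + k);
    int i;
    for (i = 0; i < n; i++)
    {
        double x = in[i];
        double y = (x - *xlast) * g + k * *ylast;
        out[i] = (float)y;
        *xlast = x;
        *ylast = y;
    }
}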

> Today pd is even deployed on
> embedded devices so the frugal coding approach is still relevant.
> After 20 years.

In my tests last year, revisited in July, I found a 9th degree odd
polynomial was both more accurate and more efficient than the tabfudge
stuff for linearly interpolated cosine table lookup on every single
architecture I have available to me (from pentium-4 to raspberry pi 3
via amd64), when compiled with -march=native on the target machine:
https://mathr.co.uk/blog/2016-07-20_approximating_cosine_update.html
https://mathr.co.uk/blog/2015-04-21_approximating_cosine.html


Claude
-- 
https://mathr.co.uk



