[PD] 4-point interpolation changes timbre depending on sample rate

Charles Z Henry czhenry at gmail.com
Thu Jun 10 21:45:38 CEST 2021


On Tue, May 4, 2021 at 11:25 AM Dan Wilcox <danomatika at gmail.com> wrote:
>
> Sure but there tends to be a point to where it makes sense. The discussion seems to indicate this. Also, I suppose one would have separate objects for sound file types, [wavfile], [aifffile], etc. but a unified object clearly made sense in that context. As I mentioned before, it seems like there is a set of "tried & true" algorithms for interpolation...

I'm still looking for a single best review on the subject, but it
seems like most methods fall into one of two families (minimal
sketches of each below):
1. piecewise polynomials
2. windowed sinc functions
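
To make family (1) concrete, here is a minimal sketch of a 4-point
piecewise polynomial interpolator, written as a Lagrange cubic through
the samples at positions -1, 0, 1, 2.  It's only an illustration of
the idea, not necessarily the exact polynomial tabread4~ evaluates.

/* sketch of family (1): 4-point Lagrange (cubic) interpolation */
#include <stdio.h>

/* ym1..y2 are samples at integer positions -1, 0, 1, 2;
   x in [0, 1) is the fractional read position between y0 and y1 */
static double lagrange4(double ym1, double y0, double y1, double y2,
                        double x)
{
    return ym1 * (-x * (x - 1.0) * (x - 2.0) / 6.0)
         + y0  * ((x + 1.0) * (x - 1.0) * (x - 2.0) / 2.0)
         + y1  * (-(x + 1.0) * x * (x - 2.0) / 2.0)
         + y2  * ((x + 1.0) * x * (x - 1.0) / 6.0);
}

int main(void)
{
    /* read a quarter of the way between the middle two samples */
    printf("%f\n", lagrange4(0.0, 1.0, 0.0, -1.0, 0.25));
    return 0;
}

Family (2) is sketched after the window list below.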

You'd think polynomial interpolation would be a completely solved
problem, but a few searches turn up relevant, current research [1-3].
Singla and Singh compare window functions with a simple construction
that performs better at separating signal from noise [1].
I'd really like to read the Springer article by Zaitsev and Khzmalyan,
whose abstract opens: "The problem of synthesis of
low-order polynomial window functions with an arbitrarily specified
decay rate of spectral lobes (optimal with respect to minimization of
the maximum side lobe on a specified segment of the frequency axis) is
solved." [3]  It's from June 2021.

For the window choice there's Bartlett, Hanning, Hamming, Lanczos,
Blackman-Harris, and so on.
If you multiply a C-N window by sinc truncated at an integer (where
sinc is zero), you get a C-(N+1) function, and the maximum rate of
attenuation increases as you go to higher degrees of smoothness.
So there's a very clear reason why you find variations in performance
based on window choice.
The smoothness of the windowed sinc function, by window:
C0: rectangular window
C1: Lanczos, Bartlett
C2: Hann
C3: Blackman
C4: Blackman-Harris
(and others... so many others)
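
As a sketch of family (2), here's the construction above in code: a
sinc truncated at its integer zero crossings +/-4, multiplied by a
Hann window (the C2 entry in the list).  The 8-point length is just an
illustrative choice.

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define HALF 4   /* kernel support [-HALF, HALF]: 2*HALF = 8 points */

static double sinc(double x)
{
    return x == 0.0 ? 1.0 : sin(M_PI * x) / (M_PI * x);
}

/* Hann window over [-HALF, HALF]; the window and its first
   derivative both vanish at the endpoints */
static double hann(double x)
{
    return 0.5 * (1.0 + cos(M_PI * x / HALF));
}

/* interpolation kernel: windowed sinc, zero outside [-HALF, HALF] */
static double kernel(double x)
{
    return fabs(x) >= HALF ? 0.0 : sinc(x) * hann(x);
}

int main(void)
{
    double table[16];
    for (int n = 0; n < 16; n++)
        table[n] = sin(2.0 * M_PI * n / 16.0);  /* one sine cycle */

    /* read the table at fractional index 7.3 */
    double pos = 7.3, out = 0.0;
    int i = (int)floor(pos);
    double frac = pos - i;
    for (int k = -HALF + 1; k <= HALF; k++)
        out += table[(i + k) & 15] * kernel(k - frac);
    printf("table[7.3] ~ %f\n", out);
    return 0;
}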

The other parameter you can control is the length of the window.  All
I can offer is a rule of thumb: use a minimum length of 2*N+2 points
for interpolator functions with N degrees of smoothness (e.g. a C2
Hann-windowed sinc would use at least 6 points).  I'm still working on
this area.

And check out the Sample Rate Converter comparisons at infinitewave
[4] to get a look at how much performance varies from one software's
implementation to another.

There's a third approach that I'm studying: truncated sinc plus
polynomial correctors.  I think I can make a family of functions that
converges to ideal performance faster than windowed sinc functions as
the length and degree become large.  There would be a 4-point,
6-point, 8-point version, and so on.

All of these functions are too expensive to evaluate per sample at
runtime for a fundamental, highly used object like delread~ or
tabread4~.  So a table-based approach is warranted, but it has
trade-offs: you could choose to calculate the tables at compile time,
at start-up, or as needed at runtime.
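
To make the table idea concrete, here is a sketch of the
"compute at start-up" variant: the kernel() from the windowed-sinc
sketch above gets sampled into an oversampled lookup table once, and
the per-sample inner loop only does table reads and multiply-adds.
The 512-step fractional resolution is an arbitrary illustrative
choice, and none of this is how Pd currently implements tabread4~.

/* reuses kernel() and HALF from the windowed-sinc sketch above */
#define TAPS   (2 * HALF)   /* 8 taps                        */
#define STEPS  512          /* fractional positions per tap  */

static double ktab[TAPS][STEPS];

/* fill the kernel table once, e.g. at start-up */
static void ktab_init(void)
{
    for (int t = 0; t < TAPS; t++)
        for (int s = 0; s < STEPS; s++) {
            double frac = (double)s / STEPS;
            ktab[t][s] = kernel((t - HALF + 1) - frac);
        }
}

/* per-sample inner loop: one row pick plus TAPS multiply-adds,
   truncating frac to the nearest stored column below it;
   bounds/wrap-around handling omitted for brevity */
static double interp_lookup(const double *buf, int i, double frac)
{
    int s = (int)(frac * STEPS);
    double out = 0.0;
    for (int t = 0; t < TAPS; t++)
        out += buf[i + t - HALF + 1] * ktab[t][s];
    return out;
}

Interpolating between adjacent table columns instead of truncating
would trade a few more operations per sample for a much smaller table.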

Max and Clemens have made the first implementation and found a
relevant test case.  Waveguide synthesis is one where the errors
compound, because the signal gets resampled again and again on each
pass through the delay, so the interpolator's response is applied over
and over.  I'd bet there's a series of ever more badly behaved
waveguide problems that could be tamed by using successively
higher-degree-and-length methods.
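
A rough back-of-the-envelope illustration of why waveguides are a good
stress test (my numbers, not Max's or Clemens's): if the interpolator
loses even a tiny, fixed amount per pass at some frequency, that loss
multiplies up with every trip around the loop.

#include <stdio.h>

int main(void)
{
    double per_pass_db = 0.05;   /* hypothetical per-pass loss   */
    double loop_hz     = 440.0;  /* waveguide loop frequency     */
    double seconds     = 2.0;    /* note duration                */
    double passes      = loop_hz * seconds;

    /* per-pass losses in dB add up on every trip around the loop */
    printf("%.0f passes -> %.1f dB accumulated loss\n",
           passes, per_pass_db * passes);
    return 0;
}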

[1] http://code.eng.buffalo.edu/jrnl/Window
[2] https://pdfs.semanticscholar.org/2196/5128dd419fd8d0117c97cb23b62724d7c56b.pdf
[3] https://link.springer.com/article/10.1134/S1064226921050107
[4] https://src.infinitewave.ca/




