# [PD] ipoke~ ?

Matt Barber brbrofsvl at gmail.com
Fri Jun 15 03:34:15 CEST 2012

> But... today I realized why approach B could not work at all for an
> object which takes float indexes as arguments for writing, like you
> would expect from [tabwrite4~], [ipoke~] or any variable speed writer:
> for each perform loop, you get N (=blocksize) signal values and
> equally many index values, so it would be logical to iterate over N
> input samples, but in contrast, it would be very complicated to
> iterate over the output samples and couple these to index values. In
> fact, it would require yet another interpolation. Approach B would
> only work fine for an object which has a fixed resampling factor,
> settable via message. And then, the question how to synchronize it
> with a [tabread4~] is still open.

If it used the same interpolator as tabread4~, you could in principle
do approach B -- you'd need a struct that held onto the last samples
of the previous block, and offset it by a sample.
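
In C (the language Pd externals are written in), such a state struct might look like this minimal sketch -- the names are hypothetical, not from any actual external:

```c
/* Hypothetical per-object state carried between perform calls:
 * the last three input samples (the interpolation tail) and the
 * last fractional index that was actually written to. */
typedef struct _fracwrite_state {
    float prev[3];      /* last three samples of the previous block */
    double last_index;  /* last fractional write index reached */
} t_fracwrite_state;
```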

So, let's say you have a blocksize of 4, the first block of incoming
signal is [-0.3, 0.4, 0.6, -0.8], and the index block is [0.2, 1.4,
3.0, 5.8]. The way this could work would be to imagine a previous
signal block of [0, 0, 0, 0]. Put the "last 0" of that block at index
0.2 and the -0.3 at index 1.4. This crosses integer index 1, so you
find out where that index sits as a fraction of the distance between
those two write indices (in this case (1 - 0.2)/(1.4 - 0.2) =
0.66666), use [0, 0, -0.3, 0.4] as the four points for interpolation
between 0 and -0.3, and write the sample at index 1 as though you were
reading from a table with those four points, 0.66666 of the way
between the 0 and the -0.3 (so far so good?).
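
As a C sketch, that step splits into two small pieces: finding the fractional position, and running the 4-point interpolation (here using the same cubic polynomial as Pd's tabread4~, as found in d_array.c; the function names are mine):

```c
#include <assert.h>  /* for the usage checks below */

/* How far integer index k sits between two successive fractional
 * write indices i0 < i1 (e.g. k = 1 between 0.2 and 1.4). */
static double frac_between(double k, double i0, double i1)
{
    return (k - i0) / (i1 - i0);
}

/* 4-point interpolation between b and c (a and d are the outer
 * points, f in [0,1]) -- the same cubic tabread4~ uses. */
static float interp4(float a, float b, float c, float d, float f)
{
    float cminusb = c - b;
    return b + f * (cminusb - 0.1666667f * (1.0f - f) *
        ((d - a - 3.0f * cminusb) * f + (d + 2.0f * a - 3.0f * b)));
}
```

With the numbers above, frac_between(1, 0.2, 1.4) gives 0.66666..., and interp4(0.0f, 0.0f, -0.3f, 0.4f, 0.66666f) is the value written at table index 1.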

Then you put 0.4 at index 3.0. Now your interpolation points are [0,
-0.3, 0.4, 0.6], to interpolate between -0.3 and 0.4. Integer index 2
occurs 0.375 of the way between those two write indices
((2 - 1.4)/(3.0 - 1.4)), so you run the interpolation function for
that fractional position and write the sample at index 2, and then you
go ahead and write the 0.4 to index 3.

Finally, you put 0.6 at index 5.8. You're interpolating between 0.4
and 0.6, and the points are [-0.3, 0.4, 0.6, -0.8]. Index 4 occurs
0.357143 of the way between the two write indices and index 5 occurs
0.714286 of the way, so you run the interpolator twice for those
fractional positions and write those two samples.

Then you save 0.4, 0.6 and -0.8 (the last three samples of the current
block of incoming signal), and 5.8 (the last written index) for the
next block. When you have the next block you'll have enough info to
interpolate between 0.6 and -0.8 from the last block and between -0.8
and the first sample of this one (these steps were actually implied
the first time around), and then you're good to go for the next four
samples.

If I haven't forgotten a step, the same principle ought to work for
any blocksize 4 or larger, and you'd need specialized policies for
blocksizes of 1 or 2.

Sorry for the length, but sometimes detailed examples can be helpful
to get things straight.

>
> It's weird how puzzling the task of fractional speed writing is,
> compared to fractional speed reading.

If you think that's puzzling, try fractional speed dating.

> Better focus on approach A
> (adding a fractionally delayed kernel into the array for each input
> sample). Approach A does not in itself impose a preferred kernel type
> or length, so different options could be offered to the user, varying
> in performance and precision aspects. Each kernel length, if it is
> fixed (and zero-phase), would imply a known delay, so the user can
> reckon with it. As I see it, calculating the resampling factor for
> normalization purposes need not be spoiled by numerical disasters, as
> each difference is found from two consecutive input index values,
> there is no autonomous cumulative effect.

Sometimes a first difference for that differentiation is a little
fraught. It's kind of the same issue as if you wanted to incorporate
antialiasing into [tabread4~] -- you need a policy for calculating the
"speed" through the table, and a first difference might not be quite
accurate enough.
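
As a toy illustration (helper names are mine, not from any actual external): a plain first difference versus a centered difference as the speed estimate. The centered version smooths the estimate at the cost of one sample of lookahead, and isn't necessarily the right policy either:

```c
/* Speed (index increment per input sample) estimated from the
 * incoming index signal idx[], at sample k. */
static double speed_first_diff(const double *idx, int k)
{
    return idx[k] - idx[k - 1];        /* needs k >= 1 */
}

static double speed_centered(const double *idx, int k)
{
    return 0.5 * (idx[k + 1] - idx[k - 1]);  /* needs lookahead */
}
```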

Matt
