[PD] Delay Bug still buggy in 0.47-0
christof.ressi at gmx.at
Mon May 9 23:56:34 CEST 2016
What's called "maximum delay time" is actually the buffer size. If both the reading and writing objects work at a block size of 64, to get a delay of zero you still need a buffer of 64 samples (let's assume the delay line is sorted: one object writes 64 samples into the ringbuffer and the other object reads these 64 samples - no delay happening yet). To get a maximum delay of 64 samples, you have to increase the size of the ringbuffer by another 64 samples, so you end up with 128 samples.
The rule is: maximum delay = buffer size - block size.
To hide this from the user, Pd internally adds 64 samples to the delay time you specify for [delwrite~], so the user will think that buffer size = max. delay time. Here's the relevant part of the code:
static void sigdelwrite_updatesr (t_sigdelwrite *x, t_float sr) /* added by Mathieu Bouchard */
{
    int nsamps = x->x_deltime * sr * (t_float)(0.001f);
    if (nsamps < 1) nsamps = 1;
    nsamps += ((- nsamps) & (SAMPBLK - 1));
    nsamps += DEFDELVS;
    ...
}
where DEFDELVS is defined as 64.
Of course this only works for block sizes of 64 samples. In your case, where the block size is larger, the user has to know about this detail and compensate by making the delay line correspondingly larger.
There are at least two solutions to get around this issue:
a) make it clear that the argument for [delwrite~] is the *buffer size* and that max. delay = buffer size - block size. (Probably keep the additional 64 samples in the allocation, as it guarantees backward compatibility and doesn't cause any harm.)
b) let [delwrite~] check the block size of each [delread~] and [vd~] associated with it. Not so easy, I guess, since the block size of any reading object can change dynamically - should that always trigger a reallocation of the delay line? Probably not...
To me, a) seems the sounder solution. Set the buffer size once and for all and don't care about the reading objects. If a user deals with an advanced topic such as delay lines at different block sizes, he/she should be able to figure out how to choose the right size for [delwrite~], provided the help file gives enough information. I personally like it when everything is as transparent as possible.
OTOH, [delwrite~] already somehow cares for overlap and oversampling (as you noted). But I don't know if it's only when DSP is turned on. Haven't tested that yet.
After all it's basically a design question, I won't even try to answer :-).
I hope I could at least give you an explanation for your observations.
Sent: Monday, 9 May 2016 at 18:57
From: "Alexandre Torres Porres" <porres at gmail.com>
To: "Miller Puckette" <msp at ucsd.edu>
Cc: "pd-lista puredata" <pd-list at iem.at>
Subject: [PD] Delay Bug still buggy in 0.47-0
2016-05-03 23:06 GMT-03:00 Miller Puckette <msp at ucsd.edu>:
> OK... I think this is now fixed (just hoping it doesn't make for new
Howdy Miller, I'm testing the new 0.47 version and I see that the bug is "partially" fixed. I had sent 2 patches describing the bugs and how we couldn't reach the actual maximum specified delay length. One of these patches I had sent, the one with [delread~] and a block size of 64 samples is indeed ok now, but the other one, with vd~ and a block of 2048 and an overlap of 4 still behaves badly in the exact same way.
So apparently there's more to it when it comes to different block sizes. And I have some patches where I use delay for spectral processing, so this ruins everything as I'm using block sizes longer than 64...
We have actually discussed these issues before on this list under different (but related) threads. I was browsing through the discussion history to see what else I could find. I found this patch under a thread started by Christof Ressi ("[PD] [vd~] VS [delread~] - different delay limit!"). The patch was showing how both objects had different maximum limits. This is in fact fixed in the new release, but I was working more on this patch and added a few more things to study how things are working and why I was still getting the same problems.
It turns out that the delay size seems to be always hard-coded to a maximum block size of 64. If you have block sizes smaller than 64, it is fine: the maximum delay line is indeed the amount specified. But if you have larger block sizes, the maximum delay line is actually less than specified.
Let me give you an example with a sample rate of 44.1 kHz. A delay line of 100 ms (4410 samples) and a block size of 4096 (92.8798 ms at 44.1 kHz) has a maximum delay of around 378 samples (I'm trying to calculate this on the patch, but I get a slightly different value).
The 378 samples comes from the whole delay line (4410 samples) minus the block size (4096) plus a block size of 64 samples (4410 - 4096 + 64 = 378) - so, that's about 8.6 ms instead of 100 ms!
An overlap of 4 does upsample the delay line by 4x, so the length goes from 100 ms to 400 ms (and is 17640 samples long instead of the original 4410). A block size of 4096 will give us a max delay length of about 308.5 ms, so again the formula is (17640 - 4096 + 64 = 13608 samples [308.5 ms]).
You can check this with the attached patch.
So, problem is to make the delay line aware of the running block size - and I have no idea of the challenges involved.