[PD] s~ & r~ with block size other than 64?

Matt Barber brbrofsvl at gmail.com
Sun Feb 28 01:52:42 CET 2016


Can you point to an example of this? I don't think it would work for
partitioned convolutions, e.g. for reverb, where we need the linear
convolution.
I was always asking myself why FFT convolution in Pd is usually done without
zero-padding and with a squared Hann window. Theoretically, omitting
zero-padding leads to circular convolution, but the squared Hann window
seems to prevent audible artifacts. However, having less delay using your
[tabsend~] trick could be a reason to prefer 'classical' overlap-add. What's
your take on this, which method do you actually prefer? And would there be a
point in using a Hann window after zero-padding? Theoretically it shouldn't
be necessary.
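[To make the circular-vs-linear distinction concrete, here is a pure-Python sketch — illustrative only, not Pd code, and the toy sequences are made up. It shows how the tail of an unpadded "FFT-size" convolution wraps around (time-aliases), and how zero-padding to at least len(x)+len(h)-1 recovers the linear result:]

```python
def linear_conv(x, h):
    """Direct linear convolution: output length len(x) + len(h) - 1."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def circular_conv(x, h):
    """Circular convolution over a common length n (what an unpadded
    n-point FFT multiply computes): tail samples wrap around."""
    n = max(len(x), len(h))
    x = x + [0.0] * (n - len(x))
    h = h + [0.0] * (n - len(h))
    return [sum(x[j] * h[(i - j) % n] for j in range(n)) for i in range(n)]

x = [1.0, 2.0, 3.0, 4.0]     # 4-sample "input block"
h = [1.0, 0.0, 0.5]          # 3-sample "impulse response"

lin = linear_conv(x, h)      # 6 samples: the true result
circ4 = circular_conv(x, h)  # length 4: the 2-sample tail aliases back in
# Zero-padding both to length 8 (>= 4 + 3 - 1) makes the circular
# result equal the linear one, plus trailing zeros:
circ8 = circular_conv(x + [0.0] * 4, h + [0.0] * 5)
print(lin)        # [1.0, 2.0, 3.5, 5.0, 1.5, 2.0]
print(circ4)      # [2.5, 4.0, 3.5, 5.0] -- first samples corrupted
print(circ8[:6])  # same as lin
```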


Gesendet: Samstag, 27. Februar 2016 um 14:42 Uhr
Von: "Matt Barber" <brbrofsvl at gmail.com>
An: "Christof Ressi" <christof.ressi at gmx.at>
Cc: "Alexandre Torres Porres" <porres at gmail.com>, "i go bananas" <
hard.off at gmail.com>, "pd-list at lists.iem.at" <pd-list at lists.iem.at>
Betreff: Re: Re: Re: [PD] s~ & r~ with block size other than 64?

> It would allow you to do things like partitioned convolution without any
delay, since the convolution of two 64-sample windows fills a 128-sample
window.

sounds more like the classic overlap-add method. Can you explain more?




OK, forget partitioning and imagine that your impulse response is 50
samples. You want to convolve it with whatever is coming in from [adc~],
which is blocked at 64. The problem is that the convolution of a 64-sample
input and a 50-sample IR is 64+50-1=113 samples long; it has to be done
with a 128-pt FFT with zero-padded inputs. This means you'll also need an
overlap of 2, since you'll need a 128-pt FFT for every 64 samples of input.
Using [inlet~] makes the zero-padding tricky, and you'll also get a block
delay. Using [tabsend~] and [tabreceive~] zero-pads for you, and also lets
you do it with no block delay. The logic for partitioned convolution is the
same; it just requires more windows and extra delay, and some tricks for
efficiency: pre-calculate the IR FFTs, delay and sum in the frequency
domain so you only need one IFFT, use differently sized windows to take
advantage of FFT efficiency for larger windows, etc.
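[A plain overlap-add sketch of that arithmetic, in pure Python rather than Pd, using direct convolution in place of the FFTs to keep it short. The block and IR lengths follow the example above; the toy signal and IR values are made up:]

```python
def linear_conv(x, h):
    """Direct linear convolution: output length len(x) + len(h) - 1."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

BLOCK = 64
ir = [0.0] * 50
ir[0], ir[49] = 1.0, 0.5                     # toy 50-sample impulse response

signal = [float(i % 7) for i in range(256)]  # 4 blocks of toy input

# Overlap-add: convolve each 64-sample block (each result is
# 64 + 50 - 1 = 113 samples, so it spills 49 samples into the next
# block's territory) and sum the overlapping tails.
out = [0.0] * (len(signal) + len(ir) - 1)
for start in range(0, len(signal), BLOCK):
    block = signal[start:start + BLOCK]
    y = linear_conv(block, ir)               # 113 samples per block
    for k, v in enumerate(y):
        out[start + k] += v                  # the overlapping tails add up

# Identical to convolving the whole signal in one go:
assert out == linear_conv(signal, ir)
```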




Gesendet: Samstag, 27. Februar 2016 um 06:01 Uhr
Von: "Matt Barber" <brbrofsvl at gmail.com>
An: "Christof Ressi" <christof.ressi at gmx.at>
Cc: "Alexandre Torres Porres" <porres at gmail.com>, "i go bananas" <hard.off at gmail.com>, "pd-list at lists.iem.at" <pd-list at lists.iem.at>
Betreff: Re: Re: [PD] s~ & r~ with block size other than 64?

You have to be careful reblocking with [tabsend~] and [tabreceive~] though,
because of what happens with blocking and block delay. Hopefully this isn't
too obvious to explain.

You know the regular situation: suppose you write into the [inlet~] of a
subpatch that is blocked at 128 from a parent blocked at 64, and then back
out an [outlet~] into the parent patch. When you start dsp, for the first
parent block the first 64 samples go in, but nothing comes out because the
subpatch needs to collect 128 samples before it sends anything out. On the
second parent block, 64 more samples go in, the subpatch can do its
calculations on its 128-sample vector(s), and start output immediately,
beginning with the first block of input from the parent patch. So
everything is delayed by one block in this case, or in general by N_s - N_p
where N_s is the subpatch's block size and N_p is the parent's.
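[A toy FIFO model of that delay — pure Python, not Pd's actual reblocking code. The subpatch is modeled as an accumulator that only runs once it has a full 128-sample vector, and the parent reads 64 samples back from the subpatch's output queue each tick:]

```python
PARENT, SUB = 64, 128

in_buf = []     # samples accumulated toward a full 128-sample subpatch vector
out_queue = []  # processed samples waiting to be read back 64 at a time
output = []

signal = list(range(256))            # 4 parent blocks of input

for start in range(0, len(signal), PARENT):
    in_buf.extend(signal[start:start + PARENT])
    if len(in_buf) == SUB:           # subpatch DSP runs every 2nd parent tick
        out_queue.extend(in_buf)     # "processing" is just identity here
        in_buf = []
    # the parent reads one block back; silence before anything is ready
    chunk = out_queue[:PARENT] if out_queue else [0] * PARENT
    out_queue = out_queue[PARENT:]
    output.extend(chunk)

# Delay = SUB - PARENT = 64 samples: input sample 0 appears at output index 64.
print(output[:4])     # [0, 0, 0, 0]
print(output[64:68])  # [0, 1, 2, 3]
```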

Now, suppose instead you have an array of size 128 called "depot." From the
block-64 parent you [tabsend~] a signal to depot, and you make sure your
signal is calculated prior to anything in the subpatch using the [inlet~]
trick. [tabsend~ depot] will write the first 64 samples of depot every
block, leaving the last 64 untouched. Then inside the block-128 subpatch
you [tabreceive~ depot] and send it out to the parent through an [outlet~].
What will happen? When you start dsp, during the parent's first block
[tabsend~ depot] writes the first block of samples to depot. Nothing
happens in the subpatch because 128 samples haven't passed yet. Then on the
parent's second block, [tabsend~ depot] writes the second block of samples
to the first 64 samples of depot. 128 samples have passed, so the subpatch
can do its thing. [tabreceive~ depot] receives the whole array, starting
with the 64 samples just written in by the second parent block, so on
output, those 64 samples come out with no block delay. However, since the
first parent block's samples were overwritten in depot by the second
block's samples, every other block from the parent will be lost in the
subpatch. But if you set the subpatch to overlap by 2 (or generally
N_s/N_p), the [tabsend~]/[tabreceive~] pair actually allows you to reblock
with no block delay and no lost samples, at the cost of the CPU penalty and
the general hassle of dealing with overlapping. It would allow you to do things
like partitioned convolution without any delay, since the convolution of
two 64-sample windows fills a 128-sample window.
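[A minimal model of the bookkeeping above — pure Python, only tracking which parent blocks are visible in the "depot" array when the subpatch's DSP tick fires (every 128 samples without overlap, every 64 with overlap 2); it doesn't model signal values or the output path:]

```python
PARENT, SUB = 64, 128

def run(overlap):
    """Return which parent block numbers are visible in the depot array
    at each subpatch DSP tick."""
    hop = SUB // overlap                # subpatch tick interval in samples
    depot = [None] * SUB                # last 64 slots are never written here
    seen = []
    for tick in range(8):               # 8 parent blocks of input
        # [tabsend~ depot] overwrites the first 64 slots every parent block
        depot[:PARENT] = [tick] * PARENT
        if (tick + 1) * PARENT % hop == 0:
            # [tabreceive~ depot] reads the whole array on a subpatch tick
            seen.append(sorted(set(depot) - {None}))
    return seen

# Without overlap, the subpatch ticks every 2nd parent block, and the first
# 64 slots were overwritten in between: even-numbered blocks are never seen.
print(run(overlap=1))  # [[1], [3], [5], [7]]
# With overlap 2 the subpatch ticks every parent block: nothing is lost.
print(run(overlap=2))  # [[0], [1], [2], [3], [4], [5], [6], [7]]
```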

So, knowing this, what do you think would happen if you put the [tabsend~]
in the subpatch and the [tabreceive~] in the parent and don't overlap in
the subpatch? What if you do overlap in the subpatch?

NB - overlapping does not affect the block delay of normal
[inlet~]/[outlet~] reblocking.

I now realize I should have just built a patch to illustrate all this. Next
time. :)

Matt

On Fri, Feb 26, 2016 at 1:49 PM, Christof Ressi <christof.ressi at gmx.at> wrote:

Thanks Matt for digging in!

> In principle it wouldn't be too hard to let them be any block size so
long as they're the same size,

What puzzles me is that I *can* actually send audio from one subpatch and
receive it in different subpatches for blocksizes greater (but not less)
than 64, but only if all the blocksizes match and - this is really weird -
there's no more than one [r~] per subpatch. I guess you'd call that an
"unsupported feature" :-p. I don't use it, however, and I wouldn't
recommend that other people use it. So let's keep it a secret.

After all, we have [tabsend~] and [tabreceive~]. I was just curious about the
technical details.

Gesendet: Freitag, 26. Februar 2016 um 17:48 Uhr
Von: "Matt Barber" <brbrofsvl at gmail.com>
An: "Christof Ressi" <christof.ressi at gmx.at>
Cc: "Alexandre Torres Porres" <porres at gmail.com>, "i go bananas" <hard.off at gmail.com>, "pd-list at lists.iem.at" <pd-list at lists.iem.at>
Betreff: Re: [PD] s~ & r~ with block size other than 64?

Here's the short story:

[s~] and [r~] are pretty straightforward: [s~] fills a block buffer every
sample, and any [r~] with the same name can find that buffer and read from
it. In principle it wouldn't be too hard to let them be any block size so
long as they're the same size, but there would be some tricky things with
overlap and resampling. [catch~] reads from a one-block buffer and zeroes
it out as it goes, and [throw~] sums into its catcher's buffer.
[delwrite~]/[delread~] work with any block size because the buffer size
isn't related to any block size.
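[A rough sketch of those buffer semantics — pure Python with invented class names, not Pd's actual C code. It contrasts the [s~]/[r~] overwrite-and-read buffer with the [throw~]/[catch~] sum-and-zero buffer:]

```python
BLOCK = 64

class SendBuf:
    """[s~]/[r~] model: one shared block buffer; the sender overwrites it
    each DSP tick and any number of receivers read it."""
    def __init__(self):
        self.buf = [0.0] * BLOCK
    def send(self, block):
        self.buf[:] = block            # overwrite, nothing accumulates
    def receive(self):
        return list(self.buf)

class CatchBuf:
    """[throw~]/[catch~] model: throwers *sum* into the buffer; the
    catcher reads it and zeroes it out for the next tick."""
    def __init__(self):
        self.buf = [0.0] * BLOCK
    def throw(self, block):
        for i, v in enumerate(block):
            self.buf[i] += v           # multiple throwers mix together
    def catch(self):
        out, self.buf = self.buf, [0.0] * BLOCK
        return out

s = SendBuf()
s.send([1.0] * BLOCK)
s.send([2.0] * BLOCK)       # overwrites: receivers see only the last send
assert s.receive() == [2.0] * BLOCK

c = CatchBuf()
c.throw([1.0] * BLOCK)
c.throw([2.0] * BLOCK)      # sums: the catcher sees 3.0 everywhere
assert c.catch() == [3.0] * BLOCK
assert c.catch() == [0.0] * BLOCK     # zeroed after each read
```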

On Fri, Feb 26, 2016 at 11:23 AM, Christof Ressi <christof.ressi at gmx.at> wrote:

I think he rather meant that [s~] and [r~] don't need to check the
vector size for each DSP cycle. The error message you're talking about is
only thrown after creating [s~] or [r~] objects in a subpatch with
blocksize != 64 AND every time you set a "forbidden" blocksize dynamically
with a message to [block~], so it *could* be that the check is only
performed for such events and not for each DSP cycle. Although getting an
error message for dynamically changing the blocksize rather implies a check
for each DSP cycle... But I'm only making assumptions. Apart from possible
performance optimizations, I can't see any reason for this restriction either!

BTW: It's not like a pair of [s~] and [r~] won't generally work for
blocksizes other than 64. It basically works as expected when used as a
"wireless audio connection" (at least in the situations I tried), but
things get screwed up once you try feedback or if the blocksizes don't
match. Again, it would be really cool if someone could clarify what's
really going on under the hood (e.g. how [s~] and [r~] differ from
[delwrite~] and [delread~]) or point to an already existing thread in the
mailing list archive.



Gesendet: Freitag, 26. Februar 2016 um 07:08 Uhr
Von: "Alexandre Torres Porres" <porres at gmail.com>
An: "i go bananas" <hard.off at gmail.com>
Cc: "pd-list at lists.iem.at" <pd-list at lists.iem.at>
Betreff: Re: [PD] s~ & r~ with block size other than 64?

really? I can't see how it'd be that much more efficient, and it kind of
does check it already, hence the errors

cheers

2016-02-26 3:07 GMT-03:00 i go bananas <hard.off at gmail.com>:

I would assume it's also slightly more efficient, since Pd doesn't have to
check the vector size when processing the [s~] / [r~] functions.
_______________________________________________
Pd-list at lists.iem.at mailing list
UNSUBSCRIBE and account-management -> http://lists.puredata.info/listinfo/pd-list


