[PD] Making a Realtime Convolution External

Seth Nickell seth at meatscience.net
Tue Apr 5 04:48:32 CEST 2011


>> 2) Anyone have requests for features/api? It's currently simplistic:
>>   - takes a "read FILENAME" message, loads the file, does a test
>> convolution against pink noise to normalize the gain to something sane
>
> Is this done within the main Pd audio thread?

The convolution engine has support for doing it either on the calling
thread or on a background thread. I'm thinking of defaulting to a
background thread. Does that seem like the right move?

>>
>>   - caches the last N impulse responses, as the test convolution
>> takes a little time
>>   - allows setting the cache size with a "cachesize N" message
>
> To make sure I understood this: cachesize is not the size of the first
> partition of the partitioned convolution, but the cache that tries to avoid
> audio dropouts when performing the test convolution?

The convolution engine can swap in a pre-loaded ('cached') IR in
realtime without glitching... but it means keeping 2x the impulse
response data in RAM. To keep the default API simple but useful, I'm
defaulting to caching only the last 5 impulse responses in RAM;
"cachesize N" lets you increase that number. Say in a performance you
wanted to use 30 different impulse responses and you have 2GB of
RAM... that should be no big deal.

>>
>>   - disable normalization with "normalize 0" or "normalize 1"
>
> Yes, disabling this could be a good idea! You could also add a "gain 0-1"
> message for manual control.

It's worth noting that impulse responses are usually whack without
gain normalization... like factors of hundreds to millions off a
usable signal level.

>>  Features I'm considering (let me know if they sound useful):
>>    - load from an array instead of from disk (no gain normalization?)
>
> Very good.
>>
>>    - It wouldn't be hard to enable MxN convolution if that floats
>> somebody's boat.
>
> I am sure if you come up with a convolution as efficient and flexible as
> jconv by Fons within Pd, then soon a multichannel use and hence request will
> come up fast.

I'd be interested in what flexibility means in this context; it might
give me some good ideas for features to add. Efficiency-wise, the
last time I benchmarked it was more efficient than jconv, but the
difference is offset by less graceful degradation under CPU load (I
convolve in background threads to preserve realtime in the main
thread while avoiding an irritating patent that's going to expire
soon...).

WRT Pd's audio scheduling... are Pd signal externals held to hard
realtime, or can my dsp call vary the number of cycles it takes by
100% from call to call? VST seems to do OK with this, but AudioUnits
get scheduled to run at the very last instant they possibly could. If
Pd can tolerate some variance, I can drop the threads and improve the
external's degradation under high CPU load.

Thanks for the feedback (also, is this the best list for this kind of feedback?),

-Seth
