[PD] Making a Realtime Convolution External

Hans-Christoph Steiner hans at at.or.at
Wed Apr 6 20:26:11 CEST 2011


On Apr 4, 2011, at 10:48 PM, Seth Nickell wrote:

>>> 2) Anyone have requests for features/api? It's currently simplistic:
>>>   - takes a "read FILENAME" message, loads the file, does a test
>>> convolution against pink noise to normalize the gain to something sane
>>
>> Is this done within the main Pd audio thread?
>
> The convolution engine has support for doing it either on the calling
> thread or a background thread. I'm thinking of defaulting to a
> background thread. Does that seem like the right move?
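For context, here is a minimal sketch of how such a "read FILENAME" message
reaches a Pd external. All of the names (conv~, conv_tilde_read, etc.) are
made up for illustration and are not Seth's actual code; the point is that
Pd invokes message methods from its main scheduler thread, in between DSP
ticks, which is exactly what the question above is getting at.

    #include "m_pd.h"

    static t_class *conv_tilde_class;

    typedef struct _conv_tilde {
        t_object x_obj;
        t_float x_f;      /* dummy value for the main signal inlet */
        /* IR buffers, cache, normalization gain, etc. would go here */
    } t_conv_tilde;

    /* "read FILENAME" lands here, called from Pd's main scheduler thread */
    static void conv_tilde_read(t_conv_tilde *x, t_symbol *filename)
    {
        post("conv~: loading %s", filename->s_name);
        /* load the IR, run the pink-noise test convolution,
           and store the resulting normalization gain... */
    }

    static void *conv_tilde_new(void)
    {
        t_conv_tilde *x = (t_conv_tilde *)pd_new(conv_tilde_class);
        outlet_new(&x->x_obj, &s_signal);
        return (void *)x;
    }

    void conv_tilde_setup(void)
    {
        conv_tilde_class = class_new(gensym("conv~"),
            (t_newmethod)conv_tilde_new, 0, sizeof(t_conv_tilde),
            CLASS_DEFAULT, 0);
        class_addmethod(conv_tilde_class, (t_method)conv_tilde_read,
            gensym("read"), A_SYMBOL, 0);
        CLASS_MAINSIGNALIN(conv_tilde_class, t_conv_tilde, x_f);
    }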

Pd has its own scheduling system, and it is best to stick to it as long as
you can so that operation stays deterministic.  For convolution, I can't
see a reason to use a thread.  It adds complexity and more code to run,
and if the CPU is overtaxed by realtime convolution processing, you are
going to get an interruption in the audio regardless of whether the
processing is in a thread or not.
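To make that concrete, here is how the DSP side of the same hypothetical
conv~ sketch would plug into Pd's scheduler (again an illustration, not
the actual external): the perform routine is called once per signal block,
64 samples by default, from the same single thread as everything else, so
whatever work it does is paid for inside that block's time budget.

    /* continuing the hypothetical conv~ sketch: called once per
       signal block by Pd's scheduler */
    static t_int *conv_tilde_perform(t_int *w)
    {
        t_conv_tilde *x = (t_conv_tilde *)(w[1]);
        t_sample *in    = (t_sample *)(w[2]);
        t_sample *out   = (t_sample *)(w[3]);
        int n           = (int)(w[4]);

        (void)x;  /* the real code would use the engine state in x */
        while (n--)   /* placeholder: the partitioned convolution
                         would happen here */
            *out++ = *in++;
        return (w + 5);
    }

    static void conv_tilde_dsp(t_conv_tilde *x, t_signal **sp)
    {
        dsp_add(conv_tilde_perform, 4,
            x, sp[0]->s_vec, sp[1]->s_vec, sp[0]->s_n);
    }

    /* ...plus, in conv_tilde_setup():
       class_addmethod(conv_tilde_class, (t_method)conv_tilde_dsp,
           gensym("dsp"), A_CANT, 0);                              */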

.hc


>>>   - caches the last N impulse responses, as the test convolution
>>> takes a little time
>>>   - allows setting the cache size with a "cachesize N" message
>>
>> To make sure I understood this: cachesize is not the size of the first
>> partition of the partitioned convolution, but the cache that tries to
>> avoid audio dropouts when performing the test convolution?
>
> The convolution engine can swap in a pre-loaded ('cached') IR in
> realtime without glitching... but it means keeping 2x the impulse
> response data in RAM. To keep the default API simple but useful, I'm
> defaulting to caching only the last 5 impulse responses in RAM.
> "cachesize N" lets you increase that number... let's say in a
> performance you wanted to use 30 different impulse responses and you
> have 2 GB of RAM... should be no big deal.
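Rough numbers, with made-up but plausible assumptions (10-second stereo
IRs at 48 kHz, stored as 32-bit floats):

    10 s * 48000 samples/s * 2 ch * 4 bytes ≈ 3.8 MB per IR
    * 2 for the swappable copy              ≈ 7.7 MB per cached IR
    * 30 cached IRs                         ≈ 230 MB total

So 30 cached IRs really is no big deal next to 2 GB of RAM, even before
counting whatever extra frequency-domain copies the engine keeps.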
>
>>>
>>>   - disable or re-enable normalization with "normalize 0" or "normalize 1"
>>
>> Yes, disabling this could be a good idea! You could also add a
>> "gain 0-1" message for manual control.
>
> It's worth noting that impulse responses are usually whack without gain
> normalization... like factors of hundreds to millions off a usable
> signal.
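One plausible normalization scheme, sketched here as an assumption rather
than a description of Seth's engine: run the pink-noise test signal
through the IR and scale so the output RMS matches the input RMS. A
manual "gain" message could then override or multiply the computed factor.

    #include <math.h>

    /* RMS of a buffer of n samples */
    static float rms(const float *buf, int n)
    {
        double sum = 0;
        for (int i = 0; i < n; i++)
            sum += (double)buf[i] * buf[i];
        return (float)sqrt(sum / n);
    }

    /* after the test convolution:
         norm_gain = rms(pink_in, n) / rms(conv_out, n);
       then either bake norm_gain into the stored IR once, or apply it
       to the wet output every block (the natural place for a user-set
       "gain" value to multiply in as well) */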
>
>>>  Features I'm considering (let me know if they sound useful):
>>>    - load from an array instead of from disk (no gain normalization?)
>>
>> Very good.
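Loading from an array would go through Pd's garray API. A sketch using the
same hypothetical object, where a "set ARRAYNAME" message copies an IR out
of a named table:

    static void conv_tilde_set(t_conv_tilde *x, t_symbol *arrayname)
    {
        t_garray *a = (t_garray *)pd_findbyclass(arrayname, garray_class);
        int npoints;
        t_word *vec;

        if (!a)
            pd_error(x, "conv~: %s: no such array", arrayname->s_name);
        else if (!garray_getfloatwords(a, &npoints, &vec))
            pd_error(x, "conv~: %s: bad template", arrayname->s_name);
        else {
            /* copy npoints samples from vec[i].w_float into the
               engine's IR buffer; skipping the pink-noise pass here
               matches the "no gain normalization?" idea above */
        }
    }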
>>>
>>>    - It wouldn't be hard to enable MxN convolution if that floats
>>> somebody's boat.
>>
>> I am sure that if you come up with a convolution within Pd as efficient
>> and flexible as Fons's jconv, then a multichannel use case, and hence a
>> request for it, will come up fast.
>
> I'd be interested in what flexibility means in this context; it might
> give me some good ideas for features to add. Efficiency-wise, it was
> more efficient than jconv the last time I benchmarked, but the
> difference is offset by less graceful degradation under CPU load (I
> convolve in background threads to preserve realtime in the main thread
> while avoiding an irritating patent that's going to expire soon...).
>
> WRT Pd's audio scheduling... are Pd signal externals held to hard
> realtime, or can my dsp call vary the number of cycles it takes by 100%
> from call to call? VST seems to do ok with this, but AudioUnits get
> scheduled to run at the very last instant they possibly could. If Pd
> can allow some variance, I can drop the threads and improve how
> gracefully the external degrades under high CPU load.
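For scale: Pd's default scheduler runs every perform routine once per
64-sample block from a single thread, so the only hard deadline is the
block period:

    64 samples / 44100 Hz ≈ 1.45 ms per DSP tick

A perform call whose cost varies by 100% from call to call is fine as long
as its worst case, plus everything else in the patch, still fits inside
that 1.45 ms; there is no per-object deadline, and Pd's audio buffering
(the -audiobuf setting) absorbs some jitter on top of that.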
>
> Thanks for the feedback (also, is this the best list for this kind of
> feedback?),
>
> -Seth
>



----------------------------------------------------------------------------

As we enjoy great advantages from inventions of others, we should be  
glad of an opportunity to serve others by any invention of ours; and  
this we should do freely and generously.         - Benjamin Franklin




