[PD-dev] implementing pools of clocks?

Christof Ressi info at christofressi.com
Sun Oct 25 02:53:11 CEST 2020


But if you're still worried, creating a pool of objects of the same size 
is actually quite easy: just use a free list 
(https://en.wikipedia.org/wiki/Free_list).
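
Something like this minimal sketch (the struct fields and the cbinfo_* 
names are made up, adapt them to whatever your info struct holds):

#include <stdlib.h>

typedef struct _cb_info {
    struct _cb_info *next;   /* free-list link, only used while pooled */
    void *owner;             /* reference back to the external */
    void *handle;            /* handle into the Scheme registry */
} t_cb_info;

static t_cb_info *cbinfo_freelist = NULL;

/* pre-allocate n structs up front (e.g. in the object's init routine) */
static void cbinfo_pool_init(int n)
{
    for (int i = 0; i < n; i++) {
        t_cb_info *x = (t_cb_info *)malloc(sizeof(t_cb_info));
        x->next = cbinfo_freelist;
        cbinfo_freelist = x;
    }
}

/* O(1) "allocation": pop from the free list, fall back to malloc()
   only if the pool has run dry */
static t_cb_info *cbinfo_get(void)
{
    t_cb_info *x = cbinfo_freelist;
    if (x)
        cbinfo_freelist = x->next;
    else
        x = (t_cb_info *)malloc(sizeof(t_cb_info));
    return x;
}

/* O(1) "free": push back onto the free list instead of calling free() */
static void cbinfo_put(t_cb_info *x)
{
    x->next = cbinfo_freelist;
    cbinfo_freelist = x;
}

Once the pool is filled, getting and returning a struct never touches 
the system allocator; only an empty pool falls back to malloc().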

Christof

On 25.10.2020 02:45, Christof Ressi wrote:
>
>> A) Am I right, both about being bad, and about clock pre-allocation 
>> and pooling being a decent solution?
>> B) Does anyone have tips on how one should implement and use said 
>> clock pool?
> ad A), basically yes, but in Pd you can get away with it. Pd's 
> scheduler doesn't run in the actual audio callback (unless you run Pd 
> in "callback" mode) and is more tolerant of operations that are not 
> exactly realtime friendly (e.g. memory allocation, file I/O, firing 
> lots of messages, etc.). The audio callback and the scheduler thread 
> exchange audio samples via a lock-free ringbuffer. The "delay" 
> parameter actually sets the size of this ringbuffer, and a larger 
> size leaves more headroom for CPU spikes.
>
> In practice, allocating a small struct is pretty fast even with the 
> standard memory allocator, so in the case of Pd it's nothing to worry 
> about. In Pd land, external authors don't really care too much about 
> realtime safety, simply because Pd itself doesn't either.
>
> ---
>
> Now, in SuperCollider things are different. Scsynth and Supernova are 
> quite strict about realtime safety because DSP runs in the audio 
> callback. In fact, they use a special realtime allocator in case a 
> plugin needs to allocate memory in the audio thread. SuperCollider 
> also has a separate non-realtime thread where you would execute 
> asynchronous commands, like loading a soundfile into a buffer.
>
> Finally, all sequencing and scheduling runs in a different program 
> (sclang). Sclang sends OSC bundles to scsynth with timestamps in the 
> near future. Conceptually, this is a bit similar to Pd's ringbuffer 
> scheduler, with the difference that DSP itself never blocks. If sclang 
> blocks, the OSC messages simply arrive late at the server.
>
> Christof
>
> On 25.10.2020 02:10, Iain Duncan wrote:
>> Hi folks, I'm working on an external for Max and Pd embedding the S7 
>> Scheme interpreter. It's mostly intended to do things at the event 
>> level (algorithmic composition, etc.), so I have been somewhat lazy 
>> about real-time issues so far. But I'd like to make sure it's as 
>> robust as it can be, and can be used for as much as possible. Right 
>> now, I'm pretty sure I'm being a bad real-time coder. When the user 
>> wants to delay a function call, i.e. (delay 100 foo-fun), I'm doing 
>> the following:
>>
>> - callable foo-fun gets registered in a Scheme hashtable with a 
>> gensym'd unique handle
>> - the C function gets called with the handle
>> - the C code makes a clock, storing it in a hashtable (in C) by the 
>> handle, and passing it a struct (I call it the "clock callback info 
>> struct") with the references it needs for its callback
>> - when the clock callback fires, it gets passed a void pointer to the 
>> clock-callback-info-struct, uses it to get the callback handle and 
>> the reference to the external (because the callback only gets one 
>> arg), and calls back into Scheme with said handle
>> - Scheme gets the callback out of its registry and executes the 
>> stashed function (a rough sketch of this flow follows the list)
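>>
>> A minimal sketch of that flow against Pd's clock API (clock_new / 
>> clock_delay / clock_free); the names t_s7ext, t_cb_info, delay_tick 
>> and schedule_delay are hypothetical placeholders:
>>
>> #include "m_pd.h"
>>
>> typedef struct _s7ext t_s7ext;   /* the external's struct, elided */
>>
>> typedef struct _cb_info {        /* the "clock callback info struct" */
>>     t_s7ext *owner;              /* reference back to the external */
>>     t_symbol *handle;            /* handle into the Scheme registry */
>>     t_clock *clock;
>> } t_cb_info;
>>
>> /* fires later: the clock hands back the pointer given to clock_new() */
>> static void delay_tick(t_cb_info *info)
>> {
>>     /* ... call back into Scheme with info->owner and info->handle,
>>        which looks up and runs the stashed function ... */
>>     clock_free(info->clock);
>>     freebytes(info, sizeof(t_cb_info));  /* the free I'd like to avoid */
>> }
>>
>> /* reached from Scheme for e.g. (delay 100 foo-fun) */
>> static void schedule_delay(t_s7ext *x, t_symbol *handle, double ms)
>> {
>>     t_cb_info *info = (t_cb_info *)getbytes(sizeof(t_cb_info));  /* the alloc */
>>     info->owner = x;
>>     info->handle = handle;
>>     info->clock = clock_new(info, (t_method)delay_tick);
>>     /* also store info in the C hashtable by handle here */
>>     clock_delay(info->clock, ms);
>> }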
>>
>> This is working well, but... I am both allocating and deallocating 
>> memory in those functions: for the clock, and for the info struct I 
>> use to pass around the reference to the external and the handle. 
>> Given that I want to treat this code as high priority and have it 
>> execute with timing as accurate as possible, I assume I should not 
>> be allocating and freeing in those functions, because I could get 
>> blocked on the memory calls, correct? I think I should probably have 
>> a pre-allocated pool of clocks and their associated info structs, so 
>> that when a delay call comes in we take one from the pool and only 
>> do memory management if the pool is empty (and allow the user to 
>> configure the pool size to some reasonable value). I'm thinking RAM 
>> is cheap, clocks are small, people aren't likely to have more than, 
>> say, 1000 delay functions running at once, and the pool can be 
>> allocated from the init routine.
>>
>> My questions:
>> A) Am I right, both about being bad, and about clock pre-allocation 
>> and pooling being a decent solution?
>> B) Does anyone have tips on how one should implement and use said 
>> clock pool?
>>
>> I suppose I should also be ensuring that the Scheme hashtable doesn't 
>> do any unplanned allocation either, but I can bug folks on the S7 
>> mailing list for that one...
>>
>> Thanks!
>> iain
>>