[PD-dev] implementing pools of clocks?

Christof Ressi info at christofressi.com
Sun Oct 25 16:30:50 CET 2020

Hi, Pd doesn't have a notion of global transport time; it only knows 
about logical time. People can build (and have built) their own transport 
abstractions on top of that.

For further inspiration you could look into SuperCollider's TempoClock 
class. The idea is that each TempoClock can have its own logical time 
and tempo. Changing the time or tempo of a TempoClock affects all 
Routines that are scheduled on this clock. You can, of course, have more 
than one TempoClock at the same time.
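The beats-to-seconds mapping behind that idea can be sketched in a few lines of C++. This is only an illustration of the concept, not SuperCollider's actual implementation; all names here are made up:

```cpp
#include <cassert>

// Minimal sketch of the TempoClock idea: each clock keeps its own tempo
// and an anchor point (a beat value paired with a seconds value), so
// beats<->seconds conversion stays consistent across tempo changes.
struct TempoClock {
    double tempo = 1.0;       // beats per second
    double beatAtAnchor = 0;  // logical beat at the anchor
    double timeAtAnchor = 0;  // seconds at the anchor

    double beatsToSecs(double beats) const {
        return timeAtAnchor + (beats - beatAtAnchor) / tempo;
    }
    double secsToBeats(double secs) const {
        return beatAtAnchor + (secs - timeAtAnchor) * tempo;
    }
    // Change tempo at a given moment without jumping the current beat:
    // re-anchor at "now" first, then swap the tempo.
    void setTempoAtTime(double newTempo, double secsNow) {
        beatAtAnchor = secsToBeats(secsNow);
        timeAtAnchor = secsNow;
        tempo = newTempo;
    }
};
```

Because each clock carries its own anchor and tempo, several of them can run side by side, each with an independent musical timeline over the same underlying logical time.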


On 25.10.2020 16:16, Iain Duncan wrote:
> Thanks Christof, that is helpful again, and also encouraging as it 
> describes pretty well what I've done so far in the Max version. :-)
> I've enabled both one-shot clocks (storing them in a hashtable owned 
> by the external) and periodic timers. The one-shot clocks exist in 
> both a transport-aware and a transport-ignorant form for working with 
> and quantizing off the Max global transport, and there are periodic 
> timers for both too. (The transport-aware timer stops when the transport 
> is stopped; the other is just a general X ms timer). I have also ensured 
> the callback handle gets passed back so that any timer or clock can be 
> cancelled from Scheme user-space. (is there such a thing as a global 
> transport in PD?)
> I was actually planning on something like you described in Scheme 
> space: a user defined scheduler running off the timers. I will look 
> into priority queues. I had one thought, which I have not seen much, 
> and was part of the reason I was asking on here about schedulers. I 
> would like to ensure the user can run multiple transports at once, and 
> hop around in time without glitches. I was thinking that instead of 
> using just a priority queue, I would use something like a two-stage 
> structure, perhaps with a hashmap or some other fast-read-anywhere 
> structure with entries representing a time period, and holding 
> priority queues for each period. This would be to enable the system to 
> seek instantly to the bar (say) and iterate through the queue/list for 
> that bar. Wondering if anyone has used or seen this type of pattern or 
> has suggestions? Basically I want to make sure random access in time 
> will work ok even if the number of events in the schedule is very 
> high, thus allowing us to blur the lines between a scheduler and a 
> full blown sequencer engine. Thoughts, suggestions, and warnings are 
> all welcome.
> iain
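The two-stage structure described above can be sketched in C++ with nothing more than nested ordered maps: an outer map keyed by bar number, each bar holding its own time-ordered queue. Seeking to a bar is then a single lookup, regardless of how many events live in other bars. (All names here are illustrative, not an existing API.)

```cpp
#include <cassert>
#include <functional>
#include <map>

using Event = std::function<void()>;

// Sketch of the two-stage idea: bar index -> (time within bar -> events).
// Random access in time means jumping straight to a bar's queue instead
// of scanning one global priority queue from the start.
struct BarScheduler {
    std::map<int, std::multimap<double, Event>> bars;

    void schedule(int bar, double timeInBar, Event ev) {
        bars[bar].emplace(timeInBar, std::move(ev));
    }
    // Seek directly to a bar and fire its events in time order.
    // Returns how many events fired.
    int playBar(int bar) {
        auto it = bars.find(bar);
        if (it == bars.end()) return 0;
        int fired = 0;
        for (auto& [t, ev] : it->second) { ev(); ++fired; }
        return fired;
    }
};
```

Note that a single `std::map` keyed on absolute time already gives O(log n) seek via `lower_bound`, so the two-stage variant mainly buys locality per bar and cheap wholesale operations (clearing or muting a bar at once).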
> On Sun, Oct 25, 2020 at 4:21 AM Christof Ressi <info at christofressi.com> wrote:
>     Actually, there is no need to use a clock for every scheduled LISP
>     function. You can also maintain a separate scheduler, which is
>     just a priority queue for callback functions. In C++, you could
>     use a std::map<double, callback_type>. "double" is the desired
>     (future) system time, which you can get with "clock_getsystimeafter".
>     Then you create a *single* clock in the setup function *) with a
>     tick method that reschedules itself periodically (e.g.
>     clock_delay(x, 1) ). In the tick method, you get the current
>     logical time with "clock_getlogicaltime", walk over the priority
>     queue and dispatch + remove all items whose time is equal to or
>     earlier than it. You have to be careful about possible recursion, though,
>     because calling a scheduled LISP function might itself schedule
>     another function. In the case of std::map, however, it is safe,
>     because insertion doesn't invalidate iterators.
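Stripped of the Pd clock plumbing, the dispatch loop described above looks roughly like this in C++ (in a real external, `now` would come from `clock_getlogicaltime` and `dispatchUpTo` would run inside the clock's tick method; a multimap is used here so two callbacks scheduled for the same time don't collide):

```cpp
#include <cassert>
#include <functional>
#include <map>

struct Scheduler {
    std::multimap<double, std::function<void()>> queue;

    void scheduleAt(double time, std::function<void()> fn) {
        queue.emplace(time, std::move(fn));
    }
    // Fire and remove everything due at or before `now`. A callback may
    // itself call scheduleAt: map/multimap insertion doesn't invalidate
    // anything, and we always re-read begin(), so that's safe.
    int dispatchUpTo(double now) {
        int fired = 0;
        while (!queue.empty() && queue.begin()->first <= now) {
            auto it = queue.begin();
            auto fn = std::move(it->second);
            queue.erase(it);  // remove before calling, so a re-schedule
                              // at the same time is seen as a fresh entry
            fn();
            ++fired;
        }
        return fired;
    }
};
```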
>     Some more ideas:
>     Personally, I like to have both one-shot functions and repeated
>     functions, being able to change the time/interval and also cancel
>     them. For this, it is useful that the API returns some kind of
>     identifier for each callback (e.g. an integer ID). This is what
>     Javascript does with "setTimeout"/"clearTimeout" and
>     "setInterval"/"clearInterval". I use a very similar system for the
>     Lua scripting API of my 2D game engine, but I also have
>     "resetTimeout" and "resetInterval" functions.
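A minimal ID-based timer table in that spirit might look as follows in C++. This only mirrors the shape of JavaScript's setTimeout/clearTimeout API; the struct, its members, and the hand-driven `dispatchUpTo` are made up for illustration:

```cpp
#include <cassert>
#include <functional>
#include <map>

struct TimerTable {
    struct Entry { double due; std::function<void()> fn; };
    std::map<int, Entry> timers;  // id -> pending timer
    int nextId = 0;

    int setTimeout(double due, std::function<void()> fn) {
        int id = nextId++;
        timers.emplace(id, Entry{due, std::move(fn)});
        return id;  // hand this back so user code can cancel later
    }
    bool clearTimeout(int id) { return timers.erase(id) != 0; }
    // Move a still-pending timer to a new due time ("resetTimeout").
    bool resetTimeout(int id, double due) {
        auto it = timers.find(id);
        if (it == timers.end()) return false;
        it->second.due = due;
        return true;
    }
    int dispatchUpTo(double now) {
        int fired = 0;
        for (auto it = timers.begin(); it != timers.end();) {
            if (it->second.due <= now) {
                it->second.fn();
                it = timers.erase(it);
                ++fired;
            } else ++it;
        }
        return fired;
    }
};
```

The integer handle is the whole point: the scripting side never holds a raw pointer to a clock, so cancelling a timer that already fired is a harmless no-op rather than a dangling-pointer bug.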
>     On the other hand, you could also have a look at the scheduling
>     API of SuperCollider, which is a bit different: if a routine
>     yields a number N, it means that the routine will be scheduled
>     again after N seconds.
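That convention is easy to graft onto the priority-queue approach: let each scheduled item return the delay until it wants to run again, and re-insert it if the value is positive. A sketch (illustrating the convention only, not sclang's actual machinery):

```cpp
#include <cassert>
#include <functional>
#include <map>

// A routine returns the number of seconds until it should run again;
// a non-positive return value ends it.
using Routine = std::function<double()>;

struct RoutineScheduler {
    std::multimap<double, Routine> queue;

    void scheduleAt(double time, Routine r) { queue.emplace(time, std::move(r)); }

    int dispatchUpTo(double now) {
        int fired = 0;
        while (!queue.empty() && queue.begin()->first <= now) {
            auto it = queue.begin();
            double due = it->first;
            Routine r = std::move(it->second);
            queue.erase(it);
            double next = r();  // the routine "yields" its next delay
            ++fired;
            // Reschedule relative to the *due* time, not the dispatch
            // time, so periodic routines don't accumulate drift.
            if (next > 0) queue.emplace(due + next, std::move(r));
        }
        return fired;
    }
};
```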
>     Generally, having periodic timers is very convenient in a musical
>     environment :-)
>     Christof
>     *) Don't just store the clock in a global variable, because Pd can
>     have several instances. Instead, put the clock in a struct which
>     you allocate in the setup function. The clock gets this struct as
>     the owner.
>     typedef struct _myscheduler {
>         t_clock *clock; /* also a good place to store the priority queue */
>     } t_myscheduler;
>     t_myscheduler *x = (t_myscheduler *)getbytes(sizeof(t_myscheduler));
>     x->clock = clock_new(x, (t_method)myscheduler_tick);
>     On 25.10.2020 02:02, Iain Duncan wrote:
>>     Thanks Christof, that's very helpful.
>>     iain
>>     On Sat, Oct 24, 2020 at 5:53 PM Christof Ressi
>>     <info at christofressi.com> wrote:
>>         But if you're still worried, creating a pool of objects of
>>         the same size is actually quite easy: just use a free list
>>         (https://en.wikipedia.org/wiki/Free_list).
>>         Christof
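For concreteness, here is a minimal free-list sketch in C++: the unused nodes of a pre-allocated pool are chained together through a `next` pointer, so both acquire and release are O(1) pointer swaps with no malloc/free after startup. The struct names are illustrative stand-ins for the clock-info structs discussed below.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct ClockInfo {              // stand-in for a per-clock info struct
    void* owner = nullptr;
    ClockInfo* next = nullptr;  // links free nodes together
};

struct Pool {
    std::vector<ClockInfo> storage;  // fixed at construction; never resized,
                                     // so node addresses stay stable
    ClockInfo* freeHead = nullptr;

    explicit Pool(std::size_t n) : storage(n) {
        for (std::size_t i = 0; i < n; ++i) {  // chain everything onto the free list
            storage[i].next = freeHead;
            freeHead = &storage[i];
        }
    }
    ClockInfo* acquire() {           // pop from the free list; null if exhausted
        if (!freeHead) return nullptr;
        ClockInfo* node = freeHead;
        freeHead = node->next;
        return node;
    }
    void release(ClockInfo* node) {  // push back onto the free list
        node->next = freeHead;
        freeHead = node;
    }
};
```

On exhaustion you can either fall back to a plain allocation (accepting the occasional spike) or return null and let the caller report an error; which is right depends on how strict you want to be about realtime behaviour.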
>>         On 25.10.2020 02:45, Christof Ressi wrote:
>>>>         A) Am I right, both about being bad, and about clock
>>>>         pre-allocation and pooling being a decent solution?
>>>>         B) Does anyone have tips on how one should implement and
>>>>         use said clock pool?
>>>         ad A), basically yes, but in Pd you can get away with it.
>>>         Pd's scheduler doesn't run in the actual audio callback
>>>         (unless you run Pd in "callback" mode) and is more tolerant
>>>         towards operations that are not exactly realtime friendly
>>>         (e.g. memory allocation, file IO, firing lots of messages,
>>>         etc.). The audio callback and scheduler thread exchange
>>>         audio samples via a lockfree ringbuffer. The "delay"
>>>         parameter actually sets the size of this ringbuffer, and a
>>>         larger size allows for larger CPU spikes.
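The lock-free exchange mentioned here can be illustrated with a single-producer/single-consumer ring buffer (Pd's actual scheduler buffer is more involved; this just shows the core idea that one thread only ever writes the head and the other only the tail, so no lock is needed):

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>
#include <vector>

struct Ring {
    std::vector<float> buf;
    std::atomic<std::size_t> head{0};  // written only by the producer
    std::atomic<std::size_t> tail{0};  // written only by the consumer

    explicit Ring(std::size_t n) : buf(n) {}  // holds n-1 items; one slot
                                              // stays empty to tell full
                                              // from empty

    bool push(float v) {               // producer side
        std::size_t h = head.load(std::memory_order_relaxed);
        std::size_t next = (h + 1) % buf.size();
        if (next == tail.load(std::memory_order_acquire)) return false;  // full
        buf[h] = v;
        head.store(next, std::memory_order_release);
        return true;
    }
    bool pop(float& v) {               // consumer side
        std::size_t t = tail.load(std::memory_order_relaxed);
        if (t == head.load(std::memory_order_acquire)) return false;     // empty
        v = buf[t];
        tail.store((t + 1) % buf.size(), std::memory_order_release);
        return true;
    }
};
```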
>>>         In practice, allocating a small struct is pretty fast even
>>>         with the standard memory allocator, so in the case of Pd
>>>         it's nothing to worry about. In Pd land, external authors
>>>         don't really care too much about realtime safety, simply
>>>         because Pd itself doesn't either.
>>>         ---
>>>         Now, in SuperCollider things are different. Scsynth and
>>>         Supernova are quite strict regarding realtime safety because
>>>         DSP runs in the audio callback. In fact, they use a special
>>>         realtime allocator in case a plugin needs to allocate memory
>>>         in the audio thread. SuperCollider also has a separate
>>>         non-realtime thread where you would execute asynchronous
>>>         commands, like loading a soundfile into a buffer.
>>>         Finally, all sequencing and scheduling runs in a different
>>>         program (sclang). Sclang sends OSC bundles to scsynth, with
>>>         timestamps in the near future. Conceptually, this is a bit
>>>         similar to Pd's ringbuffer scheduler, with the difference
>>>         that DSP itself never blocks. If Sclang blocks, OSC messages
>>>         will simply arrive late at the Server.
>>>         Christof
>>>         On 25.10.2020 02:10, Iain Duncan wrote:
>>>>         Hi folks, I'm working on an external for Max and PD
>>>>         embedding the S7 scheme interpreter. It's mostly intended
>>>>         to do things at event level, (algo comp, etc) so I have
>>>>         been somewhat lazy around real time issues so far. But I'd
>>>>         like to make sure it's as robust as it can be, and can be
>>>>         used for as much as possible. Right now, I'm pretty sure
>>>>         I'm being a bad real-time-coder. When the user wants to
>>>>         delay a function call, i.e. (delay 100 foo-fun), I'm doing
>>>>         the following:
>>>>         - callable foo-fun gets registered in a scheme hashtable
>>>>         with a gensym unique handle
>>>>         - C function gets called with the handle
>>>>         - C code makes a clock, storing it in a hashtable (in C) by
>>>>         the handle, and passing it a struct (I call it the "clock
>>>>         callback info struct") with the references it needs for
>>>>         its callback
>>>>         - when the clock callback fires, it gets passed a void
>>>>         pointer to the clock-callback-info-struct, uses it to get
>>>>         the cb handle and the ref to the external (because the
>>>>         callback only gets one arg), calls back into Scheme with
>>>>         said handle
>>>>         - Scheme gets the callback out of its registry and
>>>>         executes the stashed function
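The handle/registry round trip described above can be modelled in a few lines, independent of the Pd/Max clock API: the "clock callback" only carries a small info struct (owner + handle), and the handle is what gets the real function back out of the registry. All types and names below are illustrative, not the actual external's code.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

struct Registry {
    std::map<std::string, std::function<void()>> table;
    int gensymCounter = 0;

    // Register a callable under a fresh, gensym-style handle.
    std::string add(std::function<void()> fn) {
        std::string handle = "cb-" + std::to_string(gensymCounter++);
        table.emplace(handle, std::move(fn));
        return handle;
    }
    // What the clock callback does: look the handle up, remove the
    // entry (one-shot semantics), then call it.
    bool fire(const std::string& handle) {
        auto it = table.find(handle);
        if (it == table.end()) return false;  // already fired or cancelled
        auto fn = std::move(it->second);
        table.erase(it);
        fn();
        return true;
    }
    bool cancel(const std::string& handle) { return table.erase(handle) != 0; }
};
```

A nice property of this indirection is that cancellation and firing can race benignly: whichever side erases the entry first wins, and the other simply finds nothing.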
>>>>         This is working well, but I am both allocating and
>>>>         deallocating memory in those functions: for the clock, and
>>>>         for the info struct I use to pass around the reference to
>>>>         the external and the handle. Given that I want to be
>>>>         treating this code as high priority, and having it execute
>>>>         as timing-accurate as possible, I assume I should not be
>>>>         allocating and freeing in those functions, because I could
>>>>         get blocked on the memory calls, correct? I think I should
>>>>         probably have a pre-allocated pool of clocks and their
>>>>         associated info structs so that when a delay call comes in,
>>>>         we get one from the pool, and only do memory management if
>>>>         the pool is empty. (and allow the user to set some
>>>>         reasonable config value of the clock pool). I'm thinking
>>>>         RAM is cheap, clocks are small, people aren't likely to
>>>>         have more than 1000 delay functions running concurrently,
>>>>         and they can be allocated from the init
>>>>         routine.
>>>>         My questions:
>>>>         A) Am I right, both about being bad, and about clock
>>>>         pre-allocation and pooling being a decent solution?
>>>>         B) Does anyone have tips on how one should implement and
>>>>         use said clock pool?
>>>>         I suppose I should probably also be ensuring the Scheme
>>>>         hash-table doesn't do any unplanned allocation too, but I
>>>>         can bug folks on the S7 mailing list for that one...
>>>>         Thanks!
>>>>         iain
>>>>         _______________________________________________
>>>>         Pd-dev mailing list
>>>>         Pd-dev at lists.iem.at
>>>>         https://lists.puredata.info/listinfo/pd-dev
