[PD-dev] why must one never send a message from a perform routine?

Christof Ressi info at christofressi.com
Thu Aug 24 00:23:44 CEST 2023


> I actually get fewer xruns in callback mode, 
This sounds highly unlikely. Maybe your "delay" setting is too low? Or 
Pd is not actually running with realtime priority?

> I also bump the sound-generation process up to realtime priority.
Pd itself already tries to raise the thread priority; if this fails, you 
might not have sufficient permissions.

Christof

On 23.08.2023 22:55, Day Rush wrote:
> For what it's worth, I actually get fewer xruns in callback mode, but 
> I am running computation-heavy externals for much of my sound design, 
> so YMMV. I also bump the sound-generation process up to realtime priority.
>
> On Wed, 23 Aug 2023 at 15:39, Joseph Larralde 
> <joseph.larralde at gmail.com> wrote:
>
>     Wow, thanks again Christof, this greatly improves my understanding
>     of Pd's engine.
>     Indeed, I never use callback mode because every time I did in the
>     past I got some xruns, but I had no clue about what was happening
>     behind the scenes.
>     I feel a bit ashamed; I'm pretty sure I could have figured this
>     out by reading more posts / docs / code ...
>     I'm very grateful to both of you for taking the time to explain
>     this in detail, and I hope this will benefit other users on the list.
>
>     Best regards,
>
>     Joseph
>
>     On 22/08/2023 at 17:22, Christof Ressi wrote:
>>
>>>     I've always been puzzled by the fact that everything runs on a
>>>     single thread in Pd.
>>     By default, Pd operates in "polling mode", i.e. the scheduler
>>     runs in its own thread (the main thread) and communicates with
>>     the audio callback via two lockfree ringbuffers (one for input,
>>     one for output). The size of the ringbuffers is set by the
>>     ominous "delay" parameter in the audio settings. The actual audio
>>     thread only reads/writes samples from/to the ringbuffers.
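>>
>>     For a rough picture of how that transfer works, here is a
>>     conceptual sketch of a single-producer/single-consumer ring
>>     buffer in C. This is not Pd's actual code; the fixed size and
>>     the names are made up for illustration. The scheduler thread
>>     pushes blocks of computed samples, the audio callback pops them:
>>
>>         #include <stdatomic.h>
>>
>>         #define RB_SIZE 65536   /* capacity in samples, power of two */
>>
>>         typedef struct {
>>             float buf[RB_SIZE];
>>             _Atomic unsigned head;  /* written only by the producer */
>>             _Atomic unsigned tail;  /* written only by the consumer */
>>         } ringbuf;
>>
>>         /* scheduler thread: push one block of output samples */
>>         static int ringbuf_write(ringbuf *r, const float *in, unsigned n)
>>         {
>>             unsigned h = atomic_load(&r->head), t = atomic_load(&r->tail);
>>             if (RB_SIZE - (h - t) < n)
>>                 return 0;   /* not enough free space */
>>             for (unsigned i = 0; i < n; i++)
>>                 r->buf[(h + i) & (RB_SIZE - 1)] = in[i];
>>             atomic_store(&r->head, h + n);
>>             return 1;
>>         }
>>
>>         /* audio callback: pop one block of samples for the device */
>>         static int ringbuf_read(ringbuf *r, float *out, unsigned n)
>>         {
>>             unsigned h = atomic_load(&r->head), t = atomic_load(&r->tail);
>>             if (h - t < n)
>>                 return 0;   /* underrun: not enough data yet */
>>             for (unsigned i = 0; i < n; i++)
>>                 out[i] = r->buf[(t + i) & (RB_SIZE - 1)];
>>             atomic_store(&r->tail, t + n);
>>             return 1;
>>         }
>>
>>     A larger "delay" gives the scheduler more headroom before the
>>     callback runs dry, at the cost of additional latency.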
>>
>>     If Pd operates in "callback mode" (= "callbacks" is ticked in the
>>     audio settings), the scheduler runs directly in the audio
>>     callback. You can save a little bit of latency, but it is less
>>     forgiving of CPU spikes or non-realtime-safe operations.
>>
>>     Christof
>>
>>     On 22.08.2023 15:08, Joseph Larralde wrote:
>>>     Thanks Christof for the additional insight.
>>>     I've always been puzzled by the fact that everything runs on a
>>>     single thread in Pd.
>>>     I guess this single thread IS the audio thread because it
>>>     processes audio, and I've always heard that one must never
>>>     perform too many non-audio operations during an audio callback.
>>>     But as you say, Pd runs fine for the general user base, which I
>>>     am part of.
>>>     I'll probably give your version a try if I hit the limits with
>>>     my current (rapidly growing) project running on a Pi 3 B+, but I
>>>     can't make any promises with my current schedule.
>>>     Thanks for your work anyway.
>>>
>>>     Best,
>>>     Joseph Larralde
>>>     --
>>>     freelance developer
>>>     www.josephlarralde.fr
>>>     On 22/08/2023 at 11:55, Christof Ressi wrote:
>>>>
>>>>>     How well does it work?
>>>>     It seems to work quite well. With synthetic benchmarks I can
>>>>     get a 6x speedup on my 8 core machine, but I need to do some
>>>>     more practical testing and benchmarking.
>>>>
>>>>>     It looks like the repo is based off of 0.52?
>>>>     I think it's based on 0.53. I want to rebase it on 0.54, but
>>>>     there are lots of conflicts I need to resolve. It's definitely
>>>>     on my TODO list. That's also why I haven't really made a formal
>>>>     announcement yet.
>>>>
>>>>>     Multithreaded DSP would have been much higher on my list than
>>>>>     multi-channel,
>>>>     Priorities are very subjective. Personally, I don't really
>>>>     think that multithreaded DSP has high priority for the general
>>>>     user base, as many patches seem to run fine on a single CPU.
>>>>     However, I do have projects that reach or exceed the limits of
>>>>     a single CPU - even on a beefy machine - which is why I started
>>>>     working on this.
>>>>
>>>>>     so I'm wondering if I could get away with using your tree as
>>>>>     my basis for a while :)
>>>>     Actually, it would be great to have some testers apart from
>>>>     myself!
>>>>
>>>>     Christof
>>>>
>>>>     On 22.08.2023 10:32, Day Rush wrote:
>>>>>     How well does it work? It looks like the repo is based off of
>>>>>     0.52? Multithreaded DSP would have been much higher on my list
>>>>>     than multi-channel, so I'm wondering if I could get away with
>>>>>     using your tree as my basis for a while :)
>>>>>
>>>>>     - d
>>>>>
>>>>>     On Tue, 22 Aug 2023 at 01:17, Christof Ressi
>>>>>     <info at christofressi.com> wrote:
>>>>>
>>>>>         To expand on Miller's reply:
>>>>>
>>>>>         Conceptually, messaging and DSP are two separate domains.
>>>>>         Sending a message from a perform routine violates this
>>>>>         separation. Instead, you should use a clock with delay 0
>>>>>         to defer the message to the beginning of the next
>>>>>         scheduler tick.
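>>>>>
>>>>>         As a minimal sketch of the pattern (the object name "foo~"
>>>>>         and everything around it is made up; see bang~ in
>>>>>         pure-data/src/d_misc.c for the real thing):
>>>>>
>>>>>             #include "m_pd.h"
>>>>>
>>>>>             static t_class *foo_tilde_class;
>>>>>
>>>>>             typedef struct _foo_tilde {
>>>>>                 t_object x_obj;
>>>>>                 t_float x_f;      /* dummy float for the signal inlet */
>>>>>                 t_clock *x_clock; /* defers messages out of the DSP context */
>>>>>                 t_outlet *x_out;
>>>>>             } t_foo_tilde;
>>>>>
>>>>>             /* clock callback: runs at the beginning of a scheduler
>>>>>                tick, where sending messages is safe */
>>>>>             static void foo_tilde_tick(t_foo_tilde *x)
>>>>>             {
>>>>>                 outlet_bang(x->x_out);
>>>>>             }
>>>>>
>>>>>             static t_int *foo_tilde_perform(t_int *w)
>>>>>             {
>>>>>                 t_foo_tilde *x = (t_foo_tilde *)(w[1]);
>>>>>                 int n = (int)(w[2]);
>>>>>                 /* ... signal processing on n samples would go here ... */
>>>>>
>>>>>                 /* do NOT call outlet_bang()/outlet_float() here;
>>>>>                    schedule a clock with delay 0 instead */
>>>>>                 clock_delay(x->x_clock, 0);
>>>>>                 return (w + 3);
>>>>>             }
>>>>>
>>>>>             static void foo_tilde_dsp(t_foo_tilde *x, t_signal **sp)
>>>>>             {
>>>>>                 dsp_add(foo_tilde_perform, 2, x, (t_int)sp[0]->s_n);
>>>>>             }
>>>>>
>>>>>             static void *foo_tilde_new(void)
>>>>>             {
>>>>>                 t_foo_tilde *x = (t_foo_tilde *)pd_new(foo_tilde_class);
>>>>>                 x->x_clock = clock_new(x, (t_method)foo_tilde_tick);
>>>>>                 x->x_out = outlet_new(&x->x_obj, &s_bang);
>>>>>                 return x;
>>>>>             }
>>>>>
>>>>>             static void foo_tilde_free(t_foo_tilde *x)
>>>>>             {
>>>>>                 clock_free(x->x_clock);
>>>>>             }
>>>>>
>>>>>             void foo_tilde_setup(void)
>>>>>             {
>>>>>                 foo_tilde_class = class_new(gensym("foo~"),
>>>>>                     (t_newmethod)foo_tilde_new, (t_method)foo_tilde_free,
>>>>>                     sizeof(t_foo_tilde), 0, 0);
>>>>>                 class_addmethod(foo_tilde_class, (t_method)foo_tilde_dsp,
>>>>>                     gensym("dsp"), A_CANT, 0);
>>>>>                 CLASS_MAINSIGNALIN(foo_tilde_class, t_foo_tilde, x_f);
>>>>>             }
>>>>>
>>>>>         This way the bang is emitted at the beginning of the next
>>>>>         scheduler tick rather than from inside the perform routine.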
>>>>>
>>>>>         Miller already mentioned the greatest danger, but there are
>>>>>         other, more subtle issues. DSP objects typically operate on
>>>>>         the premise that the object's state won't change from the
>>>>>         outside while the perform routine is running. For example,
>>>>>         imagine a delay object whose buffer can be resized with a
>>>>>         message; a Pd message sent from the perform routine might
>>>>>         accidentally feed back into the object and reallocate that
>>>>>         buffer while the perform routine is still using it.
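>>>>>
>>>>>         To make the failure mode concrete, here is a sketch of such
>>>>>         a (made-up) delay object; only the API calls are real Pd:
>>>>>
>>>>>             #include "m_pd.h"
>>>>>
>>>>>             typedef struct _mydelay {
>>>>>                 t_object x_obj;
>>>>>                 t_sample *x_buf;   /* reallocated by a "resize" message */
>>>>>                 int x_bufsize;
>>>>>                 int x_phase;
>>>>>                 t_outlet *x_msgout;
>>>>>             } t_mydelay;
>>>>>
>>>>>             static t_int *mydelay_perform(t_int *w)
>>>>>             {
>>>>>                 t_mydelay *x = (t_mydelay *)(w[1]);
>>>>>                 t_sample *out = (t_sample *)(w[2]);
>>>>>                 int n = (int)(w[3]);
>>>>>                 t_sample *buf = x->x_buf;  /* cached buffer pointer */
>>>>>
>>>>>                 for (int i = 0; i < n; i++) {
>>>>>                     out[i] = buf[(x->x_phase + i) % x->x_bufsize];
>>>>>                     /* BAD: if this message reaches a patch that sends
>>>>>                        "resize" back into this very object, x_buf is
>>>>>                        freed and reallocated while 'buf' still points
>>>>>                        at the old memory -> use after free */
>>>>>                     outlet_float(x->x_msgout, out[i]);
>>>>>                 }
>>>>>                 x->x_phase = (x->x_phase + n) % x->x_bufsize;
>>>>>                 return (w + 4);
>>>>>             }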
>>>>>
>>>>>         Unfortunately, very little of this is documented. Ideally,
>>>>>         it should be covered in the externals-howto
>>>>>         (https://github.com/pure-data/externals-howto); I just
>>>>>         added an item to my (long) TODO list.
>>>>>
>>>>>         Finally, although Pd is currently single-threaded, this
>>>>>         could change in the future. FWIW, here is a PoC for
>>>>>         multi-threaded DSP:
>>>>>         https://github.com/spacechild1/pure-data/tree/multi-threading.
>>>>>         This is only possible because perform routines may only use
>>>>>         a restricted set of API functions - which, in my fork, are
>>>>>         annotated with the (empty) THREADSAFE macro (and made
>>>>>         thread-safe, if necessary).
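>>>>>
>>>>>         The annotation itself is just an empty marker macro, roughly
>>>>>         like this (the function names below are placeholders, not
>>>>>         the actual set declared thread-safe in the fork):
>>>>>
>>>>>             /* expands to nothing; only marks which API functions
>>>>>                a perform routine is allowed to call */
>>>>>             #define THREADSAFE
>>>>>
>>>>>             THREADSAFE EXTERN void some_dsp_safe_call(void *x);
>>>>>             EXTERN void some_message_only_call(void *x);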
>>>>>
>>>>>         Christof
>>>>>
>>>>>         On 21.08.2023 20:55, Joseph Larralde wrote:
>>>>>         > Hmm, I see ... unfortunately my random bug is totally
>>>>>         unrelated to
>>>>>         > this weakness of my code.
>>>>>         > Thanks Miller for the explanation and pointers to examples!
>>>>>         > And thanks Claude for the extra example.
>>>>>         > I'll check all my objects to see if there are other ones
>>>>>         I can
>>>>>         > consolidate.
>>>>>         >
>>>>>         > Cheers!
>>>>>         >
>>>>>         > Joseph
>>>>>         >
>>>>>         > Le 21/08/2023 à 19:08, Claude Heiland-Allen a écrit :
>>>>>         >> See bang~ in pure-data/src/d_misc.c for an example that
>>>>>         uses a clock
>>>>>         >> to send a message from DSP.
>>>>>         >>
>>>>>         >> On 21/08/2023 18:02, Miller Puckette wrote:
>>>>>         >>> The built-in objects "delay", "metro" and "pipe" use
>>>>>         clocks in
>>>>>         >>> various ways.
>>>>>         >>>
>>>>>         >>> On 8/21/23 18:02, Joseph Larralde wrote:
>>>>>         >>>> I just read in an answer from Christof to Alexandre :
>>>>>         "never ever
>>>>>         >>>> send a Pd message directly from a perform routine !
>>>>>         Always use a
>>>>>         >>>> clock !"
>
>
>
> -- 
> GPG Public key at http://cyber-rush.org/drr/gpg-public-key.txt
>