[PD] Pd vs GUI (Was Re: pd and multi-core processors)

András Murányi muranyia at gmail.com
Tue Apr 6 20:27:14 CEST 2010


On Mon, Apr 5, 2010 at 8:43 PM, Matteo Sisti Sette <matteosistisette at gmail.com> wrote:

> (this is a little off-topic with respect to the thread)
>
> > nicely enough, pd's graphical interface and the actual process,
> > are separate threads,
>
> The communication between the Pd engine ("Pd") and the graphical
> interface ("GUI") is not as efficient as you might expect it to be - at
> least not as efficient as I expected it to be.
>
> When you send a message to a GUI object - for example a "set" message
> to the receive-symbol of a slider - the Pd process sends a message to
> the GUI process, and the GUI process does whatever redrawing is needed:
> the Pd process is not "blocked" while the (embarrassingly CPU-consuming)
> drawing is performed. So you would expect the time needed to send a
> message to a GUI object to be just the time needed to send a message
> through a TCP socket.
>
> Not quite so.
> I don't know why, but sending a message to a GUI object costs the Pd
> process significantly more CPU time than simply sending a message
> through TCP, though much less than actually redrawing things. I _guess_
> the Pd process waits for some kind of acknowledgement or response from
> the GUI process, or something like that, but this is only a guess.
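>
> (To put a number on that baseline: here is a rough sketch, in Python
> rather than Pd, that times nothing but the TCP send. It assumes a test
> patch with a [netreceive 3001] listening; the port number is only an
> example.)
>
>   import socket, time
>
>   # Measure the raw cost of "simply sending a message through TCP".
>   # Framing as [netsend]/[netreceive] use it (FUDI): space-separated
>   # atoms terminated by a semicolon.
>   s = socket.create_connection(("127.0.0.1", 3001))
>   N = 1000
>   t0 = time.perf_counter()
>   for i in range(N):
>       s.sendall(("value %d;\n" % i).encode("ascii"))
>   t1 = time.perf_counter()
>   print("%.1f us per message" % ((t1 - t0) / N * 1e6))
>   s.close()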
>
> I found this out because I create patches that have to be used on
> stage by users who are not "pd-ers", so I make extensive use of GUIs.
> All significant parameters or values that can be changed and/or need to
> be monitored are displayed in the GUI, so I send a lot of messages to
> it. I also store "snapshots" of configurations that are then recalled
> (_not_ loaded from disk at the moment of recalling them), so there are
> often "massive" bursts of messages to a lot of GUI objects in zero
> logical time (I had already reduced the messages to only those actually
> needed). Soon I began to get a lot of audio dropouts.
>
> So I tried a solution that seemed ridiculous at first: I made an
> "engine" patch which does all the audio and MIDI work but has no GUI,
> and an "interface" patch which has only the GUI and exchanges control
> data (in both directions, of course) with the engine patch. And
> obviously I run them in two different instances of Pd.
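>
> (To illustrate the split - in Python here only for compactness, since
> in reality both sides are Pd patches - the "interface" process is just
> a TCP client of the engine's [netreceive]. The port and the parameter
> names below are made-up examples, not taken from my actual patches.)
>
>   import socket
>
>   # Hypothetical stand-in for the "interface" process.  Assumes the
>   # engine patch contains [netreceive 3000] feeding a [route ...]
>   # that dispatches on the first atom of each message.
>   engine = socket.create_connection(("127.0.0.1", 3000))
>
>   def send_to_engine(*atoms):
>       # FUDI framing, as [netsend] produces it.
>       msg = " ".join(str(a) for a in atoms) + ";\n"
>       engine.sendall(msg.encode("ascii"))
>
>   send_to_engine("volume", 0.8)
>   send_to_engine("cutoff", 1200)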
>
> My "protocol" of communication between the two patches is not even very
> optimized, so I send a lot of messages that could actually be avoided (don't
> tell anybody).
> Well with this system, despite the huge quantity of messages to and from
> GUI, I get no dropouts at all, everything works fine.
>
> I have, in effect, replicated at the patch level the engine-GUI
> architecture that is already implemented in Pd. When I did it, I was
> really afraid I was doing something stupid; but it works, and it makes
> an enormous difference (well, I did run some tests beforehand that
> seemed to indicate it might work).
> So the time it takes (meaning the time during which the Pd process is
> either blocked or busy) to send a message through a [netsend] - even
> with a little overhead: passing through an [s] and an [r], a [list
> prepend send] and a [list trim] at the very least - is significantly
> less than the time it takes to send a message to a GUI object.
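>
> (For anyone wondering what actually crosses the wire: [list prepend
> send] followed by [list trim] just turns an arbitrary list into a
> message whose selector is "send", which is what [netsend] expects; it
> then transmits the remaining atoms followed by a semicolon. The
> receiving end, sketched in Python - the port is again only an example -
> looks roughly like this:)
>
>   import socket
>
>   # Roughly what [netreceive 3002] does: read semicolon-terminated
>   # FUDI messages from a TCP connection and split them into atoms.
>   srv = socket.socket()
>   srv.bind(("127.0.0.1", 3002))
>   srv.listen(1)
>   conn, _ = srv.accept()
>   buf = b""
>   while True:
>       chunk = conn.recv(4096)
>       if not chunk:
>           break
>       buf += chunk
>       while b";" in buf:
>           raw, buf = buf.split(b";", 1)
>           atoms = raw.split()
>           if atoms:
>               print("got:", [a.decode("ascii") for a in atoms])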
>
> I am curious to know whether this overhead in the communication
> between the two Pd processes is entirely necessary (to robustly
> guarantee consistency, for example) - and, if it is not, whether it is
> going to be addressed in the GUI rewrite...
>
> --
> Matteo Sisti Sette
> matteosistisette at gmail.com
> http://www.matteosistisette.com
>

I'd just like to add that the same happens with MIDI, with DSP off, on a
rather powerful machine (Opteron 148 @ 2.2 GHz).
This is a very interesting issue you have brought up, and I would very
much like to hear the experts' opinions.

Andras