[PD] netsend/netreceive questions ...

Giulio Moro giuliomoro at yahoo.it
Wed Feb 22 21:49:07 CET 2017


> (so the order of data arrival is guaranteed). 

Well, this is a design feature of UDP: there is no guarantee that packets will be received at all, nor in what order. If you use UDP, you MUST write your program so that it is resilient to data loss and out-of-order delivery. If you don't, you may run into problems further down the line, when conditions are less than ideal.
This said, it seems that [mrpeach/net] is designed in such a way that out-of-order delivery occurs more often than with other implementations. Good for testing the resiliency of your system.
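
Just to make "resilient" concrete, here is a minimal sketch of the usual trick (not taken from any of the objects discussed below, all names made up): tag every datagram with a sequence number on the sending side, then on the receiving side drop whatever arrives behind what you have already used, and count the gaps:

#include <stdint.h>
#include <stdio.h>

/* Sketch only: each datagram carries a 32-bit sequence number.  The
 * receiver remembers the next number it expects; anything older is
 * dropped as a late or duplicate packet, and a jump forward is counted
 * as lost packets.  Signed difference arithmetic handles wrap-around. */
typedef struct {
    uint32_t expected;   /* next sequence number we expect */
    unsigned long lost;  /* gaps observed so far */
    unsigned long late;  /* packets that arrived out of order */
} seq_tracker;

static int seq_accept(seq_tracker *t, uint32_t seq)
{
    int32_t diff = (int32_t)(seq - t->expected);
    if (diff < 0) {      /* older than what we already used: drop it */
        t->late++;
        return 0;
    }
    if (diff > 0)        /* we skipped 'diff' packets: note the loss */
        t->lost += (unsigned long)diff;
    t->expected = seq + 1;
    return 1;            /* safe to use this packet */
}

int main(void)
{
    /* Simulated arrival order: packet 2 is overtaken by packet 3. */
    uint32_t arrivals[] = {0, 1, 3, 2, 4};
    seq_tracker t = {0, 0, 0};
    for (unsigned i = 0; i < sizeof arrivals / sizeof arrivals[0]; i++)
        printf("seq %u -> %s\n", arrivals[i],
               seq_accept(&t, arrivals[i]) ? "use" : "drop");
    printf("lost=%lu late=%lu\n", t.lost, t.late);
    return 0;
}

Whether you drop late packets, re-order them, or just ignore the whole issue is up to the application, but the decision has to be made in the patch or the program, not delegated to UDP.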
My brief experience with net objects (I could be wrong, but this is what I remember):
[netreceive] / [netsend]: they do the socket work in the audio thread. This gives the following deterministic behaviour: the message is written to the socket before the audio callback is performed. What it does not give is any deterministic guarantee about when the packet actually leaves your interface or gets delivered. So, given the latter, I am not sure why the former matters.
[iemnet/udpsend] / [iemnet/udpreceive]: threaded; for [udpsend] the packets are stored in memory from the audio thread, and a worker thread reads them and writes them to the socket. The issue is that the audio thread uses malloc() to stash the values in memory, so it may occasionally hang while waiting for the kernel to provide more memory.
[mrpeach/net]: I have not looked at the code myself, but it has been mentioned here that it uses multiple worker threads. If threads are created within the audio thread (as opposed to using a fixed pool of workers that get "activated" from the audio thread), then this will also occasionally hang while waiting for the kernel.
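
To illustrate what I mean by a fixed pool, reduced to a single worker (a sketch with made-up names, not code from any of the libraries mentioned): the thread is created once when the object is created, and the audio thread only ever signals it. Even the sem_post() shown here is a system call, so whether that is acceptable from the audio thread depends on the platform.

#include <pthread.h>
#include <semaphore.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static sem_t work_ready;
static atomic_int pending;

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        sem_wait(&work_ready);              /* sleep until activated */
        while (atomic_exchange(&pending, 0) > 0) {
            /* here: drain the queued messages and write them to the socket */
            printf("worker: flushing queued network data\n");
        }
    }
    return NULL;
}

/* Called once, at object-creation time, never from the audio thread. */
static void start_worker(void)
{
    pthread_t tid;
    sem_init(&work_ready, 0, 0);
    pthread_create(&tid, NULL, worker, NULL);
    pthread_detach(tid);
}

/* Called from the audio thread: no pthread_create(), no malloc(). */
static void activate_worker(void)
{
    atomic_fetch_add(&pending, 1);
    sem_post(&work_ready);
}

int main(void)
{
    start_worker();
    activate_worker();          /* pretend the audio callback queued something */
    activate_worker();
    usleep(100000);             /* give the worker a moment, then exit */
    return 0;
}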
None of the approaches above is workable on the platform I am working on (Bela), as - running under Xenomai - the usual constraints that apply to audio programming (no I/O, no memory allocation, no thread creation in the audio thread) are even stricter (i.e.: you REALLY need to follow these principles).

My tentative approach was to turn [netreceive] into a threaded object, using a lock-free queue between the threads (the one provided by libpd) and ifdefs to reuse most of the existing vanilla code: https://github.com/giuliomoro/pure-data/commits/Bela-net
I am not quite happy with it yet: the code looks like a mess with all the ifdefs, and [netsend] is not working atm, but [netreceive] can now be used safely. I guess it would be better to package it as an external and remove the ifdefs.
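
In case it is useful to anyone, the shape of such a lock-free single-producer/single-consumer queue is roughly the following (a sketch of the idea only, not the ring buffer that libpd actually ships):

#include <stdatomic.h>
#include <stdio.h>

#define RING_SIZE 4096   /* bytes, must be a power of two */

/* Minimal single-producer/single-consumer ring buffer.  The network
 * thread is the only writer, the audio-side consumer the only reader,
 * and neither ever blocks or allocates.  head and tail are free-running
 * counters; the buffer index is taken modulo RING_SIZE. */
typedef struct {
    unsigned char buf[RING_SIZE];
    atomic_size_t head;   /* written only by the producer */
    atomic_size_t tail;   /* written only by the consumer */
} ring_t;

static size_t ring_write(ring_t *r, const void *data, size_t len)
{
    size_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    size_t space = RING_SIZE - (head - tail);
    if (len > space)
        return 0;                          /* full: drop, never block */
    for (size_t i = 0; i < len; i++)
        r->buf[(head + i) & (RING_SIZE - 1)] = ((const unsigned char *)data)[i];
    atomic_store_explicit(&r->head, head + len, memory_order_release);
    return len;
}

static size_t ring_read(ring_t *r, void *data, size_t len)
{
    size_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&r->head, memory_order_acquire);
    size_t avail = head - tail;
    if (len > avail)
        len = avail;
    for (size_t i = 0; i < len; i++)
        ((unsigned char *)data)[i] = r->buf[(tail + i) & (RING_SIZE - 1)];
    atomic_store_explicit(&r->tail, tail + len, memory_order_release);
    return len;
}

int main(void)
{
    static ring_t r;
    char out[32] = {0};
    ring_write(&r, "hello pd", 8);                 /* network-thread side */
    size_t n = ring_read(&r, out, sizeof out - 1); /* audio-thread side */
    printf("read %zu bytes: %s\n", n, out);
    return 0;
}

The receiving thread would ring_write() whatever comes off the socket, and the object would ring_read() it back on the Pd side and turn the bytes into messages.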
Giulio

From: Roman Haefeli <reduzent at gmail.com>
To: pd-list at lists.iem.at
Sent: Wednesday, 22 February 2017, 15:19
Subject: Re: [PD] netsend/netreceive questions ...
On Wed, 2017-02-22 at 15:41 +0100, IOhannes m zmoelnig wrote:

> mrpeach/net should block less than the built-in object, but in theory
> it might still block when spinning up too many threads.
> also mrpeach/net is prone to race-conditions, where one sending thread
> can overtake another sending thread (so the order of data arrival is
> not guaranteed). obviously mrpeach/net doesn't always exhibit that
> problem (else nobody would use it), but iirc i was able to trigger
> that behaviour in a lab situation.

netpd - as an example of a non-lab situation - does trigger such
problems with mrpeach/net. Last time I checked, it presented incoming
data as lists, which suggests that it uses some auto-magic internal
delimiting function, but it does not; it relies on pure chance. It's a
misconception that the author refuses to address.

As far as I can tell, mrpeach/net suffers from issues that iemnet does not.
I don't see any advantage in using mrpeach/net besides the fact that
Pd-l2ork / Purr Data - due to their Pd-extended heritage - come with
mrpeach and not with iemnet. 

Roman