[PD-dev] netreceive: listen failed: Address already in use

Pierre Guillot guillotpierre6 at gmail.com
Sun Feb 28 19:35:23 CET 2021


Thank you! In fact, this problem turned out to be a really good
introduction to sockets for me :) I was completely lost on this subject
and could have gone off in every possible direction.
So I have to ensure that netreceive_free() is called before the new object
is created and netreceive_listen() is called. I can debug directly with
Xcode attached to Reaper, so it should be easy to check.
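
Something along these lines is what I have in mind, just as a sketch
(NET_TRACE and the suggested call sites are my own invention, nothing
that exists in x_net.c):

    /* minimal printf-style tracing to see in which order the objects are
     * freed and created; all names here are made up for illustration */
    #include <stdio.h>
    #include <time.h>

    #define NET_TRACE(obj, what, port) \
        fprintf(stderr, "netreceive %p: %s port %d (t=%ld)\n", \
            (void *)(obj), (what), (int)(port), (long)time(NULL))

    /* e.g. at the top of netreceive_listen():  NET_TRACE(x, "listen", portno);
     * and at the top of netreceive_free():     NET_TRACE(x, "free", 0); */

If the "listen" line for the new instance shows up before the "free" line
of the old one, that would confirm the DAW opens the new plugin before
closing the old one.
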
Thank you again IOhannes and Christof!

On Sun, 28 Feb 2021 at 12:52, Christof Ressi <info at christofressi.com>
wrote:

> To give some more background info:
>
> When you close a UDP socket, the port becomes available immediately,
> because UDP is connectionless.
>
> With TCP, however, a closed socket remains in the TIME_WAIT state for
> some time, during which the port still appears to be taken (for an
> explanation, see
> https://networkengineering.stackexchange.com/questions/19581/what-is-the-purpose-of-time-wait-in-tcp-connection-tear-down).
>
>
>
> Now, as IOhannes explained, SO_REUSEADDR (or SO_REUSEPORT) allows
> several sockets to bind to the same port.
>
> In the context of TCP, this is often used to make a TCP port available
> immediately after closing a socket. This should never be necessary for UDP.
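>
> Just to illustrate (a standalone sketch, not Pd code; the port number
> is arbitrary): the usual pattern is to set SO_REUSEADDR before bind()
> on the TCP listening socket.
>
>     #include <stdio.h>
>     #include <string.h>
>     #include <sys/socket.h>
>     #include <netinet/in.h>
>     #include <arpa/inet.h>
>     #include <unistd.h>
>
>     int main(void)
>     {
>         int fd = socket(AF_INET, SOCK_STREAM, 0);
>         int on = 1;
>         /* allow rebinding even if an earlier socket for this port
>            is still lingering in TIME_WAIT */
>         setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));
>
>         struct sockaddr_in addr;
>         memset(&addr, 0, sizeof(addr));
>         addr.sin_family = AF_INET;
>         addr.sin_addr.s_addr = htonl(INADDR_ANY);
>         addr.sin_port = htons(9000);   /* arbitrary port */
>
>         if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
>             perror("bind");
>         else
>             listen(fd, 5);
>         close(fd);
>         return 0;
>     }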
>
> There are some cases where you want several *active* sockets to listen
> on the same port. Generally, it is not specified which one receives the
> packet. Since TCP is a stream protocol, it rarely makes sense to have
> several active sockets listening on the same port because the data would
> be distributed arbitrarily between sockets. With UDP, you always get a
> complete datagram. On Linux, you can even use SO_REUSEPORT to build a
> multi-threaded UDP server
> (
> https://engineering.opensooq.com/using-so_reuseport-flag-in-multi-processes-and-multi-threaded-udp-server/
> ).
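>
> For what it's worth, here is a rough (untested) sketch of that
> multi-threaded idea, with an arbitrary port and thread count:
>
>     #include <stdio.h>
>     #include <string.h>
>     #include <pthread.h>
>     #include <sys/socket.h>
>     #include <netinet/in.h>
>     #include <unistd.h>
>
>     #define PORT 9000   /* arbitrary */
>
>     /* each worker owns its own UDP socket bound to the same port via
>        SO_REUSEPORT; the kernel distributes incoming datagrams between
>        them (Linux-specific behaviour) */
>     static void *worker(void *arg)
>     {
>         (void)arg;
>         int fd = socket(AF_INET, SOCK_DGRAM, 0);
>         int on = 1;
>         setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &on, sizeof(on));
>
>         struct sockaddr_in addr;
>         memset(&addr, 0, sizeof(addr));
>         addr.sin_family = AF_INET;
>         addr.sin_addr.s_addr = htonl(INADDR_ANY);
>         addr.sin_port = htons(PORT);
>         bind(fd, (struct sockaddr *)&addr, sizeof(addr));
>
>         char buf[1500];
>         for (;;)
>         {
>             ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);
>             if (n > 0)
>                 printf("socket %d got %zd bytes\n", fd, n);
>         }
>         return NULL;
>     }
>
>     int main(void)
>     {
>         pthread_t t[4];
>         for (int i = 0; i < 4; i++)
>             pthread_create(&t[i], NULL, worker, NULL);
>         for (int i = 0; i < 4; i++)
>             pthread_join(t[i], NULL);
>         return 0;
>     }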
>
> Finally, there is an exception for UDP multicast, where it is guaranteed
> that all sockets on the same port that have joined the same multicast
> group receive the same packet.
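>
> Sketched out (the group address and port are picked arbitrarily here),
> the receiving side of that looks roughly like this:
>
>     #include <string.h>
>     #include <sys/socket.h>
>     #include <netinet/in.h>
>     #include <arpa/inet.h>
>     #include <unistd.h>
>
>     int main(void)
>     {
>         int fd = socket(AF_INET, SOCK_DGRAM, 0);
>         int on = 1;
>         /* several processes can do exactly this, and each of them
>            gets its own copy of every datagram sent to the group */
>         setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));
>
>         struct sockaddr_in addr;
>         memset(&addr, 0, sizeof(addr));
>         addr.sin_family = AF_INET;
>         addr.sin_addr.s_addr = htonl(INADDR_ANY);
>         addr.sin_port = htons(9000);                    /* arbitrary */
>         bind(fd, (struct sockaddr *)&addr, sizeof(addr));
>
>         struct ip_mreq mreq;
>         mreq.imr_multiaddr.s_addr = inet_addr("239.255.0.1");  /* arbitrary group */
>         mreq.imr_interface.s_addr = htonl(INADDR_ANY);
>         setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));
>
>         char buf[1500];
>         recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);  /* wait for one packet */
>         close(fd);
>         return 0;
>     }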
>
> ---
>
> > So the problem might be that when the patch is closed in the Camomile
> > plugin, the connection used by the [netreceive] object is not really
> > detached
>
> In your case, since you're using UDP, the port should be released
> immediately on closing the patch. Maybe the DAW tries to open the new
> plugin *before* closing the old one? You could check this with some good
> old printf debugging :-)
>
> Christof
>
>
> On 28.02.2021 09:29, Pierre Guillot wrote:
> > Thank you for the clarification!
> >
> > So the problem might be that when the patch is closed in the Camomile
> > plugin, the connection used by the [netreceive] object is not really
> > detached (I guess internally it's not synchronous). So when the patch is
> > reopened, the new connection cannot be made because the address is
> > still in use.
> >
> > One solution could be to ensure that the connection is detached when
> closing a patch.
> >
> > But I don’t understand why this is different in Pd and in Camomile...
> >
> >> On 27 Feb 2021, at 21:17, IOhannes m zmölnig <zmoelnig at iem.at> wrote:
> >>
> >> On 27 February 2021 19:22:26 CET, Pierre Guillot
> >> <guillotpierre6 at gmail.com> wrote:
> >>> Is it normal?
> >> yes.
> >> this is how networking works.
> >> only a single listener on any given port per interface.
> >>
> >>
> >>> I managed to solve this problem by replacing SO_REUSEADDR with
> >>> SO_REUSEPORT in the function socket_set_boolopt() (l. 703 of x_net.c).
> >>> I don't know much about sockets, but I understand that it allows two
> >>> [netreceive] objects to use the same address AND the same port. Do you
> >>> think this is a proper way of fixing this problem?
> >> no, it's not.
> >> as you've already discovered, this is not supported on all platforms,
> >> and those platforms that do support SO_REUSEPORT can (and do) implement
> >> it very differently.
> >> in most cases it will allow you to bind to the same port multiple
> >> times, but you will not receive data on all clients.
> >>
> >> https://stackoverflow.com/a/14388707
> >>
> >> one possible fix for this is to have a single socket that binds to the
> >> port and serves all `[netreceive]` instances (using that port). this
> >> obviously can only work if all `[netreceive]` instances live in the same
> >> application.
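> >>
> >> e.g. something roughly like this (all names are invented here, nothing
> >> of the sort exists in x_net.c; sketched for the UDP case only):
> >>
> >>     /* one socket per port, shared by all listeners registered for it;
> >>        every incoming datagram is handed to every registered callback */
> >>     #include <stddef.h>
> >>     #include <sys/types.h>
> >>     #include <sys/socket.h>
> >>
> >>     typedef void (*receive_cb)(void *owner, const char *buf, size_t len);
> >>
> >>     typedef struct listener {
> >>         void *owner;            /* e.g. a [netreceive] instance */
> >>         receive_cb cb;
> >>         struct listener *next;
> >>     } listener_t;
> >>
> >>     typedef struct shared_port {
> >>         int fd;                 /* the single socket bound to the port */
> >>         int portno;
> >>         listener_t *listeners;  /* registered listeners for this port */
> >>     } shared_port_t;
> >>
> >>     /* called from the polling loop whenever fd becomes readable */
> >>     static void shared_port_dispatch(shared_port_t *s)
> >>     {
> >>         char buf[1500];
> >>         ssize_t n = recv(s->fd, buf, sizeof(buf), 0);
> >>         if (n <= 0)
> >>             return;
> >>         for (listener_t *l = s->listeners; l; l = l->next)
> >>             l->cb(l->owner, buf, (size_t)n);
> >>     }
> >>
> >> adding/removing a listener would just push/pop entries on that list,
> >> and the socket itself is only closed once the list becomes empty.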
> >>
> >> you might also have luck with multicast.
> >>
> >>
> >> mfg.hft.fsl
> >> IOhannes
> >>
> >>
>
>
>
> _______________________________________________
> Pd-dev mailing list
> Pd-dev at lists.iem.at
> https://lists.puredata.info/listinfo/pd-dev
>