[PD-dev] [mrpeach/tcpserver]: full socket send buffer blocks pd

Roman Haefeli reduzierer at yahoo.de
Wed Dec 16 00:35:27 CET 2009


Hi Martin

On Sat, 2009-12-12 at 14:09 -0500, Martin Peach wrote:
> Hi Roman, sorry for the late reply.

Complex matters require time.
> 
> > I wasn't sure if I should report that as a bug. The new version of
> > [tcpserver] seems to work fine, BUT it now suffers again from the
> > initial problem that maxlib's [netserver] is still suffering from:
> > 
> > If the send buffer of a certain socket is full, [tcpserver] is blocked
> > and thus blocks Pd.
> > 
> > From a user perspective, I can think of a few ways of dealing with
> > that. However, I don't have a clue what's possible or difficult to
> > implement.
> > 
> > a) If the server doesn't get the necessary ACKs within a meaningful
> > time, it shuts down the connection.
> > 
> 
> It's up to the OS to deal with ACKs. The programmer has no (easy) way of 
> accessing the low-level TCP communications. The send() function puts the 
> data in the queue and returns. If the queue is blocked you don't find 
> out until much later. The connection is closed some minutes after the 
> disconnect occurs.

I see.
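
(Translating that into Python for my own understanding, an untested
sketch; the function name is made up. send() returning only means the
kernel accepted the bytes into its send buffer, not that they arrived:)

    # Untested sketch: on a blocking socket, send() stalls once the
    # kernel send buffer is full; in non-blocking mode the same
    # condition raises an error instead, so Pd would not be blocked.
    def try_push(sock, payload):
        sock.setblocking(False)
        try:
            return sock.send(payload)   # bytes the kernel accepted
        except BlockingIOError:
            return 0                    # send buffer full right now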

> 
> > b) Instead of writing to the send buffer, [tcpserver] could output a
> > message saying that the data couldn't be sent.
> > 
> 
> Except it doesn't know the data couldn't be sent. I tried unplugging the 
> cable and then plugging it back in. Sometimes all the data that was sent 
> in the meantime will suddenly appear at the client, a few minutes after 
> it was sent from the server. Other times I get 'client disconnected' 
> messages a few minutes after the cable was unplugged. The timeout could 
> probably be adjusted with the Time To Live parameter for the socket but 
> it won't ever be zero.
> The internet protocols were designed to work over broken networks so 
> it's normal that they wouldn't fail at the first attempt to send. UDP 
> was intended for 'send and forget' messages. Maybe you could try 
> [udpsend] to send vast quantities of data, alongside [tcpserver] for 
> control.
> The previous incarnation of [tcpserver] used select() to see if there
> was still room in the buffer, but as we saw this slows the whole thing
> to a crawl, since the buffer has to be checked for every single byte in
> case it just filled up. It seems more efficient to have the client
> reply when it gets the message.
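
(If I read that right, a per-message rather than per-byte check in
Python would be roughly this; untested, the function name is made up:)

    import select

    # Untested sketch: ask once per message whether the socket can
    # accept more data; an empty write-list means the send buffer is
    # full and the message should be skipped or queued instead.
    def can_send(sock):
        _, writable, _ = select.select([], [sock], [], 0)  # 0 = no wait
        return bool(writable)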
> 
> > c) Change the behaviour of the 'sent' message from the last outlet so
> > that it reflects the number of bytes actually transmitted and not the
> > number of bytes written to the send buffer.
> > 
> 
> Again, it 'thinks' the data _was_ sent, so it returns the amount it
> sent, even if the data is still queued on the server, as long as the
> connection is still open.
> 
> > d) <put something here that I didn't think of>
> > 
> 
> I think you need to incorporate some kind of feedback from the clients. 
> Either each client could send a request or a heartbeat every second or 
> so, or the server could require a response to each message before 
> sending more. Or use a separate UDP channel for the kind of data that 
> could rapidly fill a buffer, like audio.
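
(Just to be sure I understand the heartbeat idea, something like this
on each client? Untested, the message format is made up:)

    import threading
    import time

    # Untested sketch of a client-side heartbeat: send a short message
    # every second so the server can drop clients it has not heard from
    # for a while.
    def start_heartbeat(sock, interval=1.0):
        def loop():
            while True:
                try:
                    sock.sendall(b"ping;\n")   # made-up short message
                except OSError:
                    break                      # connection gone, stop
                time.sleep(interval)
        threading.Thread(target=loop, daemon=True).start()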

As _I_ understand TCP (I might be wrong), it is supposed to handle all
transport-related issues, such as correct ordering, completeness, etc.
Of course, I could adapt the protocol I use and take over what I think
belongs to the transport layer, but because I consider checking the
responsiveness (or rather the availability) of the clients part of the
transport layer and not of the application layer, I am not going to
introduce that change.

I see that this is a very complex matter, and personally I understand
too little of it to have an idea of how those issues could be addressed
(in [tcpserver], for instance).

My personal experience: I tried to write a netpd-server in plain Python
based on the SocketServer module. I finally managed to do it, but I
ended up with a server that suffered from the exact same problem: when
the wfile.write() method didn't return because of a full socket buffer,
the whole server was blocked. People on #python advised me not to use
SocketServer directly, which many find hard to deal with because of
exactly such issues, but to use the python-twisted framework instead.
Doing it in Twisted was dead easy; basically all I had to do was
implement the netpd protocol. Twisted handles all the nasty issues
quite gracefully. If no ACKs are received on a specific socket for a
certain while (or once the send buffer is full, I don't know exactly),
it just disconnects the client. If the client becomes reachable again
after that, it receives the remaining content of the server's buffer
and is then disconnected. Both ends notice that something went wrong
and that they have to re-establish the connection from scratch in order
to get back to a state similar to the one before. In terms of netpd
this means that none of the patches' states are confirmed, so the
client has to request a state dump again. All of this can be handled
consistently, since the client can be sure that it either receives
everything or, if not, _certainly_ gets disconnected.
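
Stripped down, and from memory rather than the actual netpd code (the
class name and port are made up here), the Twisted side looked roughly
like this:

    from twisted.internet import protocol, reactor

    # From memory, not the actual netpd code. Twisted buffers writes
    # internally, so transport.write() returns immediately and never
    # blocks the reactor; a stalled or dead client eventually shows up
    # as a connectionLost() call.
    class NetpdProtocol(protocol.Protocol):
        def connectionMade(self):
            self.factory.clients.append(self)

        def connectionLost(self, reason):
            self.factory.clients.remove(self)

        def dataReceived(self, data):
            for client in self.factory.clients:  # relay to all clients
                client.transport.write(data)     # queued, non-blocking

    factory = protocol.Factory()
    factory.protocol = NetpdProtocol
    factory.clients = []
    reactor.listenTCP(3025, factory)             # port made up
    reactor.run()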

If I knew how threading works (in Python) and how to use it, I think I
would try to implement it this way:
Each socket would have its own queue. Whenever the server wants to send
a message to one or more clients, it would put the message on the
socket's queue. Each socket connection would run in its own thread and
try to send the messages from its queue to its respective client. If
the wfile.write() method didn't return immediately (let's say within a
time-out of some milliseconds), it would try again after some time. If
the queue of a certain client grows beyond a certain size limit, the
respective client would get disconnected. A rough sketch follows below.
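
In (untested) Python it could look roughly like this; the class name,
the queue limit and the time-out are made up:

    import queue
    import socket
    import threading

    # Untested sketch of the idea above: one queue and one sender thread
    # per client, so a slow client fills up its own queue and gets
    # disconnected without ever blocking the main thread.
    class ClientSender:
        LIMIT = 256                      # max queued messages (made up)

        def __init__(self, sock):
            self.sock = sock
            self.sock.settimeout(0.01)   # don't hang inside send()
            self.q = queue.Queue()
            threading.Thread(target=self._run, daemon=True).start()

        def send(self, msg):             # called from the main thread
            if self.q.qsize() >= self.LIMIT:
                self.sock.close()        # client too slow: drop it
            else:
                self.q.put(msg)

        def _run(self):
            while True:
                msg = self.q.get()
                while msg:
                    try:
                        n = self.sock.send(msg)
                        msg = msg[n:]            # keep unsent remainder
                    except socket.timeout:
                        continue                 # buffer full, try again
                    except OSError:
                        return                   # socket gone, stop thread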
I haven't really tried that approach (I haven't dug that deep into
Python), and probably it is not as easy as it may sound, but when I
heard of the new [tcpserver] I thought it might do something along
those lines.
I really don't mean to sound like a smart-ass, but do you think such an
approach could be feasible for [tcpserver]?

Cheerio
Roman 




