[PD] question about netclient and netserver

martin.peach at sympatico.ca
Wed Mar 24 18:23:36 CET 2010


reduzierer wrote:
>>> From what I know, there is an internal buffer of ~4kB for the sending
>>> sockets in both netclient and netserver (I can't recall whether this
>>> buffer is built into the externals or is part of the network subsystem
>>> of the OS). If that limit is hit, the Pd process is blocked until that
>>> buffer is emptied again (which leads obviously to audio drop-outs).
>>>
>>> IMHO, this is a design flaw, which all sending net-externals suffer
>>> from. There is no way to check the state of this buffer, which would be
>>> required in order to avoid a buffer overrun. It would be sufficient to
>>> get notified only, when the buffer is completely emptied, so that you
>>> could design your patch in a way that it would only send the next
>>> message when the previous one is through.
>>>
>>
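
As an aside: there is at least a standard way on POSIX systems to ask
whether a send would block: poll the socket for writability with a zero
timeout. A minimal C sketch of the idea, assuming a connected socket
descriptor "sockfd" (the function name and shape are mine, this is not
code from netclient or netserver):

#include <poll.h>

/* return 1 if send() would not block right now, 0 if it would, -1 on error */
static int socket_is_writable(int sockfd)
{
    struct pollfd pfd;
    pfd.fd = sockfd;
    pfd.events = POLLOUT;

    int n = poll(&pfd, 1, 0);   /* timeout 0: answer immediately, never wait */
    if (n < 0) return -1;
    return (n > 0 && (pfd.revents & POLLOUT)) ? 1 : 0;
}

That only says that *some* send would succeed, not how much room is
left in the buffer, but it is enough to avoid handing data to a socket
that would block.
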
>> IMHO it's not a design flaw, but part of TCP's design philosophy: keep trying to send packets until they get through. In the original concept, bombs would be dropping all over the countryside, destroying cables and data centres willy-nilly, while messages could still get through to the missile silos and AA gunners.
>
> The fact that those externals/objects block the Pd process has nothing
> to do with TCP's design philosophy. I am no expert in this field, but I
> know of other implementations in other programming languages that
> handle such situations more gracefully, for instance the Python twistd
> server, which the new netpd-server is built upon.

Try the latest version. I think what was blocking Pd was that it was trying to print thousands of error messages, which is not one of Pd's strong points. The Pd process blocked because [tcpserver] kept trying to send packets and then printing an error message whenever a send failed.
The new version stops trying to send when that happens, until it gets unblocked. I tried it and it works: it stops sending when it can't create sender threads, and then starts again gracefully when it is manually unblocked. All the intervening data is lost of course, but a "blocked" message is emitted through the status outlet, so the Pd patch can use a delay or something to restart the server.
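
Roughly, the fix amounts to never letting send() stall the Pd process.
A minimal single-threaded C sketch of that idea (the actual external
uses sender threads; the function name and shape here are mine, not the
[tcpserver] source):

#include <errno.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <sys/types.h>

/* returns bytes sent, 0 if the OS send buffer is full, -1 on a real error */
static ssize_t try_send(int sockfd, const void *buf, size_t len)
{
    /* make sure the socket is non-blocking (a real external would do
       this once, when the connection is accepted) */
    int flags = fcntl(sockfd, F_GETFL, 0);
    fcntl(sockfd, F_SETFL, flags | O_NONBLOCK);

    ssize_t n = send(sockfd, buf, len, 0);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return 0;   /* buffer full: report "blocked" and stop sending */
    return n;
}

When try_send() returns 0, the caller stops sending (and drops the
intervening data) until it is told to resume, instead of blocking Pd.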

> Imagine an Apache server being completely blocked because one of its
> clients refuses to receive the webpage quickly enough.

It's not the same thing: Apache doesn't send huge numbers of messages to clients that don't request them. If a client is dead, a single thread hangs until it times out. It won't try sending anything unless it is asked to.

The application you are using seems more like a video streaming server; those usually use UDP or something similar, with zero handshaking, so nothing hangs. And the receiver sees choppy, glitchy video as a result.
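
The tradeoff is visible right in the API: a UDP sender just queues a
datagram and returns, with no acknowledgment from the receiver. A
hypothetical C sketch (the address and port are made-up examples):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

static int send_datagram(const char *msg)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) return -1;

    struct sockaddr_in dest;
    memset(&dest, 0, sizeof dest);
    dest.sin_family = AF_INET;
    dest.sin_port = htons(9999);                      /* example port */
    inet_pton(AF_INET, "192.0.2.1", &dest.sin_addr);  /* example host */

    /* fire and forget: sendto() returns as soon as the datagram is
       queued; a dead or slow receiver never blocks us, the packets
       just get lost */
    ssize_t n = sendto(fd, msg, strlen(msg), 0,
                       (struct sockaddr *)&dest, sizeof dest);
    close(fd);
    return n < 0 ? -1 : 0;
}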

> But this is the
> current situation with the Pd net externals.

Which ones don't work? Only the TCP ones?

> Don't get me wrong, there
> is no point in rebuilding Apache in Pd, nor am I demanding that anyone
> change the situation. Those net externals generally are very useful
> and cover a wide range of applications, but they also fail in other
> situations, and that has _nothing_ to do with TCP's design,

Situations like what?

> they fail because the implementation of those objects is not designed to
> handle those situations.

Without knowing what those situations are, I can't say.


Martin