[PD] pd and tcp: what to do against crashes?

Roman Haefeli reduzierer at yahoo.de
Tue Feb 24 02:15:46 CET 2009


On Mon, 2009-02-23 at 19:05 -0500, Martin Peach wrote:
> Roman Haefeli wrote:
> > On Mon, 2009-02-23 at 21:03 +0000, Martin Peach wrote:
> >> OK I fixed it now in svn. It works on debian. The select() call was not 
> >> being done properly. Now I need to test it on Windows again.
> > 
> > hey, many thanks! it works. now i wonder what happens when the message
> > 'tcpserver_send_buf: client 1 not writeable' is triggered. does that
> > indicate that the buffer is cleared? does it mean that when this
> > message appears, at least one message didn't come through?
> > 
> 
> Right now it means that the message is dropped. I can't see a way of 
> holding on to it that wouldn't end up crashing Pd eventually if you keep 
> sending to an unconnected client.

do i understand correctly that if the buffer is full, there is a time
limit for it to become emptied, and if it is not emptied within that
interval, its content is cleared? if this is true, i think the
one-second interval is way too short. for instance, if a state dump
happens in netpd (probably several hundred messages), it could well be
that the connection is not fast enough to send all messages within the
given time, so they would be dropped. i guess for my own practice i'll
change the code to use a much longer time interval.
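
just to show how i understand the current behaviour, a rough sketch in
C (not the actual [tcpserver] source; the function name and the timeout
handling are made up by me):

/* sketch only: wait up to timeout_sec seconds for the socket to become
 * writeable, otherwise drop the message */
#include <stdio.h>
#include <sys/select.h>
#include <sys/socket.h>

static int send_or_drop(int sockfd, const char *buf, size_t len, int timeout_sec)
{
    fd_set wfds;
    struct timeval tv;

    FD_ZERO(&wfds);
    FD_SET(sockfd, &wfds);
    tv.tv_sec = timeout_sec;   /* e.g. 1 -> the one-second limit */
    tv.tv_usec = 0;

    if (select(sockfd + 1, NULL, &wfds, NULL, &tv) <= 0)
    {
        fprintf(stderr, "client not writeable, dropping %d bytes\n", (int)len);
        return 0;              /* the message is lost here */
    }
    return (int)send(sockfd, buf, len, 0);
}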

what is not solved yet: as in the previous version, a drop-out occurs
whenever a buffer overrun happens. unlike before, pd can no longer hang
forever (it will hang at most for the given time limit), but there is
still no mechanism to avoid drop-outs in general. 

> > somehow i need to design netpd in a way that as soon as a single
> > message is lost, the connection is shut down and established again,
> > and the client then syncs with the other clients again. otherwise
> > very bad things could happen (patches are not transmitted
> > completely, and loading incomplete patches crashes pd). 
> > 
> 
> Well the easiest thing would be to have [tcpserver] close the connection 
> itself when that happens.

it's just too easy to trigger that. i think it would lead to too many
unwanted disconnects. 

>  The next best would be to have it output a 
> message on a 'status' outlet that you could use to close the connection.

personally, i find this by far the better idea.
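
for illustration, roughly how i imagine such a status outlet could look
in the external's C code (only a sketch, all names made up, not the
actual [tcpserver] source):

/* sketch only: a 'status' outlet that reports a dropped message
 * together with the socket number, so the patch can decide itself
 * whether to close the connection */
#include "m_pd.h"

typedef struct _tcpserver_sketch
{
    t_object  x_obj;
    t_outlet *x_status_outlet;   /* extra rightmost outlet */
} t_tcpserver_sketch;

static void tcpserver_sketch_report_drop(t_tcpserver_sketch *x, int sockfd)
{
    t_atom a;
    SETFLOAT(&a, (t_float)sockfd);
    /* the patch would receive e.g. "dropped 5" from the status outlet */
    outlet_anything(x->x_status_outlet, gensym("dropped"), 1, &a);
}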

> > before the change i could be sure that either all messages came through
> > or the server crashed at some point if messages could not be delivered.
> > now, since the server doesn't crash anymore, i need to know if messages
> > were dropped. how can i know?

> At the moment it prints to the Pd window, which isn't much use for 
> control purposes. As I said, for me the easiest and most logical thing 
> is to have the connection closed automatically, but then you have to 
> keep track of the connection count to know whether it happened.
> What do you think?

without knowing how hard it would be to implement, the best solution IMO
(and the only one that addresses all of the above issues) would be for
the whole buffering to happen in the pd patch itself, so that the patch
could adapt to the current network conditions. translated into features,
this would mean that [tcpserver] needs to provide information about its
internal buffer state. the simplest and probably most effective thing i
can think of would be an additional outlet that sends a bang every time
the internal buffer is completely emptied. i don't know if there are
several buffers, one for each client; if so, a number (the socket
number) would probably be more appropriate than a bang. this way, a
patch can send only as many messages as the bandwidth allows. it would
also let the patch decide after what interval of not being able to send
messages the connection should be shut down. that interval could be set
dynamically, without the need to change the code of [tcpserver]. 
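
in code, the proposed outlet could look something like this (again only
a sketch with made-up names, assuming there is one send buffer per
client):

/* sketch only: once everything buffered for a client has been flushed
 * to the network, send that client's socket number out of an extra
 * outlet, so the patch knows it may release the next messages */
#include "m_pd.h"

typedef struct _flowctl_sketch
{
    t_object  x_obj;
    t_outlet *x_drained_outlet;  /* proposed 'buffer emptied' outlet */
} t_flowctl_sketch;

static void flowctl_sketch_after_send(t_flowctl_sketch *x, int sockfd, int bytes_left)
{
    /* called after each send(); bytes_left is what is still queued
     * for this client */
    if (bytes_left == 0)
        outlet_float(x->x_drained_outlet, (t_float)sockfd);
}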

i see that implementing those features would make the use of and
programming around [tcpserver] much more complex, though it would also
make it much more powerful. personally, i am all for giving the most
control to the patch programmer, since i believe that only then can pd
be used for robust programming. it probably comes down to whether
someone sees pd as a fully featured programming language or rather as a
tool for fast prototyping and 'quick hacking-together' à la 'reaktor'.
both expectations are valid, but speaking for myself, i never found
that things were _too_ low-level in pd. 
[tcpserver] is actually a good example for explaining what i mean: it
was originally designed to transport streams of data between the server
and clients. in order to transport packet-oriented protocols,
[tcpserver] would have needed to be adapted accordingly, and each
protocol would have required its own code. the fact that i can do all
that in pd lets me implement the protocols i personally need, without
touching the code of [tcpserver]. this way, i can extend the
functionality of [tcpserver] myself. the same would go for [tcpserver]
providing more information about its internal state: it would enable
the patch programmer to design a server around it for very particular
needs. 
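
as an illustration of what layering a packet-oriented protocol over the
stream can look like (just an example, not necessarily the framing
netpd uses): splitting the incoming bytes at ';' terminators, the
convention [netsend]/[netreceive] use, turns the stream into discrete
messages, and the same logic can be built inside a patch instead of C:

/* illustration only: collect bytes until a ';' terminator is seen,
 * then hand out one complete message */
#include <stdio.h>

static char msgbuf[4096];
static unsigned int msglen = 0;

static void feed(const char *data, unsigned int n)
{
    unsigned int i;
    for (i = 0; i < n; i++)
    {
        if (data[i] == ';')              /* end of one message */
        {
            msgbuf[msglen] = '\0';
            printf("message: %s\n", msgbuf);
            msglen = 0;
        }
        else if (msglen < sizeof(msgbuf) - 1)
            msgbuf[msglen++] = data[i];
    }
}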

this is what i think. sorry, that got quite long again. 

what do you think?


while we are at it: 
error handling is another (similar) issue. i would very much like
[soundfiler] or [textfile] to give me an appropriate message when a
file could not be found. this is just for the record; i know this has
come up a few times already.

roman



		




