[PD-dev] about [tcpserver] (mrpeach and iemnet)

Roman Haefeli reduzent at gmail.com
Mon Dec 13 19:01:04 CET 2010


On Mon, 2010-12-13 at 11:43 -0500, Hans-Christoph Steiner wrote:
> On Dec 13, 2010, at 10:19 AM, Roman Haefeli wrote:
> 
> > On Fri, 2010-12-10 at 10:54 -0500, Hans-Christoph Steiner wrote:
> >> On Dec 10, 2010, at 5:12 AM, IOhannes m zmölnig wrote:
> >>> On 12/09/2010 10:28 PM, Roman Haefeli wrote:
> >>>>
> >>>> @ IOhannes
> >>>> Though I like this 'stable'/reliable behaviour of iemnet's
> >>>> [tcpserver],
> >>>> I wonder what happens, if it keeps sending data to the unreachable
> >>>> client. Will it just go on and buffer everything until the whole
> >>>> RAM of
> >>>> the computer is consumed? If so, wouldn't it be more wise to just
> >>>> disconnect that client at some point in order to avoid the box
> >>>> running
> >>>> out of memory?
> >>>
> >>> you can query the fillstate of the buffer from within the patch and
> >>> act
> >>> upon that: if you prefer to disconnect after 300MB (because of the
> >>> 2.5GB
> >>> memory you have, 2GB are only swap), or if you rather go and crash  
> >>> or
> >>> whatever...it's up to you.
> >
> > Great! That's even better than to disconnect clients at some arbitrary
> > buffer size.
> >
> >> A 300 MB network buffer!  That sounds scary.
> >
> > It's not really a network buffer. If I understand correctly, it's the
> > RAM used by the Pd process that grows, when this buffer is filled. I
> > guess, there is nothing to be really scared about.
> >
> > Anyway, there is no 'real' alternative to buffering data. When a  
> > certain
> > client is not totally responsive,  you either have to make the server
> > blocking ([netsend]/[maxlib/netserver]), or you decide to discard data
> > that cannot be delivered ([mrpeach/tcpserver]), or you buffer the data
> > ([iemnet/tcpserver]). The last option is definitely the best, in my
> > opinion.
> >
> > Doing "networking with Pd" is special in that you use a real-time
> > oriented framework (Pd) together with a 'time-agnostic' one. TCP only
> > cares about _what_ is delivered, but not _when_. Accordingly, special
> > measures are necessary to join both worlds in a reliable way.
> >
> > Roman
> 
> I've never heard of an app do so much network buffering. 

Probably they don't because those other apps are not Pd? (No sarcasm
intended; I really think that many other apps are hardly comparable with
Pd.)

>  At some  
> point, that data needs to be processed, not just buffered.

Actually, I was trying to explain exactly that in my previous post. In
Pd, you can only process data 'now', not 'somewhere within the next two
hours'. So either the sending side needs to buffer, or it'll be
blocked.
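
To illustrate what I mean with a sketch (Python rather than Pd, and
nothing to do with how iemnet actually implements it): with a blocking
socket, sending stalls the caller until the peer has read enough; with a
non-blocking socket, whatever the OS won't accept right now has to be
queued somewhere, or dropped.

    import socket

    class BufferedSender:
        """Per-client outgoing buffer for a non-blocking socket (sketch only).
        Usage: sender = BufferedSender(socket.create_connection(addr))."""

        def __init__(self, sock):
            sock.setblocking(False)       # never stall the (real-time) caller
            self.sock = sock
            self.outbuf = bytearray()     # grows while the client is slow

        def send(self, data):
            """Queue data, push out as much as the OS accepts right now,
            and return the current fill state of the buffer."""
            self.outbuf += data
            try:
                sent = self.sock.send(self.outbuf)   # bytes actually taken
                del self.outbuf[:sent]
            except BlockingIOError:
                pass                                 # kernel buffer full, try again later
            return len(self.outbuf)                  # fill state the patch can act upon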

>   If you end  
> up having a giant buffer, chances are there is a problem that needs to  
> be dealt with.

The problem is that Pd is 'real-time'? I'm not sure I understand where
you're heading.

>  Might as well deal with it after a couple of megs are  
> filled rather than just some

The thing is that it's perfectly possible to drop a client after
buffering only a few kilobytes. There _might_ be situations where one
prefers to buffer more. I wouldn't say that this is problematic by
design. It really depends on the situation.

I'm sorry for constantly using netpd as an example, but part of it is
sending dumps of all settings of all instruments to a freshly connected
client. Obviously, I want neither the clients nor the server to be
blocked at any time. Also, I don't want to drop any data. What could be
implemented, though, would be to send only small chunks of data upon
request. This would create a huge programming overhead. It is _much_
easier and more transparent to buffer the data until everything is
transmitted. As a patch programmer, I am still able to decide whether
there is a problem or not. Let's say the buffer doesn't get smaller for
one minute (an arbitrary example): the server then decides that this
client is 'not responsive enough' and disconnects it. However, the most
important thing, in my eyes, is that you have the freedom to design your
networking Pd patch the way you prefer.
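
To make that concrete, the policy I have in mind amounts to something
like the following (a Python sketch, not Pd and not iemnet's code; in a
patch the same logic would hang off a [metro] that polls the buffer's
fill state):

    import time

    TIMEOUT = 60.0   # seconds without the buffer shrinking (arbitrary, as above)

    class ClientWatchdog:
        """Decides when a buffering client counts as 'not responsive enough'."""

        def __init__(self):
            self.last_fill = 0
            self.last_shrink = time.monotonic()

        def should_disconnect(self, fill):
            """Call periodically with the client's current buffer fill state."""
            now = time.monotonic()
            if fill == 0 or fill < self.last_fill:
                self.last_shrink = now    # data is draining, the client keeps up
            self.last_fill = fill
            return fill > 0 and (now - self.last_shrink) > TIMEOUT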

>  From what I've seen, most network buffering is done with a ring  
> buffer, so a fixed size.  That's my two bits...

I'm not knowledgeable enough to comment on the lack of a ring buffer. I
assume a fixed-size ring buffer would be more efficient than a
dynamically growing one? In this case, though, the importance and
necessity of a dynamic buffer probably outweigh the advantages of a
fixed-size ring buffer.
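
Just to spell out the trade-off as I understand it (again only a Python
sketch, nothing to do with iemnet's actual implementation): a fixed-size
ring buffer never grows, but once it is full it has to drop data; a
dynamically growing buffer never drops anything, and its size is limited
only by whatever policy the patch applies.

    from collections import deque

    ring = deque(maxlen=64 * 1024)   # fixed capacity: oldest bytes fall out when full
    growing = deque()                # unbounded: keeps everything until the patch reacts

    def enqueue(chunk):
        ring.extend(chunk)           # silently discards old data once the ring is full
        growing.extend(chunk)        # len(growing) is the fill state to watch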

In my humble opinion, the approach used by iemnet's classes is the most
suitable and most flexible one I can think of, and it is indispensable
for what I consider 'more elaborate' networking in Pd.

Roman




