[PD-dev] [tcpserver]: new 'clientbuf' method seems to be buggy

Roman Haefeli reduzierer at yahoo.de
Thu Apr 9 01:37:42 CEST 2009


On Wed, 2009-04-08 at 18:49 -0400, Martin Peach wrote:
> Roman Haefeli wrote:
> > On Mon, 2009-04-06 at 21:26 +0000, martin.peach at sympatico.ca wrote:
> >> Spoke too soon. On Debian setting the buffer to any size always
> >> returns me 2048, so that's no good.
> >> On WinXP some values (1,2) do what you said. Others (10,12) don't. I'm
> >> not sure what to do about that. It seems to be the OS.
> > 
> > Here, what [tcpserver] reports as the used buffer size is always twice
> > the number I send. For numbers below 1024, 2048 is reported; for numbers
> > higher than 131071, always 262142 is reported.
> > 
> 
> It's completely up to the operating system to manage the buffers the way 
> it chooses. If you ask for a buffer of a certain size you can get 
> anything above that size as a result. So Linux appears to never go below 
> 2048 while Windows gives you whatever you ask for.
> 
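The doubling Roman sees matches documented Linux behaviour: the kernel doubles the value passed to setsockopt(SO_SNDBUF) to allow for bookkeeping overhead, and enforces a floor (the doubled minimum is 2048). A minimal sketch to probe this yourself (the function name `probe_sndbuf` is mine, not from the [tcpserver] source):

```c
#include <sys/socket.h>
#include <unistd.h>

/* Request a send buffer of `requested` bytes on a fresh TCP socket
 * and return what getsockopt() reports afterwards.  On Linux the
 * kernel doubles the requested value and enforces a minimum, which
 * matches the 2x / 2048-floor behaviour described above. */
int probe_sndbuf(int requested)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    int actual = 0;
    socklen_t len = sizeof(actual);
    setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &requested, sizeof(requested));
    getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &actual, &len);
    close(fd);
    return actual;
}
```

On a typical Linux box, probe_sndbuf(4096) should report 8192; portable code can only assume the result is at least what was requested.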
> Anyway, today I changed [tcpserver] and [tcpclient] in svn to use a 
> default 1ms timeout in the select call before sending each byte. This 
> seems to give the buffer time to clear before attempting to send the 
> next byte.
> You can change the timeout in microseconds with the [timeout( message if 
> you want to tweak it. A value of zero works most of the time but the 
> problem of dropped data seems to disappear if you use about 1ms.
> The timeout should normally not delay Pd because it only comes into 
> effect when the buffer is not emptying fast enough.


Thanks for the update. I'll be happy to test.

roman

