[PD-dev] [tcpserver]: new 'clientbuf' method seems to be buggy

Martin Peach martin.peach at sympatico.ca
Thu Apr 9 00:49:11 CEST 2009


Roman Haefeli wrote:
> On Mon, 2009-04-06 at 21:26 +0000, martin.peach at sympatico.ca wrote:
>> Spoke too soon. On Debian setting the buffer to any size always
>> returns me 2048, so that's no good.
>> On WinXP some values (1,2) do what you said. Others (10,12) don't. I'm
>> not sure what to do about that. It seems to be the OS.
> 
> here, what [tcpserver] reports as the used buffersize is always twice
> the number i send. for numbers below 1024, 2048 is reported. for numbers
> higher than 131071, always 262142 is reported.
> 

It's completely up to the operating system to manage the buffers the way 
it chooses. If you ask for a buffer of a certain size you can get back 
anything at or above that size. So Linux appears never to go below 2048, 
while Windows gives you whatever you ask for.

Anyway, today I changed [tcpserver] and [tcpclient] in svn to use a 
default 1ms timeout in the select call before sending each byte. This 
seems to give the buffer time to clear before attempting to send the 
next byte.
You can change the timeout in microseconds with the [timeout( message if 
you want to tweak it. A value of zero works most of the time, but the 
dropped-data problem seems to disappear with a timeout of about 1ms.
The timeout should normally not delay Pd because it only comes into 
effect when the buffer is not emptying fast enough.
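In the spirit of that change, the per-byte send might look something like the sketch below. The helper name `send_byte_with_timeout` is hypothetical, not the actual [tcpserver] code: it waits up to `timeout_us` microseconds for the socket to become writable before sending the next byte.

```c
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

/* Hypothetical helper: before sending each byte, give the OS up to
   'timeout_us' microseconds for the send buffer to drain. */
static ssize_t send_byte_with_timeout(int fd, unsigned char byte,
                                      long timeout_us)
{
    fd_set wfds;
    FD_ZERO(&wfds);
    FD_SET(fd, &wfds);

    struct timeval tv;
    tv.tv_sec = timeout_us / 1000000;
    tv.tv_usec = timeout_us % 1000000;

    /* select() returns as soon as fd is writable, so the full timeout
       is only spent when the buffer is not emptying fast enough --
       the common case adds essentially no delay. */
    if (select(fd + 1, NULL, &wfds, NULL, &tv) <= 0)
        return -1; /* timed out or error; caller can retry or drop */

    return send(fd, &byte, 1, 0);
}
```

Calling it with `timeout_us = 1000` corresponds to the 1ms default described above.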

Martin

More information about the Pd-dev mailing list