[PD] JACK affects UDP rate

Christof Ressi info at christofressi.com
Wed Jul 28 03:51:25 CEST 2021


> On Linux, the receive buffer seems to be ~4 kB, as you already stated.

Where did I state that? :-) I don't have access to a Linux machine right 
now, but I *think* on many systems the default is much higher, something 
around 64 kB. You can check the system-wide default receive buffer size 
with "sysctl net.core.rmem_default". It's possible to override it per 
socket with setsockopt() + SO_RCVBUF.

BTW, on Windows the receive buffer size is 8 kB - which is quite low.
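
For illustration, here is roughly what the per-socket override looks 
like in C (the 256 kB value is just an arbitrary example; the kernel 
may clamp the request, e.g. to net.core.rmem_max on Linux):

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* ask the OS for a larger UDP receive buffer on an existing socket */
    static int set_receive_buffer(int sock, int size)
    {
        if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
                       &size, sizeof(size)) < 0) {
            perror("setsockopt(SO_RCVBUF)");
            return -1;
        }
        /* read back the actual value; Linux, for example, doubles the
           requested size and caps it at net.core.rmem_max */
        socklen_t len = sizeof(size);
        if (getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &size, &len) == 0)
            printf("actual receive buffer size: %d bytes\n", size);
        return 0;
    }

    /* e.g. set_receive_buffer(sock, 256 * 1024); */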

> The GUI sluggishness also only occurs when using JACK. Maybe GUI updates
> are handled by the same polling function?

Yes, check the source code of sys_pollgui(). GUI updates are only sent 
when there are no sockets to *read* from. Until recently, you could 
completely freeze the Pd GUI by sending a continuous fast stream of 
network data to Pd, because Pd would only receive a single packet per 
DSP tick (see https://github.com/pure-data/pure-data/issues/55). This 
no longer happens with the PortAudio backend, because we repeatedly 
poll the sockets while we're idle. Miller has also added some logic to 
sys_pollgui() which makes sure that GUI messages are sent at least 
every 0.5 seconds. This is probably what you are experiencing with the 
JACK backend.
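
Very roughly, and just as an illustration (this is *not* the actual Pd 
source, and the helper names are made up), the idea is something like:

    #include <sys/select.h>
    #include <sys/time.h>

    /* hypothetical stand-ins for the Pd internals */
    extern int gui_socket_count, gui_sockets[];
    extern void flush_gui_messages(void);

    static double now_seconds(void)
    {
        struct timeval tv;
        gettimeofday(&tv, 0);
        return tv.tv_sec + tv.tv_usec * 1e-6;
    }

    /* sketch of the idea behind sys_pollgui() */
    void pollgui_sketch(void)
    {
        static double last_flush = 0;
        fd_set readset;
        struct timeval zero = {0, 0};
        int i, maxfd = -1, readable;

        FD_ZERO(&readset);
        for (i = 0; i < gui_socket_count; i++) {
            FD_SET(gui_sockets[i], &readset);
            if (gui_sockets[i] > maxfd)
                maxfd = gui_sockets[i];
        }
        readable = select(maxfd + 1, &readset, 0, 0, &zero) > 0;

        /* send GUI updates only if no socket has pending data,
           but force a flush at least every 0.5 seconds */
        if (!readable || now_seconds() - last_flush > 0.5) {
            flush_gui_messages();
            last_flush = now_seconds();
        }
    }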

> Trying again: For the typical applications UDP is used for, it's
> probably desirable that the receive buffer is not too large. If the
> buffer is large and the incoming rate exceeds the processing
> capacity, you get large delays. Often (mostly?), fresh packets are more
> interesting than older ones. Of course, there is a trade-off between
> avoiding packet loss and keeping latency short.

Decreasing the buffer size and using the resulting packet loss as some 
kind of rate limiting sounds like a bad idea to me.

Generally, you want to avoid packet loss at the UDP receive buffer by 
any means. Note that UDP packets can arrive in bursts, e.g. because of 
buffering in network links. If the receive buffer were too small, you 
would lose packets even though you could process them (over time) just 
fine.

Here are two strategies to avoid packet loss (which can also be combined):

a) increase the receive buffer to match the maximum expected bandwidth 
* latency. For example, if you expect up to 1 MB/s of incoming traffic 
and the receive thread can block for up to 0.01 seconds due to packet 
processing, the receive buffer should be at least 1 MB/s * 0.01 s = 10 kB.

b) make sure to drain the UDP receive buffer as fast as possible. Many 
applications use a dedicated thread that just receives incoming 
datagrams and pushes them to the application's main thread for further 
processing (see the sketch below). One example is SuperCollider.
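
For (b), a minimal sketch with POSIX threads could look like this 
(queue_push() is a placeholder for whatever lock-free or mutex-protected 
FIFO the application uses to hand data over to its main thread):

    #include <pthread.h>
    #include <sys/socket.h>

    #define MAX_PACKET_SIZE 65536

    /* placeholder: hand a datagram over to the main thread */
    extern void queue_push(const char *data, int size);

    /* dedicated receive thread: drain the socket as fast as possible
       so that the OS receive buffer (almost) never fills up */
    static void *receive_thread(void *arg)
    {
        int sock = *(int *)arg;
        char buf[MAX_PACKET_SIZE];
        for (;;) {
            int n = (int)recvfrom(sock, buf, sizeof(buf), 0, 0, 0);
            if (n < 0)
                break;          /* error or socket closed */
            queue_push(buf, n);
        }
        return 0;
    }

    /* usage: pthread_create(&tid, 0, receive_thread, &sock); */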

---

Now let's assume you really need to apply some rate limiting, e.g. 
because you don't want to overload the audio thread in your audio 
application. Instead of simply counting the number of received packets, 
it often makes more sense to look at the packet content and decide based 
on the application type!

For example, you could have dedicated buffers for each remote endpoint. 
To ensure more fairness, you wouldn't restrict packet processing to just 
N packets per second, but rather process up to M packets *per endpoint* 
per second.
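
A toy sketch of such a per-endpoint limit (with made-up names, an 
arbitrary value for M and a fixed table instead of a proper hash map):

    #include <netinet/in.h>

    #define MAX_ENDPOINTS 64
    #define MAX_PACKETS_PER_SEC 1000    /* "M" from above, arbitrary */

    struct endpoint_counter {
        struct sockaddr_in addr;
        int count;
    };

    static struct endpoint_counter counters[MAX_ENDPOINTS];
    static int num_endpoints = 0;

    /* call once per second (e.g. from a timer) to reset all counters */
    void rate_limit_tick(void)
    {
        for (int i = 0; i < num_endpoints; i++)
            counters[i].count = 0;
    }

    /* returns 1 if a packet from 'from' may be processed, 0 if it
       should be dropped because the endpoint used up its quota */
    int rate_limit_allow(const struct sockaddr_in *from)
    {
        for (int i = 0; i < num_endpoints; i++) {
            if (counters[i].addr.sin_addr.s_addr == from->sin_addr.s_addr
                && counters[i].addr.sin_port == from->sin_port)
                return ++counters[i].count <= MAX_PACKETS_PER_SEC;
        }
        if (num_endpoints < MAX_ENDPOINTS) {    /* new endpoint */
            counters[num_endpoints].addr = *from;
            counters[num_endpoints].count = 1;
            num_endpoints++;
            return 1;
        }
        return 0;    /* table full */
    }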

Another example: If you receive OSC bundles with timestamps, it would be 
totally fine to receive a burst of 100 bundles because you just have to 
put them on a priority queue. You would only need to discard bundles if 
there are too many of them for a given timestamp!
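
Just to illustrate the idea with a toy example (a sorted array; a real 
implementation would rather use a heap or the scheduler the application 
already has):

    #include <stdint.h>
    #include <string.h>

    #define MAX_BUNDLES 1024
    #define MAX_BUNDLE_SIZE 256

    struct scheduled_bundle {
        uint64_t timetag;    /* OSC time tag */
        int size;
        char data[MAX_BUNDLE_SIZE];
    };

    static struct scheduled_bundle queue[MAX_BUNDLES];
    static int queue_size = 0;

    /* insert a bundle sorted by time tag; returns 0 only if the
       queue is full and the bundle has to be discarded */
    int schedule_bundle(uint64_t timetag, const char *data, int size)
    {
        int i, pos;
        if (queue_size >= MAX_BUNDLES || size > MAX_BUNDLE_SIZE)
            return 0;
        for (pos = 0; pos < queue_size; pos++)
            if (queue[pos].timetag > timetag)
                break;
        for (i = queue_size; i > pos; i--)
            queue[i] = queue[i - 1];
        queue[pos].timetag = timetag;
        queue[pos].size = size;
        memcpy(queue[pos].data, data, size);
        queue_size++;
        return 1;
    }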

Sometimes, different types of messages might have different priorities, 
so you could put them on dedicated queues. For time critical messages, 
you can minimize latency by using a short fixed-size buffer (and drop 
messages on overflow). For messages that are not urgent, the buffer can 
be larger or even unbounded.
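
For example, the time-critical queue could be a small ring buffer that 
simply drops on overflow (here the oldest message is dropped; dropping 
the newest instead would be the other option):

    #include <string.h>

    #define URGENT_QUEUE_SIZE 16    /* deliberately small to keep latency low */
    #define MAX_MESSAGE_SIZE 256

    static char urgent_queue[URGENT_QUEUE_SIZE][MAX_MESSAGE_SIZE];
    static int urgent_sizes[URGENT_QUEUE_SIZE];
    static int urgent_head = 0, urgent_count = 0;

    /* push a time-critical message; on overflow the oldest one is lost */
    void push_urgent(const char *data, int size)
    {
        int tail;
        if (urgent_count == URGENT_QUEUE_SIZE) {
            urgent_head = (urgent_head + 1) % URGENT_QUEUE_SIZE;
            urgent_count--;
        }
        tail = (urgent_head + urgent_count) % URGENT_QUEUE_SIZE;
        if (size > MAX_MESSAGE_SIZE)
            size = MAX_MESSAGE_SIZE;
        memcpy(urgent_queue[tail], data, size);
        urgent_sizes[tail] = size;
        urgent_count++;
    }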

Also, certain types of data are more redundant than others. With 
continuous data streams, like from an accelerometer sensor, you can drop 
every Nth packet without losing too much information. This is not true 
for individual command messages, which are either received or lost.

In other cases, like audio streams, you know exactly how many packets 
per second to expect. You would rather put a limit on the number of 
audio streams than start randomly dropping individual packets from all 
streams.

---

This should just demonstrate that rate limiting can be implemented in 
many different ways and that it is wrong to assume that "fresh" packets 
are automatically more relevant than older ones.

>> Hmmm... usually incoming UDP packets are discarded if the UDP receive
>> buffer is full. Are you saying that this is not the case on macOS?
> Yeah, they are discarded, too.

Thanks for verifying! I would have been quite surprised otherwise.

Christof