[PD] pd-extended crashes sending data to SSR with tcpclient

Matthias Geier matthias.geier at gmail.com
Tue Jul 2 17:07:47 CEST 2013


Hi Iain.

To be honest, I hadn't thought about the possibility that a single
message might need more than one packet.
It's good to know that iemnet/tcpclient can handle that.

@IOhannes: thanks for the suggestion. And (binary) 0 is indeed the
terminating character.
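
For anyone following along, the framing idea is simple: buffer the
incoming bytes until a binary 0 arrives, then emit everything before it
as one complete message (a single TCP packet may contain a partial
message, exactly one message, or several at once). Just to illustrate
the logic, here is a minimal C++ sketch - not Pd or SSR code, all names
are made up:

  #include <cstddef>
  #include <string>
  #include <vector>

  // Reassembles a byte stream into messages terminated by binary 0.
  class MessageAssembler {
  public:
      // Feed one received chunk; returns all messages completed by it.
      std::vector<std::string> feed(const char* data, std::size_t len) {
          std::vector<std::string> messages;
          for (std::size_t i = 0; i < len; ++i) {
              if (data[i] == '\0') {       // terminator: message complete
                  messages.push_back(_buffer);
                  _buffer.clear();
              } else {
                  _buffer += data[i];      // still inside a message
              }
          }
          return messages;                 // leftover bytes stay buffered
      }

  private:
      std::string _buffer;                 // partial message between chunks
  };

In a Pd patch the same thing is done by collecting the byte list coming
out of [tcpclient] and splitting it at the 0 bytes.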

On Mon, Jul 1, 2013 at 10:14 PM, Iain Mott <mott at reverberant.com> wrote:
> Using iemnet/tcpclient and implementing IOhannes' parsing suggestion, my
> patch is now communicating with SSR without crashing. There is a
> "bogging down" problem, though: testing with just 3 sources, I need to
> keep the limit at 10 messages/sec for each. It stops working at higher
> rates but doesn't crash. SSR is running on this local machine and there
> is no WiFi involved. Unfortunately I don't think UDP is an option with
> SSR.

No sorry, not for now.
But feel free to hack into the SSR code!

> SSR also sends XML "level" data to tcpclient constantly for each source
> in the scene. Perhaps this extra traffic isn't helping, e.g.:
>
> <update><source id='1' level='-98.5405'/><source id='2'
> level='-99.8139'/><source id='3' level='-99.6628'/><source id='4'
> level='-101.127'/></update>
>
> I'll wait to hear back from SSR to see if they have any suggestions.

I guess you are talking to me ...

There is one quick and hackish way to avoid the level messages:
Go to src/boostnetwork/connection.cpp (around line 102) and remove the line

  _subscriber.send_levels();

... and recompile. This should get rid of the annoying "level" messages.
The SSR still sends all other messages, but if desired you can disable
them in a similar manner.
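
If you prefer not to delete the line outright, you could also just
guard it with a compile-time switch, roughly like this (SSR_SEND_LEVELS
is not an existing SSR macro, just an illustration):

  // in src/boostnetwork/connection.cpp, around line 102:
  #ifdef SSR_SEND_LEVELS
    _subscriber.send_levels();
  #endif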

I'm aware that this isn't a satisfactory long-term solution, but for
now it may help.

We have big plans to modularize the network interface of the SSR so
that different network protocols (e.g. WebSockets, FUDI, OSC, ...) can
be used interchangeably.
MIDI could probably also be included there.

In addition, we want to implement a publish-subscribe mechanism (for
all protocols that have a back-channel) which lets clients select
exactly which information to receive from the SSR, and probably at
what rate.
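
None of this exists yet, but just to illustrate the idea: a client
could tell the SSR which kinds of updates it wants and at what maximum
rate, roughly along these lines (entirely hypothetical - the message
format and all names are invented for illustration):

  #include <string>

  // Hypothetical subscription request a client might send once the
  // planned publish-subscribe mechanism exists.
  std::string make_subscription_request(bool want_levels,
                                        bool want_positions,
                                        int max_updates_per_second)
  {
      std::string request = "<request><subscribe";
      request += want_levels    ? " levels='true'"    : " levels='false'";
      request += want_positions ? " positions='true'" : " positions='false'";
      request += " max_rate='" + std::to_string(max_updates_per_second) + "'";
      request += "/></request>";
      return request;
  }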

However, currently we just don't have the resources to make these changes.

@list: if anyone wants to help, feel free to contact us: ssr at spatialaudio.net!

BTW, some advertisement: did everyone check out the brand-new
"preview" version of the SSR? http://spatialaudio.net/ssr/download/
It also features multi-threading and the brand-new (and still quite
experimental) NFC-HOA renderer!

cheers,
Matthias

> Cheers and thanks for your help everyone,
>
> Iain
>
> On Mon, 2013-07-01 at 14:29 -0400, Martin Peach wrote:
>> Forty times a second is relatively slow. Must be something else. I would
>> use wireshark to see what packets are actually going over the wire,
>> especially to see what the last one is.
>> These speeds are probably too fast for [print]ing to the console; that
>> can cause problems.
>> Are you sending to the same machine? If not, is WiFi involved?
>> Can you use UDP instead of TCP (for lower overhead and no out-of-order
>> packets)?
>>
>> Martin
>>
>> On 2013-07-01 13:58, Iain Mott wrote:
>> > Hi Martin,
>> >
>> > The actual patch I'm using translates MIDI pitch bend data recorded
>> > in Ardour3 (location data encoded as pitch bend for practical purposes)
>> > into XML and sends it through to the SSR. It's already limiting the
>> > rate to 10 messages per second for each moving source, and so far I'm
>> > only using 4 sources. This rate, chosen for testing, is already less
>> > than ideal. Each location message sent to the SSR for a given source
>> > looks something like the following:
>> >
>> > <request><source id="1"><position x="1.234"
>> > y="-0.234"/></source></request>
>> >
>> > Does this seem excessive?
>> >
>> > Cheers,
>> >
>> > Iain
>> >
>> >
>> >
>> > On Mon, 2013-07-01 at 13:20 -0400, Martin Peach wrote:
>> >> It could be that you are overloading Pd with too many messages. If you
>> >> are wildly moving the slider and [tcpclient] is sending one TCP packet
>> >> per value, you can add messages to the queue faster than they will be
>> >> sent out, and Pd will eventually run out of resources.
>> >>
>> >> Maybe put a [speedlim] after your slider, or pack several values into
>> >> one message?
>> >>
>> >> Martin
>> >>
>> >>
>> >>
>> >> On 2013-07-01 11:53, Iain Mott wrote:
>> >>>
>> >>>>> I'll try the backtrace and other things you suggest and report back
>> >>>>> on mrpeach/tcpclient in another email.
>> >>>>
>> >>>> it could well be that it only doesn't crash with [iemnet/tcpclient]
>> >>>> because you haven't parsed the output yet...
>> >>>>
>> >>>
>> >>> I don't think so - when Pd crashed, I wasn't doing any parsing of
>> >>> incoming messages - just sending messages out.
>> >>>
>> >>> I did a backtrace using mrpeach/tcpclient - on a "freeze", as it didn't
>> >>> actually crash. Got this output:
>> >>>
>> >>> #0  0x0000000000442623 in clock_unset (x=0x8c5c80) at m_sched.c:70
>> >>> #1  clock_unset (x=0x8c5c80) at m_sched.c:62
>> >>> #2  0x000000000044266e in clock_set (x=0x8c5c80, setticks=<optimised out>)
>> >>>     at m_sched.c:81
>> >>> #3  0x00007fffd21cfec1 in tcpclient_child_send (w=0xdec548)
>> >>>     at /home/kiilo/Documents/dev/pd-svn/externals/mrpeach/net/tcpclient.c:380
>> >>> #4  0x00007ffff7bc4e9a in start_thread ()
>> >>>     from /lib/x86_64-linux-gnu/libpthread.so.0
>> >>> #5  0x00007ffff6ec0ccd in clone () from /lib/x86_64-linux-gnu/libc.so.6
>> >>> #6  0x0000000000000000 in ?? ()
>> >>>
>> >>>
>> >>> Will do some more tests later.
>> >>>
>> >>> Thanks,


