[PD] pd-extended crashes sending data to SSR with tcpclient

Iain Mott mott at reverberant.com
Mon Jul 1 22:14:34 CEST 2013


Using iemnet/tcpclient and implementing IOhannes' parsing suggestion, my
patch is now communicating with SSR without crashing. There is a
"bogging down" problem though: testing with just 3 sources, I need to
keep the rate limited to 10 messages/sec for each source. At higher
rates it stops working, but it doesn't crash. SSR is running on the same
local machine and there is no WiFi involved. Unfortunately I don't think
UDP is an option with SSR.
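
For reference, here is a rough standalone sketch (Python, outside Pd) of
the kind of rate-limited sending the patch does. The port number 4711,
the binary-zero message terminator and the dummy trajectory are all
assumptions about my setup, so adjust as needed - the idea is just to
check whether the bottleneck is Pd/tcpclient or SSR itself:

import socket
import time

HOST, PORT = 'localhost', 4711    # assumed SSR address and port - adjust
RATE = 10.0                       # messages per second, per source
SOURCES = 3                       # number of moving sources in the test
DURATION = 10.0                   # seconds to run

def request(src_id, x, y):
    # one position request, the same shape as the messages my patch sends
    msg = ('<request><source id="%d">'
           '<position x="%.3f" y="%.3f"/></source></request>'
           % (src_id, x, y))
    return msg.encode() + b'\0'   # assuming a binary-zero terminator

sock = socket.create_connection((HOST, PORT))
interval = 1.0 / RATE
t = 0.0
while t < DURATION:
    for sid in range(1, SOURCES + 1):
        # dummy trajectory, just to generate traffic
        sock.sendall(request(sid, 0.5 * t, -0.5 * t))
    time.sleep(interval)
    t += interval
sock.close()

Cranking RATE up in a script like this, with Pd out of the picture,
would at least show whether SSR alone keeps up at higher rates.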

SSR also constantly sends XML "level" data to tcpclient for each source
in the scene. Perhaps this extra traffic isn't helping, e.g.:

<update><source id='1' level='-98.5405'/><source id='2'
level='-99.8139'/><source id='3' level='-99.6628'/><source id='4'
level='-101.127'/></update>
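
Just to show what the parsing has to pull out of each of these messages,
here is a quick sketch (again Python rather than Pd, and assuming each
message arrives as one complete <update> element) that extracts the
id/level pairs:

import xml.etree.ElementTree as ET

msg = ("<update><source id='1' level='-98.5405'/>"
       "<source id='2' level='-99.8139'/>"
       "<source id='3' level='-99.6628'/>"
       "<source id='4' level='-101.127'/></update>")

root = ET.fromstring(msg)             # one complete <update> message
for src in root.findall('source'):
    sid = int(src.get('id'))
    level = float(src.get('level'))   # level value as reported by SSR
    print(sid, level)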

I'll wait to hear back from SSR to see if they have any suggestions.

Cheers and thanks for your help everyone,

Iain

On Mon, 2013-07-01 at 14:29 -0400, Martin Peach wrote:
> Forty times a second is relatively slow. It must be something else. I
> would use Wireshark to see what packets are actually going over the
> wire, especially to see what the last one is.
> These speeds are probably too fast for [print]ing to the console; that 
> can cause problems.
> Are you sending to the same machine? If not is WiFi involved?
> Can you use UDP instead of TCP (for lower overhead and no out-of-order 
> packets)?
> 
> Martin
> 
> On 2013-07-01 13:58, Iain Mott wrote:
> > Hi Martin,
> >
> > The actual patch I'm using takes MIDI pitch bend data recorded in
> > Ardour3 (location data encoded as pitch bend for practical purposes),
> > translates it into XML and sends it through to SSR. It's already
> > limiting the rate to 10 messages per second for each moving source,
> > and so far I'm only using 4 sources. This rate, set for testing, is
> > already less than ideal. Each location message sent to SSR for a
> > given source looks something like the following:
> >
> > <request><source id="1"><position x="1.234"
> > y="-0.234"/></source></request>
> >
> > Does this seem excessive?
> >
> > Cheers,
> >
> > Iain
> >
> >
> >
> > On Mon, 2013-07-01 at 13:20 -0400, Martin Peach wrote:
> >> It could be that you are overloading Pd with too many messages. If you
> >> are wildly moving the slider and [tcpclient] is sending one TCP packet
> >> per value, you can add messages to the queue faster than they can be
> >> sent out, and Pd will eventually run out of resources.
> >>
> >> Maybe put a [speedlim] after your slider, or pack several values into
> >> one message?
> >>
> >> Martin
> >>
> >>
> >>
> >> On 2013-07-01 11:53, Iain Mott wrote:
> >>>
> >>>>> I'll try the backtrace and other things you suggest and report back
> >>>>> on mrpeach/tcpclient in another email.
> >>>>
> >>>> it could well be that it only doesn't crash with [iemnet/tcpclient]
> >>>> because you haven't parsed the output yet...
> >>>>
> >>>
> >>> I don't think so - when Pd crashed I wasn't doing any parsing of
> >>> incoming messages, just sending messages out.
> >>>
> >>> I did a backtrace using mrpeach/tcpclient - on a "freeze", as it
> >>> didn't actually crash - and got this:
> >>>
> >>> #0  0x0000000000442623 in clock_unset (x=0x8c5c80) at m_sched.c:70
> >>> #1  clock_unset (x=0x8c5c80) at m_sched.c:62
> >>> #2  0x000000000044266e in clock_set (x=0x8c5c80, setticks=<optimised out>) at m_sched.c:81
> >>> #3  0x00007fffd21cfec1 in tcpclient_child_send (w=0xdec548) at /home/kiilo/Documents/dev/pd-svn/externals/mrpeach/net/tcpclient.c:380
> >>> #4  0x00007ffff7bc4e9a in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
> >>> #5  0x00007ffff6ec0ccd in clone () from /lib/x86_64-linux-gnu/libc.so.6
> >>> #6  0x0000000000000000 in ?? ()
> >>>
> >>>
> >>> Will do some more tests later.
> >>>
> >>> Thanks,
> >>>
> >>>
> >>>
> >>
> >
> >
> >
> >
> 

More information about the Pd-list mailing list