[PD] Message from the boss of Raspberry Pi Foundation! (Jon)

Scott R. Looney scottrlooney at gmail.com
Sat Feb 9 19:32:50 CET 2013


I read that whole discussion thread on the RPi board, and based on it the
situation sounds murky at best: GPU processing isn't going to become a
reality on the RPi anytime soon without Broadcom's support/permission.
JamesH, one of their main hardware dev guys, seems pretty convinced that the
video code can't be reverse engineered, while everyone else seems to have
stars in their eyes about it.

Agreed, the idea that Eben might be working on this by getting Broadcom to
agree to open up certain pathways to processing via the GPU is a very
tantalizing prospect. Someone should press him a bit on this...

scott




On Sat, Feb 9, 2013 at 4:57 AM, Pierre Massat <pimassat at gmail.com> wrote:

> Hi Jon,
>
> Thank you for your insight, and welcome to the list!
>
> I personally think that for most real-time applications (like the guitar
> effects processor I'm using), a latency of 8 to 10 ms or below is
> definitely acceptable (especially considering the price). I could imagine
> many applications for other instruments that would work just fine with such
> a latency.
>
> Pd currently works fine with no GUI at 10 ms with simple patches. One has
> to increase the latency to 16 ms (maybe more for very heavy patches) if one
> needs to do FFTs or other demanding stuff (I used a phase vocoder in my
> video).
>
> So if using the GPU for DSP doesn't reduce latency, but allows for bigger
> patches, it's already great news.
>
> Cheers,
>
> Pierre.
>
> 2013/2/9 Jonathan Sheaffer <jon at jonsh.net>
>
>> Hi All,
>>
>> I've been a silent observer for some time now, but since GPU processing
>> is 'close to my heart', I thought I'd jump in... So here goes my first
>> post on the pd-list...
>>
>> In general, GPUs are really beneficial for parallelisable algorithms
>> involving heavy computation, such as FFTs, fast convolution, BLAS with
>> huge matrices, finite difference modelling, etc. To maximise performance,
>> the GPU kernel needs to operate uniformly over a large enough data set,
>> which needs to be copied into the device's memory, as GPUs generally can't
>> access the host memory. This means large buffers --> increased
>> latency. So doing 'real-time' DSP on a GPU would probably make more sense
>> for stuff like physical modelling, complex additive synthesis, etc.,
>> rather than for 'generally reducing the system latency'.
>>
>> *However*, if the SoC platform physically shares memory between the GPU
>> and the CPU, then this could, in theory, help reduce the inherent latency
>> (as no memory transfers would be required), but without detailed
>> documentation this would be difficult to assess.
>>
>> Cheers,
>> Jon.
>>
>> www.jonsh.net
>>
>> _______________________________________________
>> Pd-list at iem.at mailing list
>> UNSUBSCRIBE and account-management ->
>> http://lists.puredata.info/listinfo/pd-list
>>
>>
>
> _______________________________________________
> Pd-list at iem.at mailing list
> UNSUBSCRIBE and account-management ->
> http://lists.puredata.info/listinfo/pd-list
>
>
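As a back-of-the-envelope check on the latency figures Pierre mentions above: at
an assumed 44.1 kHz sample rate, 10 ms of audio delay is about 441 samples, i.e.
roughly seven of Pd's default 64-sample blocks, and 16 ms is about 706 samples.
A minimal C sketch of that arithmetic (the sample rate and block size are
assumptions for illustration, not details from Pierre's setup):

    #include <stdio.h>

    /* Back-of-the-envelope conversion of the audio delays discussed above
     * (8, 10 and 16 ms) into samples and default-size Pd blocks.
     * Sample rate and block size are assumptions for illustration. */
    int main(void)
    {
        const double sample_rate = 44100.0;              /* assumed rate */
        const int    block_size  = 64;                   /* Pd's default block size */
        const double delays_ms[] = { 8.0, 10.0, 16.0 };  /* figures from the thread */

        for (int i = 0; i < 3; i++) {
            double samples = delays_ms[i] * sample_rate / 1000.0;
            double blocks  = samples / block_size;
            printf("%5.1f ms  ->  %6.1f samples  (~%.1f blocks of %d)\n",
                   delays_ms[i], samples, blocks, block_size);
        }
        return 0;
    }

So even the "demanding" 16 ms setting amounts to only about eleven blocks of
buffering; the numbers Pierre quotes are modest in absolute terms.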
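To make Jon's point about memory transfers concrete, below is a minimal sketch of
the usual pattern on a discrete GPU, written in CUDA purely for illustration (the
Pi's VideoCore GPU has no comparable public compute API, which is exactly what
the thread is about). Every audio block is copied to device memory, processed,
and copied back; the two copies sit in the signal path and add latency on top of
the kernel itself. The gain kernel and the 441-sample block are made-up stand-ins
for real DSP.

    #include <cuda_runtime.h>
    #include <stdio.h>

    /* Trivial gain stage standing in for real DSP (FFT, convolution, ...).
     * The interesting part is the pattern around it: copy in, compute,
     * copy out -- once per audio block. */
    __global__ void apply_gain(float *buf, int n, float gain)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            buf[i] *= gain;
    }

    int main(void)
    {
        const int n = 441;                  /* ~10 ms of mono audio at 44.1 kHz */
        float host_buf[441] = { 0.0f };     /* stand-in for one block from the ADC */
        float *dev_buf;

        cudaMalloc((void **)&dev_buf, n * sizeof(float));

        /* Host -> device copy: unavoidable when the GPU cannot see host RAM. */
        cudaMemcpy(dev_buf, host_buf, n * sizeof(float), cudaMemcpyHostToDevice);

        apply_gain<<<(n + 255) / 256, 256>>>(dev_buf, n, 0.5f);

        /* Device -> host copy: the second transfer added to the signal path. */
        cudaMemcpy(host_buf, dev_buf, n * sizeof(float), cudaMemcpyDeviceToHost);

        cudaFree(dev_buf);
        printf("processed one %d-sample block\n", n);
        return 0;
    }

On small blocks the copies and launch overhead can easily dominate the compute,
which is why Jon suggests the GPU pays off for big, parallel workloads (physical
modelling, complex additive synthesis) rather than for shaving latency.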
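Jon's second point, about an SoC that physically shares memory between CPU and
GPU, corresponds roughly to what CUDA exposes as mapped ("zero-copy") host memory
on platforms that support it: the kernel works directly on a host allocation and
the explicit copies disappear. Whether the Pi's VideoCore could ever be driven
like this is the open question in the thread; the sketch below only shows the
generic idea, again with made-up sizes.

    #include <cuda_runtime.h>

    __global__ void apply_gain(float *buf, int n, float gain)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            buf[i] *= gain;
    }

    int main(void)
    {
        const int n = 441;
        float *host_buf, *dev_view;

        /* Page-locked host memory the GPU can address directly (zero-copy):
         * no cudaMemcpy in the audio path at all. */
        cudaSetDeviceFlags(cudaDeviceMapHost);
        cudaHostAlloc((void **)&host_buf, n * sizeof(float), cudaHostAllocMapped);
        cudaHostGetDevicePointer((void **)&dev_view, host_buf, 0);

        apply_gain<<<(n + 255) / 256, 256>>>(dev_view, n, 0.5f);
        cudaDeviceSynchronize();            /* results now visible in host_buf */

        cudaFreeHost(host_buf);
        return 0;
    }

Even then the audio still has to be produced and consumed block by block, so
shared memory removes the transfer cost but not the buffering itself.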

