[PD] Message from the boss of Raspberry Pi Foundation! (Jon)

Jonathan Sheaffer jon at jonsh.net
Sat Feb 9 13:24:30 CET 2013

Hi All,

I've been a silent observer for some time now, but since GPU processing is
'close to my heart', I thought I'd jump in... So here goes my first post
on the pd-list...

In general, GPUs are most beneficial for parallelisable algorithms
involving heavy computation, such as FFTs, fast convolution, BLAS with
huge matrices, finite difference modelling, etc... To maximise performance,
the GPU kernel needs to operate uniformly over a large enough data set,
which must first be copied into the device's memory, as GPUs generally
can't access host memory directly. This means large buffers --> increased
latency. So doing 'real-time' DSP on a GPU would probably make more sense
for things like physical modelling, complex additive synthesis, etc...,
than for 'generally reducing the system latency'.
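As a back-of-envelope illustration of the buffering point above (the block
sizes and sample rate here are just assumptions for the sake of the example,
not anything specific to Pd's internals):

```python
# Back-of-envelope: latency added by accumulating a buffer of audio
# before it can be shipped to the GPU. Illustrative figures only.

def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """Time to fill one buffer of audio at the given sample rate."""
    return 1000.0 * buffer_samples / sample_rate_hz

# A small CPU-friendly block of 64 samples at 44.1 kHz:
print(round(buffer_latency_ms(64, 44100), 2))    # ~1.45 ms
# A GPU-friendly batch of 4096 samples:
print(round(buffer_latency_ms(4096, 44100), 2))  # ~92.88 ms
```

The two numbers show the trade-off: batching enough samples to keep a GPU busy
adds tens of milliseconds of buffering delay before the transfer even starts.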

*However*, if the SoC platform physically shares memory between the GPU and
the CPU, then this could, in theory, help reduce that inherent latency (as
no memory transfers would be required), but without detailed documentation
this is difficult to assess.
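To make that trade-off concrete, here is a toy per-buffer cost model in
Python; the bandwidth and kernel-time figures are purely hypothetical
placeholders, not measurements of any real SoC:

```python
# Toy model: per-buffer GPU cost with and without a host->device copy.
# Passing copy_bandwidth_gbs=None models shared CPU/GPU memory (zero-copy),
# leaving only the kernel time. All figures are hypothetical.

def per_buffer_cost_ms(buffer_bytes, copy_bandwidth_gbs, kernel_ms):
    """Copy time (if any) plus kernel time, in milliseconds."""
    if copy_bandwidth_gbs is None:
        copy_ms = 0.0  # shared memory: no transfer needed
    else:
        copy_ms = 1000.0 * buffer_bytes / (copy_bandwidth_gbs * 1e9)
    return copy_ms + kernel_ms

buf = 4096 * 4  # 4096 float32 samples
discrete = per_buffer_cost_ms(buf, 6.0, 0.5)   # assumed 6 GB/s bus
shared   = per_buffer_cost_ms(buf, None, 0.5)  # zero-copy path
print(discrete > shared)  # True: the copy adds overhead on every block
```

The copy overhead here is small per block, but it recurs on every audio block
and comes on top of the buffering delay, which is why a shared-memory design
could matter for real-time use.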


