[PD] Correct way to process an audio stream
alessio.degani at ymail.com
Fri Jan 16 12:23:18 CET 2015
Excuse me for this little technical OT :)
The problem is about processing an audio stream in real time, but let me
put it in the context of Pd in order to fix the ideas.
When you want to write a Pd external that processes an audio stream:
1- Pd passes the chunked audio stream to the inlet of the external and
reads an audio chunk from the outlet of that external.
Each input and output chunk has a fixed length of, say, Nb samples. The
chunks are NON-overlapped, adjacent chunks of the audio stream to be processed.
2- The external reads this chunk and triggers the processing method,
which processes the input chunk and returns the processed output chunk.
Now, suppose my processing routine requires overlapped chunks (with an
overlap of o%) of length Nf, where in general Nf >= Nb.
At this point, it is the programmer of the external who is in charge of
managing the data structures and the overlapping scheme.
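For what it's worth, one common way to manage this (a sketch, not a Pd-specific API: the names make_overlap_processor, process_frame, Nb, Nf and h are my own, and the assumptions h <= Nb, Nb % h == 0 and Nf % Nb == 0 keep frame boundaries block-aligned) is an input FIFO that accumulates blocks until a full Nf-sample frame is available, plus an overlap-add accumulator that emits h samples per processed frame:

```python
import numpy as np

def make_overlap_processor(Nb, Nf, h, process_frame):
    """Return a callable that maps one Nb-sample input block to one
    Nb-sample output block, internally forming Nf-sample frames with
    hop h and overlap-adding the processed frames.  process_frame is
    a placeholder for the actual per-frame DSP (it must return an
    Nf-sample array)."""
    assert h <= Nb and Nb % h == 0 and Nf % Nb == 0
    in_fifo = np.zeros(0)            # samples waiting to complete a frame
    ola = np.zeros(Nf)               # overlap-add accumulator
    out_fifo = np.zeros(Nf - h)      # priming zeros = added latency

    def push_block(block):
        nonlocal in_fifo, ola, out_fifo
        assert len(block) == Nb
        in_fifo = np.concatenate([in_fifo, block])
        while len(in_fifo) >= Nf:
            frame = process_frame(in_fifo[:Nf].copy())
            ola += frame
            out_fifo = np.concatenate([out_fifo, ola[:h]])  # emit one hop
            ola = np.concatenate([ola[h:], np.zeros(h)])    # slide the OLA buffer
            in_fifo = in_fifo[h:]                           # advance by the hop
        out, out_fifo = out_fifo[:Nb], out_fifo[Nb:]
        return out

    return push_block
```

With an identity process_frame and no windowing, each output sample is the sum of Nf/h overlapping frames, so a constant input comes out scaled by the overlap factor and delayed by Nf - h samples; a real routine would apply analysis/synthesis windows chosen so the overlapped windows sum to a constant.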
My question is:
Is there a standard way, a "design pattern", to manage this kind of
processing? In the past I've written my own routines that handle the
simple case of real-time processing with Nf = Nb and an overlap of 50%.
It is clear that, in general, this kind of processing introduces a delay
of one block (Nb) or more.
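As a rough way to quantify that delay (my own back-of-the-envelope formula, assuming a sliding-frame scheme where a full frame must be buffered before the first hop of output can be emitted; the sample rate is only an example):

```python
SR = 44100  # assumed sample rate for the example

def extra_latency_samples(Nf, h):
    """Delay added by frame buffering with frame length Nf and hop h,
    on top of Pd's own block I/O (which adds at least one block Nb)."""
    return Nf - h

# e.g. Nf = 1024 with 50% overlap (h = 512):
print(extra_latency_samples(1024, 512))               # 512 samples
print(1000 * extra_latency_samples(1024, 512) / SR)   # about 11.6 ms
```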
Given Nb, Nf and o% (or equivalently the hop size h), is there a
standard way to design my internal processing routine?
By "standard way" I mean a common mode of operation that is formally
correct and efficient (clearly there are, potentially, tons of methods
to do this).
Thank you, and excuse me again for this OT