[PD] ipoke~ ?

Miller Puckette msp at ucsd.edu
Thu Jun 14 20:41:01 CEST 2012


I've been thinking about this for some days.  I agree there are two
fundamentally different approaches (A: deal with each incoming sample
independently, for each one adding some sort of filter kernel into the
table; or B: advance systematically through the table, filling each
point by interpolating from the input signal).
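To make the contrast concrete, here is a minimal sketch of approach A
in C (the names "deposit" and "kernel" are hypothetical, and the
unit-integral tent kernel stands in for whatever kernel one actually
chooses - a sketch of the idea, not ipoke~'s code):

    #include <math.h>

    /* unit-integral tent kernel on [-1, 1], i.e. linear interpolation;
       a stand-in for whatever kernel one actually chooses */
    static double kernel(double x)
    {
        double ax = fabs(x);
        return ax < 1 ? 1 - ax : 0;
    }

    /* approach A: each incoming sample deposits a scaled copy of the
       kernel into the table, centered on its fractional write position */
    static void deposit(float *buf, int n, double pos, float sample)
    {
        int c = (int)floor(pos);
        for (int i = 0; i <= 1; i++)       /* the taps the tent covers */
        {
            int idx = ((c + i) % n + n) % n;     /* wrap into the table */
            buf[idx] += sample * kernel(c + i - pos);   /* accumulate */
        }
    }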

I think in approach A it's better not to attempt to normalize for speed,
since there would always be a step where you have to differentiate the
location pointer to get a speed value, and that's fraught with numerical
peril.  Plus, we don't know how much the 'user' will know about write
pointer speed - perhaps there's a natural way to determine that, in
which case the numerically robust thing is to have the user take care
of it appropriately for the situation.
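To see the peril concretely, here is a toy demonstration (my own
construction, assuming the write position is carried as a 32-bit float
signal, which is what Pd signals are): differencing successive
positions deep into a long table quantizes the speed estimate to the
float's ulp at that magnitude.

    #include <stdio.h>

    int main(void)
    {
        double truepos = 1000000.0;  /* write pointer deep into a table */
        float prev = (float)truepos;
        for (int k = 0; k < 8; k++)
        {
            truepos += 0.3;              /* true speed: 0.3 per sample */
            float cur = (float)truepos;  /* position as a 32-bit signal */
            /* estimates jitter between 0.3125 and 0.25, never 0.3 */
            printf("estimated speed: %f\n", cur - prev);
            prev = cur;
        }
        return 0;
    }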

Anyway, if we're simulating a real moving source (my favorite example
being lightning) it's physically correct for the amplitude to go up if
the source moves toward the listener, even to the point of generating a
sonic boom.

In the other scenario (approach B) it seems that the result is naturally
normalized, in
the sense that a signal of value '1' should put all ones in the table (because
how else should you interpolate a function whose value is 1 everywhere?)
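A matching sketch of approach B (again with hypothetical names, and
assuming the write position only moves forward, pos > prevpos): the
writer fills every table point crossed since the last input sample by
linear interpolation, so a constant input of 1 lands as exactly 1 at
every point.

    #include <math.h>

    /* approach B: fill every table index crossed between the previous
       write position and the current one by linearly interpolating
       between the previous and current input samples */
    static void fill(float *buf, int n, double prevpos, double pos,
                     float prevsamp, float samp)
    {
        for (int j = (int)ceil(prevpos); j <= (int)floor(pos); j++)
        {
            double t = (j - prevpos) / (pos - prevpos);   /* 0..1 */
            buf[(j % n + n) % n] = prevsamp + t * (samp - prevsamp);
        }
    }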

Choosing (A) for the moment, for me the design goal would be: if
someone writes a signal equal to 1 and sends it to points 0, a, 2a,
3a, ... then, within some reasonable range of stretch values _a_, would
I end up with a constant (which I would suggest should be 1/a)
everywhere in the table?  If not, you'd hear some sort of _a_-dependent
modulation.
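That test is easy to run offline.  Here is a sketch (my own, reusing
the unit-integral tent kernel from above) that deposits a constant 1 at
spacing a and measures how far the table strays from the ideal constant
1/a:

    #include <math.h>
    #include <stdio.h>

    static double kernel(double x)    /* unit-integral tent on [-1, 1] */
    {
        double ax = fabs(x);
        return ax < 1 ? 1 - ax : 0;
    }

    int main(void)
    {
        double a = 0.7;   /* stretch under test; try a > 2 for gaps */
        double table[64] = {0};
        /* deposit a signal of constant 1 at positions 0, a, 2a, ... */
        for (double pos = 0; pos < 64; pos += a)
        {
            int c = (int)floor(pos);
            for (int i = 0; i <= 1; i++)
                if (c + i < 64)
                    table[c + i] += kernel(c + i - pos);
        }
        /* away from the edges the table should sit near 1/a; the
           ripple around that value is the a-dependent modulation */
        for (int j = 8; j < 16; j++)
            printf("table[%d] = %f  (ideal %f)\n", j, table[j], 1 / a);
        return 0;
    }

With the tent, a = 1 comes out exactly flat (the tent is a partition of
unity at unit spacing); other values of _a_ leave some ripple, and past
the kernel's support the sum falls to zero between deposits.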

I think you have to put a bound on _a_ - if it's allowed to be unbounded
there's no fixed-size kernel that will work, and varying the size of the
kernel again involves judging the "velocity" _a_ from the incoming data,
which I argued against already.

cheers
Miller

On Thu, Jun 14, 2012 at 12:24:32PM -0500, Charles Henry wrote:
> On Wed, Jun 13, 2012 at 6:14 PM, katja <katjavetter at gmail.com> wrote:
> 
> > There should be an (optional) amplitude compensation for up- and
> > downsampling, as an amplitude effect would be inconvenient in the case
> > of a variable-speed sound-on-sound looper.
> >
> > Katja
> 
> I think the consideration that justifies a scaling effect here is to
> deliver the same rate of power.
> 
> I like looking at this problem with sinc functions, because the
> spectrum becomes easy to see, and the energy is easy to calculate.
> 
> The function with sampling rate f_s and unit spectrum from -f_s/2 to
> f_s/2 is f_s*sinc(t*f_s).  This function, convolved with itself,
> equals itself.
> 
> And if you have f1 < f2, f1*sinc(t*f1) convolved with f2*sinc(t*f2)
> equals f1*sinc(t*f1), which is important for comparing interpolators
> at different frequencies.
> 
> The squared L2 norm (the energy) of f_s*sinc(t*f_s) is f_s.
> This is the term that grows larger when we increase f_s.
> 
> In a given block, you're always writing N samples.  Your goal is to
> write N orthogonal functions that fill all the values in some
> interval and keep the power normalized over that interval.
> 
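(For reference, all three sinc facts above drop straight out of the
frequency domain, with sinc(x) = sin(pi x)/(pi x), F the Fourier
transform, and Pi the unit rectangle:

    F{ f_s sinc(f_s t) }(v) = Pi(v/f_s)    (1 for |v| < f_s/2, else 0)
    Pi(v/f_s)^2 = Pi(v/f_s)            so self-convolution returns the
                                       function unchanged
    Pi(v/f1) Pi(v/f2) = Pi(v/f1)       for f1 < f2: the narrower band
                                       survives the convolution
    int |f_s sinc(f_s t)|^2 dt = int Pi(v/f_s)^2 dv = f_s   (Parseval)

so the quantity that grows with f_s is the energy, i.e. the squared L2
norm.)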


