# [PD] OT: faster than Fourier transform

Mathieu Bouchard matju at artengine.ca
Tue Feb 7 22:47:15 CET 2012

On 2012-02-03 at 10:10:00, Charles Henry wrote:

> As I understand it, it's just a sheer mathematical fact expressing the
> trade-off between the std deviation of a distribution in the time
> domain and the std deviation of the fourier transform of that
> distribution.

Well, there are not-so-obvious implications, which is why there are
several different reformulations of the principle (position vs. momentum;
energy vs. time; sampling of signals; entropy; etc.).

Even when looking just at the version(s) about sampling, different ideas
come out of it. You can't get information about a frequency unless you
take enough time to measure it and take enough measurements; otherwise
the same data could have come from some other frequency. Whenever you do
sampling, you have to either assume or ensure that the analog signal does
not contain frequencies that will confuse you. Otherwise, you'll get
aliases.
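For instance, here's a minimal NumPy sketch (rates and frequencies chosen arbitrarily) of two frequencies whose samples are identical, which is exactly the confusion above:

```python
import numpy as np

fs = 8                 # sampling rate (Hz)
n = np.arange(16)      # sample indices
# 3 Hz and 11 Hz differ by exactly fs, so they alias onto each other:
x1 = np.sin(2 * np.pi * 3 * n / fs)
x2 = np.sin(2 * np.pi * 11 * n / fs)
print(np.allclose(x1, x2))  # True: the samples cannot tell them apart
```

Nothing in the sampled data can distinguish the two; only an assumption (or an anti-aliasing filter) about the analog signal resolves it.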

This might be expressible more precisely in terms of standard-deviation,
but I haven't really studied the topic of sampling to that level of depth.

> I agree that it's a hard limit, but it's really just a math problem to
> begin with :)

Math is an excellent source of hard limits. Far fewer hard limits require
you to look at the universe to find them.

>> What's the speed of the sliding DFT?
> I can think of an O(n) algorithm for it (per sample).  I'd be
> surprised if there's anything better.

Well, if you have to produce n slightly different values at the time of
every sample, that implies O(n) steps per sample. That's a hard limit,
unless you use lazy evaluation.

But if you have different rates for different harmonics, you might be able
to skip more computations...

> You'd still have to do an FFT to begin with, and after shifting by m
> samples, you'd be in O(m*n) territory.

Shifting what by m samples?

A sliding DFT doesn't imply an FFT. It could mean doing the long form of
the DFT, but in a gradual fashion... a slow integrator, just like what you
could be doing using [rpole~ 1] on the result of [*~] [osc~].
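As a sketch of that idea in NumPy (bin and block size chosen arbitrarily): accumulating the product of the signal with a complex oscillator, the way [rpole~ 1] would accumulate the output of [*~] and [osc~], reproduces a single DFT bin:

```python
import numpy as np

N = 64   # block size
k = 5    # which bin to integrate
n = np.arange(N)
x = np.cos(2 * np.pi * k * n / N)   # a signal sitting exactly on bin k

# running sum of x[n] times a complex oscillator at bin k's frequency:
acc = 0j
for i in n:
    acc += x[i] * np.exp(-2j * np.pi * k * i / N)
print(np.allclose(acc, np.fft.fft(x)[k]))  # True after N samples
```

One integrator per bin, one multiply-accumulate per sample per bin — the "long form" of the DFT, spread out in time.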

> I see some ways those steps could be made simpler.  Lump the 1st and
> 3rd steps together and just add s(n)-s(0) to all the DFT values, then
> multiply once by F1 (performing a circular shift in the time domain).

I was going to say that...

> On the next iteration, use s(n+1)-s(1).  So, the total cost is a
> buffer of N samples, a complex vector of N DFT values (F1)---N+1
> additions/subtractions, and N complex multiply operations (each is 4
> multiply and 2 additions/subtractions), per sample.
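The per-sample update described above can be sketched in NumPy (sizes arbitrary; sign convention matching numpy.fft, which makes F1 = e^(+2*pi*i*k/N)):

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)
s = rng.standard_normal(N + 5)      # a few extra samples to slide over

X = np.fft.fft(s[:N])               # one full DFT to start from
F1 = np.exp(2j * np.pi * np.arange(N) / N)   # per-bin twiddle factors

for m in range(5):
    # one sample in, one sample out, then a circular shift: O(N) per step
    X = (X + s[m + N] - s[m]) * F1
    assert np.allclose(X, np.fft.fft(s[m + 1 : m + 1 + N]))
print("sliding DFT matches the direct DFT at every step")
```

N+1 additions/subtractions and N complex multiplies per sample, exactly as counted above.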

BTW, if you multiply F1 by 1+i = sqrt(2i), then you can use a
partially-precomputed version of Gauss' multiplication shortcut... but it
doesn't make an improvement of order.

http://en.wikipedia.org/wiki/Multiplication_algorithm#Gauss.27s_complex_multiplication_algorithm
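A sketch of the shortcut in plain Python (this is the generic precomputed form from the linked page, not the 1+i variant specifically): since the multiplier is a fixed constant, two of the three sums it needs can be computed once up front, leaving 3 real multiplies per complex product instead of 4:

```python
def gauss_mul_precomputed(c, d):
    """Multiplier for the fixed complex constant c + d*i using Gauss's
    3-multiplication trick; (d - c) and (c + d) are precomputed once."""
    dmc, cpd = d - c, c + d
    def mul(a, b):
        # computes (a + b*i) * (c + d*i) with 3 multiplies, 3 add/subs
        k1 = c * (a + b)
        return (k1 - b * cpd, k1 + a * dmc)
    return mul

mul = gauss_mul_precomputed(3.0, 4.0)
print(mul(1.0, 2.0))   # (1+2i)(3+4i) = -5+10i, i.e. (-5.0, 10.0)
```

Trading a multiply for an add/sub helps on hardware where multiplies are expensive, but as noted, it changes nothing about the order of the algorithm.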

> There might be some constants to shake out depending which DFT formula
> you decide to use.  The sign convention on the DFT is arbitrary, so I
> often get mixed up (F1 could be e^(-2*pi*i*k/N){k=0,...,N-1} if I'm
> wrong).

The sign used in a forward DFT depends on whether you are using
cos(t+offset) vs cos(t-offset). If your phases are mirrors of everybody
else's, then your forward DFT's sign will have to be the opposite of
everybody else's too.

______________________________________________________________________
| Mathieu BOUCHARD ----- téléphone : +1.514.383.3801 ----- Montréal, QC