[PD] Echo cancellation, adaptive filtering

Charles Henry czhenry at gmail.com
Thu Jan 7 23:28:23 CET 2010


On Thu, Jan 7, 2010 at 1:18 PM, Roman Haefeli <reduzierer at yahoo.de> wrote:

> Seriously, I think there is little point in pointing out how others
> did it better. I haven't looked at your externals, though, but I'm
> pretty sure that they are _very_ advanced (from my perspective; me,
> who doesn't have a clue about adaptive filtering at all, yet). It
> seems to be quite an interesting subject, judging from your webpage.

Doing it better or worse is not the issue.  It's how well it fits your
application.  There are as many adaptive filters as you've got fingers
and toes, and there are quite a few different methods for the
adaptation.

Questions for the application are things like:

- How long is the filter going to be?
- Are you trying to solve a system modeling problem or a system inverse
  problem?  Acoustic echo cancellation can mean either identifying the
  system on the receiving end, or solving the inverse problem on the
  output side.
- Are you trying to track a non-stationary system or calibrate a fixed one?
- How fast does the convergence need to be?
- How much latency is in the system?  You can't realistically expect to
  make good adaptations if you're updating the filter before the output
  has made its way back from the previous adaptation.
- Last one, and trickiest: are you trying to solve a problem that's
  badly conditioned?

One class of methods is gradient descent.  These would be LMS and NLMS
(RLS is usually discussed alongside them, though it's really a
recursive least-squares method).  These typically have a burn-in period
of about 10^2 to 10^4 samples and then a logarithmic rate of error
decrease.  They're better for system identification problems than for
inverses.
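
For concreteness, here's a rough NLMS sketch in Python/numpy (my own
toy code, not a Pd external; the names and step size are just
placeholders):

    import numpy as np

    def nlms(x, d, L=32, mu=0.5, eps=1e-8):
        """Adapt an L-tap FIR filter w so that (w * x) tracks d."""
        w = np.zeros(L)
        e = np.zeros(len(x))
        for n in range(L, len(x)):
            u = x[n-L+1:n+1][::-1]              # u[k] = x[n-k], last L samples
            e[n] = d[n] - w @ u                 # instantaneous error
            w += mu * e[n] * u / (u @ u + eps)  # normalized gradient step
        return w, e

With 0 < mu < 2 this is stable; the division by the input power (u @ u)
is what makes it "normalized" LMS rather than plain LMS.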

One issue here is that the gradient can be very small along certain
vectors.  Your system acts as an operator on the signal you feed
through it, and it has eigenvectors and eigenvalues.  For linear
systems, the Riesz representation theorem applies, and we have an
integral equation whose kernel can be decomposed into
eigenvector/eigenvalue pairs.  For the systems good for filtering, we
have convolution operators: the eigenvectors are complex sines/cosines,
and the eigenvalues are the spectrum of the kernel you're convolving
with.
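
You can check that last claim numerically (again, my own toy example):
a complex sinusoid goes through a circular convolution operator
unchanged except for a scaling by the kernel's DFT coefficient at that
frequency:

    import numpy as np

    N = 64
    h = np.random.randn(N)        # kernel of a circular convolution operator
    k = 5
    v = np.exp(2j * np.pi * k * np.arange(N) / N)    # sinusoid at bin k

    Av = np.fft.ifft(np.fft.fft(h) * np.fft.fft(v))  # apply the operator
    lam = np.fft.fft(h)[k]                           # predicted eigenvalue

    print(np.allclose(Av, lam * v))                  # True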

Gradient descent methods can only converge in the bad spots at a rate
governed by f_min/f_max (the spread of the magnitudes in the spectrum).
This is generally not a problem, since those spots also make small
contributions to the error.  But you will have some vectors in your
filter that just never converge to anything good, and depending on
where you start, you will get different solutions.
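
You can see this with the nlms() sketch above (assuming that code, and
no measurement noise): identify a random 32-tap system from white
input, then from lowpassed input whose spectrum has near-nulls and
hence a huge f_max/f_min spread.  The misalignment in the colored case
stays much worse, because the directions under the nulls barely adapt:

    import numpy as np

    rng = np.random.default_rng(0)
    h_true = rng.standard_normal(32)

    for label, x in [("white", rng.standard_normal(20000)),
                     ("colored", np.convolve(rng.standard_normal(20000),
                                             np.ones(8) / 8)[:20000])]:
        d = np.convolve(x, h_true)[:len(x)]    # output of the true system
        w, e = nlms(x, d, L=32)
        print(label, np.linalg.norm(w - h_true) / np.linalg.norm(h_true))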

Then, there are the Gauss-Newton methods.  These are like magic for
certain problems.  Where the error function is quadratic and your
operator is positive definite, the filter will converge to something
good in one shot.  An example of this method is the block frequency
domain method.  It's problematic where operators are only positive
semi-definite--things can blow up easily and the system will be prone
to noise.  There are ways to regularize the 2nd derivative operator so
that its inverse is stable, but that also reduces the convergence rate.
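
Here's a hedged sketch of that trade-off, per DFT bin (this ignores the
overlap-save bookkeeping a real block implementation needs, so it only
models circular convolution).  The "2nd derivative" in each bin is just
the input power |X[k]|^2, and eps is the regularizer that keeps empty
bins from blowing up, at the cost of slower convergence in those bins:

    import numpy as np

    def fd_newton_step(W, x_block, d_block, mu=1.0, eps=1e-3):
        """One regularized Gauss-Newton update of the per-bin weights W."""
        X = np.fft.rfft(x_block)
        D = np.fft.rfft(d_block)
        E = D - X * W                           # per-bin error
        return W + mu * np.conj(X) * E / (np.abs(X)**2 + eps)

Start with W = np.zeros(N//2 + 1, dtype=complex) and call this once per
block; with eps = 0, mu = 1, and no empty bins, it solves each bin
exactly in one step, which is the one-shot convergence above.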

I've got a method in the works (all except for the implementation) for
minimizing the product of error and filter variance, which will achieve
a trade-off between goodness of fit and computational expense at a
uniformly fast convergence rate, regardless of the ill-behavior of the
system.  It's sort of halfway between gradient descent and regularized
Gauss-Newton.  The idea is to solve the system closely where the
solution is good, and to set the remaining parts of the filter
arbitrarily to make the filter look good, since they can't affect the
results much.

This is all just how I look at these problems and analyze things to
death, even when it's not warranted.  You might just grab an
implementation that exists, integrate it correctly on the first try,
and forget all about the analysis.  But the interesting problems
generally require a couple of tries and a couple of good guesses :)

Chuck



