[PD] measuring entropy of a signal?

Charles Z Henry czhenry at gmail.com
Wed Feb 27 16:33:04 CET 2013


On Wed, Feb 27, 2013 at 7:40 AM, ronni montoya <ronni.montoya at gmail.com> wrote:

> Hi, why is it not possible?


What I mean is that floating point numbers are only an approximation of
real numbers, and we have a finite number of samples, so it's impossible
to work with continuous distributions except by approximation.
However--brainstorming a few methods of approximation is still worthwhile.
I'm not particularly an expert on the subject of entropy, but I enjoy it.
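
Here's a rough sketch of the kind of approximation I mean--plain Python
rather than a Pd patch or external, and the block of samples and the bin
width q are just placeholders you'd pick yourself: quantize a block of
samples into bins and take the discrete Shannon entropy of the histogram.

    # Minimal sketch: approximate the entropy of a block of samples by
    # quantizing them into bins of width q and computing the discrete
    # Shannon entropy of the resulting histogram (bits per sample).
    import math
    from collections import Counter

    def block_entropy(samples, q=0.02):
        n = len(samples)
        bins = Counter(round(x / q) for x in samples)
        return -sum((c / n) * math.log2(c / n) for c in bins.values())

    # e.g. on a block copied out of a Pd table:
    # print(block_entropy(block, q=0.02))

The bin width plays the same role as the 0.02 quantization in my example
below: a coarser q means fewer symbols and a lower entropy estimate.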


> Instead of analysing the real-time value of
> the signal, maybe I can have a memory or buffer that stores a
> piece of the signal (a group of samples) from time to time and then
> analyze that group of values.
>

If you're analyzing only pieces, you might ask whether the signal behaves
the same way all the time.  There are many interesting "bursting"
phenomena: those signals have long-term correlations that give them lower
entropy, but a short segment doesn't capture that behavior.
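
To make that concrete, here's a hedged sketch (again plain Python with toy
parameters, nothing Pd-specific): compare the per-sample entropy of single
quantized values with the per-sample entropy of consecutive pairs.  For an
uncorrelated signal the two agree; for a correlated, bursting signal the
pair estimate comes out lower--but only if the window is long enough to
actually contain the bursts.

    # Sketch: order-0 vs. order-1 entropy estimates of a quantized signal.
    # Correlations (e.g. long stretches of near-silence) pull the pair
    # estimate below the single-sample estimate.
    import math
    from collections import Counter

    def entropy(symbols):
        n = len(symbols)
        return -sum((c / n) * math.log2(c / n)
                    for c in Counter(symbols).values())

    def entropy_rate_estimates(samples, q=0.1):
        syms = [round(x / q) for x in samples]
        h1 = entropy(syms)                             # bits/sample, singles
        h2 = entropy(list(zip(syms, syms[1:]))) / 2.0  # bits/sample, pairs
        return h1, h2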


>
> Maybe it can convert that group of values into a string and then:
>
> http://www.shannonentropy.netmark.pl/calculate
>
>
That would do something, but the result may be meaningless--it would just
be one way of converting the signal from real numbers into a discrete set
of symbols that's easier to compute with.
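
If you do want to try the string idea offline, here's a tiny sketch of one
such (arbitrary) conversion--map each sample onto an 8-letter alphabet and
paste the resulting string into a calculator like the one linked above:

    # Sketch: turn a block of samples in [-1, 1) into a string over an
    # 8-letter alphabet.  The alphabet size is an arbitrary choice.
    def to_string(samples, levels=8):
        letters = "abcdefgh"[:levels]
        return "".join(letters[min(levels - 1, int((x + 1.0) / 2.0 * levels))]
                       for x in samples)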

Since you brought up the topic--I was reading on Wikipedia about how
Shannon entropy puts a lower bound on how far data can be losslessly
compressed.  There are several kinds of audio compression--could you find
a connection there?
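
One crude way to poke at that connection--just a sketch, and only a rough
estimate: quantize the signal to bytes and see how far a general-purpose
lossless compressor shrinks it.  Shannon entropy is a lower bound on the
achievable size, so the compressor's bits-per-sample figure is an
over-estimate of the entropy rate of the quantized signal; a lower-entropy
signal should compress further.

    # Sketch: zlib's compressed size as a rough stand-in for the entropy
    # rate of an 8-bit-quantized signal (bits per sample).
    import zlib

    def bits_per_sample(samples):
        # quantize floats in [-1, 1] to unsigned bytes 0..255
        data = bytes(min(255, max(0, int((x + 1.0) * 127.5))) for x in samples)
        return 8.0 * len(zlib.compress(data, 9)) / len(samples)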


> Other idea: I've seen Shannon entropy used for calculating complexity
> in terms of spatial configuration.
>
> Maybe another option could be converting my signal into an image, for
> example using a similarity matrix, and then analyzing that image to get
> entropy values.
>
>
>
>
> cheers
>
>
> R
>
>
>
>
>
> 2013/2/26, Charles Z Henry <czhenry at gmail.com>:
> > Hi Ronni
> >
> > How do you mean to do it?
> >
> > Shannon entropy is not an independent measurement--the information in an
> > observation is relative to the distribution of all its possible values.
> >
> > If I just take one sample that's evenly distributed between -0.98 and 1
> > and quantized in 0.02 increments (to make the math easy), there are 100
> > equally likely values, each with probability 0.01, so the information in
> > any observed value is:
> > -log(0.01)
> >
> > Then--if I had a signal that's N samples long, I have N times as much
> > information.  Or perhaps think of it as a rate of information.
> >
> > But for real numbers and continuous distributions, this doesn't work.  The
> > information in a single observation diverges.  So, doing that with floating
> > point numbers is not practical.
> >
> > You often see Shannon entropy used to describe digital signals.  If the
> > signal just switches between 0 and 1, we can build a distribution of the
> > data and estimate the probabilities empirically.  The entropy of each new
> > sample is relative to that distribution.  Then, if you know the maximum
> > rate of switching, you can figure out the maximum rate of information in
> > the signal.
> >
> > Just a few thoughts...
> >
> > Chuck
> >
> >
> >
> > On Tue, Feb 26, 2013 at 6:09 AM, ronni montoya
> > <ronni.montoya at gmail.com> wrote:
> >
> >> Hi, I was wondering if anybody has implemented the Shannon entropy
> >> function in Pd?
> >>
> >> Has anybody tried measuring the entropy of a signal?
> >>
> >>
> >> cheers
> >>
> >>
> >>
> >> R.
> >>
> >
>