<div dir="ltr"><br><div class="gmail_extra"><br><br><div class="gmail_quote">On Sat, Mar 2, 2013 at 12:28 PM, ronni montoya <span dir="ltr"><<a href="mailto:ronni.montoya@gmail.com" target="_blank">ronni.montoya@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Charles, my idea in using Shannon entropy is to measure self-generated<br>
songs.<br>
<br>
For example, if you have a patch that generates sound structures using<br>
generative rules, it would be nice to measure that sound structure and<br>
use that measurement to evolve the rules that generate it, in order to<br>
create more complex structures.<br></blockquote><div><br></div><div>Cool! That's a great idea!<br></div><div> <br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
But how to measure a sound structure using Shannon entropy?<br></blockquote><div><br></div><div>I guess I'm interested because it's a really tricky problem to define. There's no obvious mathematical formula to apply. I'm happy to discuss how you might do it, but I don't know whether it's already been done well, or whether there are articles about entropy definitions for signals.<br>
<br></div><div>The important thing is whether it captures the properties of the signal you care about. If you have no math to start from, describe it verbally first.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I was experimenting with taking short pieces of a larger sound,<br>
converting each piece into a string, and evaluating the Shannon entropy<br>
of each string.<br>
<br>
In this case entropy varies with time, and what I am interested in are<br>
the entropy trajectories.<br>
<br>
You can plot these trajectories and compare trajectories from<br>
different songs.<br>
<br>
More complex sound structures should have more complex trajectories:<br>
not chaotic, not periodic, but more complex. The problem for me is<br>
that I need to plot or visualize the entropy trajectories (values) in<br>
order to see the complexity of a sound structure.<br>
<br>
It would be nice to automate this, for example to find a way of<br>
measuring different trajectories algorithmically, so that the computer<br>
can tell automatically which one is more complex.<br>
<br>
Do you have an idea?<br></blockquote><div><br></div><div>Martin's suggestion about spectral distribution is good.<br></div><div>Autocorrelation might also have some good properties--the signal has less entropy when it is more self-similar. This also starts to sound like fractal dimension, which can be estimated by a box-counting method.<br>
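To make the block-entropy trajectory idea concrete, here's a minimal sketch in plain Python (not a Pd patch): quantize each block of samples into a small symbol alphabet, build an empirical histogram, and compute the Shannon entropy per block. The block size (1024) and alphabet size (16 bins) are arbitrary choices, not something the thread settled on.

```python
import math
import random

def block_entropy(samples, n_bins=16):
    """Shannon entropy (bits) of one block after quantizing to n_bins symbols."""
    counts = {}
    for x in samples:
        # clip to [-1, 1) and map to a discrete symbol index 0..n_bins-1
        s = min(max(x, -1.0), 1.0 - 1e-9)
        sym = int((s + 1.0) / 2.0 * n_bins)
        counts[sym] = counts.get(sym, 0) + 1
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def entropy_trajectory(signal, block=1024, n_bins=16):
    """Entropy of consecutive blocks -- the 'trajectory' to plot or compare."""
    return [block_entropy(signal[i:i + block], n_bins)
            for i in range(0, len(signal) - block + 1, block)]

# Common-sense check: noise should score higher than a sine wave.
random.seed(0)
noise = [random.uniform(-1, 1) for _ in range(4096)]
sine = [math.sin(2 * math.pi * 440 * i / 44100) for i in range(4096)]
print(entropy_trajectory(noise))  # near the maximum, log2(16) = 4 bits
print(entropy_trajectory(sine))   # lower: the sine's histogram is uneven
```

Comparing the resulting trajectories automatically is still the open question; this only produces the per-block values.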
</div><div><br> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I have a question: why do you say it would be meaningless to convert the<br>
signal into symbols?<br></blockquote><div><br></div><div>It may be meaningless if you choose a bad rule to convert the samples into symbols. Here's an example of a meaningless rule:<br></div><div>Convert ranges of signal values into discrete values:<br>
</div><div>-1 to -0.99 -> -99<br></div><div>-0.99 to -0.98 -> -98<br>...<br></div><div>-0.01 to 0 -> 0<br></div><div>0 to 0.01 -> 1<br>...<br><br></div><div>Then, if you took a signal and multiplied it by 10, the entropy measured from the discrete values would increase. However, this does not mean the signal has more information. It has just become louder.<br>
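Here's a small Python sketch of that failure mode (again, not a Pd patch; the 0.01-wide bins mirror the rule above): the same waveform at two gains gets two different entropies under this rule, even though its structure is identical.

```python
import math

def amplitude_symbols(samples, width=0.01):
    """The 'meaningless' rule above: map each sample to a 0.01-wide bin index."""
    return [int(math.floor(x / width)) for x in samples]

def entropy_bits(symbols):
    """Empirical Shannon entropy (bits) of a symbol sequence."""
    counts = {}
    for s in symbols:
        counts[s] = counts.get(s, 0) + 1
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# The same sine wave, quiet and 10x louder: identical structure, different gain.
quiet = [0.05 * math.sin(2 * math.pi * k / 64) for k in range(4096)]
loud = [10.0 * x for x in quiet]

print(entropy_bits(amplitude_symbols(quiet)))  # few bins occupied
print(entropy_bits(amplitude_symbols(loud)))   # many more bins occupied -> higher entropy
```

The louder copy spreads over more amplitude bins, so this rule reports more "information" for a pure gain change.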
</div><div><br></div><div>If you decide to convert the signal into symbols, it has to be a meaningful rule. Otherwise, you might not be measuring the thing you meant to.<br></div><div> <br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Another way I was experimenting is using this with video and images, for<br>
example converting an image into an array of characters by iterating over<br>
all the pixels and getting the color of each pixel, then converting<br>
those values into characters and evaluating the Shannon entropy<br>
of each image.<br>
<br>
I would like to expand this and use it also for self-generated 3D<br>
structures, but I'm still thinking about this.<br>
<br>
<br>
cheers.<br>
<br>
<br>
R.<br>
<br>
Can you please explain why you say it would be meaningless?<br>
<div class="im"><br>
"That would do something, but may be meaningless--it would be just one<br>
way of converting the signal from real numbers to a discrete set of<br>
things/symbols that is easier to calculate."<br>
<br>
<br>
<br>
</div>2013/2/27, Charles Z Henry <<a href="mailto:czhenry@gmail.com">czhenry@gmail.com</a>>:<br>
<div class="im">> If you took the fft squared magnitude, perfectly noisy data should have a<br>
> chi-squared distribution in each bin (I think). If you assumed that model<br>
> and calculated the parameters of the distribution on each block, you'd find<br>
> out how much information is in each of those peaks relative to the assumed<br>
> distribution and just add it up.<br>
><br>
> Whatever algorithm you choose probably needs to pass some "common sense"<br>
> tests like what you mention, Martin: noise has more entropy than a sine<br>
> wave. Also, if you take noise and just apply a comparison > 0, you get a<br>
> signal with less entropy.<br>
><br>
> On Wed, Feb 27, 2013 at 7:54 AM, Martin Peach<br>
> <<a href="mailto:martin.peach@sympatico.ca">martin.peach@sympatico.ca</a>>wrote:<br>
><br>
>> Why not do an FFT and measure the variance of the channels?<br>
>> For instance white noise has maximum entropy and all the bins of its FFT<br>
>> will be more or less the same, while a sine wave has low entropy and one<br>
>> bin will be much larger than the others.<br>
>><br>
>><br>
>> Martin<br>
>><br>
>><br>
>> On 2013-02-27 08:40, ronni montoya wrote:<br>
>><br>
>>> Hi, why is it not possible? Instead of analysing the real-time value of<br>
>>> the signal, maybe I can have a memory or buffer that stores a<br>
>>> piece of the signal (groups of samples) from time to time and then<br>
>>> analyze that group of values.<br>
>>><br>
>>> Maybe it can convert that group of values into a string and then:<br>
>>><br>
</div>>>> <a href="http://www.shannonentropy.netmark.pl/calculate" target="_blank">http://www.shannonentropy.netmark.pl/calculate</a><br>
<div><div class="h5">>>><br>
>>><br>
>>><br>
>>> Other idea: I've seen Shannon entropy used for calculating complexity<br>
>>> in terms of spatial configuration.<br>
>>><br>
>>> Maybe another option could be converting my signal into an image, for<br>
>>> example using a similarity matrix, and then analyzing that image to get<br>
>>> entropy values.<br>
>>><br>
>>><br>
>>><br>
>>><br>
>>> cheers<br>
>>><br>
>>><br>
>>> R<br>
>>><br>
>>> 2013/2/26, Charles Z Henry <<a href="mailto:czhenry@gmail.com">czhenry@gmail.com</a>>:<br>
>>><br>
>>>> Hi Ronni<br>
>>>><br>
>>>> How do you mean to do it?<br>
>>>><br>
>>>> Shannon entropy is not an independent measurement--the information in an<br>
>>>> observation is relative to the distribution of all its possible<br>
>>>> values.<br>
>>>><br>
>>>> If I just take one sample that's evenly distributed between -0.98 and 1,<br>
>>>> quantized in 0.02 increments (to make the math easier), then each of the<br>
>>>> 100 possible values has probability 0.01, and each contributes<br>
>>>> -0.01*log(0.01) to the entropy.<br>
>>>><br>
>>>> Then--if I had a signal that's N samples long, I have N times as much<br>
>>>> information. Or perhaps think of it as a rate of information.<br>
>>>><br>
>>>> But for real numbers and continuous distributions, this doesn't work.<br>
>>>> The<br>
>>>> information in a single observation diverges. So, doing that with<br>
>>>> floating<br>
>>>> point numbers is not practical.<br>
>>>><br>
>>>> You often see Shannon entropy describing digital signals. If the<br>
>>>> signal<br>
>>>> just switches between 0 and 1, we can generate a distribution of the<br>
>>>> data<br>
>>>> and see what the probability is empirically. The entropy of each new<br>
>>>> sample is relative to the distribution. Likewise, then if you know the<br>
>>>> maximum rate of switching, you can figure out the maximum rate of<br>
>>>> information in the signal.<br>
>>>><br>
>>>> Just a few thoughts...<br>
>>>><br>
>>>> Chuck<br>
>>>><br>
>>>><br>
>>>><br>
>>>> On Tue, Feb 26, 2013 at 6:09 AM, ronni montoya<br>
</div></div>>>>> <<a href="mailto:ronni.montoya@gmail.com">ronni.montoya@gmail.com</a>>**wrote:<br>
<div class="im">>>>><br>
>>>> Hi, I was wondering if anybody has implemented the Shannon entropy<br>
>>>>> function in Pd?<br>
>>>>><br>
>>>>> Has anybody tried measuring the entropy of a signal?<br>
>>>>><br>
>>>>><br>
>>>>> cheeers<br>
>>>>><br>
>>>>><br>
>>>>><br>
>>>>> R.<br>
>>>>><br>
</div>>>>>> ______________________________**_________________<br>
<div class="im">>>>>> <a href="mailto:Pd-list@iem.at">Pd-list@iem.at</a> mailing list<br>
>>>>> UNSUBSCRIBE and account-management -><br>
</div>>>>>> <a href="http://lists.puredata.info/**listinfo/pd-list" target="_blank">http://lists.puredata.info/**listinfo/pd-list</a><<a href="http://lists.puredata.info/listinfo/pd-list" target="_blank">http://lists.puredata.info/listinfo/pd-list</a>><br>
>>>>><br>
>>>>><br>
>>>><br>
>>><br>
>>><br>
>>><br>
>><br>
><br>
</blockquote></div><br></div></div>