<br><br><div class="gmail_quote">On Wed, Apr 6, 2011 at 2:08 PM, IOhannes m zmoelnig <span dir="ltr"><<a href="mailto:zmoelnig@iem.at">zmoelnig@iem.at</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
<div class="im">-----BEGIN PGP SIGNED MESSAGE-----<br>
Hash: SHA1<br>
<br>
</div>On 2011-04-06 21:04, Seth Nickell wrote:<br>
> I use a thread per core, it does parallelize nicely.<br>
><br>
<br>
that's what i thought.<br>
<br>
please don't let yourself be turned down by all those misers :-)<br>
<br>
fgmasdr<br>
<div class="im">IOhannes<br></div></blockquote><div><br>As a young curmudgeon myself, I might *grumble* and seem discouraging. But really, I'd encourage you to take on convolution externals... just don't create a monster.<br>
<br>In the context of threading partitioned convolution, I had an idea: compute ahead. Most of the calculations for a given output block can be done in advance; only the most recent block of samples actually needs to be convolved right away.<br>
<br>Then, once you've summed the most recent block's contribution with those of the other partitions, you'd start a thread to "compute ahead" the next cycle's partitions. If the load is low enough, it will have finished before the next cycle; of course, you'd still need to wait on (join) the background thread to make sure it really has completed.<br>
</div></div>