[PD] How do I squeeze more performance out of Gem?

cyrille henry ch at chnry.net
Wed Mar 9 08:59:31 CET 2011


hello,

- Try using a display list to render the sphere, so that every vertex doesn't have to be sent again for every sphere.
See the example 09.openGL/02.displaylist.
You can also load a sphere.obj with [model] to get the same result.
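
Here is a rough sketch of the display-list idea, written from memory of that example (the GEMgl* wrapper objects exist in Gem, but double-check the exact arguments and list handling against 09.openGL/02.displaylist). One chain records the sphere into list 1 and should run only once; every other chain just calls the list:

[gemhead 1]                   <- turn this gemhead off after the first frame
 |
[GEMglNewList 1 GL_COMPILE]
 |
[sphere 1]
 |
[GEMglEndList]

[gemhead]
 |
[translateXYZ 2 0 0]
 |
[GEMglCallList 1]             <- redraws the recorded sphere, much cheaper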

- gemheads are slow. I usually get better results using 1 gemhead and 200 separators, or 200 gemlists, than using 200 gemheads; see the sketch below.
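
A minimal sketch of the single-gemhead layout (only two branches instead of 200; the translations and sphere size are arbitrary). Each [separator] saves and restores the transformation state, so one branch's [translateXYZ] doesn't leak into the other:

[gemhead]
 |                       \
[separator]               [separator]
 |                         |
[translateXYZ 2 0 0]      [translateXYZ -2 0 0]
 |                         |
[sphere 0.5]              [sphere 0.5]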

- Try putting this somewhere in the patch:
[gemhead 1]
[GEMglLightModeli GL_LIGHT_MODEL_TWO_SIDE GL_FALSE]

It helps a lot on my computer (GT 425M), but I don't know about other computers.
Could you try it and tell me the performance improvement?
(You may have to reverse some lights to get a similar lighting result.)

- Don't forget to check that your CPU runs at full speed. (Sometimes Pd needs more than 100% of a core, but the CPU frequency governor fails to detect the load and the CPU keeps running at a low clock speed.)
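
On Linux, one way to check and force this is with the cpufrequtils tools (assuming that package is installed):

  cpufreq-info                      # show the current frequency and governor
  sudo cpufreq-set -g performance   # force the performance governor (add -c <n> for each core)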

- Start Pd with -noaudio if that is an option for you.
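
For example (the patch name is just a placeholder):

  pd -noaudio yourpatch.pd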

- You can run the physical model and the other calculations in a Pd instance separate from the Gem one, using [pd~]. This may help.
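
A rough sketch of that, in the Gem instance (the flags and the file name are placeholders from memory, so check the [pd~] help patch for the exact usage):

[pd~ start physics.pd(        <- message that launches the subprocess patch
 |
[pd~ -ninsig 0 -noutsig 0]    <- no audio signals, only messages
 |
... the subprocess sends its computed positions back with [stdout];
    they appear on this outlet and drive the Gem chains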

Well, I think all of this should be enough to draw 10 times more spheres than what you need, at a good fps.

Cyrille

On 09/03/2011 02:16, John Harrison wrote:
> I'm working with a high-powered machine but I'm running into a bottleneck with Gem. I'm running at 20fps and at times was intending to have as many as 200+ lines and spheres on a 1024x768 screen. At around 60 lines/spheres I'm already at 50% CPU. I know the problem is Gem because if I stop rendering, CPU immediately drops to less than 4%. There's some other manipulations I use periodically too causing another 40%+ of CPU so I'm a far cry from my 200+ intention while saturating my computational limits. If I turn lighting off, BTW, I already gain 10% CPU back (not an option I want to explore.)
>
> I'm not sure what to do and was even considering breaking the rendering into independent screens (this machine has 8 cores), then using pix_share to recombine them in a "master" instance. I'd have to use pix_snap to capture each of the Gem windows in each of the processes, and each one draws about 40% CPU when capturing a 1024x768 buffer at 20 fps so besides creating a headache for myself this is going to be a lot of CPU overhead. I also don't know how the graphics card is fitting into all of this, if it would become a bottleneck at some point, how to tell, etc. What's the "top" command for a graphics card? :-)
>
> These lines and spheres are nothing special, btw. No texturing, just translates, colors, and alpha. The lines are made with curves of 2 points each.
>
> Is there some trick or some area of programming or using the graphics card I need to be considering? Any thoughts/advice would be appreciated. It's strange --- I don't think I'm seeing performance on this machine much better than on my not-so-special laptop.
>
> This machine has an Nvidia GeForce GTX 460, Ubuntu 10.10 32 bit, the Pd-extended 0.42-5 binary from the Pure Data site, and an Intel i7 at 3 GHz. I'm using Nvidia proprietary driver 290.19.06.
>
> When I say stuff like "40% CPU" I mean for a single core. So in theory this machine has an 800% CPU limit in my nomenclature. But since an instance of Pd/Gem runs on only a single core, I have a limit of 100% for any single Pd/Gem instance (as most of you already know, I'm sure).
>
> -John
>
> P.S. I'm loving working with Gem and pmpd these days. Awesome stuff, guys! :-)
>


