[PD] pix_texture: not using client storage with

chris clepper cgc at humboldtblvd.com
Tue Jun 8 03:28:08 CEST 2004


On Jun 7, 2004, at 12:21 PM, IOhannes m zmoelnig wrote:

> Jan Meinema wrote:
>> Does anyone know why I get this message every time I alter the color 
>> and rotate the object? What does "not using client storage" mean? The 
>> changes I
>
> no need to worry.
> it means that client-storage is not used (which is a more-or-less 
> efficient way to transfer pixel-data from main-memory on the client to 
> the gfx-card on the server - if i understand it correctly ;-))
>
> you can send a message "client_storage 1" to the [pix_texture] to 
> enable it; this might speed up the texturing (or slow it down: that's 
> why the default is off)

Here's the lowdown on that message:

Client storage is the term Apple uses for their cross-vendor (meaning 
ATI and NV) OpenGL extension for DMA transfers of texture data over the 
AGP bus.  Without this extension the GL driver keeps its own copy of 
the pixels and uploads that; the Apple extension eliminates this extra 
copy.  The difference in performance can be dramatic, especially as 
texture data grows in size - in fact, turning this on is required for 
1080 HD performance over 30fps on a dual G5 (frame rates more than 
double with it enabled).
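
For reference, here's roughly what enabling it looks like on the patch 
side - just a sketch (the film object and rectangle size stand in for 
whatever your chain actually uses, gemwin and the rest are omitted):

  [gemhead]
   |
  [pix_film]
   |
   |  [client_storage 1(
   |  /
  [pix_texture]
   |
  [rectangle 4 3]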

Now why is it disabled by default?  The reason actually has to do with 
QuickTime and not OpenGL.  To get the best QT performance it is best to 
let QT manage all of the frame decompression internally over time 
rather than just asking for a frame every n milliseconds.  This creates 
a big problem in GEM because GEM has its own timing callback routines 
and thus QT has to 'sync' to them - however, QT is designed the other 
way around: apps are driven by QT.  I did some hacking to get playback 
decent when using lots of movie files at once, and the code isn't very 
pretty.  This only applies to playback using the 'auto 1' and 'rate $1' 
messages, which gives the best possible playback performance (far in 
excess of what you've been led to believe about the limitations of 
PC-type hardware for video).
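
As a rough sketch of that playback style (the file name and rate value 
are just examples; the messages all go to pix_film's left inlet):

  [gemhead]  [open /tmp/clip.mov(
   |         [auto 1(
   |         [rate 0.5(
   |         /
  [pix_film]
   |
  [pix_texture]
   |
  [rectangle 4 3]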

Ok, so what the hell does all that mean?  It means that any 
discrepancies between the internal QT tasking and the GL texture 
uploads result in artifacts.  These artifacts look like random 
interlacing flicker over the image, and it's quite noticeable.  So 
rather than explain over and over how to make movies not look like 
ass, I decided to turn off client storage by default.

How does one turn on client_storage and avoid artifacts?  Two ways 
exist.  The one sure way is to use pix_movie, which almost never shows 
these artifacts and does fast texturing by default.  The downside is 
that you can't process the image, but you get the fastest texture 
handling possible (perhaps on any platform).  The second method is to 
manually request frames using the right inlet of pix_film.  This does 
not use the QuickTime internals as much and can often give 
perfect-looking textures.  It's not a guaranteed technique, however, 
and sometimes you will have to take the performance hit of the GL 
driver copy.  And the really terrible thing is that the faster I make 
the processing code using AltiVec and other PPC tuning tricks, the more 
likely the problem is to surface.  I think the answer is to make the 
processing code even faster so that the time spent doing the extraneous 
copy is offset somewhat.  In the end, it's still faster than just about 
any other code set, commercial or otherwise, that I've ever used.
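
In patch form the two options look something like this (sketches only; 
the arguments and the frame source are placeholders):

  method 1 - pix_movie does the decoding and the texturing:

  [gemhead]  [open /tmp/clip.mov(
   |         [auto 1(
   |         /
  [pix_movie]
   |
  [rectangle 4 3]

  method 2 - request frames by hand via pix_film's right inlet:

  [gemhead]   frame number (from a counter, [line], etc.)
   |           |
  [pix_film]---+
   |
   |  [client_storage 1(
   |  /
  [pix_texture]
   |
  [rectangle 4 3]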

File this away somewhere as 'missing' documentation for GEM.

cgc




