[PD-dev] [GEM] update

chris clepper cclepper at artic.edu
Sun May 11 22:07:24 CEST 2003


>Well, Jitter does it the GridFlow way, which is already pretty abstracted
>out from low-level considerations. The most difficult part is actually
>mapping the dimensions and indices of one data structure to another in a
>_meaningful_ way (or at least an interesting way). The most lowlevel is
>direct redimensioning, but actually there are much more sophisticated
>transforms than mere reinterpretation of data.

the 'meaningful way' is what i'm talking about.  how to make it 
simple for the user to pull some element out of the pixel data, the 
Y for example, and map it onto another system like texcoords or 
vertices.
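
just to make that concrete, here is a rough sketch in plain C of 
pulling a Y (luma) value out of each pixel - the function name and 
the packed RGB buffer layout are assumptions for the sketch, not 
anything that exists in GEM:

/* sketch: compute Y (luma) per pixel of a packed RGB buffer and
   write it into a float array that could feed texcoords or vertex
   heights.  name and layout are hypothetical. */
void pix_get_luma(const unsigned char *rgb, float *luma,
                  int width, int height)
{
    int i, npix = width * height;
    for (i = 0; i < npix; i++) {
        unsigned char r = rgb[3*i + 0];
        unsigned char g = rgb[3*i + 1];
        unsigned char b = rgb[3*i + 2];
        /* ITU-R 601 weights, normalized to 0..1 for GL */
        luma[i] = (0.299f*r + 0.587f*g + 0.114f*b) / 255.0f;
    }
}

the user never has to see the uint8 to float cast, they just get a 
buffer of values they can point at texcoords or vertices.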

>In any case, a type conversion is not more than just saying that you want
>a type conversion to happen ([@cast uint8]), and a redim is not more than
>saying you want it to happen ([@redim {240 320 3}]), and all objects do
>check bounds automatically. So I'm not sure what kind of "housekeeping
>like creating buffers, type and bounds checking, etc" that you are talking
>about.

ok, i'm not making any sort of value judgement about GridFlow at 
all, but i think you actually made the point for me: i don't think 
users need to care, or even know, which C data type the pixel info 
is in and what it needs to be cast to in order to work in GL.  
there are lots of people i know and work with who really couldn't 
be bothered with explicitly parsing the buffer, casting and 
resizing to get at the data they want - that's the sort of thing 
the programmers of the system are supposed to do for them. ;)


>  > instead a series of bridge objects can be constructed either in code
>>  or by abstraction to translate and guide the data around. [...] just
>>  wanted to bounce the idea off the others devs to get some input.
>
>could you please elaborate on this?

i do not have anything working at the moment, it was only an idea i 
thought the other devs might want to think about too.  but basically 
the concept is to have objects that take info out of the pixel 
buffer and turn it into something the particle system could use, or 
into vertices used to build a 3D Geo, etc.  the object would simply 
require the user to select which part of the data they want to use 
and optionally resize and rearrange it.  a hypothetical object could 
be [pix_to_vertex R 64 64], which would take the red channel of the 
video and scale it to a 64 by 64 array of vertices.  of course this 
is quite limited in a way, but it is also direct, and hopefully 
efficient to implement in C code as well.  again this is purely off 
the top of my head with no actual system in place, so who knows what 
the final result will be...
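
for what it's worth, here is roughly what the inner loop of such an 
object might look like in plain C.  this is purely hypothetical - 
the names and the packed RGB layout are just assumptions for the 
sketch, nothing like this exists in GEM yet:

/* nearest-neighbor resample the red channel of an RGB frame down to
   a grid_w x grid_h grid and write xyz vertices, with the pixel
   value driving z. */
void pix_to_vertex_red(const unsigned char *rgb, int src_w, int src_h,
                       float *verts,  /* out: grid_w*grid_h*3 floats */
                       int grid_w, int grid_h)
{
    int gx, gy;
    for (gy = 0; gy < grid_h; gy++) {
        for (gx = 0; gx < grid_w; gx++) {
            int sx = gx * src_w / grid_w;   /* nearest source pixel */
            int sy = gy * src_h / grid_h;
            unsigned char red = rgb[3 * (sy * src_w + sx)];
            float *v = verts + 3 * (gy * grid_w + gx);
            v[0] = (float)gx / (grid_w - 1);   /* x across 0..1 */
            v[1] = (float)gy / (grid_h - 1);   /* y across 0..1 */
            v[2] = red / 255.0f;               /* z from red channel */
        }
    }
}

the idea is that the cast, the resize and the bounds handling all 
live inside the object, and the user only picks the channel and the 
grid size.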

cgc



