[GEM-dev] Re: [PD] multiple_window feature of Gem?

Jeremy Cooperstock jer at cim.mcgill.ca
Thu Mar 24 23:19:10 CET 2005


UltraVideoconferencing is closed source but the binaries are freely 
available for research use.  I'm not up on the details of lighTWIST but 
if there's some relevant mutual interest, we should indeed talk.

- Jeremy

B. Bogart wrote:

> Hi Jeremy,
>
> Ah I did not realize DVTS was multicast capable.
>
> Is UltraVideoconferencing open-source or IP-restricted?
> Indeed the project sounds very interesting. I hope to hear about
> future developments. Have you thought about how this may help the TOT
> lighTWIST system?
>
> B.
>
>
> Jeremy Cooperstock wrote:
>
>> Greetings all,
>>  IIRC, the TOT project is using DVTS or a derivative thereof, in
>> which case the software is already multicast capable (although
>> perhaps not in the base release).
>>
>>  Our UltraVideoconferencing system is expected to gain multicast
>> capability (hopefully for both IPv4 and IPv6) later this year, and
>> this would serve (at least) for DV, analog, or SDI video
>> distribution.  Since our needs entail processing the video per the
>> requirements of each receiver, with minimal latency, DV would not be
>> a suitable candidate given its encoding and decoding costs, so we're
>> considering instead raw transmission of a bounding-box region of
>> segmented (i.e. background-removed) video.  In the typical case, this
>> payload would fit comfortably on a 100 Mbps network, although
>> segmentation problems or scene discontinuities that lead to full-frame
>> transmission would require either compression or gigabit infrastructure.
>>
>> - Jeremy
>>
>> B. Bogart wrote:
>>
>>> Thanks for the link, I'll take a look when I have more time.
>>>
>>> OK, so it's Chromium itself that passes the texture data through the
>>> cluster. Indeed, video would be more difficult, but with a 1 Gbps
>>> multicast LAN one should easily be able to distribute a DV stream to
>>> all machines... in theory! Are there multicast DV streamers???
>>>
>>> The streaming part fits very well into TOT.
>>>
>>> I've CCed Franz Hildgen and Simon Piette who are looking after the DV
>>> point-2-point application teleCHACHA.
>>>
>>> Franz and Simon, we're talking about pd/Gem working in a cluster
>>> context, where the GL context is forwarded to a number of machines that
>>> each processes and projects one part of the image. This is very closely
>>> related to the lighTWIST and pixelTANGO integration problem.
>>>
>>> Would it be possible to multicast a DV stream to all the cluster
>>> machines, so that each could use the stream in its portion of the final
>>> image?
>>>
>>> Mike Wozniewski from McGill is looking at using Chromium with pd/Gem
>>> for a CAVE application.
>>>
>>> B>
>>>
>>> Mike Wozniewski wrote:
>>>
>>>> Hey.
>>>>
>>>>> I did not see a link to any in-depth documentation from Chromium.
>>>>
>>>> Check http://chromium.sourceforge.net/doc/index.html. Very
>>>> comprehensive.
>>>>
>>>>> I'm not a C++ programmer so I'm not sure what it would involve to
>>>>> build these functions into GL wrappers for Gem. I'm not sure how
>>>>> the functions latch onto an existing context, or how the whole
>>>>> thing works architecturally (what parts run on what machines,
>>>>> master/slave connections, etc.).
>>>>
>>>>
>>>> Well, from the docs, it seems that we don't have to do anything at
>>>> all to Gem. This is because Chromium disguises itself as the OpenGL
>>>> library - i.e., when a GL instruction is made, Chromium intercepts
>>>> it and the regular system OpenGL library sits idle.
>>>>
>>>>> I just wonder about performance, considering the speed of the AGP
>>>>> bus for texture transfers vs. Ethernet transport between the source
>>>>> and destination for the texture! Especially if you're talking about
>>>>> moving video...
>>>>
>>>>
>>>> So according to
>>>> http://brighton.ncsa.uiuc.edu/%7Eprajlich/wall/ppb.html, when
>>>> distributing video, the movie first has to be played in "write
>>>> mode", where all textures are cached onto the disks across the
>>>> cluster. Subsequent playback is then done in "read mode", where it's
>>>> just read from local disk on each machine. I see problems with this:
>>>> all videos have to exist on disk first (no streaming from live
>>>> cameras), the first playback is going to be SLOW, and if you have
>>>> many video clips this could be extremely annoying.
>>>>
>>>>> (are you guys even using video in your cave application?)
>>>>
>>>>
>>>> Not yet. But we will eventually want to put video avatars of remote
>>>> participants into the world (ouch - this is not going to work with
>>>> the above-mentioned strategy).
>>>>
>>>> -Mike
>>>>
>>>>
>>

-- 
Please note my email policy: http://www.cim.mcgill.ca/~jer/email.html
