[PD] Approaches to "show control" platforms
Simon Wise
simonzwise at gmail.com
Mon Nov 17 10:24:44 CET 2014
On 17/11/14 17:57, Stephen Lucas wrote:
> We've been very interested in implementing a custom show runner with a
> focus on making cues for reliable routing of multichannel audio and
> multichannel video. I built a player in Max6 to get 3ch video 2ch audio for
> somebody's show and it ran pretty well. However, when trying to implement
> the same code for a piece that relied on having very consistent, very
> smooth HD video, we couldn't maintain a consistent framerate. Ultimately,
> we used QLab, but we'd prefer to not need their expensive licenses.
>
> I'd be interested to know benchmarks for multichannel audio and HD video
> playback using Pd/GEM. We hadn't investigated it since I assumed Jitter
> rendering to GPU with HAP should be better than what GEM can do, but I
> would be ecstatic to go to Pd for this if the playback performance was good.
Video playback is very hardware dependent. GEM uses openGL, so what you can
output depends on the GPU, its drivers, the speed of your media drives, and
the bandwidth available for transfers to the GPU.
Depending on the cueing and sync requirements it can be much easier to run one
smaller computer for each video output, but for very tight sync a single
computer may be better.
I have been using Raspberry Pis recently to run multiple projection screens. At
$40 or so each they can deliver one HDMI output with stereo HDMI audio at 1080p
resolution, and can be cued via ethernet using a pd patch. Some other machine
can run the cues and as many audio channels as you want; the channel count is
limited only by your audio hardware/budget.
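As a minimal sketch of the ethernet cueing, the machine running the cues could send Pd's native FUDI format (space-separated atoms terminated by a semicolon), which a [netreceive] on each Pi can parse. This is Python rather than a pd patch purely for illustration, and the "cue" selector, host and port are hypothetical — the receiving patch would route them however it likes:

```python
import socket

def fudi_message(*atoms):
    """Encode atoms as a FUDI message (space-separated, semicolon-terminated),
    the wire format Pd's [netreceive] expects."""
    return (" ".join(str(a) for a in atoms) + ";\n").encode("ascii")

def send_cue(host, port, cue_number):
    # Hypothetical cue: the receiving patch routes "cue <n>" to its player.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(fudi_message("cue", cue_number), (host, port))

# The encoding can be inspected without any network:
print(fudi_message("cue", 3))  # b'cue 3;\n'
```

Whether [netreceive] is set up for UDP or TCP depends on the patch; UDP is the usual choice for low-latency cues on a dedicated LAN.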
Raspberries are not the easiest path: GEM is not much use (yet?) since they use
openGL-ES, and really this is about the limit of their capabilities, so good
performance will need careful tuning. I put together an external for pd that
loads a media file in a paused state, then plays it very tightly on a cue using
DBUS. I used it last week and am in the process of cleaning it up to share. On
an earlier version of the show I achieved projection mapping and some alpha
overlays, but ran into the GPU's limits on resolution and such and fell back on
this approach.
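The load-paused / fire-on-cue pattern is the important idea here: decoding is primed ahead of time so the first frame appears the instant the cue lands, instead of paying the file-open and decoder-spin-up cost on the cue itself. A toy state machine makes the pattern explicit (the class and method names are invented for illustration; the actual external drives the Pi's hardware player over DBUS rather than doing this in Python):

```python
class CuedClip:
    """Sketch of the load-paused, fire-on-cue pattern."""

    def __init__(self, path):
        self.path = path
        self.state = "unloaded"

    def load(self):
        # In the real external this opens the file and primes the decoder,
        # so the first frame is already sitting there waiting.
        self.state = "paused"

    def fire(self):
        # The cue only flips a flag; all the slow work happened in load().
        if self.state != "paused":
            raise RuntimeError("clip not armed")
        self.state = "playing"
```

The point of splitting load from fire is that the slow, variable-latency work is moved off the cue path entirely, which is what makes "very tight" cueing possible on such limited hardware.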
The same idea is much much more easily achieved using more powerful (and
expensive) devices, and GEM.
For many years I have been using old mac minis as the playback units. With one
machine handling master timing and audio playback, it is easy to use a metro to
drive each machine frame by frame and keep video-frame-accurate playback and
cues across as many projectors as you have minis. The first time I did this was
8 years ago, with a 5-projector installation making a single very wide HD image
of a dancer moving across the whole field, so frame accuracy and audio sync
were critical. All the capabilities of GEM are of course available for mapping,
effects and such. You will want a dedicated ethernet switch for the LAN so
nothing interferes with the communications. Even G4 mac minis can deliver
multiple audio channels and multiple video layers; this is well within the
capabilities of reasonably priced machines today.
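The frame-by-frame scheme above can be sketched as a master loop that broadcasts an absolute frame index once per frame period; because each client jumps to exactly the frame named in the message, drift cannot accumulate across machines. This is an illustrative Python sketch, not the pd patch — the frame rate, port, and message format are all assumptions:

```python
import socket
import time

FPS = 25         # example frame rate; pick to match your media
CUE_PORT = 9999  # hypothetical port each playback machine listens on

def frame_message(n):
    """FUDI-style message naming an absolute frame index."""
    return f"frame {n};\n".encode("ascii")

def run_master(client_hosts, total_frames):
    """Broadcast the current frame index once per frame period."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        t0 = time.monotonic()
        for frame in range(total_frames):
            for host in client_hosts:
                s.sendto(frame_message(frame), (host, CUE_PORT))
            # Sleep to the next absolute frame boundary rather than for a
            # fixed interval, so scheduling jitter does not accumulate.
            time.sleep(max(0.0, t0 + (frame + 1) / FPS - time.monotonic()))
```

Sending absolute positions rather than "advance one frame" ticks is the design choice that matters: a dropped packet then costs one displaced frame, not a permanent one-frame offset on that projector.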
Another approach depends on a single powerful machine. Using technology
available in 2007 (but not cheaply!) I used a PCI-based industrial video
grabber card which could deliver 4 composite video streams (selected in pd from
its 16 composite inputs) to the GPU with very little CPU use, and with SATA
drives I could add HD media playback ... I gave up testing at 12 alpha layers
with mapping (some HD, some camera, some generated scrolling text etc, with
fades, and playback all completely smooth). It was all achieved in pd, with the
user interface entirely on a midi fader board. I had set this linux box up
because I was projecting live camera images behind the actors speaking and
needed something close to lip-sync with the performer ... the only possible way
was a PCI frame grabber like this. The cameras were old and not gen-locked, but
the delay was always between 1/2 and 1 1/2 frames, so it worked out fine.
Firewire or any other transport was useless: too much buffering, hence unusable
latency.
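For a sense of scale on that 1/2 to 1 1/2 frame delay: the post doesn't say which standard the cameras used, but composite video is either PAL (25 fps) or NTSC (~29.97 fps), so the worst case is a few tens of milliseconds — close enough for stage lip-sync:

```python
# Convert the quoted 1/2 to 1 1/2 frame delay into milliseconds for
# both composite standards (the actual standard isn't stated).
for name, fps in [("PAL", 25.0), ("NTSC", 30000 / 1001)]:
    lo_ms = 0.5 / fps * 1000
    hi_ms = 1.5 / fps * 1000
    print(f"{name}: {lo_ms:.0f}-{hi_ms:.0f} ms")
```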
Graphics cards are much more powerful now, though getting that kind of
connection into a machine and directly to the GPU will take careful selection
of motherboards. The machine was also used at other times to run 4 DVI outputs
(it had 2 linked Nvidia cards) from this vision mixer. It toured through many
venues in the hands of a non-computer, very much analogue video guy, without
any need for my intervention ... it was all extremely stable and comfortable
for traditionally trained theatre techs to handle.
Simon