FEED is a stage-play adaptation of a novel by M.T. Anderson, set in a disturbingly not-too-distant future where everyone has the internet in their heads. Everybody has the knowledge of the world at their fingertips and, as the world slides into oblivion, nobody is interested in anything beyond accessorising (sound familiar?).
I'm pretty excited about the idea at the moment. I think you could make this as a Hollywood blockbuster with shed-loads of special effects and the jaded audience wouldn't even blink (though the story is probably better than a lot of the films I've seen in recent times). But no, the film buffs amongst them would make comparisons to something like Tron and off they'd all trot to get a burger. I think that trying to do something like this live on stage, though, will make the audience sit up in their seats and take notice (and hopefully engage a little more with the content of the story).
The two biggest challenges in all of this for me are, firstly, to try and visualise what it might be like for everyone to have the internet in their heads and, secondly, to pull this off in a theatre setting. At the moment I imagine the Feed to be this swirling vortex of data (text, images, video, etc.), and then within that a little sub-vortex orbiting the head of each of the characters. The items circling each character are specific to them and will indicate what they're thinking about. The way they pass images and data to one another will be a big part of the whole thing too. And then there's "texting" as well. Maybe a third of the script is delivered as text messages, so there needs to be some kind of representation of that as well.
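To give a flavour of the sub-vortex idea in code: each orbiting item is really just a point circling the tracked position of a performer's head. A toy sketch in Python (all the names and numbers here are mine, purely to illustrate the idea):

```python
import math

def orbit_positions(head_x, head_y, n_items, t, radius=80.0, speed=1.5):
    """Return (x, y) positions for n_items orbiting a tracked head position.

    head_x, head_y : tracked head position in pixels
    t              : current time in seconds
    radius, speed  : orbit size and angular speed (tune to taste)
    """
    positions = []
    for i in range(n_items):
        # Spread the items evenly around the circle, then rotate over time.
        angle = (2 * math.pi * i / n_items) + t * speed
        x = head_x + radius * math.cos(angle)
        y = head_y + radius * math.sin(angle)
        positions.append((x, y))
    return positions

# e.g. five items circling a head tracked at (512, 300), half a second in:
print(orbit_positions(512, 300, 5, t=0.5))
```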
To try and do this it's going to be necessary to track the performers as they move about on stage and then use that info to drive the visuals, so that the performers can just act and not have to worry about being in the right place at exactly the right time. I've managed to do this so far using a Kinect and TUIO talking to Quartz Composer, building on some of the work I did (but ultimately abandoned) for Highly Strung. I may stick with this, or I may find something better. At the moment I like the way the Kinect works on infra-red light, so it doesn't get confused by projections or the lighting on the actors (or complete darkness, for that matter). I've got zero interest in Xbox games but, for $150, the Kinect is a great bit of kit.
And the code to do that looks like this.
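In Quartz Composer that "code" is a graph of patches rather than a listing, but the receiving end of it can be sketched in plain text easily enough. TUIO is just OSC messages over UDP, so something like the following Python sketch (using the python-osc library; port 3333 is the TUIO convention, and the handler names are my own) would print the tracked positions as they arrive:

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def handle_2dcur(address, *args):
    # TUIO 2Dcur profile: "set" messages carry a session id followed by
    # normalised x/y coordinates (plus velocities we ignore here).
    if args and args[0] == "set":
        session_id, x, y = args[1], args[2], args[3]
        print(f"cursor {session_id}: x={x:.3f} y={y:.3f}")

dispatcher = Dispatcher()
dispatcher.map("/tuio/2Dcur", handle_2dcur)

# TUIO trackers conventionally send OSC to UDP port 3333.
server = BlockingOSCUDPServer(("127.0.0.1", 3333), dispatcher)
server.serve_forever()
```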
This project has been my first foray into Quartz Composer and what they call spaghetti code. Fortunately there are a lot of great resources out there on the internet (kineme.net in particular; I've learned loads from deconstructing/butchering some of the things on that site). Basically, though, it's lots of little "bits" that do "stuff", and you link them up in a big ugly tangle...
...And it makes something that looks a bit like this.
And then you can project all that onto a scrim in front of the performers.
I managed to get a bit of time in with Greg and Anna recently in between them stressing out over their performance of Oliver's Tale on the salt lake. The shot above is of Anna doing a much better job than me of looking like a youth as we test out the texting thing.
For the Feed itself, the plan is to run the show with a live connection, doing live image searches on the internet based on things that are relevant to the context of the show, and then have the results thrown into the mix with everything else. Below are the results of a couple of Google image searches on a few different strings.
"Disco" |
"Ecological Disaster" |
"Boobs, Kissing & Ulcers" (quite a heady mix) |
There's still lots to figure out and lots more to learn. But right now I'm excited about the possibilities... and the fact that the performance will be all INDOORS and immune to the whims of the weather gods AND there will be absolutely NO giant puppets in the show. All this makes me feel positively relaxed about the whole thing. How hard can it be?