24 April 2013

Disembodied heads - Feed Development Part 5


Nina.

The voice of the Feed.

She is Violet's personal Feedtech assistant. Benign at first, then increasingly less so as the story goes on and Violet's situation worsens.


We still haven't decided if she's the manifestation of someone in a remote call centre or entirely computer-generated.
The mouth responds to an audio feed generated live by one of the performers on stage with a microphone.
With full control of the head, eyes, mouth, eyebrows, blinking and so on, Nina's face was the most complicated of all the control tasks I'd set up.
Sometimes two hands just weren't enough.
TouchOSC is capable, I think, of sending up to ten controls simultaneously, but my brain starts to melt once I try to think about more than three things at the same time. Ultimately I'll probably set up a bunch of buttons that just trigger pre-composed sequences, but for now it's all hands on deck.
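For anyone curious what that button-triggered approach might look like, here's a minimal sketch (just an illustration, not the actual show patch): it assumes python-osc on the receiving machine, and the /nina/* addresses are invented for the example rather than taken from my real TouchOSC layout.

```python
# Rough sketch: TouchOSC faders set Nina's face parameters directly, and a
# single button fires a pre-composed blink sequence so the operator doesn't
# have to ride the eyelids by hand. Addresses are hypothetical.
import threading
import time

from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import ThreadingOSCUDPServer

face = {"jaw": 0.0, "brow_l": 0.5, "brow_r": 0.5, "eyes_x": 0.5, "eyes_y": 0.5, "lids": 1.0}

def set_param(address, value):
    # e.g. /nina/jaw 0.7  ->  face["jaw"] = 0.7
    face[address.rsplit("/", 1)[-1]] = float(value)

def play_blink(address, value):
    # One button stands in for two faders: a canned close/open keyframe run.
    if value < 0.5:                       # trigger on press only, not release
        return
    def run():
        for lids in (0.3, 0.0, 0.0, 0.4, 1.0):   # closed -> open
            face["lids"] = lids
            time.sleep(0.04)
    threading.Thread(target=run, daemon=True).start()

dispatcher = Dispatcher()
for name in ("jaw", "brow_l", "brow_r", "eyes_x", "eyes_y"):
    dispatcher.map(f"/nina/{name}", set_param)
dispatcher.map("/nina/blink", play_blink)

# TouchOSC on the iPad points at this machine's IP and this port.
ThreadingOSCUDPServer(("0.0.0.0", 8000), dispatcher).serve_forever()
```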
 




18 April 2013

Ricochet - Feed Development Part 4



The characters play a game called Ricochet near the start of the story that basically involves slamming into one another and trying to rack up points by hitting the bouncing glowing bits. Again, not really a scene that was pivotal to the story but an opportunity to really try and get the tracking sorted out.
 
We did two tests: one with the players being tracked by a Kinect, and the other with me tracking them manually on the iPad. All of this was hooked up to an actual working game, so the actors could just try to play it rather than trying to act like they were playing a game. The still images don't really do it justice, but the difference in the quality of the actors' movement when their bodies were actually being tracked, versus when they were just pretending to play, was startling. When they were actually being tracked you could see them crouching, ducking and moving with much more of a sense of purpose than when they were just pretending, which mostly involved spectacular leaps.

The problem for now is that when they were jumping around and crossing over each other's paths, the tracker would get confused about who was who and the players would find that they were suddenly scoring own goals. This was a problem I could avoid when I was tracking manually but, when I did that, the whole cause-and-effect sensation was broken. It might be possible to mount the Kinect in the roof looking straight down on the players so they never cross over, but my feeling is still that some kind of RF/WiFi tracking, where each player has a unique ID, is the way forward. The game visuals could do with a bit more tarting up too really; I imagine something like Plasma Pong that you jumped around inside of would be absolutely mind-blowing for actors and audience alike.
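For anyone wondering why the tracker mixes the players up, here's a toy illustration (not the actual tracking patch) of the underlying problem: the camera only sees anonymous blobs, so each frame you have to guess which blob belongs to which player, usually by nearest distance to where they were last frame, and that guess falls apart exactly when two bodies cross.

```python
# Toy demo of identity swapping in anonymous blob tracking: new detections are
# greedily matched to each player's last known position by distance.
import math

def assign_ids(prev, detections):
    """prev: {player_id: (x, y)}; detections: list of (x, y) with no ids.
    Returns {player_id: (x, y)} via greedy nearest-neighbour matching."""
    remaining = list(detections)
    assigned = {}
    for pid, last in prev.items():
        if not remaining:
            break
        nearest = min(remaining, key=lambda d: math.dist(last, d))
        assigned[pid] = nearest
        remaining.remove(nearest)
    return assigned

# Two players running towards each other and crossing over.
players = {"A": (0.0, 0.0), "B": (10.0, 0.0)}
frames = [[(2.0, 0.0), (8.0, 0.0)],
          [(4.9, 0.1), (5.1, -0.1)],   # nearly on top of each other
          [(8.0, 0.0), (2.0, 0.0)]]    # they have now swapped sides
for detections in frames:
    players = assign_ids(players, detections)
    print(players)
# After the crossing, label "A" ends up following the body that started on the
# right, which is how the actors ended up scoring own goals.
```

A unique ID physically attached to each player (the RF/WiFi idea) sidesteps this entirely, because there's nothing left to guess.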


13 April 2013

UpCar™ Blues - Feed Natimuk Development Part 3


I think the UpCar™ scenes were one of my favourite experiments thus far. Not all that pivotal to the plot, more a means of getting Violet and Titus from one scene to the next, but I was really keen to have a go at updating this...
(image source movietom.wordpress.com)
The old rear-projected screen behind the stationary vehicle was a classic staple of 50s and 60s cinema, and I was itching to try it in a theatrical setting and put my own twist on it: not just projecting the background, but the car as well. Unlike the old cinema version, the car in Feed is free to turn in 3D and move about the stage, with the background streaming past according to which way it's facing. The two passengers are seated on the moving table just behind the sharks-tooth scrim; the bright colours in the projection obscure them except through the windscreen, which has been left black to reveal their faces.

Theoretically it would be possible to track the passengers as they moved around and update the projections accordingly, but for the time being I was just tracking them manually with the iPad.
I could control the rotation (left, right, up, down and tilt) as well as the X, Y and Z position of the car in space, and it was pretty easy to keep up with the performers who were moving the passengers around the stage. It would be possible, if you wanted, to have the car do a complete 360 with the passengers inside it. The background just recedes into the distance based on the direction the car is facing, and the whole scene is rendered in real time, so there is no need for the actors to try and move the car to exactly the right spot at the right time, which would have been almost impossible.
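In case the mechanics of that aren't obvious, here's a minimal sketch of the idea (parameter names invented, not the real patch): the iPad faders set the car's pose directly, and the background's drift each frame is derived from whichever way the car is pointing, so the city streams past behind it however the performers spin the table.

```python
# Sketch of the manual UpCar control idea: faders set the pose, and the
# projected background slides opposite to the car's heading every frame.
import math

class UpCar:
    def __init__(self):
        self.x = self.y = self.z = 0.0   # position on / above the stage
        self.heading = 0.0               # yaw in degrees, from a rotary fader
        self.tilt = 0.0
        self.speed = 1.0                 # apparent forward speed

    def set_from_faders(self, x, y, z, heading, tilt):
        self.x, self.y, self.z = x, y, z
        self.heading, self.tilt = heading, tilt

    def background_offset(self, dt):
        """How far the projected city should slide this frame so it appears
        to stream past, opposite to the direction of travel."""
        rad = math.radians(self.heading)
        return (-math.cos(rad) * self.speed * dt,
                -math.sin(rad) * self.speed * dt)

car = UpCar()
car.set_from_faders(1.2, 0.0, 3.0, heading=90.0, tilt=5.0)
print(car.background_offset(1 / 60))     # per-frame drift of the background
```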
It was one of those things that seemed like it ought to work but really, until we set it up and actually tried it, it was impossible to know if it was going to be convincing enough for the audience to be worth doing. It was a lot of work to get this scene set up, and when we first tried it at the Horsham workshop the scrim we were using was too transparent and the effect was totally lost (below is the extremely lame-looking Horsham first take on the whole car thing).
I almost abandoned the idea at this point, so it was quite a relief to see it looking so much more convincing with the sharks-tooth. Very pleasing, after all that, to finally see it working as well as it did.


12 April 2013

The Stagecraft of Feed - Natimuk Development Part 2


Violet's father (Craig) delivers his speech
One of the things we have been experimenting with is how we could assemble a few simple elements and combine them with projections to build a range of settings. The idea was to work with a set of basic but versatile elements and reconfigure them to construct everything we need for the show. As well as the semi-transparent white sharks-tooth scrim across the front (for the feed and text messaging), we had three other elements, all on wheels: a table and two screens (roughly 3 x 4 metres). The two moveable screens actually had projectors mounted behind them so that they could be continuously projected upon even as they were being moved around the stage, regardless of what angle they were facing (originally the plan had been to track and project onto these surfaces, but it just made so much more sense to mount the projectors on them and be done with it).

For simplicity I kept the controls for the two mobile screens on a separate controller, so that someone else could operate them if the backgrounds needed to change while I was concentrating on the foreground animation.
At various points the two screens form the internal corner of a seedy motel, the external street corner of a building, a flat wall with an open doorway, and a range of other combinations besides. To complement this, the mobile bench served variously as bed, UpCar™, kitchen table etc. (and will likely also be used as an additional projection surface in the final production).


Titus and Violet (James and Libby) at the hotel.

One of the goals of this development phase was to experiment with the way the transition from one scene to the next could work as a theatrical moment in its own right, taking the audience on a mental journey as the elements of one scene transform into the next. There's a moment of confusion as people's brains struggle to comprehend what they're actually seeing, still trying to hold onto pieces of the old scene as it falls apart, and then the payoff: a wave of comprehension as the new scene drops into place. When it's well done, it's something I've always enjoyed in animation (both watching and striving to create). It's even more exciting trying to make it happen live on the stage.

The scene below is a good example: the kitchen walls drop away into darkness, the table transforms into an UpCar™, and suddenly we are on a joy flight through the city.

Domestic Bliss™ with James, Craig and Naja

Titus and Violet (James and Libby) in the UpCar
The UpCar™ scene was pretty much my favourite thing we attempted this time round, so I'm going to save it for a whole post of its own...

coming soonish...




07 April 2013

Feed 86% - Natimuk Workshops

Recently we ran another Feed workshop in the Natimuk Hall inviting a group of local actors to interact with the animated scenes I'd been developing over the past few months.  

The main plan was to see how performers actually meshed with the projections I'd been developing, both in terms of how they could engage with the projections and how well the projections would respond to them. You can imagine how a lot of things might work when you're looking at them on the computer screen, but until you actually see them on a stage with real people it's hard to be sure. Often the live effect is very different from the screen effect, particularly the speed things move at: things that seem to drift around the monitor in a leisurely manner rip across the scrim at lightning speed, confusing the performers and inducing seizures in the audience. Luckily a lot of the worst of this became obvious during the Horsham workshops late last year, so I was a lot more mindful of it going into it this time.
One of the key elements of the play is the feed itself, a cloud of data that tracks each performer, full of contextually relevant info, helpful pointers and targeted advertising. Characters can review new images from the feed, send images to one another, share them with a group or communicate via text, all while moving around on the stage.
The script requires that there be as many as seven performers on stage doing this at any one time, and keeping track of it all, and keeping it even vaguely reliable, has been the biggest challenge of the project so far. I've been using a range of techniques: video tracking, the Kinect, and manual tracking on the iPad via TouchOSC. Each method has its strengths and weaknesses, and likely the ultimate solution will feature some combination of them.

Manual tracking works really well up to about four objects, at which point everything goes to pot and the operator's head explodes. The Kinect is awesome (for its depth tracking and for not requiring visible light) but has some annoying limitations: it's limited to roughly a 6-metre range, and it can't see through the scrim we need to have hanging across the stage, so it can only track people in front of that. Visible-light tracking allows a greater range and sees right through the scrim, but it lacks the depth perception of the Kinect and, being sensitive to visible light, gets confused by the projections. The biggest problem with all the automated tracking is that the computer tends to get confused when performers cross paths or bump into each other (that's where human operators really come into their own).

Something I haven't tried yet, but would like to, is planting some kind of wi-fi tracker on each performer, allowing them to be tracked and uniquely identified anywhere in the room. From what I've read there can be issues with accuracy and lag, but if it could be got working it ought to be perfect. Anyone with any experience of this, PLEASE get in touch.
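To give a feel for how the "combination of techniques" might hang together, here's a minimal sketch (names, weights and the manual-override behaviour are all placeholders, not the actual show setup): each performer's feed cloud follows one smoothed position, and whichever source last reported (Kinect, camera tracker, or an operator nudge from the iPad) feeds into the same filter.

```python
# Minimal sketch: one smoothed track per performer, fed by whatever tracking
# source happens to be reporting, with manual iPad corrections snapping harder
# so a human can rescue the track when the automated tracking loses someone.

SMOOTHING = 0.25   # 0 = frozen, 1 = jump straight to the newest reading

class PerformerTrack:
    def __init__(self, name, x=0.0, y=0.0):
        self.name = name
        self.x, self.y = x, y

    def update(self, x, y, manual=False):
        alpha = 0.8 if manual else SMOOTHING
        self.x += alpha * (x - self.x)
        self.y += alpha * (y - self.y)

    def feed_anchor(self):
        """Where this performer's cloud of ads and messages gets drawn."""
        return (self.x, self.y)

violet = PerformerTrack("Violet")
violet.update(2.0, 1.0)                 # Kinect reading
violet.update(2.1, 1.1)                 # next Kinect reading
violet.update(4.0, 1.0, manual=True)    # operator drags the marker on the iPad
print(violet.feed_anchor())
```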

And thanks again to Regional Arts Victoria, whose support allowed this recent stage of the development to happen. It feels like we've made massive headway over the last few months and the whole project is creeping steadily closer to being a reality.

These are all stills from the video we shot (thanks Gareth Llewellin). I'll be uploading that once I've had a chance to edit something together, but there'll likely be a bunch more stills first, focusing on some of the scenes we worked through.