The shape of the performance


All images punctuating this post are stills from footage processed for the show at The Emporium.

Background

Back in the 1990s I performed a series of live film improvisations with Robert Johnson and Tony Woodhead under the name ‘Deconstructed Cinema’. This is the first time since then that I have felt it was correct to revisit that process.

Our method back then was technically very simple. We used a basic Panasonic AVE mixer with two Hi-8 tape sources to create the visuals, and two turntables and a Denon vari-speed twin CD deck running through a Spirit Folio mixer to create the soundtrack. Tony Woodhead used film loops and mirrors to paint the room around our projection and fully immerse the audience. He sweated much more than we did, being so close to so many projector bulbs.

At the time VJing was just emerging as an art form in the clubs, but we always felt that the tools could provide something more powerful than just creating cosmic visuals to accompany a dance track. We did our share of that too, and it was fun, but it got monotonous very quickly. I was more interested in developing narrative films, something I felt was encouraged by the physical, more reactive approach we were forced to take by using linear tape. We did have a non-linear edit system which we could, and did once, use to give us more selective control over the live edit (and access to live effects). But it changed our mindset drastically when we performed: it pushed us into a VJ headspace, and we were no longer working cinematically. The only technical upgrade we made to our live setup was switching from Hi-8 sources to DV sources.

The new approach

I’ve decided that, as current digital technology allows for a more lightweight and more powerful approach than those earlier shows, we will adopt very different strategies. We will perform with just two PCs: one to edit the visual footage (using a Puredata patch) and one to generate the soundtrack (using Ableton and a Mackie FireWire box).

There is nothing unique in this, just as I never saw anything unique in what we did as ‘Deconstructed Cinema’. To me it is still an extension of, and return to, early silent cinema, where live musicians would play an accompanying soundtrack in the theatre as the film played.

From my experience of using Ableton Live on a number of occasions, I feel the improvising of the soundtrack will be as flexible and open as we need it to be. Ableton is not as limiting as it first appears: the ability to remap any control in its signal path to any MIDI control you have available, without interrupting its output, allows any reactive or reflective decision made during a jam to be actualised.

However, the tools we need for improvising a film still don’t really exist in any of the current forms of VJ software. Every misgiving I had about using NLEs and VJ software back in the early days still stands. To me it is all (from Arkaos to VJamm to Resolume, etc.) very unsuited to the kind of live editing/mixing required to develop a proper progressive visual arc. The nearest tool I’ve seen to what is required is LiVES, which mixes an NLE with real-time performance and effects. So you actually have the feeling of editing; there is a linear representation of your performance you can jump around, much as we used to do with linear tape. There is an almost physical sense that you are dealing with a visual continuum rather than the sequence of unrelated loops that VJ software creates.

What I feel is required is an application that allows you to move freely around a linear sequence of shots while giving you a flexible way of changing the overall signal path through a node-based interface. The node-based approach of Puredata is the reason I’ve selected it as the platform to develop the application we will mix and edit with. Although I have no intention of building a timeline, I will have arrays of shots that run, and that I can jump around within, as if I were working with tape. It makes the process more reactive, and therefore more improvisational, because you have a sense of what shot is coming next if you make no intervention in the edit.
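
To give a rough idea of that model (the real thing will be a Puredata patch built in its graphical environment, so this is only a conceptual sketch in Python with invented names like Shot and ShotSequence), the core is a linear array of shots with a playhead that advances on its own unless the performer jumps it elsewhere:

```python
# Conceptual sketch only: the actual implementation is a Puredata patch.
# Shot and ShotSequence are hypothetical names used for illustration.

from dataclasses import dataclass


@dataclass
class Shot:
    clip: str        # path to a pre-processed clip
    duration: float  # length in seconds


class ShotSequence:
    """A linear run of shots, like tape: it keeps playing forward
    unless the performer jumps the playhead somewhere else."""

    def __init__(self, shots):
        self.shots = list(shots)
        self.index = 0

    def current(self):
        return self.shots[self.index]

    def advance(self):
        # With no intervention the next shot in the array plays,
        # so the performer always knows what is coming.
        self.index = (self.index + 1) % len(self.shots)

    def jump(self, index):
        # A live edit: move the playhead anywhere in the sequence.
        self.index = index % len(self.shots)
```

The point of the structure is exactly what distinguishes it from a bank of unrelated VJ loops: left alone it behaves like a reel of tape, and every jump is a conscious edit against that default.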

The initial Puredata patch I will build for the shows on the 13th and 14th of June will be very simple; although I will be using some processing effects, I intend to keep them to a minimum to start with. Although it is an easy development platform to work in (relatively speaking, as a coder who isn’t completely weirded out by Reverse Polish Notation), it’s not exactly rock-solid stable on a PC. Instead I intend to prepare and process the footage in advance of the live edit/mix using Nuke. I can achieve more with the footage I’ve shot that way, and it is also more in line with the way my method has developed since I made the ‘Outward Displays of Inward Neglect’ film. By taking that approach (which is something we used to do with the footage we used when performing as ‘Deconstructed Cinema’) I also develop a better understanding of the footage I’ve shot over the weeks leading up to the show.

Not all the footage will be processed, however; some of it I intend to work with live, unprocessed.

Anna and I will work the performance the same way Rob and I used to work, in that we will switch between producing the sound and the film edit as the performance progresses. The soundtrack itself will be built entirely from processing the sounds captured while the video was being shot (cars, people’s conversations, street sounds), and Anna will add a spoken narrative made up of overheard conversations, as well as singing and adopting different characters. This is a massive departure for both of us, and we’re looking forward to seeing how it will involve the audience more than our normal performances, as it will encourage them to affect and interact with our performance.

The audience live in the area the film was shot in, and it’s likely a few of them will appear on the screen or in the soundtrack. Our process of de/reterritorialising Stokes Croft (which happens before the show, in how I deal with and process the footage in preparation) will only find its context, its space, when it is brought together and influenced not just by ourselves but by the audience’s reaction. We will be pulling the area apart and putting it back together as a performance.
