Beyond the Mechanical Stage-Hand:
Towards an Aesthetic of Real-Time Interaction between Musicians, Dancers and Performers and Generative Art in Live Performance.

 

Martin Dupras, BMus, MA.

Digital Media Research Centre, University of the West of England, Bristol, UK.

e-mail: martin.dupras@uwe.ac.uk

 

Paul Verity Smith, BA (Photographic Arts, Polytechnic of Central London, 1978)

MA (Interactive Multimedia, London Institute/London College of Printing, 1997)

Senior Lecturer in Interactive Multimedia Design, Digital Media Research Centre,

University of the West of England, Bristol, UK.

e-mail: paul3.smith@uwe.ac.uk.

 

Jeremy Hattosh-Nemeth, BA, MA.

Digital Media Research Centre, University of the West of England, Bristol, UK.

e-mail: paul3.smith@uwe.ac.uk.

 

 

 

Abstract

The authors have recently developed generative content as part of collaborative pieces presented in venues in the UK and Europe. In each performance the generative content was created to respond to the live performance of dancers and musicians. Using microphones, sensors and video-based motion detection, contextual information was derived and used to influence the direction and evolution of generative algorithms controlling 3D moving images, sound and video processing. Additionally, some of the source material used in the generative content was acquired in real time from the performance itself, such as video (used for 3D texturing) and sound. In developing these works the authors have begun to formalise ways of analysing sets of real-time captured data and extracting information that can be used extensively.

1. The Background

The search for an “expanded theatre” or an “expanded cinema” is not new. Writings by Richard Wagner in the first half of the nineteenth century call for a “total theatre” in which all of the art forms involved – lighting, sound, scenery and performance – combine. This is echoed nearly a century later in László Moholy-Nagy’s writings on a Bauhaus approach to theatre, which reflect a fascination with mechanics and a desire to incorporate technology into performance. It was Moholy-Nagy who wrote that the “Theater of Totality with its multifarious complexities of light, space, plane, form, motion, sound, man – and with all the possibilities for varying and combining these elements – must be an organism.”

 

 

Over the last century we have seen mechanical, recorded material used to represent the virtual in contrast with the live performer in the work of Samuel Beckett, and computer-generated virtual dancers interacting with real dancers on stage in Merce Cunningham’s choreography. In film, the writing of Gene Youngblood on expanded cinema has pointed the way towards a medium that emulates a synaesthetic experience rather than that of a passive, distant observer.

 

In all of our experiments we have been seeking to create theatrical experiences which use the computer and computer-generated imagery in a manner that goes beyond a simple “stage-hand” effect. A more appropriate analogy would be that we are attempting to create instruments with which the musicians, dancers, actors and others can perform. Ideally, however, they are more than passive instruments, in that they have an intelligence that will continue to act and react without cues from the performers or technicians. They are at once another instrument and a performer in their own right.

2.1 The Software

Most of the generative work was executed with the free, open-source software PD (Pure Data) and GEM, an OpenGL graphics library for PD; the rest was developed with Macromedia Director. PD is a real-time programming environment for live performance of music and multimedia. It was chosen for its robustness and performance, but also for the ease of making programming changes in real time during rehearsals or performances.

2.2 The Performances

Each performance piece was created as a collaborative exercise. In each case, all parties involved exchanged ideas and experimented with modes of interaction in order to achieve a full and balanced integration of the generative and interactive content with the “human” performance. This was deemed especially important to avoid the multimedia content appearing to have been added on as an afterthought. Furthermore, the range of possibilities was not known to all parties beforehand, which made the dialogue and exchange of ideas challenging, but exciting.

2.2.1 “Where Do We Go From Here?”

“Where Do We Go From Here?” was an interactive, non-linear film written and directed by Barbara Hawkins. It is a conventionally shot, black-and-white movie focusing on the relationship between two lovers. The film was written for live performance, with the saxophonist Andy Sheppard playing an improvised soundtrack and controlling the playback and order of the scenes of the movie. While the time-based film sequences were shown on the main screen, two side screens each displayed four sequences that were further controlled by the saxophone, in such a way that the playback speed, size, cropping, colour, rotation and so on of each sequence responded in real time to various parameters of the saxophone sound, such as amplitude, base frequency and spectral richness.

This was achieved by feeding the sound input from the saxophone into a first computer running a PD program that derived data from the signal. This program monitored the amplitude and spectral content of the signal and sent contextual messages over a LAN to a second computer, on which the video was manipulated with GEM. The input data was interpreted by the first computer so as to send contextually meaningful messages to the second computer, such as the amount of activity, the richness of the spectrum and a rough fundamental frequency. This allowed us to control the side sequences with parameters that made them respond to the music in ways that were clear to the audience.
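The following Python sketch illustrates this division of labour: derive a handful of features from each block of audio and send them to the video machine as a small text message over UDP. It is a minimal sketch only; the feature names, message format and network address are assumptions made for illustration, not the actual PD patch used in the performance.

import socket
import numpy as np

SAMPLE_RATE = 44100
VIDEO_COMPUTER = ("192.168.0.2", 9000)   # hypothetical address of the GEM machine
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def analyse_block(samples: np.ndarray) -> dict:
    """Derive simple contextual features from one block of saxophone audio."""
    rms = float(np.sqrt(np.mean(samples ** 2)))        # overall amplitude
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freqs = np.fft.rfftfreq(len(samples), 1.0 / SAMPLE_RATE)
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-9))
    fundamental = float(freqs[np.argmax(spectrum)])    # rough pitch estimate
    return {"amp": rms, "brightness": centroid, "pitch": fundamental}

def send_features(features: dict) -> None:
    """Send the derived features to the video computer as a short text message."""
    payload = " ".join(f"{name} {value:.3f}" for name, value in features.items())
    sock.sendto(payload.encode("ascii"), VIDEO_COMPUTER)

The point of the sketch is simply that only a few small, already-interpreted values, rather than audio or video, ever cross the network.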

We found that the most critical aspect of the system, in achieving a high-quality “symbiosis” between music and images, was its latency. The video processing was displayed at 50 frames per second rather than 25, and the overall latency of the system, from sound input to image manipulation, was in the order of 25-30 ms. The authors found that the audience seemed to understand the relationship between sound and image manipulation intuitively, and enjoyed the experience more when the latency was as short as possible.

 

 

2.2.2 “Dead East, Dead West”

This piece, performed at the ICA (London) on 1 August 2003, was written by Sue Broadhurst as a live, semi-improvised performance piece involving two dancers/actors and a drummer, with live stereoscopic video processing and 3D image manipulation providing an ever-shifting setting in which the performers evolved. The images illustrated, in a way, the points of view and feelings of the two main performers, and the changing dynamic of their relationship.

The 3D image processing was realised using two computers: one performing sound input analysis (similar to that developed for “Where Do We Go From Here?”) and sending network messages over a LAN to a second computer, which received live video input from a DV camera and displayed the 3D visuals using GEM. Motion detection on this second computer was used to infer the amount and direction of the performers’ movement. This data controlled 49 3D objects in a virtual 3D space onto which the input video was mapped as a live texture. Several aspects of the 3D objects, such as their size, rotation, colour and spatial organisation, were driven by the data derived from the motion detection. The 3D virtual “set” was then displayed with two LCD projectors through polarising filters onto a silver screen, which also displayed stereoscopic video of the performance seen from different vantage points. The members of the audience wore polarised glasses, which enabled them to see the whole performance in stereoscopic 3D.
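The motion detection can be pictured as a simple frame-differencing step. The Python sketch below shows one plausible way of deriving an overall amount of movement and the horizontal centre of that movement, from which a direction can be inferred by tracking how the centre shifts over time. The threshold and the -1 to +1 convention are assumptions for illustration; the piece itself used motion detection within the PD/GEM patch.

import numpy as np

def motion_features(prev_frame: np.ndarray, curr_frame: np.ndarray,
                    threshold: float = 25.0) -> tuple[float, float]:
    """Return (activity, centre) from two greyscale frames of equal size."""
    diff = np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))
    moving = diff > threshold                      # pixels that changed noticeably
    activity = float(moving.mean())                # 0.0 (still) .. 1.0 (all moving)
    columns = np.where(moving.any(axis=0))[0]
    if columns.size == 0:
        return activity, 0.0                       # no movement detected
    centre = columns.mean() / curr_frame.shape[1]  # 0.0 = left edge, 1.0 = right edge
    return activity, float(centre * 2.0 - 1.0)     # -1.0 (left) .. +1.0 (right)

Smoothed values of this kind could then be mapped onto the size, rotation, colour and spatial organisation of the 49 textured objects.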

The piece was performed by Tom Wilton (dancer), Katsura Isobe (dancer) and Dave Smith (composer/percussionist), choreographed by Jeffrey Longstaff, and the 3D video was realised and performed by Brian McClave.


2.2.3 Interaktions-Labor, Saarland, Germany

This piece was realised in collaboration with Lynn Lukkas, University of Minnesota (USA), and Marija Stamenkovic, dancer (Barcelona, Spain), with programming in Macromedia Director by Paul Verity Smith. The program read the amplitude of the dancer’s breathing so that, by improvising, she could control the playback speed of a video (of herself) projected behind her. Additionally, a heart-rate monitor attached to her body controlled a drum track programmed in Max/MSP by Mark Henrickson.
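As a simple illustration of the kind of mapping involved, the Python sketch below maps a breath-amplitude envelope onto a video playback rate. The calibration values and the linear mapping are assumptions for illustration only; the piece itself was programmed in Director and Max/MSP.

def breath_to_playback_speed(amplitude: float,
                             quiet: float = 0.02, loud: float = 0.40,
                             min_speed: float = 0.25, max_speed: float = 2.0) -> float:
    """Map the dancer's breath amplitude onto a video playback rate."""
    t = (amplitude - quiet) / (loud - quiet)   # normalise to the calibrated breath range
    t = max(0.0, min(1.0, t))                  # clamp so extremes stay playable
    return min_speed + t * (max_speed - min_speed)

A shallow breath near the quiet threshold gives a slow playback rate, while a deep breath pushes the video towards double speed, letting the dancer phrase the projected video with her breathing.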

3. Development and Approach

The nature of collaborative work meant that, in each instance, the authors had to do most of the work in situ during the writing and rehearsal stages of the “human” parts of the performance. This implied the need for an approach in which experimentation and prototyping could be done quickly. The authors felt it was necessary to develop many modular algorithms that could be used and modified quickly and easily. Thus, much of the work went into creating contextual representations of data so that the different algorithms could easily be interfaced with one another. This made complex yet consistent real-time interaction achievable. For instance, a variety of data was derived from the amplitude of the sound input, resulting in messages that translated into concepts such as “sudden change” or “very noisy” (as opposed to very pitched).
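The Python sketch below suggests how such contextual messages might be produced from low-level features; the thresholds, feature names and message strings are illustrative assumptions rather than the actual PD patches used.

from typing import Optional

class ContextMessenger:
    """Turn low-level audio features into contextual messages for other algorithms."""

    def __init__(self, jump: float = 0.2, flatness_limit: float = 0.5):
        self.previous: Optional[float] = None  # last amplitude seen
        self.jump = jump                       # amplitude jump treated as a "sudden change"
        self.flatness_limit = flatness_limit   # spectral flatness above which the sound
                                               # counts as "very noisy" rather than pitched

    def update(self, amplitude: float, flatness: float) -> list[str]:
        """Return the contextual messages raised by the latest analysis block."""
        messages = []
        if self.previous is not None and abs(amplitude - self.previous) > self.jump:
            messages.append("sudden change")
        self.previous = amplitude
        messages.append("very noisy" if flatness > self.flatness_limit else "pitched")
        return messages

Because each message names a perceptual event rather than a raw value, the receiving algorithms could respond to “sudden change” in whatever way suited the piece, and new mappings could be prototyped quickly during rehearsal.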

The advantages of this approach were twofold. First, it meant sending relatively small amounts of data over the network, allowing us to preserve very low latency. Second, it meant that the algorithms could be developed around significant perceptual changes, which enabled us to design them with the goal of achieving a perceived closeness between performance and generativity.

 

4. Conclusion

A clear development in our work has been the use of open-source software in preparing the patches for performances. We have used PD and GEM extensively. This has enabled us to create applications tailored to the precise needs of each performance, as opposed to having to adapt or subvert commercial applications for purposes for which they were not intended.

Further experimentation will focus on live sampling of audio directly from the performers and musicians; on creating relationships between pre-recorded video and sound and the movement and speech of performers and dancers; and on further developing responsive environments in which the virtual stage setting responds to both the performers and the audience.