A Study in Synchresis

 

Daniel Wilson, Sebastian Gassner

Art & Technology

Dept. of Applied Information Technology

Göteborg University and Chalmers University of Technology

402 75 Göteborg, Sweden

{wilson, sgassner}@ituniv.se

 

0) Abstract

The ability to record and replay images and sound is quite new, and combining the two is even more recent - cinema spent its first 35 years without any synchronized sound. Yet sound attracts little notice outside the realm of music, and is quickly overlooked as soon as it is combined with image. Our aim is to separate and recombine visual and auditory elements as a means of examining sound's influence on the cumulative message we receive from an audio/visual source - a fusion known as synchresis.

 

1) Introduction

The French avant-garde composer, author and theoretician Michel Chion has been a pioneer in the new field of audio-visual relationships, developing a concept he calls synchresis (derived from synchronism and synthesis), which he defines as:

"The spontaneous and irresistible mental fusion, completely free of any logic, that happens between a sound and a visual when these occur at exactly the same time" [chion:1994:1].

Our aim is to explore this concept: the idea that there is an emergent property revealed in the union of sound and image.

Historically, in the early days of sound film, it was common practice to record the voice of a different actor from the one who actually appeared on screen.  This separation of the actual sound and the presented sound has continued - and become increasingly sophisticated.  Today, the audio for a sound film is largely created - and, in the case of animation, entirely created - in the post-production studio.  The audio in any given scene can range from a partial to a total construct of sounds that were either recorded after the fact by Foley artists or drawn from massive high-quality sound libraries.

Through this process of sound design we are able both to conceal and to emphasize - as well as to create reassociations in the fabrication of a dual-modality reality.  This is done through the use of sound that is often more real than the real.  What we hear is not simply a replacement of the sounds from the original scene, but the construction of a soundscape deemed most conducive to eliciting the desired psychological response.  This more-real-than-real sound is what Chion calls rendered sound [chion:1994:2].

Chion sees this as a means of adding value to the image.  In other words, because synchronous sound induces the audience to construe the image, and hence the event, differently, we can view the relationship of sound and image in film - and in visual media in general, as John Cage and Merce Cunningham explored in their collaborations - as one that is not merely associationist but synergetic.

This synergetic emergent property has been explored in numerous scientific studies, where it has been found that "Spatially and temporally coincident acoustic and visual information are often bound together to form multisensory percepts" [anderson:2004]. This, in turn, can give rise to illusory percepts when incongruent information is presented through the two modalities. The McGurk effect is an example of this in speech, where a spoken sound paired with a video of a person articulating an unmatched sound gives rise to a third, imaginary sound; or BA + GA = DA [mcgurk:1976].

Along these lines, Chion describes an exercise he calls Forced Marriage, which involves replaying the same film segment with different soundtracks in order to examine the relationships that are altered and created [chion:1994:3]. Our project extends this concept: we examine the power and effects of synchresis by means of an interactive installation that allows the audience to see, hear and feel this phenomenon immediately and dynamically.

2) The Installation

The basic element of our study of synchresis is a short video - created with audio dynamism in mind - shown on a screen with stereo sound.  The audio for this film is recorded and mastered so that every single sound source has its own audio track.  This allows each sound to be altered, swapped or replaced independently.
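
This track-per-source structure can be pictured with the minimal Python/NumPy sketch below; it is our own illustration rather than the installation's actual code, and the source names and sine-tone placeholders are hypothetical.  The point is simply that each source is an independent stereo buffer, so replacing one leaves the rest of the mix untouched.

  import numpy as np

  SR = 44100  # sample rate in Hz

  def make_track(seconds, freq):
      # Placeholder source: a sine tone standing in for a recorded sound.
      t = np.arange(int(seconds * SR)) / SR
      mono = 0.3 * np.sin(2 * np.pi * freq * t)
      return np.stack([mono, mono], axis=1)  # shape (samples, 2) = stereo

  # One independent track per sound source in the scene (hypothetical names).
  tracks = {
      "car": make_track(4.0, 110.0),
      "footsteps": make_track(4.0, 220.0),
      "ambience": make_track(4.0, 55.0),
  }

  def mix(tracks):
      # Sum all tracks into the stereo master heard in the installation.
      n = max(t.shape[0] for t in tracks.values())
      out = np.zeros((n, 2))
      for t in tracks.values():
          out[:t.shape[0]] += t
      return out

  # Swapping one source leaves every other track untouched.
  tracks["car"] = make_track(4.0, 330.0)  # the replacement sound
  master = mix(tracks)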

2.0) Sound Templates

In order to maintain a relation to the initial sound elements, each original track will leave its template behind when a new track comes in to replace it.  In other words, its volume, timing, and positioning in the audio environment will remain associated with the track, and will then be applied to the new sound (for example, a car approaching and passing from left to right might be replaced by the sound of a train, which would have its volume and pan matched to this movement).
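
One way such a template could be realized is sketched below in Python/NumPy; the windowed RMS analysis, the linear pan law, and the window size are our own assumptions for illustration, not a specification of the installation.

  import numpy as np

  SR = 44100   # sample rate in Hz
  WIN = 1024   # analysis window length in samples

  def envelopes(stereo):
      # Per-window loudness and pan (-1 = hard left, +1 = hard right)
      # measured from the original stereo track.
      n = stereo.shape[0] // WIN
      vol = np.zeros(n)
      pan = np.zeros(n)
      for i in range(n):
          w = stereo[i * WIN:(i + 1) * WIN]
          left = np.sqrt(np.mean(w[:, 0] ** 2)) + 1e-9
          right = np.sqrt(np.mean(w[:, 1] ** 2)) + 1e-9
          vol[i] = (left + right) / 2
          pan[i] = (right - left) / (right + left)
      return vol, pan

  def apply_template(new_mono, vol, pan):
      # Impose the original track's volume and pan template on a new mono sound.
      out = np.zeros((len(vol) * WIN, 2))
      for i in range(len(vol)):
          w = new_mono[i * WIN:(i + 1) * WIN]
          if len(w) == 0:
              break
          w = w / (np.sqrt(np.mean(w ** 2)) + 1e-9) * vol[i]       # match loudness
          out[i * WIN:i * WIN + len(w), 0] = w * (1 - pan[i]) / 2  # simple linear pan
          out[i * WIN:i * WIN + len(w), 1] = w * (1 + pan[i]) / 2
      return out

  # Synthetic example: a noise burst panned left to right stands in for the car;
  # a low tone stands in for the train that inherits the car's template.
  t = np.arange(4 * SR) / SR
  p = np.linspace(-1.0, 1.0, len(t))
  burst = np.random.randn(len(t)) * np.hanning(len(t))
  car = np.stack([burst * (1 - p) / 2, burst * (1 + p) / 2], axis=1)
  train = np.sin(2 * np.pi * 80.0 * t)
  vol, pan = envelopes(car)
  train_in_scene = apply_template(train, vol, pan)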

2.1) Sound Manipulation

The second dynamic and independent variable will affect the sound elements themselves, such as pitch, timbre and harmonics.
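
As a deliberately naive example of one such manipulation, the Python/NumPy sketch below shifts pitch by resampling; this approach also changes duration (a phase-vocoder-based tool would preserve timing), but it shows the kind of parameter the audience will be able to vary.

  import numpy as np

  SR = 44100  # sample rate in Hz

  def pitch_shift(mono, semitones):
      # Resample so the signal plays back `semitones` higher (or lower if negative).
      ratio = 2 ** (semitones / 12.0)          # frequency ratio of the shift
      src = np.arange(0, len(mono), ratio)     # fractional read positions
      src = src[src < len(mono) - 1]
      lo = src.astype(int)
      frac = src - lo
      return (1 - frac) * mono[lo] + frac * mono[lo + 1]  # linear interpolation

  # Example: one second of a 220 Hz tone shifted up a fifth (+7 semitones).
  t = np.arange(SR) / SR
  tone = np.sin(2 * np.pi * 220.0 * t)
  shifted = pitch_shift(tone, 7.0)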

 

3) Conclusion

Through this exercise the audience will develop both an appreciation for the synergetic properties of sound and image and an awareness of the sound designer's ability to manipulate and re-associate through careful selection.

It will provide some insight into how we hear - and how characteristics of visual elements may be rendered in sound, such as "movement, weight, size, solidity, resistance, contact, texture, temperature, impact, release, ..." [sonnenshein:2001].

Synchresis is an often-overlooked element of life - with sound generally overshadowed by the primacy of the image.  Here we venture to pull audio back to the forefront through interactive and generative methods that aim to show both the malleability of the message we receive from visual information and the power of creation and association that arises from the union of auditory and visual stimuli.

Appendix A) Bibliography

[chion:1994:1] Michel Chion, Audio-Vision: Sound on the Screen, p. xviii, New York: Columbia University Press, 1994.

[chion:1994:2] Michel Chion, Audio-Vision: Sound on the Screen, p. 98, New York: Columbia University Press, 1994.

[anderson:2004] T.S. Andersen, K. Tiippana and M. Sams, Factors influencing audiovisual fission and fusion illusions, Cognitive Brain Research, 21(3), pp. 301-308, 2004.

[mcgurk:1976] H. McGurk and J. MacDonald, Hearing lips and seeing voices, Nature, 264, pp. 746-748, 1976.

[chion:1994:3] Michel Chion, Audio-Vision: Sound on the Screen, p. 118, New York: Columbia University Press, 1994.

[sonnenshein:2001] David Sonnenschein, Sound Design: The Expressive Power of Music, Voice, and Sound Effects in Cinema, p. 27, CA (USA): Michael Wiese Productions, 2001.