research

-


// 2011/03/09 - 19:03 / 81.251.229.15
As an artist I am, and always have been, interested in real-time. By this I do not mean that I have followed the evolution of the speed of central processing units with particular interest, but rather that, although I trained as a fine artist, I have never been able to summon much interest in artworks that remain in a fixed state. Hearing is the sense that has the greatest precision and therefore the greatest responsibility in the perception of passing time, and I rapidly adopted sound as my medium of predilection. In my opinion, however, recorded sound, like traditional fine art, is somehow frozen in time. Modern computers offer the possibility, at a reasonable cost, of processing and outputting information at the same speed as it enters the machine. Thus autonomous real-time art has become possible. By this I mean that, while real-time art has always existed in the form of music or theater, until recently, unlike fine art forms, it necessitated the presence of people to make it happen, and has therefore tended to be assimilated into an event or special occasion – a concert or performance, for example. My interest in real-time art, or arts of flux, finds its origin in the fact that it is neither anchored to this notion of the exceptional event, nor is it fixed in a final unalterable form. It has become possible to create artworks to be experienced on a day-to-day basis – perhaps in a similar way to that in which we experience the landscape evolving outside our window, or indeed the sound environment in which we live.

When I started working on the RoadMusic project I was attracted by the particularities of the audio environment of the automobile: unlike most situations in which we find ourselves, that of driving an (average modern) car is largely exempt from natural or incidental sound. It is rarely possible to hear the sounds of the landscape through which we are traveling, and considerable efforts are made to reduce sounds produced by the machine itself, generally considered unpleasant. What we listen to on the car stereo has therefore become the ambient sound of the car ride by default, and we have come to accept the relationship between sound, our visual field and physical sensations as being, in this context, inherently artificial.

The sound produced by the computer used in RoadMusic is synthetic, and a deliberate effort is made, in the (algorithmic) compositional choices, to offer the driver a musical style which is not completely alienating – there is an attempt to take into account cultural codes related to music for cars. So the generated audio is far from ‘natural sounding’ – it does not attempt to simulate a natural sound environment. Despite this, it creates a concrete relationship between sound and the surrounding environment, in that the music is generated from the situation in which it is perceived. Although aesthetic choices are made in advance of the listening experience through writing the computer program, the actual sound produced at the time we listen depends on captured variations in elements that constitute the environment. In this context, it can be argued that the sound is that of the situation itself – ambient sound (or noise) – even if it is produced artificially and even if it is organized as music.

The system (AutoSync)
The program runs on a dedicated onboard mini PC plugged into the auxiliary jack of the car’s sound system. Information about the drive is captured by accelerometers, which continuously send data concerning the x, y and z movements of the car, and by a camera fixed with a suction cup inside the windshield, which is used to analyze the visual landscape.

Sonification Strategies
Several different strategies of sonification are employed simultaneously to create an end result that is relatively complex. They have been developed using an essentially intuitive approach, during which different techniques have been tested in successive versions of the program and retained, “tweaked” or abandoned according to whether they “work” or not.

The Data is the Waveform
Vibrations measured by the accelerometers are continuously written into lookup tables (one for each axis), then read as audio (wavetable oscillators). This means that while pitch (the tune) is defined algorithmically within the program, the timbre of the sounds varies continuously in relation to the road surface, vibrations of the motor or other larger movements of the car. So on this level RoadMusic uses the data for audification.
These wavetable oscillators are implemented in different ways to produce a wide variety of sounds. The audio processing part of the program is organized as a series of modules that I will call instruments – at the time of writing there are around 15 instruments, whose sounds range from continuous, noisy layers to rhythmic and melodic bass-guitar-type sounds. The audification works on a microscopic level, producing often imperceptible variations; however, the fact that the synthesis, like a note played by an acoustic instrument, is never quite the same modifies the listening experience.
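The audification idea described above can be sketched in code. The following is an illustrative Python sketch, not the actual RoadMusic program: the table size, sample rate, class and variable names are all assumptions. Accelerometer samples are continuously written into a lookup table, which a wavetable oscillator then reads at an algorithmically chosen pitch, so the timbre follows the road while the tune does not.

```python
import math

TABLE_SIZE = 512      # assumed wavetable length
SAMPLE_RATE = 44100   # assumed audio rate

class WavetableOscillator:
    def __init__(self):
        # Start from a plain sine so the oscillator sounds even before
        # any sensor data arrives; vibrations then reshape the timbre.
        self.table = [math.sin(2 * math.pi * i / TABLE_SIZE)
                      for i in range(TABLE_SIZE)]
        self.write_pos = 0
        self.phase = 0.0

    def write_sample(self, accel_value):
        """Continuously overwrite the table with accelerometer data."""
        self.table[self.write_pos] = max(-1.0, min(1.0, accel_value))
        self.write_pos = (self.write_pos + 1) % TABLE_SIZE

    def next_sample(self, freq):
        """Read the table as audio at the requested pitch (linear interp)."""
        idx = self.phase * TABLE_SIZE
        i0 = int(idx) % TABLE_SIZE
        i1 = (i0 + 1) % TABLE_SIZE
        frac = idx - int(idx)
        out = self.table[i0] * (1 - frac) + self.table[i1] * frac
        self.phase = (self.phase + freq / SAMPLE_RATE) % 1.0
        return out

osc = WavetableOscillator()
# Feed in some fake "road vibration" data for one axis...
for n in range(TABLE_SIZE):
    osc.write_sample(0.5 * math.sin(0.3 * n) + 0.1 * math.sin(2.7 * n))
# ...then render a short block of a 110 Hz tone whose timbre is the road.
block = [osc.next_sample(110.0) for _ in range(64)]
```

The pitch parameter stays under compositional control while every pass over the table sounds slightly different, which is the microscopic variation described above.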

Data Analysis & Mapping
Data from the accelerometers is “cooked” in different ways. Each data stream is rescaled so that it can be used as a continuous controller by any parameter of any instrument. For example, the varying force of acceleration and deceleration, or the g-force as the car goes round bends or over bumps, can be mapped to amplitude, pitch, tone, delay speed, tempo…
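A minimal sketch of this “cooking” step, with invented value ranges and parameter names (the real program’s scales are not documented here): each raw stream is linearly rescaled into whatever range the target parameter expects.

```python
def rescale(value, in_lo, in_hi, out_lo, out_hi):
    """Map a raw sensor value into a parameter's range, clamped."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

# E.g. lateral g-force (assumed to arrive in roughly -2..2 g) mapped to
# a delay speed in seconds for one instrument and a pitch in Hz for another.
g_lateral = 0.8
delay_speed = rescale(g_lateral, -2.0, 2.0, 0.05, 1.0)
pitch = rescale(g_lateral, -2.0, 2.0, 110.0, 880.0)
```

Because every stream is normalized the same way, any stream can drive any parameter, which is what makes the matrix routing described later possible.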
Events are detected within these same streams by measuring difference against time, so it is possible to discern a bump, a bend, an acceleration etc. These events are used to trigger sounds directly, to introduce or remove notes from melodies, or to switch signal processes on or off.
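One plausible reading of “measuring difference against time” is a simple per-sample delta with a threshold; the threshold and sample values below are invented for illustration.

```python
def detect_events(stream, threshold):
    """Return (index, delta) wherever the stream jumps faster than threshold."""
    events = []
    for i in range(1, len(stream)):
        delta = stream[i] - stream[i - 1]
        if abs(delta) > threshold:
            events.append((i, delta))
    return events

# A smooth road with one sharp bump around sample 5:
z_axis = [0.0, 0.01, -0.02, 0.01, 0.0, 0.9, -0.7, 0.05, 0.0]
bumps = detect_events(z_axis, threshold=0.5)
# Each detected event could trigger a sound, add or remove a note
# from a melody, or switch a signal process on or off.
```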
These events are also used to calculate statistics about the road – measures of bumpiness, bendiness, stops and starts, etc. – that in turn produce new streams of data (moving frame averages). Like the rescaled data, these can be mapped to any parameter of any instrument, causing slower variations that mediate the drive on a macroscopic scale. A last level of analysis applies threshold values to the statistical data, producing a new set of events which are typically used to orchestrate the ensemble – switching different instruments on and off according to the type of road (straight, some curves, winding or flat, bumpy, very bumpy…).
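The two statistical layers could look roughly like this; window size, thresholds and instrument-group names are all assumptions, not the program’s actual values. A moving-frame average of bump events gives a slow “bumpiness” stream, and thresholds on that stream pick the orchestration.

```python
from collections import deque

def bumpiness(event_flags, window):
    """Moving-frame average: fraction of recent frames containing a bump."""
    recent = deque(maxlen=window)
    averages = []
    for flag in event_flags:
        recent.append(flag)
        averages.append(sum(recent) / len(recent))
    return averages

def orchestrate(level):
    """Switch instrument groups on a threshold of the statistical stream."""
    if level > 0.5:
        return "very bumpy: percussion ensemble"
    if level > 0.2:
        return "bumpy: rhythmic instruments"
    return "flat: continuous layers"

flags = [0, 0, 1, 0, 1, 1, 1, 0, 1, 1]   # 1 = bump event this frame
levels = bumpiness(flags, window=4)
states = [orchestrate(v) for v in levels]
```

The continuous `levels` stream can also be routed to any instrument parameter, like the rescaled raw data, while the thresholded `states` act as the slower orchestration events.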

The Landscape
A camera captures an image of the road ahead. This image is analyzed in two ways. Blob-tracking is used to distinguish large moving objects, most often cars in the opposite lane. A detected object is represented by its moving x, y and z coordinates, which, like the accelerometer data, can be mapped to any parameter of any sound. In practice, they are employed to create the impression that the movement of an instrument in the music follows that of an object outside the car, using psycho-acoustic cues (panning, amplitude and Doppler shift). Average RGB (red, green, blue) levels of the whole scene are calculated and used as data streams, typically to vary harmonic elements in an instrument. Here too an event is extracted when there is a change in the dominant color of the landscape, and used in the same way as the events described above.
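The color analysis can be sketched as follows; the toy two-pixel frames are invented stand-ins for real camera images. Each frame yields three data streams (the channel averages), and a change of dominant channel yields an event.

```python
def average_rgb(frame):
    """frame is a list of (r, g, b) pixels; returns per-channel averages."""
    n = len(frame)
    r = sum(p[0] for p in frame) / n
    g = sum(p[1] for p in frame) / n
    b = sum(p[2] for p in frame) / n
    return (r, g, b)

def dominant(rgb):
    """Name of the strongest channel: 'r', 'g' or 'b'."""
    return "rgb"[rgb.index(max(rgb))]

# Two frames: a green landscape, then a blue-dominated scene.
frame_a = [(40, 120, 60), (50, 140, 70)]
frame_b = [(40, 60, 150), (50, 70, 160)]
prev = dominant(average_rgb(frame_a))
curr = dominant(average_rgb(frame_b))
color_change_event = (curr != prev)   # trigger like any other event
```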

Composing
Since it would be unwise to program and drive at the same time, I have made it possible to record the data from the accelerometers and camera during a drive and to play it back in the comfort of my workspace. A versatile matrix system allows the testing of different routings of data to instrument parameters, and of the events that switch instruments on and off (as described above). These different “presets” can be saved as different versions to be tested in the car.
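One guess at what such a matrix preset might look like as data – every stream name, parameter name and range here is invented, and the real system’s storage format is not documented: a routing table maps named data streams to instrument parameters, event switches toggle instruments, and the whole preset is serialized so a version can be saved in the workspace and reloaded in the car.

```python
import json

preset = {
    "name": "winding-road-v3",   # hypothetical preset name
    "routings": [
        # source stream -> target parameter, with a rescaling range
        {"source": "accel.x", "target": "bass.pitch", "scale": [-2, 2, 110, 440]},
        {"source": "stats.bumpiness", "target": "drums.tempo", "scale": [0, 1, 60, 180]},
    ],
    "event_switches": [
        {"event": "road.very_bumpy", "instrument": "percussion", "state": "on"},
    ],
}

saved = json.dumps(preset)    # save one version in the workspace...
loaded = json.loads(saved)    # ...and reload it for the next test drive
```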