Vibrations of loudspeaker membranes cause air pressure waves, which our hearing system perceives as sound. As long as they are travelling through the air, these sound waves remain invisible to us. But as Ernst Chladni described as early as 1787 in his book Entdeckungen über die Theorie des Klanges, sound can be visualised by bowing a surface lightly covered with sand, producing so-called Chladni patterns. In her piece Touche Nature (2017) I-lly Cheng creates a phenomenon similar to Chladni patterns by placing transparent bowls filled with water on loudspeakers. The sound waves are visualised as water waves:
A contact microphone is attached to each bowl and amplifies both the bowl itself and the movements of the water in it. Four percussionists each play a bowl filled with 1000 ml of water. During the performance the percussionists produce sounds by moving the water with their hands, rubbing the bowl, throwing stones into the bowl or pouring more water into it. All of these sounds are picked up by the contact microphones and then amplified through the PA system. The water sound itself is also picked up by four condenser microphones and amplified directly through the loudspeakers in the hall.
As the first page of the score shows, the sound is processed with a pitch shift effect in the computer, and some feedback is used as well. The feedback is created by sending the signal of the contact microphones back to the four loudspeakers underneath the bowls. There is no direct feedback, however: two loudspeakers and two microphones together produce one big feedback loop. The sound picked up by the contact microphone of player 1 is sent to the loudspeaker of player 3; there it is picked up by the contact microphone of player 3 and sent to the loudspeaker of player 1, closing the loop. The same feedback loop is created between players 2 and 4.
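The cross-routing can be sketched as a small lookup: each player's microphone feeds another player's loudspeaker, so feedback only closes around the whole loop, never directly at one bowl. This is a minimal sketch; the player-number mapping follows the description above, everything else is assumption:

```python
# Sketch of the cross-routing in Touche Nature: the contact microphone
# of each player feeds the loudspeaker of another player. The mapping
# is taken from the description in the text; the code itself is a
# hypothetical illustration, not the actual patch.

ROUTING = {1: 3, 3: 1, 2: 4, 4: 2}  # microphone of player k -> loudspeaker of ROUTING[k]

def feedback_loop(start_player):
    """Follow the signal path until it returns to its starting point."""
    path = [start_player]
    current = ROUTING[start_player]
    while current != start_player:
        path.append(current)
        current = ROUTING[current]
    return path

print(feedback_loop(1))  # mic 1 -> speaker 3 -> mic 3 -> speaker 1: [1, 3]
print(feedback_loop(2))  # the second loop, between players 2 and 4: [2, 4]
```

Because no player's microphone feeds their own loudspeaker, the feedback always involves two bowls at once, which is exactly what keeps it indirect.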
You can hear different kinds of effects, including some feedback, at the very beginning of the piece, performed here by We Spoke:
During the piece I-lly explores the sonic world between natural water sounds and more abstract percussive rhythms. At the end of this fragment some pre-produced sounds are played through the loudspeakers, creating big movements in the water. These movements can not only be seen; the water sounds they cause are also amplified through the condenser microphones. The loudspeakers seem to become liquid themselves:
In his set-up for Small Movements (2016) Adam Basanta uses two microphones and seven loudspeakers in different combinations to create acoustic feedback. The sound of the feedback is surprisingly “clean”: it contains little noise and focuses essentially on a single pitch. This is no coincidence: besides the seemingly chaotic set-up of all kinds of glass jars, wires and cassette players, he uses the music software Max to control the resulting sound precisely. There is a constant interaction between the physical sound creation with the objects on the table and the virtual sound control in the computer.
Let’s first have a look (and listen) at how Adam uses the objects on the table to influence the sound. As mentioned before, acoustic feedback is an important element, and an example of it can already be heard at the very beginning of the documentation video (you will find the video at the end of this post). Adam also interacts physically with the sound by placing a stick on a loudspeaker membrane (see 15’44” in the video); later he uses a metal wire (16’40” in the video). Due to low frequency sine waves sent from the computer to the loudspeaker (more on this in the part on the computer software), the membrane moves back and forth and the stick or wire jumps on it, causing a quick rhythm. The jars are used as what could be called a physical filter. By placing them on or close to a loudspeaker, the sonic outcome is influenced by the resonance frequencies of the jar (see 14’15” in the video). By holding a jar close to a microphone, the microphone picks up the resonance frequency of the jar and the pitch of the feedback changes (see 20’30” in the video).
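The principle behind that jumping rhythm can be sketched in a few lines: a sine wave far below the audible range makes the membrane swing slowly in and out, and each excursion throws the stick or wire into the air. A minimal sketch, with frequency, amplitude and sample rate chosen purely for illustration:

```python
import math

# Minimal sketch of a low-frequency sine wave sent to a loudspeaker.
# Each positive excursion of this slow oscillation pushes the membrane
# outward and throws the stick or wire up; gravity brings it back down,
# producing the quick rhythmic tapping heard in the video.
# All parameter values here are illustrative assumptions.

def sine_block(freq_hz=4.0, fs=44100, seconds=1.0, amp=0.9):
    n = int(fs * seconds)
    return [amp * math.sin(2 * math.pi * freq_hz * i / fs) for i in range(n)]

block = sine_block()  # one second of a 4 Hz oscillation: four "throws" per second
```

The signal never reaches the audible range itself; what we hear is the purely mechanical result of the stick bouncing on the membrane.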
This physical sound creation is enhanced by computer software. A patch created in the music software Max manipulates the signals coming from the microphones before they enter the loudspeakers. I asked Adam what kind of sound processing is going on, and he let me know that “Of course, there is some serious limiting on each channel, calibrated to each speaker’s capacity in order to avoid burning them out. But the main processing occurs through two algorithms, which regulate the feedback frequency and amplitude. For frequency, I use various filtering techniques, […] which only allow feedback to occur at specific frequencies. Filtering can allow very precise control of the frequencies, and also allow me to create feedback in ways which are very unfamiliar to us: for instance, very low frequency feedback, or a tonal triad using feedback.” The processing thus does more than keep the feedback at a reasonable loudness and protect the loudspeakers from damage. With the help of the Max patch, the feedback frequencies are fixed quite precisely: 14 different pitches can be played by using the 14 possible feedback combinations of the two microphones and seven loudspeakers. And with a foot pedal Adam can switch to another preset in the Max patch, giving him a new row of 14 different pitches.
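Adam's description suggests a chain of a narrow filter followed by a limiter in each microphone-to-loudspeaker path. As a rough illustration of how filtering can pin feedback to a chosen pitch, here is a minimal sketch of a narrow band-pass biquad (using the standard Audio EQ Cookbook formulas) with a hard limiter; the parameter values and the structure are my assumptions, not Adam's actual Max patch:

```python
import math

# Minimal sketch of one microphone-to-loudspeaker channel: a narrow
# band-pass filter so that feedback can only build up around one
# chosen pitch, followed by a hard limiter that protects the speaker.
# All parameter values are illustrative assumptions.

def bandpass_coeffs(fc, fs, q=20.0):
    """Constant-peak-gain band-pass biquad; a high Q keeps the band narrow."""
    w0 = 2 * math.pi * fc / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    b = [alpha / a0, 0.0, -alpha / a0]
    a = [1.0, -2 * math.cos(w0) / a0, (1 - alpha) / a0]
    return b, a

def process(block, b, a, state=(0.0, 0.0, 0.0, 0.0)):
    """Direct form I biquad plus a hard limiter, sample by sample."""
    x1, x2, y1, y2 = state
    out = []
    for x in block:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        y = max(-1.0, min(1.0, y))  # hard limiter at full scale
        out.append(y)
        x1, x2, y1, y2 = x, x1, y, y1
    return out, (x1, x2, y1, y2)

# e.g. a feedback path tuned to 1 kHz at a 44.1 kHz sample rate
b, a = bandpass_coeffs(1000.0, 44100.0)
```

In a real set-up this channel would run continuously between microphone input and loudspeaker output; switching presets, as Adam does with the foot pedal, would then simply swap in a different set of centre frequencies.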
In Small Movements different kinds of technologies are used in undogmatic ways. Although you might have the impression that you can see all the sound processing happening, much of it is done virtually in the computer. Besides the heavy filtering of the feedback sound, the low sine waves mentioned earlier are another example of sonic material generated by the computer. By using these low frequency movements of the loudspeaker membrane to “play” a stick or metal wire, Adam connects the computer software to the physical sound production.
Two cassette players are also part of the set-up on the table, playing back material just produced by the performer. In Adam’s words: “The cassette looping is kind of a layer on top. The cassette players themselves are quite old, and so I use them as a way to echo or repeat previously occurring material, but in a way that is quite degraded. They are mostly used as faded memory of material which was crystal clear at the time at which it was played.”
Our clothes can be seen as a form of communication between ourselves and the outside world. They give a visual impression of who we are and how we would like to be seen by others. Pauchi Sasaki designs dresses which are not only visible, but transmit sonic material as well. These dresses consist of around 100 loudspeakers and are able to process sound live.
Pauchi got the idea of developing sonic costumes when she performed in a temple in Lima. As she remembers: “But of course, it’s an ancient temple, so there was no electricity or outlets; I could perform only acoustic sounds, even though that’s not what I had planned. That’s when I got the idea of a self-contained system, but one that could be integrated into my body, that was the idea” (interview by Michael Barron).
The result was developed in 2014 and is simply called Speaker Dress: a self-designed wearable sound sculpture. Two dresses exist nowadays, a black one and a white one. The black one contains 96 loudspeakers, the white one even 125. Several loudspeakers are connected to the same amplifier channel. The black dress, for example, contains six channels of amplification, resulting in 16 loudspeakers per channel and in six different sonic zones on the dress (a zone is formed by the loudspeakers diffusing the same sound).
The performer can choose from different input possibilities: a contact microphone, a lavalier microphone and an mp3 player are connected. These signals are sent wirelessly to a computer, which processes the sound in the music software Max. The sound is then sent back to the dress and diffused by the loudspeakers.
Here is a short video, made by sound engineer Nick Tipp during a sound check for the Ojai Music Festival. Pauchi is testing the dress and walks through the auditorium:
All kinds of live sounds made by the performers can be processed live during the concert, the transformed versions sounding through the dresses. Flutist Claire Chase and Pauchi herself, who is a violinist as well, use their breath, their voices and their instruments in the first composition Pauchi composed for two dresses: Gama XV (2016). The performers are dressed in their own sounds, transformed by live electronics:
In Sound in a Jar (2016) by Ronald Boersen, three performers (Ronald Boersen himself, Dganit Elyakim and Hadas Pe’ery) move three different microphones back and forth towards a very small loudspeaker placed in a jar. As Ronald explained to me, this piece is a sound environment which changes and develops algorithmically during the performance. The main task for the performers during the rehearsals is to explore this environment and find ways to engage musically with the sounds they can produce. The microphones pick up the sound of the loudspeaker in the small jar, and it is sent back to the loudspeaker again, passing through a patch in the music software Max. Because the loudspeaker is placed in a jar, the sound resonates more easily, a very suitable feature for acoustic feedback. The main sound of the performance is thus acoustic feedback, coloured by the different characteristics of the three microphones used (two different condenser microphones and a dynamic microphone).
The Max patch processes this feedback sound: as the scheme depicts, Ronald uses threshold-triggered reverb pulses, feedback-interval-driven harmonisation and granular delay lines. Through amplitude thresholds and feedback frequencies, these processes are directly influenced by the feedback sound itself, while the feedback is in turn processed by the Max patch. In this manner Sound in a Jar uses a double form of feedback: acoustic feedback (using the sound itself) and data feedback (using data streams generated from amplitude and frequency analyses of the loudspeaker sound, without using the sound itself), and both constantly affect each other. How much the sound of each microphone is processed by Max, and which of the three processes is used (reverb pulses, harmonisation or granular delay), changes during the piece, as depicted in the diagram in the score. The relationships between microphone, processing and loudspeaker change not only according to the distance between microphone and loudspeaker, but also because of the temporal development of the sound processing in the Max patch.
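The distinction between the two kinds of feedback can be sketched as follows: the control logic only ever sees analysis data (an amplitude value, a detected feedback frequency), never the audio itself. The threshold value, the interval ratio and the process names below are illustrative assumptions, not taken from Ronald's actual patch:

```python
# Sketch of the "data feedback" idea: processing decisions are derived
# from analysis data alone. The threshold, the interval ratio and the
# process names are hypothetical, chosen only to illustrate the
# threshold-triggered and frequency-driven behaviour described above.

REVERB_THRESHOLD = 0.5  # assumed amplitude threshold

def choose_processes(amplitude, feedback_hz):
    """Derive control decisions from amplitude and frequency analyses."""
    active = []
    if amplitude > REVERB_THRESHOLD:
        active.append("reverb pulse")      # threshold-triggered reverb
    if feedback_hz:
        interval = feedback_hz * 1.5       # harmonise e.g. a fifth above
        active.append(f"harmonise at {interval:.0f} Hz")
    return active

print(choose_processes(0.8, 440.0))  # ['reverb pulse', 'harmonise at 660 Hz']
```

The acoustic feedback then passes through whichever processes are active, which in turn changes the amplitudes and frequencies being analysed: the two feedback loops keep driving each other.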
In this close-up video the development in sound processing and the direct relationship between the movements of the microphones and the resulting sound can easily be followed:
A very appealing aspect of this set-up, in my view, is that all three microphone signals are connected to a single loudspeaker. All three players have to find their own way of playing, because each has a different type of microphone and their sound is processed in a different way, but at the same time all these different paths come together again in a small loudspeaker in a jar. In the second part of the performance the sound of the small loudspeaker is slowly also diffused through the bigger loudspeakers in the hall (the PA loudspeakers). This does not cause any noticeable change in the acoustic feedback interaction, but the spatial and spectral characteristics do change, due to the differences in placement, sound diffusion and spectral response of these loudspeakers. The sound of the jar itself now seems to fill the whole performance space, instead of occupying a single spot. At the end of the piece, the loudspeakers in the hall fade out again and the sound moves back into the jar.