Vibrations of loudspeaker membranes cause air pressure waves, which our hearing system perceives as sound. As long as they are travelling through the air, these sound waves stay invisible to human beings. But as Ernst Chladni described in his book Entdeckungen über die Theorie des Klanges as early as 1787, sound can be visualised by bowing a plate lightly covered with sand. So-called Chladni patterns are produced in this way. In her piece Touche Nature (2017) I-lly Cheng creates a phenomenon similar to Chladni patterns by placing transparent bowls filled with water on loudspeakers. The sound waves are visualised as water waves:
A contact microphone is attached to each bowl and amplifies both the bowl itself and the water movements inside it. Four percussionists each play a bowl filled with 1000 ml of water. During the performance the percussionists produce sounds by moving the water with their hands, rubbing the bowl, throwing stones into the bowl or pouring more water into it. These sounds are all picked up by the contact microphones and then amplified through the PA system. The water sound itself is also picked up by four condenser microphones and amplified directly through the loudspeakers in the hall.
As the first page of the score shows, the sound is processed with a pitch shift effect in the computer, and some feedback is used as well. The feedback is created by sending the signal of the contact microphones back to the four loudspeakers underneath the bowls. There is no direct feedback between a bowl and its own loudspeaker, however: instead, a pair of two loudspeakers and two microphones together produces one big feedback loop. The sound picked up by the contact microphone of player 1 is sent to the loudspeaker of player 3, where it is picked up by the contact microphone of player 3 and sent back to the loudspeaker of player 1, closing the loop. The same feedback loop is created between players 2 and 4.
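To make the cross-wired routing explicit, here is a minimal sketch in Python. The player numbers come from the description above; the code itself is purely illustrative and not part of the piece:

```python
# Cross-wired routing in Touche Nature: each contact microphone feeds
# the loudspeaker of the partner player, so players 1 & 3 and players
# 2 & 4 each close one shared feedback loop.
ROUTING = {1: 3, 3: 1, 2: 4, 4: 2}  # mic of player -> speaker of player

def signal_path(start_player, hops):
    """Follow the feedback signal for a given number of hops."""
    path = [start_player]
    for _ in range(hops):
        path.append(ROUTING[path[-1]])
    return path

# Player 1's signal returns to player 1 after two hops, via player 3:
print(signal_path(1, 4))  # -> [1, 3, 1, 3, 1]
```

Because no player's microphone feeds their own loudspeaker directly, the loop is always mediated by a partner, which keeps the feedback playable rather than instantaneous.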
You can hear different kinds of effects, including some feedback, at the very beginning of the piece, performed here by We Spoke:
During the piece I-lly explores the sonic world between natural water sounds and more abstract percussive rhythms. At the end of this fragment some pre-produced sounds are played through the loudspeakers, creating big movements in the water. These movements can not only be seen; the water sounds they cause are also amplified through the condenser microphones. The loudspeakers seem to become liquid themselves:
In his set-up for Small Movements (2016) Adam Basanta uses two microphones and seven loudspeakers in different combinations to create acoustic feedback. The sound of the feedback is surprisingly “clean”: it contains little noise and focuses essentially on a single pitch. This is no coincidence: behind the seemingly chaotic set-up of all kinds of glass jars, wires and cassette players, he uses the music software Max to control the resulting sound precisely. There is a constant interaction between the physical sound creation with the objects on the table and the virtual sound control in the computer.
Let’s first have a look (and listen) at how Adam uses the objects on the table to influence the sound. As mentioned before, acoustic feedback is an important element, and an example of it can already be heard at the very beginning of the documentation video (you find the video at the end of this post). Physical interaction also takes place by putting a stick on a loudspeaker membrane (see 15’44” in the video); on a later occasion Adam uses a metal wire (16’40” in the video). Due to low frequency sine waves sent from the computer to the loudspeaker (more on this in the part on the computer software used), the membrane moves back and forth and the stick or wire jumps on the membrane, causing a quick rhythm. The jars are used as what could be called a physical filter. By placing them on or close to a loudspeaker, the sonic outcome is influenced by the resonance frequencies of the jar (see 14’15” in the video). By holding a jar close to a microphone, the microphone picks up the resonance frequency of the jar and the pitch of the feedback changes (see 20’30” in the video).
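The jumping of the stick even follows a simple physical rule of thumb: the stick loses contact whenever the membrane's peak downward acceleration exceeds gravity. A small sketch (the frequency and excursion values are invented for illustration, not measured from Adam's set-up):

```python
import math

def stick_jumps(freq_hz, excursion_m, g=9.81):
    """A stick resting on a sinusoidally driven membrane loses contact
    when the membrane's peak acceleration, amplitude * (2*pi*f)^2,
    exceeds the gravitational acceleration g."""
    peak_accel = excursion_m * (2 * math.pi * freq_hz) ** 2
    return peak_accel > g

# Invented example values: a 30 Hz sine with 1 mm excursion easily
# throws the stick (peak acceleration ~35.5 m/s^2), a 5 Hz sine with
# the same excursion does not:
print(stick_jumps(30, 0.001))  # True
print(stick_jumps(5, 0.001))   # False
```

This also explains why low frequency sine waves work so well here: the excursion of the membrane is largest at low frequencies, so the stick can be thrown into a clearly audible rhythm.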
This physical sound creation is enhanced by the use of computer software. A patch created in the music software Max manipulates the signals coming from the microphones before they enter the loudspeakers. I asked Adam what kind of sound processing is going on, and he let me know that “Of course, there is some serious limiting on each channel, calibrated to each speaker’s capacity in order to avoid burning them out. But the main processing occurs through two algorithms, which regulate the feedback frequency and amplitude. For frequency, I use various filtering techniques, […] which only allow feedback to occur at specific frequencies. Filtering can allow very precise control of the frequencies, and also allow me to create feedback in ways which are very unfamiliar to us: for instance, very low frequency feedback, or a tonal triad using feedback.” The processing thus does more than keep the feedback within reasonable loudness and protect the loudspeakers from damage. With the help of a Max patch, the feedback frequencies are fixed quite precisely: 14 different pitches can be played by using the 14 possible feedback combinations of the two microphones and seven loudspeakers. And with a foot pedal Adam can switch to another preset in the Max patch, giving him a new set of 14 different pitches.
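The basic principle behind such filtered feedback can be sketched in a few lines of Python. This is not Adam's Max patch, just an invented toy model: on each trip around the loop the amplitude is multiplied by the open-loop gain, which the filter keeps above 1 only near an allowed pitch, and a hard limiter caps the result:

```python
def steady_amplitude(freq_hz, allowed_pitches, steps=200, ceiling=1.0,
                     pass_gain=1.5, stop_gain=0.6, bandwidth_hz=20.0):
    """Iterate a feedback loop at one frequency: per round trip the
    amplitude is multiplied by the loop gain, then hard-limited.
    Frequencies inside a filter passband grow until the limiter
    catches them; all other frequencies die away. All values are
    invented for illustration."""
    in_passband = min(abs(freq_hz - p) for p in allowed_pitches) <= bandwidth_hz
    gain = pass_gain if in_passband else stop_gain
    amplitude = 1e-3  # tiny seed, e.g. room noise
    for _ in range(steps):
        amplitude = min(ceiling, gain * amplitude)
    return amplitude

# An allowed pitch feeds back up to the limiter ceiling, others vanish:
print(steady_amplitude(440.0, [440.0]))          # -> 1.0
print(steady_amplitude(300.0, [440.0]) < 1e-9)   # -> True
```

The same mechanism explains the “tonal triad” Adam mentions: passing three pitches at once simply lets three feedback frequencies build up in parallel.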
In Small Movements different kinds of technologies are used in undogmatic ways. Although you might have the impression that you can see all the sound processing happening, much is done with the help of virtual sound processing in the computer. Besides the heavy filtering of the feedback sound, the low sine waves mentioned earlier are another example of sonic material generated by the computer. By using these low frequency movements of the loudspeaker membrane to “play” a stick or metal wire, Adam connects the computer software to the physical sound production.
Two cassette players also belong to the set-up on the table, playing back material just produced by the performer. In Adam’s words: “The cassette looping is kind of a layer on top. The cassette players themselves are quite old, and so I use them as a way to echo or repeat previously occurring material, but in a way that is quite degraded. They are mostly used as faded memory of material which was crystal clear at the time at which it was played.”
In Sound in a Jar (2016) by Ronald Boersen three performers (Ronald Boersen himself, Dganit Elyakim and Hadas Pe’ery) move three different microphones back and forth towards a very small loudspeaker placed in a jar. As Ronald explained to me, this piece is a sound environment which changes and develops algorithmically during the performance. The main task for the performers during the rehearsals is to explore this environment and find ways to engage musically with the sounds they can produce. The microphones pick up the sound of the loudspeaker in the small jar, and this sound is sent back to the loudspeaker again, passing through a patch in the music software Max. By placing the loudspeaker in a jar, the sound resonates more easily, a very suitable property for acoustic feedback. The main sound of the performance is thus acoustic feedback, coloured by the different characteristics of the three microphones used (two different condenser microphones and one dynamic microphone).
The Max patch processes this feedback sound: as the scheme depicts, Ronald uses threshold-triggered reverb pulses, feedback-interval-driven harmonisation and granular delay lines. Through amplitude thresholds and feedback frequencies these processes are directly influenced by the feedback sound itself, while the feedback in turn is processed by the Max patch. In this manner Sound in a Jar uses a double form of feedback: acoustic feedback (using the sound itself) and data feedback (using data streams generated from amplitude and frequency analyses of the loudspeaker sound, without using the sound itself), and both constantly affect each other. How much the sound of each microphone is processed by Max and which of the three processes is used (reverb pulses, harmonisation or granular delay) changes during the piece, as depicted in the diagram in the score. The relationships between microphone, processing and loudspeaker change not only according to the distance between microphone and loudspeaker but also through the temporal development of the sound processing in the Max patch.
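The data feedback side can be illustrated with a small hypothetical dispatcher: a feature extracted from the loudspeaker sound (here simply the peak amplitude of an audio block) decides which of the three processes is applied next. The threshold values are invented; Ronald's actual mapping in Max is more elaborate and also uses frequency analysis:

```python
def analyse(block):
    """Very rough feature extraction: peak amplitude of an audio block."""
    return max(abs(sample) for sample in block)

def choose_process(peak, pulse_threshold=0.8, harmony_threshold=0.3):
    """Threshold-triggered selection between the three processes named
    in the score (threshold values are invented examples)."""
    if peak >= pulse_threshold:
        return "reverb pulse"   # triggered by loud transients
    if peak >= harmony_threshold:
        return "harmonisation"
    return "granular delay"

# A loud feedback swell triggers a reverb pulse; quiet material is
# routed to the granular delay:
print(choose_process(analyse([0.2, -0.9, 0.5])))  # reverb pulse
print(choose_process(analyse([0.05, -0.1])))      # granular delay
```

The crucial point is that this control path closes a second loop: the processing changes the sound, the sound changes the analysis data, and the data changes the processing again.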
In this close-up video the development in sound processing and the direct relationship between the movements of the microphones and the resulting sound can easily be followed:
A very appealing aspect of this set-up, in my view, is that all three microphone signals are connected to a single loudspeaker. All three players have to find their own way of playing, because they have a different type of microphone and their sound is processed in a different way, but at the same time all these different paths come together again in a small loudspeaker in a jar. In the second part of the performance the sound of the small loudspeaker is slowly also diffused through the bigger loudspeakers in the hall (the PA loudspeakers). This does not cause any noticeable change in the acoustic feedback interaction, but the spatial and spectral characteristics do change due to the differences in placement, sound diffusion and spectral response of these loudspeakers. The sound of the jar itself now seems to fill the whole performance space, instead of occupying a single spot. At the end of the piece, the loudspeakers in the hall fade out again and the sound moves back into the jar.
In Transducer (2013) you might easily recognise all kinds of “classical” playing techniques for microphones and loudspeakers, twisted in surprising and clever ways. This results in a performance which reinvents and expands known pieces such as Steve Reich’s Pendulum Music, Karlheinz Stockhausen’s Mikrophonie I or Gordon Monahan’s Speaker Swinging into unexplored territories. Robin Fox and Eugene Ughetti composed this piece for Speak Percussion (Eugene Ughetti, Matthias Schack-Arnott and Leah Scholes, and guest percussionist Louise Devenish are on stage).
As the title Transducer already implies, this piece focuses on so-called transducers: devices that transform one form of energy into another, a category to which microphones and loudspeakers belong. The piece starts with a scene which reminds me of the swinging loudspeakers in Gordon Monahan’s Speaker Swinging. But this time a microphone circulates above Eugene’s head, picking up, for example, sounds diffused by loudspeakers carried around by other performers:
One of the main elements on stage is an array of eight microphones hanging above eight small loudspeakers, which reminds us of Steve Reich’s Pendulum Music. Although clearly inspired by the swinging microphones used by Steve Reich, this pendulum array (containing more, and smaller, pendulums) is played in a different way, or more accurately: in many different ways. Reich’s Pendulum Music is process-based and acoustic feedback is its sole sound. After releasing the microphones the performers no longer interfere with their swinging. The performance is finished as soon as the microphones hang stationary above the loudspeakers.
In Transducer Robin and Eugene develop an instrumental set-up with the pendulums, which produces many different sounds such as clicks, sine waves or noise. These different types of sounds are generated with the help of patches programmed in the music software Max. Unlike Reich’s, the pendulums in Transducer do not feed back acoustically; instead the swinging microphones amplify the sound coming from the loudspeakers underneath them in pulses: the closer the microphone swings to the loudspeaker, the louder the sound gets. The signals of the microphones can be amplified through eight bigger loudspeakers placed around the audience.
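This pulsing amplification can be modelled as a gain that depends on the pendulum's position: loudest when the microphone passes over the loudspeaker at the centre of its swing, quietest at the turning points. A sketch with invented numbers (the real distance-to-gain relationship is of course more complex than this linear stand-in):

```python
import math

def pendulum_gain(t, period_s=2.0, depth=0.9):
    """Gain of one pendulum microphone over time: the swing position
    follows a cosine, and the pickup is loudest (gain 1.0) when the
    microphone passes the loudspeaker at the centre of the swing.
    Period and modulation depth are invented example values."""
    position = math.cos(2 * math.pi * t / period_s)  # -1 .. 1
    closeness = 1.0 - abs(position)                  # 1 at the centre
    return (1.0 - depth) + depth * closeness

# A quarter period in: microphone directly above the speaker, full
# gain; at the turning points the gain drops to the floor of 0.1:
print(round(pendulum_gain(0.5), 3))  # 1.0
print(round(pendulum_gain(0.0), 3))  # 0.1
```

Because each microphone passes its loudspeaker twice per swing, the audible pulse rate is twice the pendulum frequency, and it slows down as the pendulum loses energy.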
The whole set-up for Transducer contains many different kinds of microphones and loudspeakers, and therefore a huge number of possibilities for combining them. Besides the elements mentioned earlier, there are four different tables, each focusing on a specific way of playing microphones and loudspeakers. On the Textured Table different surfaces excite a contact microphone to obtain musical material, which is then fed back through other loudspeakers and microphones. On the Speaker Table a loudspeaker is placed whose membrane moves other objects (including some ping pong balls!), in fact acting as a percussionist. The third table is the so-called Mic on Mic table, on which a microphone amplifies another microphone, which itself is not amplified. The Electromagnetic Table creates sounds with the use of an induction coil and an opened-up computer.
The piece ends with acoustic feedback: Eugene Ughetti approaches two loudspeakers with a microphone. Between them a big tam-tam is placed, which starts to resonate according to the frequencies diffused by the loudspeakers right behind it. The acoustic feedback is coloured by the resonances of the tam-tam, and by moving the microphone close to the tam-tam, changes in these resonances can be picked up. This might remind you of another well-known composition for microphones as musical instruments. And indeed, the second part of this Speak Percussion concert continues with Mikrophonie I by Karlheinz Stockhausen.
The whole documentation video of Transducer can be viewed here: