Our clothes can be seen as a form of communication between ourselves and the outside world. They give a visual impression of who we are and how we would like to be seen by others. Pauchi Sasaki designs dresses which are not only visible, but transmit sonic material as well. These dresses consist of around 100 loudspeakers each and are able to process sound live.
Pauchi got the idea for developing sonic costumes when she performed in a temple in Lima. As she remembers: “But of course, it’s an ancient temple, so there was no electricity or outlets; I could perform only acoustic sounds, even though that’s not what I had planned. That’s when I got the idea of a self-contained system, but one that could be integrated into my body, that was the idea” (interview by Michael Barron).
The result was developed in 2014 and is simply called Speaker Dress, a self-designed wearable sound sculpture. Two dresses exist nowadays, a black and a white one. The black one contains 96 loudspeakers, the white one even 125. Several loudspeakers are connected to the same amplifier channel. The black dress, for example, contains six channels of amplification, resulting in 16 loudspeakers per channel and thus in six different sonic zones on the dress (a zone is formed by the loudspeakers diffusing the same sound).
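The division of the black dress into sonic zones can be sketched in a few lines of Python. The contiguous grouping of loudspeakers per channel is an assumption for illustration; the actual wiring of the dress is not documented here.

```python
# Sketch of the black Speaker Dress routing: 96 loudspeakers shared
# by 6 amplifier channels, i.e. 16 loudspeakers per channel.
# The contiguous grouping into zones is an assumption, not the
# documented wiring of the actual dress.

NUM_SPEAKERS = 96
NUM_CHANNELS = 6
SPEAKERS_PER_CHANNEL = NUM_SPEAKERS // NUM_CHANNELS  # 16

def zone_of(speaker_index: int) -> int:
    """Return the amplifier channel (sonic zone) a loudspeaker belongs to."""
    return speaker_index // SPEAKERS_PER_CHANNEL

zones: dict[int, list[int]] = {}
for s in range(NUM_SPEAKERS):
    zones.setdefault(zone_of(s), []).append(s)

# Six zones, each diffusing one shared signal over 16 loudspeakers.
assert len(zones) == NUM_CHANNELS
assert all(len(members) == SPEAKERS_PER_CHANNEL for members in zones.values())
```

Whatever the real physical layout, the arithmetic is fixed by the numbers Pauchi gives: one amplifier channel always feeds a group of 16 loudspeakers with the same sound.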
The performer can choose from different input possibilities: a contact microphone, a lavalier microphone and an mp3 player are connected. These signals are sent wirelessly to a computer, which processes the sound in the music software Max. The sound is then sent back to the dress and diffused by the loudspeakers.
This short video, made by sound engineer Nick Tipp during a sound check for the Ojai Music Festival, shows Pauchi testing the dress while walking through the auditorium:
All kinds of live sounds made by the performers can be processed live during the concert, and the transformed version sounds through the dresses. Flutist Claire Chase and Pauchi herself, who is a violinist as well, use their breath, their voices and their instruments in the first composition Pauchi composed for two dresses: Gama XV (2016). The performers are dressed in their own sounds, transformed by live electronics:
As I describe in Chapter 3 of Between Air and Electricity, tuning forks are in some ways predecessors of microphones and loudspeakers. Furthermore, they can also be seen as precursors of the sine tone generator. Tuning forks were extremely important for nineteenth-century acoustic research. A tonometer, for example, was a large set of tuning forks used to determine the frequency of other sounds. The one depicted above was made by Rudolph Koenig and contains 670 tuning forks from 16 to 4096 Hertz. It was exhibited during the Philadelphia Exposition of 1876 and apparently regarded as “the most scientifically important instrument at the event” (see Smithsonian National Museum of American History).
Nowadays tuning forks have left the scientific realm, and even for tuning a digital device is often preferred. But now and then they become part of a musical performance. Oscar Bettison uses two sets of tuning forks, each forming a chromatic scale from C4 (261.63 Hz) to C5 (523.25 Hz), in Apart (2012) for four percussionists. Each percussionist has six or seven differently pitched tuning forks. Oscar mentioned to me that he is fascinated by tuning forks because “For me they are like this beautifully dumb instrument—each one only has one note, but it’s a “perfect” note!”
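The two endpoint frequencies Bettison uses follow directly from twelve-tone equal temperament with A4 = 440 Hz. A minimal sketch of that arithmetic (the MIDI numbering is just a convenient labelling of the chromatic scale, not anything from the score):

```python
# Equal-tempered frequencies for the chromatic scale from C4 to C5,
# the range of the tuning forks in Apart. A4 = 440 Hz is assumed.

A4_HZ = 440.0
A4_MIDI = 69  # MIDI note number of A4

def midi_to_hz(note: int) -> float:
    """Frequency of a note in twelve-tone equal temperament."""
    return A4_HZ * 2 ** ((note - A4_MIDI) / 12)

C4, C5 = 60, 72  # MIDI numbers for C4 and C5
scale = [round(midi_to_hz(n), 2) for n in range(C4, C5 + 1)]
print(scale[0], scale[-1])  # 261.63 523.25
```

The thirteen resulting pitches (C4 up to C5 inclusive) match the 261.63 Hz and 523.25 Hz given above.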
One of the aspects that turns these tuning forks in Apart into a musical instrument—and which interests me in particular, of course—is the use of amplification. The sound normally heard only privately, held close to a singer’s head to find the right pitch, is now made public in the concert hall. This amplification also clearly influenced the compositional process when Oscar worked with Sō Percussion to develop this piece: “In the studio we were thinking of resonant surfaces [to amplify the tuning forks], but nothing was particularly amazing, so we thought we’d try contact mics. The first problem was that tuning forks buzz, so we covered the mic enough to get rid of the buzz, but for it still to amplify. I’d already written material for them to play, it was a kind of chorale, but then we started using the amplification and we realised that the mic was a playing surface—they could strike the tuning forks somewhere else and then place them on the surface slowly or really quite rapidly.”
A curious relationship between the visual and audible aspects of sound production is characteristic of Apart. The biggest gesture—a tuning fork being struck to bring it into vibration—is nearly silent, whereas the sound being heard is caused by the small gestures of placing the tuning forks on the surface equipped with the contact microphone. Sonically, the result is a strange organ-like sound, often emphasised by two percussionists playing the same pitch. The tuning fork resonances slowly become softer and are then interrupted by a soft click of a tuning fork being struck to set it vibrating again. At the end of each section in the score (see the score fragment below), Oscar asks all percussionists “to strike the fork(s) they require for the next bar simultaneously”, and this should be done “as a grand gesture”.
The ways of placing the tuning forks on the cloth covering the contact microphone are notated precisely in the score and are the main feature of the piece. For example:
– R.P. is the abbreviation for Regular Pulsing and indicates to “choose a tempo and pulse the tuning fork(s) on the playing surface”.
– I.P. is the abbreviation for Irregular Pulsing and one has to “Pulse the tuning fork(s) irregularly on the playing surface ending with a sustained note”.
– I.A. is the abbreviation for Irregular Alternation and the percussionist should “Alternate two tuning forks in a polyrhythm” (from the performance instructions of the score of Apart by Oscar Bettison, Boosey and Hawkes).
These different kinds of pulsations move slowly from the highest to the lowest tuning forks during the composition. The soft clunks of the unamplified tuning forks—sounding when they are struck to bring them into vibration—form a beautiful counterpoint to the airy sounds produced when the forks are amplified. You can listen here to a recording of the piece, performed by Sō Percussion:
In Sound in a Jar (2016) by Ronald Boersen, three performers—Ronald Boersen himself, Dganit Elyakim and Hadas Pe’ery—move three different microphones back and forth towards a very small loudspeaker placed in a jar. As Ronald explained to me, this piece is a sound environment which changes and develops algorithmically during the performance. The main task for the performers during the rehearsals is to explore this environment and find ways to engage musically with the sounds they can produce. The performers pick up the sound of the loudspeaker in the small jar, and it is sent back to the loudspeaker again, passing through a patch in the music software Max. Because the loudspeaker is placed in a jar, the sound resonates more easily, a very suitable feature for acoustic feedback. The main sound of the performance is thus acoustic feedback, coloured by the different characteristics of the three microphones used (two different condenser microphones and a dynamic one).
The Max patch processes this feedback sound: as the scheme depicts, Ronald uses threshold-triggered reverb pulses, feedback-interval-driven harmonisation and granular delay lines. Through amplitude thresholds and feedback frequencies, these processes are directly influenced by the feedback sound itself, while the feedback in turn is processed by the Max patch. In this manner Sound in a Jar uses a double form of feedback: acoustic feedback (using the sound itself) and data feedback (using data streams generated from amplitude and frequency analyses of the loudspeaker sound, without using the sound itself), and both constantly affect each other. How much the sound of each microphone is processed by Max, and which of the three processes is used (reverb pulses, harmonisation or granular delay), changes during the piece, as depicted in the diagram in the score. The relationships between microphone, processing and loudspeaker thus change not only according to the distance between microphone and loudspeaker but also because of the temporal development of the sound processing in the Max patch.
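The data-feedback half of this double loop can be illustrated with a rough Python sketch: analysis values derived from the audio, rather than the audio itself, decide which process is applied. This is only an illustration of the principle, not Ronald Boersen's actual Max patch; the thresholds, the analysis method and the selection logic are all invented for the example.

```python
# Sketch of the "data feedback" idea in Sound in a Jar: amplitude and
# frequency analyses of the feedback sound steer the choice of process.
# Thresholds and selection rules are invented for illustration; the
# real patch is written in Max and works differently in detail.
import math

def analyse(block: list[float]) -> tuple[float, int]:
    """Amplitude (RMS) and a crude frequency proxy (zero crossings)."""
    rms = math.sqrt(sum(x * x for x in block) / len(block))
    zero_crossings = sum(1 for a, b in zip(block, block[1:]) if a * b < 0)
    return rms, zero_crossings

def choose_process(rms: float, zero_crossings: int,
                   amp_threshold: float = 0.5,
                   zc_threshold: int = 8) -> str:
    """Data feedback: the analysis values, not the audio, pick the process."""
    if rms > amp_threshold:
        return "reverb_pulse"      # triggered by the amplitude threshold
    if zero_crossings > zc_threshold:
        return "harmonisation"     # driven by the feedback frequency
    return "granular_delay"

# A loud block triggers the reverb pulses, a quiet high-frequency block
# the harmonisation, and a quiet low-frequency block the granular delay.
loud = [0.9, -0.9] * 32
quiet_high = [0.1, -0.1] * 32
quiet_low = [0.1] * 64
for block in (loud, quiet_high, quiet_low):
    print(choose_process(*analyse(block)))
```

The acoustic half of the loop (the sound travelling from loudspeaker to microphone and back) happens in the air of the performance space and is not modelled here.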
In this close-up video the development in sound processing and the direct relationship between the movements of the microphones and the resulting sound can easily be followed:
A very appealing aspect of this set-up, in my view, is that all three microphone signals are connected to a single loudspeaker. All three players have to find their own way of playing, because each has a different type of microphone and their sound is processed in a different way, but at the same time all these different paths come together again in a small loudspeaker in a jar. In the second part of the performance, the sound of the small loudspeaker is slowly also diffused through the bigger loudspeakers in the hall (the PA loudspeakers). This does not cause any noticeable change in the acoustic feedback interaction, but the spatial and spectral characteristics do change, due to the differences in placement, sound diffusion and spectral response of these loudspeakers. The sound of the jar itself now seems to fill the whole performance space, instead of occupying a single spot. At the end of the piece, the loudspeakers in the hall fade out again and the sound moves back into the jar.
In Transducer (2013) you might easily recognise all kinds of “classical” playing techniques for microphones and loudspeakers, twisted in surprising and clever ways. This results in a performance which reinvents and expands known pieces such as Steve Reich’s Pendulum Music, Karlheinz Stockhausen’s Mikrophonie I or Gordon Monahan’s Speaker Swinging into unexplored territories. Robin Fox and Eugene Ughetti composed this piece for Speak Percussion (Eugene Ughetti, Matthias Schack-Arnott and Leah Scholes, and guest percussionist Louise Devenish are on stage).
As the title Transducer already implies, this piece focuses on so-called transducers: devices that transform one form of energy into another, a category to which microphones and loudspeakers belong. The piece starts with a scene which reminds me of the swinging loudspeakers in Gordon Monahan’s Speaker Swinging. But this time a microphone circulates above Eugene’s head and, for example, picks up sounds diffused by loudspeakers carried around by other performers:
One of the main elements on stage is an array of eight microphones hanging above eight small loudspeakers, which reminds us of Steve Reich’s Pendulum Music. Although clearly inspired by the swinging microphones used by Steve Reich, this pendulum array—containing more and smaller pendulums—is played in a different way, or more accurately: in many different ways. Reich’s Pendulum Music is process-based, and acoustic feedback is its sole sound. After releasing the microphones, the performers no longer interfere with them. The performance is finished as soon as the microphones hang stationary above the loudspeakers.
In Transducer, Robin and Eugene develop an instrumental set-up with the pendulums, which produces many different sounds such as clicks, sine waves or noise. These different types of sounds are generated with the help of patches programmed in the music software Max. Nor do the pendulums in Transducer feed back acoustically: instead, the swinging microphones amplify the sound coming from the loudspeakers underneath them in pulses: the closer a microphone moves to its loudspeaker, the louder the sound gets. The signals of the microphones can be amplified through eight bigger loudspeakers placed around the audience.
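This pulsing amplification can be sketched numerically: a damped pendulum carries the microphone back and forth over the loudspeaker, and the picked-up level rises each time it passes close by. The damping constant, the fixed height above the speaker and the inverse-square distance law are all assumptions chosen for the illustration, not measurements from the piece.

```python
# Sketch of a pendulum microphone over a loudspeaker in Transducer:
# the picked-up level pulses as the microphone swings past the speaker.
# Damping, height and the inverse-square gain law are assumptions.
import math

def pendulum_position(t: float, amplitude: float = 1.0,
                      period: float = 2.0, damping: float = 0.1) -> float:
    """Horizontal offset of the microphone from the loudspeaker axis."""
    return amplitude * math.exp(-damping * t) * math.cos(2 * math.pi * t / period)

def pickup_gain(offset: float, height: float = 0.2) -> float:
    """Inverse-square distance law: loudest directly above the speaker."""
    distance = math.sqrt(offset ** 2 + height ** 2)
    return 1.0 / distance ** 2

# The gain peaks whenever the microphone crosses the loudspeaker axis
# (at t = 0.5 s, 1.5 s, ... for a 2-second period) and dips at the turns.
for t in (0.0, 0.5, 1.0, 1.5):
    print(f"t={t:.1f}s  gain={pickup_gain(pendulum_position(t)):.2f}")
```

As the damping slowly shrinks the swing, the dips become shallower and the pulses merge, which is exactly the kind of gradual process the pendulum array makes audible.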
The whole set-up for Transducer contains many different kinds of microphones and loudspeakers, and therefore a huge number of possibilities for combining them. Besides the elements mentioned earlier, there are four different tables, each focusing on a specific way of playing microphones and loudspeakers. On the Textured Table, different surfaces excite a contact microphone to obtain musical material, which is then fed back through other loudspeakers and microphones. On the Speaker Table a loudspeaker is placed whose membrane moves other objects (including some ping pong balls!), in fact acting as a percussionist. The third table is the so-called Mic on Mic table, on which a microphone amplifies another microphone, which itself is not amplified. The Electromagnetic Table creates sounds with the use of an induction coil and a pulled-open computer.
The piece ends with acoustic feedback: Eugene Ughetti approaches two loudspeakers with a microphone. In between them a big tam-tam is placed, which starts to resonate according to the frequencies diffused by the loudspeakers placed right behind it. The acoustic feedback is coloured by the resonances of the tam-tam, and by moving the microphone close to the tam-tam, changes in these resonances can be picked up. This might remind you of another well-known composition for microphones as musical instruments. And indeed, the second part of this Speak Percussion concert continues with Mikrophonie I by Karlheinz Stockhausen.
The whole documentation video of Transducer can be viewed here: