ICSA 2017 – 4th International Conference on Spatial Audio | September 7th to 10th, 2017, Graz, Austria #vdticsa

The History of Levitation / Finalist in Category 1: Contemporary Music, Computer Music

The History of Levitation is a contribution by Fredrik Mathias Josefson (Sweden, KMH).


Original Documentation:

Introduction

This text summarizes what I have focused on and worked with during my Bachelor studies at the Royal College of Music in Stockholm, centered on my graduation composition A Reality One Can No Longer Touch. This composition consists of three parts: The Principles of Nature and Grace, The History of Levitation and Hyper Nature. First, I present my ideas and theories on how simulated flocking behavior and negative space can be used as strategies in the composition of electroacoustic music. Then I describe how I implement and apply these ideas in practice, with examples from my graduation composition. For the last three years, I have chosen to focus on composing music for surround sound where the speakers are not only placed in a horizontal plane but also at different heights in the room, something that has given me new conditions and new opportunities.

Practically spatializing sounds in a three-dimensional loudspeaker setup can initially be a challenge, but there are several established techniques to draw on, for example Ambisonics and Wave Field Synthesis. How can the introduction of the height dimension be used in an interesting and sophisticated way in composition?

My response is to use simulated flocking behavior as a strategy for spatializing sound. In my composition, flocking behavior is not limited to how the sounds are positioned in the spatial domain; I also apply it in the frequency domain and the time domain. During the work on the composition, the question emerged of how to handle the area where the flock was not currently present, i.e. the space not taken up by the flock. This space, which the flock does not occupy, I define as negative space, and just as with flocking behavior it is used as a compositional strategy, again applied to the spatial, time and frequency domains.


Simulated flocking behavior as a strategy in composition

The sound of a flapping bird is something that can be identified and that is instinctively expected to come from somewhere above the listener. In the piece The Murder of Crows by Janet Cardiff and George Bures Miller, bird sounds are used in an interesting and sophisticated way. A bit into the piece, a single bird flaps from one side of the room to the other in a clear path over the listeners. In this work, it is consistently solitary birds and other occasional sounds that move in the speakers, both above and around the listeners.

Larger groups of sound objects are rare. Chris Watson works with sound in a completely different way in his installation Listening Post, in which a field recording of a flock of birds is presented in a multi-channel system where the listener is completely enclosed by the flock. Taking these two works as starting point and inspiration, I composed my graduation composition A Reality One Can No Longer Touch. As a starting point for the composition I asked myself: how would it sound if you were to create the sound of a flapping flock of birds? My ambition was to work with sound design and precision similar to the way Cardiff and Bures Miller do, in order to create the illusion of a large flock of birds moving across and around the listeners, as in Watson's installation. A composition in which many birds arrive and many birds fly away. How can we go about doing that?

When it comes to the actual audio content, the sound objects, I decided not to use field recordings of bird flocks. The reason was not that I find field recordings uninteresting or useless for composition, but that I wanted to be in control of every agent in the flock with respect to the frequency, time and spatial domains, and not treat the flock as a single unit. Field recording is therefore not a suitable approach for this purpose: a field recording automatically presents the flock as a unity, and it is not possible to separate the individual birds in the flock from a field recording. Instead of field recordings of birds, I use fabrication and imagination, creating the bird sounds with Foley techniques common in film soundtracks. The sounds in themselves are very simple, and when listened to as static repetitions they quickly distance themselves from what they try to mimic.

In my case, I wanted the listeners to associate the sounds with birds, and I accomplish this by playing several sounds at the same time; with slight variations in pitch and spectrum, the illusion is approached. The sound of flapping wings is one part of the illusion; the other part is the movement of the flock. Something I find interesting is the pattern that is formed when many birds move in large flocks. Each individual acts individually, no individual controls the whole flock, but together they form a larger unit. Can these patterns be applied in composition? Flocking behavior is something that has interested programmers, and the first implementation of flocking behavior was BOIDS, developed by Craig Reynolds. Here the behavior of each individual in the flock, called an agent, is affected by the other agents closest to it. There are three parameters that affect every agent's behavior: separation, alignment and cohesion. Separation affects every agent in the sense that it wants to keep a distance from its neighbors in the flock, while cohesion counteracts this and makes the agents want to stay together as a flock. Alignment means that each agent wants to move in the same direction as its neighbors. With these three parameters the behavior of the flock as a whole can be affected, without controlling each agent's behavior in detail; a restricted parameter space with clear and expected outcomes, something that I use in my composition.
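As an illustration of these three rules, here is a minimal Python sketch of Reynolds-style boids. It is not the flock machine used in the composition; the neighbor radius, rule weights and other constants are assumptions chosen for readability.

# Minimal boids sketch (not the composer's Kyma implementation): each agent's
# velocity is nudged by three rules computed from its nearby neighbors.
import numpy as np

N, DIM = 20, 3                       # 20 agents in 3D space
NEIGHBOR_RADIUS = 2.0                # how far an agent "sees"
W_SEP, W_ALI, W_COH = 1.5, 1.0, 1.0  # rule weights: separation, alignment, cohesion
DT, MAX_SPEED = 0.05, 1.0

rng = np.random.default_rng(0)
pos = rng.uniform(-5, 5, (N, DIM))
vel = rng.uniform(-1, 1, (N, DIM))

def step(pos, vel):
    new_vel = vel.copy()
    for i in range(N):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        mask = (dist > 0) & (dist < NEIGHBOR_RADIUS)   # neighbors of agent i
        if not mask.any():
            continue
        # Separation: steer away from neighbors that are too close.
        sep = -(offsets[mask] / dist[mask, None] ** 2).sum(axis=0)
        # Alignment: match the average heading of the neighbors.
        ali = vel[mask].mean(axis=0) - vel[i]
        # Cohesion: steer toward the neighbors' center of mass.
        coh = pos[mask].mean(axis=0) - pos[i]
        new_vel[i] += DT * (W_SEP * sep + W_ALI * ali + W_COH * coh)
        speed = np.linalg.norm(new_vel[i])
        if speed > MAX_SPEED:
            new_vel[i] *= MAX_SPEED / speed
    return pos + DT * new_vel, new_vel

for _ in range(100):
    pos, vel = step(pos, vel)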

What, then, is controlled by the flock algorithm? First, I would like to stress that when I talk about a flock, I see each agent in the flock as a unique sound object, or a stream of sound objects. Just as each bird in a flock of the same species is unique while at the same time resembling the other birds in the flock, each sound object resembles the others in the flock but is unique. For the sounding result of this process I have chosen the term flock synthesis, and I think there are similarities between this and granular synthesis. In my thesis composition, I let each agent's virtual position in the flock decide the sound's position in the speaker space, as in previous compositions, but I also let the position control the frequencies of the sound objects. The flock's behavior therefore affects each sound object in the frequency domain. The absolute positions of the sound objects, or the exact relationships between them, are not so important to me; what matters is their general positions and how the sounds move with a common direction and speed.
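To make the idea of flock synthesis concrete, the following hypothetical mapping shows how one agent's position could drive both its spatial parameters and a pitch offset. The scaling constants (base_freq, semitone_span, room_height) are my own assumptions for illustration, not values from the piece.

# Hypothetical mapping from an agent's 3D position to synthesis parameters:
# azimuth/elevation for spatialization and a pitch offset from the height axis.
import math

def agent_to_params(x, y, z, base_freq=440.0, semitone_span=12.0, room_height=5.0):
    azimuth = math.degrees(math.atan2(y, x))                       # direction in the horizontal plane
    elevation = math.degrees(math.atan2(z, math.hypot(x, y)))      # angle above the horizon
    # Height above the floor shifts the pitch by up to one octave (illustrative scaling).
    semitones = semitone_span * max(0.0, min(1.0, z / room_height))
    freq = base_freq * 2 ** (semitones / 12.0)
    return azimuth, elevation, freq

print(agent_to_params(1.0, 1.0, 2.5))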

The third and last aspect controlled by the flock is how the sound objects are affected in the time domain. The envelopes of the agents' sounds are controlled by the flock. Another way flocking behavior affects the sounds in the time domain is through a network of feedback loops. Since each agent in the flock is aware of its neighbors, it sends a copy of its sound to them through a feedback loop. The agents' distance from each other affects how long it takes for the sound to arrive and at what level, see Figure 1.

The received sound is processed and then sent on through new feedback loops. These feedback loops thus have dynamic lengths, and as the agents in the flock constantly move closer together and farther apart the delay times change, which makes the sound material rise and fall in pitch. This is an expression of the link between the time domain and the frequency domain as discussed by Karlheinz Stockhausen in the Four Criteria of Electronic Music. The behavior of the flock can be applied in these three domains individually or in various combinations. Choices can also be made in how the data is transferred from the flock, either continuously or discretely. In the composition I decide how much the various parameters, separation, alignment and cohesion, affect the flock. It is of course possible to have more than one flock and to let different behaviors exist within and between flocks. My thesis piece consists of several parallel flocks, where each flock has different characteristics and affects or ignores the other flocks. One flock can thus be drawn together with another flock while that flock tries to separate from the first. The parameters that apply to the agents within a flock can also be applied to the behavior between flocks. This behavior could be described as one flock chasing another while the other flock tries to avoid the first. How does it sound when a bird of prey chases a flock of birds?
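As a rough sketch of such a dynamic feedback loop, the following fragment models a single delay line whose length follows the distance between two agents; varying the delay while reading makes the recirculated material rise and fall in pitch. The constants and the distance-to-delay conversion via the speed of sound are assumptions for illustration, not the piece's actual signal flow.

# Sketch of one dynamic-length feedback delay: the delay time follows the distance
# between two agents, so their movement bends the pitch of the recirculated sound.
import numpy as np

SR = 48000
buf = np.zeros(SR)          # one-second circular delay buffer
write_idx = 0
feedback = 0.6

def process(sample, delay_seconds):
    """Write one input sample, read one delayed sample with linear interpolation."""
    global write_idx
    delay_samples = delay_seconds * SR
    read_pos = (write_idx - delay_samples) % len(buf)
    i0 = int(read_pos)
    frac = read_pos - i0
    delayed = (1 - frac) * buf[i0] + frac * buf[(i0 + 1) % len(buf)]
    buf[write_idx] = sample + feedback * delayed      # recirculate with feedback
    write_idx = (write_idx + 1) % len(buf)
    return delayed

out = []
for n in range(SR):
    # Assumed: the inter-agent distance (in meters) oscillates slowly as the agents move.
    distance = 1.0 + 0.5 * np.sin(2 * np.pi * 0.25 * n / SR)
    delay = distance / 343.0                           # travel time at the speed of sound
    excitation = 1.0 if n == 0 else 0.0                # a single impulse as input
    out.append(process(excitation, delay))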

This avoidance maneuver makes you experience a negative space in the flock. How can you focus on these negative spaces and make use of them within a composition?


Negative spaces as a strategy in composition

The concept of negative space is used in several different art forms and can be described as the space or area between objects, or around an object, in an image. The term can also refer to the white surface on which an image or a text is printed. A classic example is Edgar Rubin's vase: an image that represents a vase, while the negative space around the vase forms the contours of two faces. The artist Rachel Whiteread has used physical spaces, entire houses and other objects, as molds to make casts, then removed the objects and left only the resulting negative space. Negative space is sometimes described using the Japanese word Ma. There is no direct translation, but it can be described with several words: gap, space, pause and the space between two structural elements.

Can this concept be transferred to composition, to music, to sound? In music it could be both the notes that are not played and the resonance of the room where the music is performed. I want to look at it from a different point of view and explore how the concept of negative space can be applied to sound in the frequency domain, the time domain and the spatial domain.

Negative space in the frequency domain I define as negative sound. This implies no valuation of the sounds as being more or less positive or negative, and these sounds do not belong to any particular musical style such as anti-music or noise music; noise is, however, part of the way I create negative sound. Think back to how the negative space was the white surface of a blank sheet of paper. This white surface corresponds to the reflection of white light, that is, light which contains the entire visible spectrum. In sound, I let white noise correspond to white light, that is, noise in which all frequencies are included. The white noise is itself not the negative space but the white area around the object, and the object, in the case of sound, is the sound object. The negative sound arises when the sound object is subtracted from the white noise, i.e. the amplitudes of the sound object's frequency content are subtracted from the white noise. The negative sound thus consists of all the frequencies the sound object does not produce.
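One possible way to realize this subtraction digitally is sketched below. It illustrates the principle rather than the process used in the composition; the stand-in sound object (a sine tone) and the normalization are assumed.

# Illustrative "negative sound": attenuate white noise at the frequencies where
# the sound object has energy, leaving the frequencies the object does not produce.
import numpy as np

SR = 48000
N = 2 ** 15                                                    # analysis length

rng = np.random.default_rng(1)
sound_object = np.sin(2 * np.pi * 440 * np.arange(N) / SR)     # stand-in sound object
white_noise = rng.standard_normal(N)

obj_mag = np.abs(np.fft.rfft(sound_object))
noise_spec = np.fft.rfft(white_noise)
noise_mag = np.abs(noise_spec)

# Normalize the object's spectrum to [0, 1] and subtract it, scaled by the noise,
# clipping at zero; keep the noise phases for resynthesis.
obj_mag /= obj_mag.max() + 1e-12
neg_mag = np.clip(noise_mag - noise_mag * obj_mag, 0.0, None)
negative_sound = np.fft.irfft(neg_mag * np.exp(1j * np.angle(noise_spec)), n=N)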

As mentioned earlier, a negative space arises in the spatial domain when a flock of sound objects is scattered by another flock of sound objects. Another negative space in music performed over loudspeakers depends on how loudspeakers work and how we perceive sound. If audio is played from a single speaker, the sound appears to come from that speaker. If the music is played from two or more speakers, the perceived source of the sound is placed in or between these speakers. The audio source can also be treated in such a way that the sound seems to come from a point farther away, somewhere behind the speakers. However, it is not possible to place the perceived sound source in front of the speakers. Thus there are two spaces in loudspeaker music: the virtual speaker space created by the sound played from the speakers, and the physical space in which the listening audience experiences the music. The speaker space is the stage for the music. Seen from the speaker space, the listening room is, with respect to the perceived position of the sound source, defined as a negative space. The perceived point of origin, that is, the position of the sound source, can be placed either in the speaker space or in the negative space, see Figure 2.

The two spaces in Figure 2 represent a dichotomy. By placing additional speakers in the listening room, the negative space changes and shrinks, but the sound cannot be projected from the speaker space so that the source is perceived to be in the negative space. Shifts from the listening room to the speaker space are so common that they are barely noticeable, and are often something one tries to avoid; this is what happens when an acoustic instrument is amplified with a microphone. Shifts in the opposite direction, i.e. from the speaker space to the listening room, are not nearly as common. Something similar to this is the concept of diegetic and non-diegetic music in film: whether the music is diegetic or non-diegetic depends on whether it exists in, and is experienced by the characters of, the fictional world (diegetic), or whether it exists only for the audience (non-diegetic). There are many examples where these conventions are broken, so that music that has been diegetic becomes non-diegetic and vice versa, often with a surprising or humorous effect. I experimented with this in the composition On Remembrance, where at the beginning of the piece I let the music come from a horn gramophone placed in the listening room and then let the same music continue in the speaker space. In that composition the shift occurs in the same way as when an instrument is amplified, that is, from the listening room to the speaker space. In my thesis composition, shifts between these two spaces take place in both directions: sound passes the threshold between the speaker space and the negative space. To accomplish this I place real sound sources in the listening room, the negative space, and these sounds are also played in the speaker space.

In the time domain there are several different types of negative space. These should not, however, be equated with negative time, that is, sound played backwards, or with negative frequencies such as can arise in frequency modulation. One way to look at negative space in the time domain is to focus on the time both before and after the music is performed, as well as the space between the different structural elements within the composition. The negative spaces also exist within the composition, in gaps, cuts, transitions and breaks within the piece. When the negative spaces in the time domain are combined with the flock moving in the same domain, something new arises that differs from both parts; there are synergies.

For me, the combination of the flock and the negative spaces is a device for creating music that I myself can get lost in.


Flock machines

For my composition On Scientific Music and Poetic Science I wrote an essay inspired by Ada Lovelace, who already in 1842 took a huge leap of thought when she presented the possibility of music composed by machines. This idea has both inspired specific compositions and permeated my way of composing. Today, the machines Ada Lovelace was thinking of exist in the form of computers. In my compositions I use skills and techniques from my background as an engineer when I build systems and machines for creating music. In different parts of the compositional process I step in and out of the roles of composer and engineer. I see these two roles as interconnected, and I see the programming of the code as a vital part of the compositional process. In the role of the engineer I create prototypes, which I then test and evaluate. Evaluations may lead to additional engineering work with new or revised code, but can also lead to ideas and inspiration for the decisions I make in the role of composer. The result of the programming in this part of the composition is what I call flock machines; it is a composition of code, links and structures, and it works like a vehicle that drives my composition forward. I have let my way of working be inspired by the composer Roland Kayn, who created music through systems of electrical feedback. Kayn created music by constructing systems that governed themselves independently. In my case, the systems are not totally self-regulating; they contain elements of control that I can use for decisions in the composition. In the specific case of the flock machines, I let the parameters separation, alignment and cohesion be controlled by decisions made during the composition process.

One of the musical tools I use is the programming environment Kyma with its associated Pacarana signal processor, and in my diploma composition I chose to implement the essence of the flock machines with these. I wanted a system with tight coupling between the implementation of the control data from the flock and the audio engine, which Kyma allows in a good and efficient way. The reason was that I wanted control over the behavior of the control system in a way that could be linked to the sonic material directly, quickly and easily. An example of this is that I implemented tones that were generated when an agent in the flock discovered another agent in the flock. But while there is a strong connection between the control system and the audio engine, they are still separated in such a way that I can plug different sound engines into the same control system. I implemented the flocking behavior in what Kyma calls a Tool, see Figure 3, which in turn controlled various Sounds, see Figure 4. In Figure 3, some agents are so close to each other that they have discovered each other and interact according to the implemented flocking behavior, which is visualized with the smaller circles. The larger circle visualizes a negative space that all agents avoid.

As the interface to the Tool I used a graphics tablet with a pen, which lends itself well to controlling three parameters simultaneously with high resolution and precision. It becomes a machine that you can play on, and with the Timeline in Kyma the data from the tablet can be recorded. The recorded data can then be adjusted if necessary and played back into the Tool, that is, the flock machine with its agents, which in turn control the Sounds, see Figure 5.

In the Timeline I can make small adjustments and edit after evaluation and musical decisions. In this way I can repeat the musical gestures and send control data to the flock machine, which in turn calculates how the flock should behave. This means that even if the gestures and control data are the same, the behavior of the flock will not be identical; you cannot predict the outcome in detail. The Kyma system also has great potential for linking up with other systems because it can send and receive MIDI and OSC. This was something I used in my thesis composition when I used sound and signal processing from external synthesizers of the Buchla and Moog brands. Both of these instruments are monophonic in their structure. This creates a resource problem when a large amount of control data is to be played on a monophonic instrument, and it goes completely against my idea of creating a large variety of sounds in flocks. However, it is not a big problem, since I have not set the requirement of being able to play the piece live and in real time. The solution I chose was to record the control data from the flock machine via multiple channels of MIDI and OSC. Each channel of control data could then be played back separately to control the monophonic synthesizer, so that each sound could be recorded on its own and the sounding flock could be rebuilt, see the illustration in Figure 6. During this part of the process I first program the sound material by patching the synthesizer, and then take on a new role as musician or performer when I play the instrument while the control data is sent to it. This part of the process and way of working reminds me of the earlier studio technique of "riding the signal".
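A hypothetical sketch of this one-channel-at-a-time replay, using the third-party python-osc package, might look as follows. The OSC addresses, port and control format are assumptions for illustration, not the actual Kyma or synthesizer configuration.

# Hypothetical workaround for a monophonic synthesizer: control data for all agents
# is recorded first, then each agent's channel is streamed out separately over OSC
# so that its part can be recorded alone and the flock rebuilt afterwards.
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)     # receiving synth/DAW, assumed address

# recorded_control[agent] = list of (pitch_cv, amplitude) frames from the flock machine
recorded_control = {
    0: [(0.10, 0.8), (0.12, 0.7), (0.15, 0.6)],
    1: [(0.40, 0.2), (0.38, 0.3), (0.35, 0.5)],
}
FRAME_SECONDS = 0.05                            # control rate of the recording

for agent, frames in recorded_control.items():
    input(f"Arm recording for agent {agent}, then press Enter...")
    for pitch_cv, amp in frames:                # replay this agent's channel in real time
        client.send_message("/flock/pitch", pitch_cv)
        client.send_message("/flock/amp", amp)
        time.sleep(FRAME_SECONDS)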

Since I did not have the requirement that the music would be performed live, there was also the opportunity to separate the work in the different domains I wanted to apply flocking behavior to into several successive processes. After I had recorded the monophonic sounds from the external synthesizers, I implemented new Sounds in Kyma where the sound material was processed in the time domain and the spatial domain.

The audio material was processed in multiple iterations, with the flocking behavior affecting the sound differently each time. In the last step the sounds were processed spatially and the material was spatialized for the speaker setup in Klangkupolen at the Royal College of Music. For the spatialization I took a different approach and used existing tools rather than implementing my own: the positioning and the flocking were handled with the Spat tools in the audio programming environment Max/MSP, see Figure 7.

In this way the flocking behavior could be programmed according to my requirements and instructions, so that control data was sent to position the agents, and the flocking behavior itself could also be influenced. Finally, everything was rendered to the separate speakers before the graduation concert. My choice to separate the work in the three domains into several separate, non-correlated processes can be questioned. However, I think it worked well in most cases, and usually I had no desire for, say, frequency and position to be directly correlated. I did not want a connection where a sound placed high up in the room would also need to have a high frequency. However, there were instances in which I chose to use the same control data in the various domains, thus making the material correlated. This was done by recording the control data in the same way as before, see Figure 9.
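For reference, a textbook first-order Ambisonics encode of a single agent's signal from its direction could look like the sketch below. This is a generic illustration of the panning principle, not the Spat or Kyma implementation used for Klangkupolen, and the test signal and angles are assumed.

# Generic first-order Ambisonics (B-format) encode of one agent's mono signal
# from a fixed azimuth/elevation; the height dimension appears in the Z channel.
import numpy as np

def encode_foa(signal, azimuth_deg, elevation_deg):
    """Return B-format channels (W, X, Y, Z) for a mono signal at one direction."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = signal * (1.0 / np.sqrt(2.0))        # omnidirectional component
    x = signal * np.cos(az) * np.cos(el)     # front-back
    y = signal * np.sin(az) * np.cos(el)     # left-right
    z = signal * np.sin(el)                  # up-down
    return np.stack([w, x, y, z])

agent_signal = np.sin(2 * np.pi * 330 * np.arange(48000) / 48000)   # stand-in agent sound
bformat = encode_foa(agent_signal, azimuth_deg=45.0, elevation_deg=30.0)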

The choice to render the results to fixed audio files before the graduation concert is also something that can be questioned from the perspective of the open work. In this specific case it was a question of reliability: I wanted to minimize the risk of something going wrong during the graduation concert. For other concerts this does not necessarily need to be the case.


A Reality One Can No Longer Touch

My thesis composition A Reality One Can No Longer Touch consists of three load-bearing elements, The Principles of Nature and Grace, The History of Levitation and Hyper Nature, but also of the spaces that exist between the different components as well as the time before the composition begins. All parts use the two compositional strategies: simulated flocking behavior and negative space.

[…]

Part two: The History of Levitation

If negative space was the main strategy in the first part, flocking behavior is the main strategy in the second part. The sound material in this part can be categorized into two groups. First, a group of materials that create an overall sound environment, an accompaniment. Then a second group with a more active sound material created with flock synthesis, a material with more kinetic energy. The sound material in the first category consists of parts of the composition whose total length spans a longer period of time than this part; they have been transformed from the time domain to the spatial domain, as discussed earlier. The sound material in the second group has been created exclusively by applying flocking behavior in both the time domain and the frequency domain. The material consists of sounds whose sources are both Buchla and Moog, created according to the process presented earlier in the text. All the sounds in this part of the composition have been spatialized in Ambisonics by applying flocking behavior to the sound objects. The part consists of four flocks, where each flock wants to separate, that is, keep a distance from the other flocks, causing the sound objects to be separated in the speaker space. The sound objects have a tendency to dynamically repel each other in the speaker space. Their positions are fluid; there is an inherent elasticity between the flocks. They press together and repel each other. There are two flocks with continuous, prolonged sounds from the first group, both of which move in a circular motion in the speaker space, beyond and behind the speakers. Within and between these two flocks moves another flock with sounds from the second group. This flock moves more like a flock of birds, with a high degree of alignment and cohesion. Towards the end of this part all flocks rise; the sounds ascend and disappear up out of the speaker space.

[…]


A transformation from time to space

In all three parts of the composition I have used composed material that stretches far beyond the time each part finally spans. In the first part this material was used for the sound of wave movements, in the second part for the continuous sounds and in the third part for the environmental sounds. The longer material has been folded, and its sections placed at different locations in the room. The result is that parts of the composition sound simultaneously from different locations in the speaker space. A transformation from the time domain to the spatial domain has thus taken place.
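A minimal sketch of this folding from time into space, under the assumption that the long material is simply cut into equal segments played simultaneously from different positions, could look like this; it illustrates the idea rather than the composer's actual process.

# "Folding" time into space: a long mono signal is cut into equal segments that
# are played at the same time, each routed to its own speaker position.
import numpy as np

def fold_time_into_space(long_signal, n_positions):
    """Return an (n_positions, segment_length) array: one row per speaker location."""
    seg_len = len(long_signal) // n_positions
    segments = long_signal[: seg_len * n_positions].reshape(n_positions, seg_len)
    return segments

SR = 48000
long_material = np.random.default_rng(2).standard_normal(SR * 60)   # 60 s stand-in material
speaker_feeds = fold_time_into_space(long_material, n_positions=8)
# speaker_feeds[k] now plays from the k-th location; one minute of material
# becomes 7.5 seconds heard from eight places at once.
print(speaker_feeds.shape)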


The space between the load-bearing elements

Between the three parts of the thesis composition there are two gaps, pauses, breaks, spaces between the load-bearing elements: negative spaces. Inspired by the concept of Ma, I have used these negative spaces in the composition. The first break contains a negative sound. Three sound events are played one after the other: first a positive sound, then the same sound together with the noise, and finally the negative sound. The second break contains a flock of sounds. Unlike in the three parts, there is no active spatialization in this intermission; instead, each sound object is shaped by the flocking behavior with respect to frequency, filtering and envelope. There is another negative space that belongs to the thesis composition and exists before the first part starts. Already while the listeners are taking their places in the concert hall, music is played through the speakers. This part has the function of setting a tone, an expectation, before the first part starts. It is also in these parts that an attempt is made to move between the speaker space and the negative space. This is done at the beginning of the first part, when I play an Ocean Drum at the border of the negative space; the acoustic sound from the instrument then spreads into the speaker space. The whole composition ends with me carrying the sound of the birds in the flock out into the negative space, by playing a mechanical bird.


Conclusion

As a result of my diploma composition and concert, I think flocking behavior lends itself well to creating a room full of sound that envelops the listeners. It fits well with my aesthetic of dense compositions in several layers, where the ambition is that the listeners will be surrounded by the sounds, be inside a sound environment and explore it through active listening. The listening experience is favored by flocking behavior in the sense that the separation of the various agents in the flock gives good results when the sounds are spread out over multiple speakers in the speaker space. Flocking behavior also creates musical gestures as the agents move with a common direction through alignment and cohesion. The idea of a flock moving across many similar sound objects also works well for repeated playbacks, since it no longer matters which sound object the listener hears first; instead the spatialization works as an automatic mixing of sound levels and placements.

Something that surprised me positively during this work is how even abstract, synthetic sounds, created without any intention of sounding natural, take on a distinctly organic, biological character when flocking behavior is applied in the various domains. Personally, I experienced this most strongly in the part Hyper Nature.

While working on my thesis composition, I have worked with simulations based on classical mechanics. As a result, one may experience that the music reaches or approaches an equilibrium. I tried to counteract this balance by introducing the negative spaces; the idea was that they would break up the equilibrium of the structure. I see one of several opportunities for further work and development in letting the flocking behavior form part of a system with feedback. Another opportunity I see is to deepen the work with feedback at the network level and allow the network to take a greater part in the composition. I have had it confirmed that these concepts work on a general level, since I applied this process to the spatialization when revising the work On Scientific Music And Poetic Science. The revised composition was performed at two different concerts with different speaker setups and thus different renderings. On both occasions the new spatializations worked very well. At the second concert I had not listened to the rendering in full before the concert; at the concert I was therefore able to get lost in the music. I believe that the two concepts, flocking behavior and negative space, as I have presented them in this text, are well suited for the composition of electroacoustic music and are something I will continue to work with. It was not always easy to apply the concepts in the three different domains I chose to work in: the spatial domain, the time domain and the frequency domain. But it was when friction arose that the work was most interesting for me, and this is where I will continue to work.