Future of Electronic Music Performance with Bodiless Agents

written by Ozcan Ertek

The idea that artificial intelligence could compose music was once unimaginable to many people, yet AI is now making an impact almost everywhere. It helps musicians and sound artists in the creative process, and it can even create new tracks in the style of artists who are no longer with us.

Echoes of AI 

Live music exhibits considerable variation around predetermined forms. Performers have many choices in the time domain: at any given moment they can range from high to low tempo, and from sudden, unanticipated changes and innovations to monotony and recurrence.

Rhythm can provide important clues about what might be done in the time domain. Performers also have limitless variety in space, from broad to narrow frequency ranges, and from sudden alterations in intensity to unity and minimal change.

This flexibility allows for the musical expression of emotions and thoughts as they appear and subside. But what happens when all of these decisions are made by artificial musical agents? How do we hear and perceive such a performance?

Last January, a joint CTM and Transmediale night brought together different forms of music, sound, and performance, including works made with AI. Artists explored the boundaries of networks and different interlocking realities via artificial intelligence and computer-based, networked processes.

Two of the artists were Paul Purgas and James Ginzburg, better known as Emptyset, who performed a live version of their recent record “Blossoms,” which presents the duo’s collaboration with machine learning systems. In their Transmediale performance, the AI agent acted as an additional member and performer of the band.

Feeding the AI

The machine learning system behind “Blossoms” was developed through extensive audio training. The sounds we hear come from an AI trained on the duo’s discography and roughly ten hours of recordings of wood, metal, and drumsticks. During the performance, the artificial agent applied the model it had learned from these recordings, a process called “seeding.” From this larger database, the AI created its own arrangements, forming hybrids and mutations.
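Emptyset have not published their system, but a minimal sketch can illustrate the general idea: a model learns to predict audio from a corpus of recordings, is then “seeded” with a short excerpt, and continues it on its own. The architecture, window size, and training loop below are illustrative assumptions written in PyTorch, not the duo’s actual software.

```python
# A minimal sketch of "seeding": train a small model to predict the next
# audio sample from a window of context, then let it continue from a
# short seed excerpt. Everything here (architecture, window size,
# training loop) is an illustrative assumption, not Emptyset's software.
import torch
import torch.nn as nn

WINDOW = 256  # context length in samples (assumed)

class TinyAudioModel(nn.Module):
    """Predicts the next audio sample from the previous WINDOW samples."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(WINDOW, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 1), nn.Tanh(),  # audio samples live in [-1, 1]
        )

    def forward(self, x):
        return self.net(x)

def train(model, corpus, steps=1000, lr=1e-3):
    """corpus: 1-D float tensor of audio, e.g. hours of recordings."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        # Draw random windows from the corpus and learn to predict
        # the single sample that follows each one.
        idx = torch.randint(0, len(corpus) - WINDOW - 1, (64,)).tolist()
        x = torch.stack([corpus[i:i + WINDOW] for i in idx])
        y = torch.stack([corpus[i + WINDOW] for i in idx]).unsqueeze(1)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def generate(model, seed, n_samples):
    """Seed the trained model with an excerpt and let it continue."""
    out = seed.clone()  # seed must be at least WINDOW samples long
    with torch.no_grad():
        for _ in range(n_samples):
            context = out[-WINDOW:].unsqueeze(0)  # shape (1, WINDOW)
            out = torch.cat([out, model(context).squeeze(0)])
    return out
```

Feeding the generator different seeds, say a wood recording versus a passage from the back catalogue, is one simple way such a system could produce the hybrids and mutations heard in the piece.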

Bodiless Agent in Artificial Spaces

In our daily lives, we perceive and experience architectural structures, which are closely connected with the spaces they reside in. Our everyday experience of space is concerned with energy and its release: we move through space as we carry out our tasks, and our mental image of it allows us to interact with objects and navigate our surroundings. We experience environmental phenomena through multi-sensory perceptual processes.

That is, the way we construct spatial representations of the world around us today is based on multimodal information, not just sight. To fully map a space in our minds, we must codify the spatial information collected by our senses into coordinate systems. Through the dimensions, the shadows and light, the reflections in the space, and the balance between objects, the observer perceives a proportion between himself and the structure (Labelle & Martinho, 2011: 27).

Building on these ideas, the AI in “Blossoms” mixes the acoustic reverbs of different spaces, captured via impulse responses of various architectural sites. Instead of body and figure elements, this performance was driven by different spaces and ground elements.
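In signal-processing terms, capturing a space through its impulse response and applying it to a sound is convolution reverb. The sketch below shows the basic operation; the file names are hypothetical, and this is the standard technique rather than Emptyset’s specific pipeline.

```python
# A minimal sketch of convolution reverb: stamping a captured space
# onto a dry sound via its impulse response. File names are
# hypothetical, and mono 16-bit WAV input is assumed.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

rate, dry = wavfile.read("dry_percussion.wav")  # hypothetical source
_, impulse = wavfile.read("site_impulse.wav")   # hypothetical IR of a site

# Convert to float in [-1, 1] so the two signals share a scale.
dry = dry.astype(np.float32)
dry /= np.abs(dry).max()
impulse = impulse.astype(np.float32)
impulse /= np.abs(impulse).max()

# Convolution overlays the room's reflections onto every sample of
# the source, which is what "capturing a space" amounts to acoustically.
wet = fftconvolve(dry, impulse)
wet /= np.abs(wet).max()  # normalize to avoid clipping

wavfile.write("wet_percussion.wav", rate, wet)
```

Swapping in impulse responses from different architectural sites, and cross-fading between the resulting signals, is enough to let space itself, rather than a performing body, drive the character of the sound.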

Conclusion

Watching Emptyset’s performance and thinking about the machine-learning software that formed the piece brought to mind the question of whether it had a body, despite there being no physical recreation of one. As mentioned before, embodied music cognition theory holds that the gestural aspects of performance and the coupling of action and perception are essential to cohesive musical performances. Moving sonic forms carry, encoded within them, the intentional actions of the composer. The musician encodes gestures in sound, and the listener can decode particular aspects of them through corporeal reaction and imitation, interacting with the music and interpreting its intentionality through mental images.

Even without seeing any physical gestures from the AI, the sound it created was immersive, despite the fact that everything being experienced consisted of algorithmic sound objects and rhythmic cadences developed by training a neural network.

As a music producer and performer, I would say that digital musical instruments have greatly advanced music performance. But in the current era, as we begin to explore the boundaries of networks, networked processes and AI-augmented composition hold even greater promise for the future of electronic sound and performance.

Listening is an action-oriented, intentional activity of making sense of the world; we understand music in terms of our own action-oriented ontology. Even when we cannot see a physical body on the stage, we will continue to trace the embodied aspects of music through mental imagery, and we will continue to dance.