Research project at Studio LOOS, 2023.
Algorithms have invaded contemporary life, so deeply that the invasion is at times almost invisible. With the advent of AI, they have become independent actors, sometimes acting beyond their creators’ expectations and agendas. They are capable of creating indeterminate, organic cybernetic communication systems that sustain themselves. Training AI agents leads to the emergence of a set of behavioral patterns in them; these behaviors define how they act and respond to inputs. This phenomenon has given rise to a research field called “machine behaviour”, which examines artificial intelligence agents as a class of actors with a specific ecology and which extends beyond the discipline of computer science. Moving away from human exceptionalism, all non-human behaviors (generative or performative) and their causal traces can have aesthetic qualities, just as biophonic sounds give rise to aesthetic experiences.
This collaboration-oriented research examines the new aesthetics of machine behaviors in living creative processes by investigating their performativity and behavioral patterns. It aims to develop two environments (Performative and Generative) that are fully controlled by AI agents, based on their interactions with themselves and their surroundings, and free from human intervention and control. The project has two outcomes, one of which comprises two live performances.
Énacteur x Instruments
Two live quadraphonic human–machine improvisation performances, 20’ x 2
Énacteur is an artificial improvising environment developed in SuperCollider. Énacteur can listen to audio signals, extract audio descriptors, make a compositional decision according to those descriptors, and generate sound in real time without human intervention. Énacteur’s sound generation module includes real-time synthesis and processing, as well as an internal library of pre-recorded sounds. Énacteur can spatially diffuse its audio output to any number of channels according to the speaker system.
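The listen–describe–decide–generate pipeline described above can be sketched in outline. The following is a minimal, hypothetical Python sketch of the decision step only, not Énacteur’s actual SuperCollider implementation; the descriptor names, thresholds, and the three mode labels are all assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class Descriptors:
    """Two illustrative audio descriptors (hypothetical choice)."""
    amplitude: float   # RMS level, normalized 0..1
    centroid: float    # spectral centroid in Hz

def decide(d: Descriptors) -> str:
    """Map incoming descriptors to one of three sound-generation modes,
    mirroring the synthesis / processing / pre-recorded-library split."""
    if d.amplitude < 0.05:
        return "playback"      # quiet input: recall a pre-recorded sound
    if d.centroid > 2000.0:
        return "process"       # bright input: transform it in real time
    return "synthesize"        # otherwise: generate new material

# Example: a loud, dark input triggers real-time synthesis
print(decide(Descriptors(amplitude=0.4, centroid=800.0)))  # synthesize
```

In the real system such a decision would run continuously against a live analysis stream; here it is reduced to a single pure function to show the shape of the mapping.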
Énacteur will take part in two free improvisation performances, one with a pianist and the other with a trumpet player. In each performance, Énacteur expresses a different behavior drawn from a separate training set: its behavior is shaped by the example datasets it collects during training sessions with each musician.
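One way to picture how a training set could shape a distinct behavior per musician is to reduce each session to summary statistics and derive response parameters from them. This is a toy sketch under assumed representations; the actual training data and behavior model are not specified in the project description, and every name here (`behavior_profile`, `response_gain`, `density`) is hypothetical.

```python
from statistics import mean

def behavior_profile(training_amplitudes: list[float]) -> dict:
    """Collapse one musician's training session (here: a list of
    normalized amplitude readings) into a simple behavioral profile."""
    avg = mean(training_amplitudes)
    return {
        "response_gain": 1.0 - avg,                    # play louder against quieter partners
        "density": "sparse" if avg > 0.5 else "dense", # complement the partner's activity
    }

# A quiet training session yields a dense, louder-responding behavior
piano_profile = behavior_profile([0.2, 0.3, 0.25])
print(piano_profile["density"])  # dense
```

Two different sessions would thus yield two different profiles, which is the mechanism the sketch is meant to illustrate.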
Énacteur and the collaborating musicians mutually influence each other’s sound, continuously changing their shared environment and forming a perceived macro-structure in time. The causality and emergence arising from the interactions of these agents shape self-organizing complex behaviors that suggest improvisational features. The performances investigate human–machine interaction in free improvisation and its creative application as a model for electroacoustic music composition.
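The mutual-influence dynamic can be illustrated with a deliberately toy model: two agents, each nudging its output level toward the other’s, converge over repeated interaction steps. This is only a didactic caricature of the coupled feedback described above, not a model of the actual performance; the coupling constant is an arbitrary assumption.

```python
def step(a: float, b: float, coupling: float = 0.3) -> tuple[float, float]:
    """One interaction step: each agent moves part-way toward the
    other's current level (coupling controls how strongly)."""
    return (a + coupling * (b - a), b + coupling * (a - b))

# Starting from very different levels, the agents settle toward a
# shared level -- a minimal image of emergent macro-structure.
levels = (0.9, 0.1)
for _ in range(20):
    levels = step(*levels)
```

Real improvising agents are of course nonlinear and non-converging in interesting ways; the point is only that simple local coupling already produces global structure.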