8-channel human-machine improvisation, 2023.
My research project promotes experimental, mindful approaches to applying AI to creative musical problems, seeking to open reformative discourses around the tool through artistic experiments. As part of this project, I developed Énacteur, an AI-driven improviser. Énacteur is programmed to participate actively in electroacoustic improvisational settings, carrying out real-time analysis, decision-making, sound generation, and spatialization.
This performance is a freely improvised piece in which Énacteur improvises with saxophonist Cláudio Silva Pereira. Énacteur listens to Cláudio’s audio signal, extracting descriptive parameters with machine listening UGens in SuperCollider. It then generates and spatializes sounds in response to the saxophone, following the compositional strategy it learned during training sessions. Énacteur’s sound generation module combines real-time synthesis and processing with an internal library of pre-recorded sounds. Énacteur’s and Cláudio’s sounds mutually influence each other, establishing an improvisational dialogue. The performance investigates human-machine interaction in free improvisation and its creative application as a model for electroacoustic composition.
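As an illustration of the listening stage described above, the following is a minimal, hypothetical SuperCollider sketch of how descriptive parameters might be extracted from the saxophone input with standard machine listening UGens. The names, parameter values, and routing are assumptions for illustration only, not Énacteur’s actual code.

```supercollider
(
// Hypothetical listener: analyzes a mono saxophone input on hardware input 0
// and forwards a few audio descriptors to the decision-making process.
SynthDef(\listener, {
    var in = SoundIn.ar(0);
    var fft = FFT(LocalBuf(1024), in);
    var amp = Amplitude.kr(in);            // loudness envelope
    var pitch = Pitch.kr(in);              // [freq, hasFreq]
    var centroid = SpecCentroid.kr(fft);   // spectral brightness
    var onsets = Onsets.kr(fft, 0.5);      // note-onset detection
    // Report descriptors 20 times per second; the decision-making
    // process would respond to these '/descriptors' messages.
    SendReply.kr(Impulse.kr(20), '/descriptors',
        [amp, pitch[0], centroid, onsets]);
}).add;
)
```

A responding process could then map such descriptors onto synthesis parameters and onto positions in the 8-channel array (e.g. via `PanAz.ar`), closing the loop between analysis and spatialized sound generation.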