In a world where technology and creativity meet, there’s often a disconnect between computer code and the finer points of musical aesthetics. This is a big deal because, in the end, our goal is to use technology to enhance creativity, right? Let’s take a closer look at how composers and analysts approach their work, and how technology can bridge this gap in a meaningful way.
One of the initial challenges we face is that human musicians don’t typically see music as mere data points. But we can’t harness the power of computers without data. The secret lies in how we gather, interpret, and select musical elements for processing.
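To make this concrete, here's a minimal sketch of what "music as data" can look like. The `NoteEvent` fields and the interval calculation are illustrative assumptions, not a standard format; the point is that once notes are structured data, context (like the interval to a neighboring note) becomes something a program can compute.

```python
# Minimal sketch: representing a melody as structured data.
# The NoteEvent fields here are illustrative assumptions, not a standard.
from dataclasses import dataclass

@dataclass
class NoteEvent:
    pitch: int       # MIDI note number (60 = middle C)
    start: float     # onset time in beats
    duration: float  # length in beats
    velocity: int    # loudness, 0-127

# The opening of "Twinkle, Twinkle, Little Star" as data points:
melody = [
    NoteEvent(60, 0.0, 1.0, 80),
    NoteEvent(60, 1.0, 1.0, 80),
    NoteEvent(67, 2.0, 1.0, 80),
    NoteEvent(67, 3.0, 1.0, 80),
    NoteEvent(69, 4.0, 1.0, 80),
    NoteEvent(69, 5.0, 1.0, 80),
    NoteEvent(67, 6.0, 2.0, 80),
]

# Context matters: the same pitches, viewed relative to their neighbors,
# become intervals -- one simple way to "interpret" raw musical data.
intervals = [b.pitch - a.pitch for a, b in zip(melody, melody[1:])]
print(intervals)  # [0, 7, 0, 2, 0, -2]
```

Which elements you select for processing (pitches, intervals, durations, dynamics) shapes everything downstream, which is exactly why gathering and interpreting the data is the hard part.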
Understanding the context surrounding individual musical elements helps us determine how different musical parts work together as a whole. This insight is what allows computers to make meaningful artistic judgments based on their accumulated experience or training. While I’ve spent years researching and developing strategies to tackle these challenges, the bottom line is pretty simple:
AI systems that don’t treat musical data as a way to create emotional impact usually generate boring music.
Music as Data
On the one hand, we have data-based musical analysis, which can be done by computers guided by a deep knowledge of music theory. This involves dissecting and explaining existing musical ideas — untangling a web of connections between musical events, each with its distinct characteristics. Interestingly, some of these connections might be deliberately crafted by the composer, while others naturally emerge.
The ultimate goal of analysis is to uncover a complex network of connections, regardless of whether the composer consciously planned them or not. This enriches our understanding of the structure and intricate details of the music. Analysis plays a pivotal role in computer-assisted composition based on models, but for it to create meaningful results, it needs to capture a wide spectrum of emotional experiences and create music that resonates deeply. In simple terms, analysis needs to expand to embrace the pleasures music offers us.
So, what exactly do I mean by the “pleasures” of music? Think of these as musical blueprints that hold a universal appeal, transcending time and cultural boundaries. They’ve been honed and developed for thousands of years, passed down from one generation to another. These designs are the foundation of many well-loved musical pieces — an inner logic that connects with our senses and beckons us to listen again and again.
Call to mind a melody that gives you chills or a rhythm that makes you want to dance, even if you’re not sure why. These are the kinds of patterns that create a lasting impression. They’re the elements that make a piece of music not just enjoyable, but unforgettable. This is the sweet spot technology should aim to hit: finding the timeless musical elements that touch us on a deep level.
Learning from Creative Humans
Composers function as emotional architects, carefully selecting and arranging musical building blocks to create something that captures the listener’s imagination. This process involves searching for combinations of musical patterns that take on a life of their own. As someone who’s been on this journey for many decades, I can tell you it’s a bit like walking a tightrope without a net. Every step forward narrows down choices while opening up new possibilities, and when a combination of musical elements emerges with potential, there’s still no guarantee it will resonate with others.
Regrettably, this intimate insight into the act of composing doesn’t directly lend itself to designing creative algorithms. To build creative code, we need methods to define and compare the impact of musical ideas. We need to uncover what resonates and what falls flat. Starting in the 1950s, researchers like Meyer, Youngblood, Krahenbuehl, and Coons began looking into how music affects our feelings and keeps us engaged. They used concepts from information theory to study things like how music excites us, keeps us curious, and surprises us.
Since then, other academics like Lerdahl, Jackendoff, Narmour, Huron, Margulis, and Farbood have developed theories and ways to measure how we experience tension in music: patterns of musical expectation that build over time, creating opportunities for emotionally satisfying resolution. It works a bit like this: when we listen to music, we remember certain patterns and sounds we’ve heard before, and this memory helps us understand how the music will develop. As the music plays, we form an idea of how the different pieces fit together based on what we’ve heard in the past.
This interplay makes the music interesting because we’re always guessing what will happen next, and sometimes, the music surprises us in a way that keeps us engaged.
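One way to see this interplay in code is through surprisal, the information-theoretic quantity (negative log probability) that the researchers mentioned above drew on. The sketch below is an illustrative toy, not any of their published models: it learns which pitch transitions are common in a short training pattern, then scores a familiar move as low-surprisal (expected) and a novel move as high-surprisal (surprising).

```python
# Hedged sketch: quantifying expectation and surprise with surprisal
# (-log2 of probability), in the spirit of the information-theoretic
# work cited above. The bigram model and pitch sequences are
# illustrative assumptions, not a published method.
import math
from collections import Counter, defaultdict

def train_bigram(pitches):
    """Count how often each pitch follows each other pitch."""
    counts = defaultdict(Counter)
    for a, b in zip(pitches, pitches[1:]):
        counts[a][b] += 1
    return counts

def surprisal(counts, prev, nxt, alpha=1.0, vocab=12):
    """Smoothed -log2 P(next | prev): high = surprising, low = expected."""
    total = sum(counts[prev].values()) + alpha * vocab
    p = (counts[prev][nxt] + alpha) / total
    return -math.log2(p)

# "Memory": train on a repetitive pattern the listener has heard before.
training = [0, 2, 4, 0, 2, 4, 0, 2, 4]
model = train_bigram(training)

expected = surprisal(model, 0, 2)    # 0 -> 2 occurred often in training
surprising = surprisal(model, 0, 7)  # 0 -> 7 never occurred
print(f"0->2: {expected:.2f} bits, 0->7: {surprising:.2f} bits")
assert surprising > expected
```

A real model of musical tension would need far richer context than pitch bigrams (rhythm, harmony, long-range memory), but even this toy captures the core dynamic: the music we've already heard sets up expectations, and departures from them register as surprise.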
AI That Delivers Emotion
These researchers are uncovering the secrets behind why some melodies make us feel excited or why certain musical gestures create a sense of tension for most listeners. This helps us understand more about how music affects us and why we enjoy it so much! Ultimately, this work is paving the way to understanding how we can make music better.
We’re developing ways to turn this research into code for AI systems that use musical data.
For AI systems to generate music that stands the test of time, technology-driven creativity must uncover these timeless patterns and trends that evoke universal pleasure. It starts with delving into essential musical concepts like memory, identity, anticipation, and surprise. By building a data-driven understanding of what gives music its unique character, we can ensure that technology and creativity work in harmony to produce music that resonates and endures.