Classical AI

Researcher Bryan Pardo says that AI-generated music builds on what humans have already been doing for centuries: leaving some musical outcomes to chance.

An AI cyborg and an orchestra are about to collide onstage in Chicago

To many musicians and artists, generative artificial intelligence, left unchecked, is the stuff of nightmares. AI has sparked plenty of worries among creatives, not least that, in the not-so-distant future, machines could sap their livelihoods.

Chicago composer, singer and multi-instrumentalist Clarice Assad, 46, wants to flip that script. Assad has written a new piece called The Evolution of AI that takes inspiration from artificial intelligence, rather than the other way around.

The work is unlike most things you would imagine unfolding onstage with a full orchestra. Assad describes it as part orchestral suite, part performance art piece. Onstage, Assad portrays a humanoid AI, armed with an electronic drum, a wearable MIDI ring that creates sound when she moves her hand, like a theremin, and a futuristic-looking jumpsuit. She fastidiously observes the orchestra as they play, roving around the stage to absorb their sounds and motions.




Clarice Assad (right) conducts her piece ‘The Evolution of AI,’ which takes inspiration from artificial intelligence.

“I’m no longer myself: I’m embodying this hybrid human-machine that’s rebooting and following the process of how machines learn,” Assad said.

Later in the piece, which she will perform in a series of concerts with the Chicago Sinfonietta on March 15 and 16, Assad uses a tablet to “flip through” classical music from across recorded human history. Behind her, the orchestra plays snippets of everything from the Ancient Greek Seikilos Epitaph melody, the oldest surviving complete musical composition, to Stravinsky.

But for all her commentary on how AI works, Assad didn’t use generative AI models to compose the music itself. Everything you hear is straight from “her very human brain,” as the Pioneer Press quipped in its review of The Evolution of AI’s premiere in St. Paul, Minn. (The piece was co-commissioned by the Saint Paul Chamber Orchestra.)

“I’m not an AI optimist or pessimist — I’m in the middle,” Assad said. “If you’re using it to enhance your creativity, I don’t see why it’s bad.”

Elsewhere in Chicago, other artists are similarly wrestling with the tension between musician and machine.

On March 17, as part of a lecture-performance series called Chicago Creative Machines, multimedia artist X.A. Li will run through a series of video pieces that use AI tools for both generating and processing media. One creepy video shows what happened when Li trained a generative AI model on hundreds of self-help YouTube influencers: A blurry, composite face with a vacant smile repeats meaningless platitudes. Another documents a conversation between Li and a text-based AI — she sends it evocative, poetic texts, and the AI responds in kind.

“These models are almost like a toddler. They bluntly regurgitate things [most people] would never say,” Li said. “Some of what they say is kind of disturbing, or it sounds kind of strange. But there’s a kernel of truth in it, because it’s learned to parrot back the values that we’ve modeled as a society.”




Hugo Flores García (left) used AI to develop a tool that can take two aural phenomena and meld them together.

Hugo Flores García, a guitarist and researcher at Northwestern University’s Interactive Audio Lab, is also experimenting with AI. He developed a tool that can take two aural phenomena — one example is the click of a cash-counting machine and his own voice — and meld them together. During a recent Chicago Creative Machines lecture, he demonstrated how he transformed the clicks of the machine to rise and fall like the cadences of his own voice — sounds that would be impossible to replicate with such precision in real life.

“That’s the thing that I really enjoy about working with generative AI: I get surprises out of things that I was not expecting, because this thing has so much randomness built into it,” Flores said.

That randomness is what we, as users, interpret as intelligent choice. Bryan Pardo, the researcher and musician who leads the Interactive Audio Lab, compares generative AI’s process to Google Translate: Its algorithm analyzes text inputs word by word and spits out a translation based on cascading likelihoods.

“There is a random sampling procedure, but it’s guided by a probability distribution. And the probability distribution is learned from data,” Pardo said.
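Pardo's description can be sketched in a few lines of Python. This is a toy illustration, not any real model: the note names and probabilities are invented stand-ins for a distribution a model would learn from data, and the sampling step is Python's built-in weighted choice.

```python
import random

# Toy "learned" distribution: for each note, the probability of the
# next note. These numbers are invented for illustration only.
next_note_probs = {
    "C": {"C": 0.1, "E": 0.5, "G": 0.4},
    "E": {"C": 0.3, "E": 0.1, "G": 0.6},
    "G": {"C": 0.7, "E": 0.2, "G": 0.1},
}

def sample_melody(start, length, seed=None):
    """Randomly sample a melody, one note at a time, guided by the
    probability distribution above — random, but not uniform."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        probs = next_note_probs[melody[-1]]
        notes, weights = zip(*probs.items())
        melody.append(rng.choices(notes, weights=weights)[0])
    return melody
```

Each draw is random, but the learned weights make some continuations far more likely than others — which is why the output can feel like intelligent choice.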

But Pardo points out that AI-generated music builds on what humans have already been doing for centuries: leaving some musical outcomes to chance. Composers like John Cage, Charles Ives and Karlheinz Stockhausen did so on a sophisticated scale. A popular trick in Mozart’s day strung together entire compositions from dice rolls, with prewritten musical fragments corresponding to each number rolled. Ensuring the fragments sounded good together, though, fell to the composer.

Doing that well, not to mention ethically, is something generative AIs — like Google’s MusicFX, Adobe’s Project Music GenAI Control and Suno, to name just a few — are still figuring out. Recently, Pardo and one of his students were tinkering with a Google-developed audio AI and generated a familiar string of notes.

“One of my students was like, ‘Does this sound like a Beethoven melody to you?’ We went and listened to a Beethoven piano sonata, and heck yes, it had started to spit out that Beethoven melody,” Pardo said. “Luckily for Google, Beethoven is not under copyright anymore. But what’s the likelihood your generative model is going to spit out something close to an exact copy of someone’s song?”




Clarice Assad (center) portrays a humanoid AI, armed with an electronic drum, a wearable MIDI ring and a futuristic-looking jumpsuit.

Assad’s The Evolution of AI also depicts her imagined cyborg’s limitations in a dramatic, abrupt conclusion. (No spoilers.) But as Li points out, AI’s shortcomings can be just as creatively inspiring as its superhuman capabilities.

“Most of the public discourse around ‘AI art’ these days is very limited to obvious tools that are meant to essentially replace the artist. But to me, that’s the least interesting use,” Li said. “I’m using tools that may not be intended originally for artwork and co-opting them to create uncanny experiences.” Li wants her works to show how technologies function — but, at the same time, illustrate how they might fail.

If you go: Clarice Assad’s Chicago premiere of The Evolution of AI with the Chicago Sinfonietta is March 15 at Wentz Concert Hall (171 E. Chicago Ave., Naperville) and March 16 at the Auditorium Theatre (50 E. Ida B. Wells Drive). Tickets start at $27 with discounts for students.

The next Chicago Creative Machines lecture is March 17 at Experimental Sound Studio (5925 N. Ravenswood Ave.) and will feature multimedia artist X.A. Li. The lectures are free.
