Neon green braids, face tattoos, gold chains and grillz — although he embodies the physical traits of a stereotypical American rapper, he is quite literally detached from the human world.
Powered by artificial intelligence [AI] technology, rapper FN Meka, from the virtual reality [VR] company Factory New, emerged in the music industry as the first virtual, non-human artist to sign with a major label: Capitol Records.
FN Meka’s significant social media presence, including a TikTok following of over 10 million, exceeded that of established names in the field like singers Dua Lipa and Halsey, and he was on track to become a pioneering artist challenging music norms.
Inappropriate depictions of police violence and the repeated use of the N-word, however, prompted major backlash, especially once it became known that Factory New’s founders, Brandon Le and Anthony Martini, weren’t Black. Just a week after the signing, Capitol Records issued an apology statement and terminated all activity and plans with the VR company’s once-promising project.
From this we learned that appropriating Black culture, or any culture for that matter, and reinforcing harmful stereotypes around it won’t induce the warmest of welcomes. But it’s no surprise that despite this considerable flop, more and more entertainment companies are striving to integrate advanced technology into their content, a revolution that may change the industry for the better.
In fact, AI is already a relatively common means of enhancing user experience in entertainment. From subtitle generation to content personalization, these familiar media features all rely on AI algorithms. According to a 2022 report published by Grand View Research, the global market for AI in media and entertainment is projected to reach $99.48 billion by 2030.
Sure, these advancements don’t manifest as robots like the ones typical dystopian films depict, but increasing numbers of artists are taking unconventional measures to fuse technology with their content.
With dozens of engines, like Experiments in Musical Intelligence [EMI], OpenAI and Artificial Intelligence Virtual Artist [AIVA], that analyze and predict musical patterns to compose tracks and lyrics, musicians can explore these collaborative avenues to ease concerns over time constraints and royalties while expanding their creative platforms.
Take for example musician Holly Herndon — a spearhead in avant-garde music movements.
With a doctorate from Stanford University’s Center for Computer Research in Music and Acoustics, Herndon has a long history of weaving machine learning into her instrumentals and backing vocals. Alongside her partner Mathew Dryhurst, she created an AI device named Spawn, capable of transforming human-voice inputs into various tunes.
Herndon first introduced this program in her 2018 single, “Godmother” — a collaborative piece that features electronic musician Jlin as well as the AI device.
The track itself sounds absolutely terrifying. Made up of some seriously unsettling and incomprehensible blasts, it seems just about right for a machine-powered piece.
But the level of innovation in her music-making process is undeniable. Herndon is blurring the lines between man and machine to deliver one-of-a-kind performances.
In 2021, she announced her new project, Holly+, an AI deepfake tool that turns any audio into a piece sung in Herndon’s voice. Quite similar applications of AI also exist in the world of cinema.
In the Disney+ Star Wars series “Obi-Wan Kenobi,” Ukraine-based AI firm Respeecher recreated 91-year-old actor James Earl Jones’ iconic Darth Vader voice. Using an algorithm that examines archival recordings, the company also developed the AI voice of a young Luke Skywalker in another Disney+ Star Wars series, “The Book of Boba Fett.”
In this sense, AI bridges time and content, allowing filmmakers to produce movies and shows without actors present in real time, and even to recreate late celebrities.
This expansive technology, however, requires extensive databases, and it naturally raises ethical concerns over security and copyright. If these programs can almost perfectly replicate and manipulate images and sounds, could deepfakes of random people float around the internet? Could AI copy the work of another artist under the guise that it’s original?
These are concerns that the tech community has yet to tackle, but from what we already know about AI, its benefits for the entertainment industry far outweigh its drawbacks.
It’s also safe to assume that this innovation will not replace human artists anytime soon. Rather than employing AI at the expense of human involvement, such technology exists as a collaboration that cannot function without both parties. What that means is that at the high school level, don’t expect to see bots taking over the stage in the Performing Arts Center.
“I think there will always be a demand for human people to play these roles,” theater teacher Christian Penuelas said. “Having AI robots play these roles just kind of defeats the whole purpose of going to the theater and seeing it.”
While FN Meka was one example of a faulty attempt at incorporating artificial intelligence into music, continuous changes in the consumption and production of entertainment call for increasingly innovative content.
That means AI is here to stay.