ElevenLabs' Ethical AI Music Model Debuts with Minnelli

In a significant development for the music industry’s engagement with artificial intelligence, legendary performer Liza Minnelli has released “Kids, Wait Til You Hear This,” her first new track in 13 years. The song, whose title is a nod to her upcoming memoir of the same name, is a surprising venture into the deep house genre. It was created as part of “The Eleven Album,” a project from AI audio firm ElevenLabs designed to showcase a creator-first framework for artist-AI collaboration. This release is notable not for using AI to clone a voice, but for pioneering a different approach: Minnelli’s own spoken-word performance is layered over a musical arrangement and instrumental backing generated entirely by AI.
This project provides a high-profile example of an artist-controlled AI music generation model, framing the technology as a collaborative tool for arrangement rather than a replacement for human performance.
Key Points
- Liza Minnelli released an AI-assisted track where her authentic voice is paired with an AI-generated instrumental arrangement.
- The project utilizes ElevenLabs’ Eleven Music platform, which is trained exclusively on fully licensed music data.
- This model emphasizes artist control, with Minnelli retaining full ownership and all streaming revenue.
- The collaboration demonstrates a rights-cleared framework that addresses industry concerns over unauthorized AI use.
Human Vocals, Machine Orchestration
The core technical distinction of “Kids, Wait Til You Hear This” is the specific application of AI. Both Minnelli and her team have emphasized that the technology was used for the musical arrangement, not to synthesize or clone her iconic voice. The vocals are entirely her own, a crucial detail that reframes the conversation around AI in music production. In a statement reported by The Guardian, Minnelli clarified on her Facebook page, “What I will not allow this great company to do? Create, clone or copy my voice! … We used AI arrangements. Not AI vocals … The shout outs are all mine!”
This clarification was reinforced by a representative who confirmed to Variety that the track features her actual performance. This human-in-the-loop process demonstrates a model of augmentation over replacement. By handling the complex instrumental production, the AI enabled the 79-year-old artist to explore an unfamiliar genre without compromising the vocal authenticity that defines her work. Minnelli used the AI strictly as a tool: her unique delivery remained at the creative center while the AI-generated backing track supplied the underlying composition.

Licensed Data, Legitimate Creation
The project is built on Eleven Music, a generative audio model from ElevenLabs designed as a creative partner for musicians. The platform’s technical foundation and ethical framework directly address the copyright disputes that have troubled the generative AI space. Eleven Music converts natural language prompts into studio-quality compositions, but its key differentiator is its training data. The company built its model on fully licensed music through partnerships with industry players like independent publisher Kobalt Music and Merlin, which represents a global network of indie labels.
This licensed foundation ensures the generated music is “rights-cleared” and commercially viable from inception. As Sophia Noel of ElevenLabs told NBC News, “we didn’t steal or take any music in order to create this system.” For artists, the platform offers granular control, allowing for detailed edits to lyrics and instrumentation. It also integrates into professional workflows by enabling users to download up to six studio-quality stems for final mixing in a digital audio workstation (DAW), so the AI’s output serves as a starting point rather than a locked final product.
Consent-Driven Creative Canvas
The release of “The Eleven Album” arrives as the AI music generation market is projected to reach $1.98 billion in 2026, providing a compelling counter-narrative to widespread industry anxiety. Concerns voiced by artists like Ed Sheeran and legal actions, such as the Tupac Shakur estate’s move against Drake for using an AI-generated voice, as reported by NBC News, stem from the unauthorized use of artists’ voices and work. The Minnelli release offers a tangible alternative built on consent, control, and ownership.

Minnelli herself highlighted the importance of ownership in a statement to The Guardian, stating, “I grew up watching my parents create wonderful dreams that were owned by other people. ElevenLabs makes it possible for anyone to be a creator and owner. That matters.” Under this model, artists retain full authorship, and all streaming revenue goes directly back to them. This project serves as a high-profile case study, demonstrating practical use cases for AI in music: generating foundational ideas, experimenting with new genres, and streamlining production timelines, all within a legally and ethically sound framework.
Harmonizing Technology With Tradition
Liza Minnelli’s collaboration with ElevenLabs marks a notable development in the relationship between artists and artificial intelligence. By firmly separating AI-generated arrangement from human vocal performance and building the system on a foundation of licensed data, the project establishes a functional, rights-respecting blueprint. It moves the conversation from a debate over replacement to a demonstration of collaboration. This artist-centric model provides a clear answer to the industry’s ethical and legal concerns.
With a framework for consent and ownership now proven, what other legendary artists might use these tools to explore new creative frontiers?