The Price of Your Voice

Musings on why AI clones are coming for your voice and how to hold the contractual line.

As always, this is a legal-flavored existential crisis, not legal advice. If you like following along with my madness, consider subscribing so that you can descend into the void alongside me!

What’s the going rate for your voice?
Not your words… your sound.

That breath before a punchline.
The lilt that makes strangers lean in.
The steel that halts them mid-argument.

It’s yours… until it isn’t.

We are in the age of audio cloning. AI can capture your tone, rhythm, and hesitation, then drop you into scripts, songs, or audiobooks you never touched. Some of it feels like party-trick novelty; some of it, a deepfake weapon.

Either way, if you’re not naming the price of your voice, someone else will.

When the Law Says You Exist

The law treats your voice inconsistently, state by state.

California set the early precedent: your actual recorded voice and a convincing sound‑alike are both protected. In Midler v. Ford Motor Co., 849 F.2d 460 (9th Cir. 1988), Bette Midler refused a car commercial; the company hired an imitator, and the court ruled the mimicry could still be misappropriation. In Waits v. Frito‑Lay, Inc., 978 F.2d 1093 (9th Cir. 1992), a Tom Waits impersonation in an ad led to damages for false endorsement. Together, they made one thing clear: in commerce, your vocal persona can be as guarded as your image.

The point has only sharpened. In 2024, Scarlett Johansson publicly accused OpenAI of cloning her voice without consent for its “Sky” system, despite her prior refusals to license it. And if you want a haunting example, search YouTube for Alan Watts lectures. Many are genuine recordings, but others are AI-generated continuations of a man who’s been dead since 1973.

New York now bans unauthorized “digital replicas” under N.Y. Civ. Rights Law §50‑f, from faces to gestures to voices. In July 2025, Lehrman v. Lovo, Inc., confirmed that even an audio‑only AI clone can violate that right if it misleads the public into thinking you endorsed it. That case involved voice actors whose performances were scraped and synthesized without consent, then sold as “commercial‑ready.”

Tennessee went further with the ELVIS Act, back in 2024, treating synthetic voice imitations like stolen recordings, with civil and criminal consequences. It’s also one of the few states that recognizes your voice as protectable after death.

And that’s the break in the pattern: in most jurisdictions, your right of publicity dies with you. Unless you lived (or died) in a state with post-mortem protection, your heirs may have no claim when an AI resurrects your voice. Indiana grants 100 years, California 70, Washington 75, and Tennessee life plus 10. Everywhere else, your voice can be taken the moment you’re gone.

At the federal level, trademark and false-endorsement claims under the Lanham Act offer some recourse, but the statute wasn't built for clones. Copyright protects the recording, not the style or timbre. And here's the twist: an AI-generated voice clone often isn't anyone's property by default. If it's built from learned patterns rather than a direct copy of a recording, it can live in legal limbo, unowned but still able to harm you.

Which raises the question: if no one owns it yet, why not make the case that you should?

Quiet Theft in Every Breath

Whether you voice an animated character, narrate an audiobook, stream live, or cut demos, you're leaving behind hours of clean, high-quality voice data: exactly what voice-cloning models crave.

Most contracts were drafted before cloning tech existed. They grant audio rights or likeness use without ever naming "voice" or "synthetic reproductions." That gap lets companies argue they can imitate you, forever, without paying.

And if you think this only affects celebrities, think again. Smaller creators, niche narrators, and even those who’ve never worked on high-profile projects are finding their sound in places they never licensed. An AI doesn’t care about your market value; it cares about the clarity of your audio and the recognizability of your cadence.

Three Ways to Chain Your Echo

  1. Define It Broadly
    Include: “Voice; Vocal Persona; and any synthetic, simulated, or machine-generated reproduction thereof.” If a production partner builds a custom model from your data, stipulate that you own it.

  2. Separate Consent
    One clause for using your recordings; a separate, higher-priced clause for voice cloning. Require that derivative models and datasets be stored in a way that can be independently audited.

  3. Add AI-Use Guardrails
    Ban training, cloning, or simulation unless licensed. Require disclosure for synthetic voice use and takedown within 48 hours if unauthorized. Treat unauthorized retention of the model/dataset as a breach.

And if ownership feels out of reach, fight for control: model escrow, deployment logs, and the ability to shut down unauthorized use at the source.

Where Voices Sell Themselves

From TikTok voiceovers you never recorded to AI-narrated audiobooks, the voice marketplace moves faster than the law. Contracts remain your first, and sometimes only, line of defense.

So, what’s the price of your voice?
And are you the one setting it?

All writings on this site are for informational and educational purposes only. Nothing here constitutes legal advice or creates an attorney–client relationship. Reading or interacting with this content does not form any obligation between you and the author or Clause & Affect PLLC. For advice about your specific situation, contact a qualified attorney licensed in your jurisdiction.

Not your lawyer. Yet.

