October 5, 2024 9:54 am

The Estate of George Carlin Destroys AI George Carlin in Victory for Copyright Protection (and Basic Decency)

I often wish that I could still hear the late George Carlin riffing on the news. But I can’t, and it’s good that I can’t, because he died in 2008, and trying to pretend to digitally resurrect him would be obscene, even if you weren’t making a buck from it. 

Apparently the creators of an unauthorized posthumous AI-garbage George Carlin special never got to that point in their personal evolution, and in January of this year published “George Carlin: I’m Glad I’m Dead,” part of a supposedly AI-generated podcast/YouTube series. This week they settled with Carlin’s estate and agreed to take it down and never republish it or otherwise try to exhibit it or profit from it.

There were additional terms, apparently, all of them undisclosed as of now. According to a Rolling Stone story, “The special begins with an explainer, with Dudesy [the AI host] clarifying that ‘what you’re about to hear is not George Carlin,’ and adding that in order to nail the comedian’s style it ‘listened to all of George Carlin’s material and did my best to imitate his voice, cadence and attitude as well as the subject matter I think would have interested him today.'” Although the show’s hosts Chad Kultgen and Will Sasso insisted to journalists in January that the entire special was generated by letting AI study oodles of pre-existing material, it was later revealed that “the fictional Dudesy character was not AI-generated and that Kultgen wrote the entire fake Carlin special rather than it being trained on previous work.”

Carlin’s daughter Kelly decided, with evident exhaustion, to sue. Because the suit never went to trial, we were denied a discovery phase that might’ve provided some glimpses into the behind-the-scenes processes that allowed the hosts of the show in question—the supposedly AI-hosted podcast Dudesy—to virtually exhume Carlin’s corpse and force the jaw hinge of his skull to move up and down while a digital ventriloquist delivered inferior-grade imitation Carlin jokes in a voice that sounded like Rowlf the Dog from “The Muppet Show.”

In all, though, the settlement can still be considered a notable early victory in the ongoing battle to protect creative artists of all kinds against the uncompensated and unauthorized use of their work to train software that tech bros seem to desperately hope will make those artists permanently unemployed. (Did every executive who sank billions into this stuff, and every tech guy currently using it to make keyboard-prompted “movies” that look like cutscenes dipped in latex, once lose a girlfriend to a film student? You gotta wonder.)

According to a Reuters story, “the lawsuit was among the first in the entertainment world related to ‘deepfakes’ – convincing digital imitations of real people made possible by fast-evolving AI technology.” Carlin’s daughter Kelly Carlin said in a statement that she was pleased that the case was resolved “quickly and amicably.” The estate’s attorney Josh Schiller of Boies Schiller Flexner said the settlement will provide “a blueprint for resolving similar disputes going forward where an artist or public figure has their rights infringed by AI technology.” The suit described “I’m Glad I’m Dead” as “a casual theft of a great American artist’s work.”

It was also coldly arrogant in the manner of so many pronouncements from the tech sector, which from the Napster era onward has committed itself to devising weaselly PR-speak, exploiting legal loopholes, and basically purchasing US legislators so that it can build companies worth billions of dollars by infringing on copyrighted material.

YouTube is probably the biggest example of this, a behemoth (owned by Google now) that’s a living embodiment of the idea that behind every great fortune lies a crime. The company would not exist at its current size and scope if it hadn’t spent the first several years of its existence drawing global traffic by offering ripped songs and material from movies and TV; hiding behind the “notice and takedown” provisions of the 1998 Digital Millennium Copyright Act, which let platforms escape responsibility for what gets uploaded; and offering copyright owners the partial compensation of having their stolen content “monetized” by having ads attached to it, resulting in checks big enough to buy a cheeseburger with. Google too has long profited from reproducing copyrighted material to build traffic, gather user data from that traffic, and financially exploit it in various ways. It should not surprise anyone that Google is on the side of AI tech, offers its own versions (including Vertex AI, Duet AI, and Bard), and has promised to legally defend generative AI users against copyright claims.

From an armchair lawyer standpoint, it seems a shame that the Carlin lawsuit didn’t move on to the discovery phase. Yes, it’s true that one of the Dudesy hosts ultimately fessed up to writing a script for the piece rather than, as he’d previously claimed, farming it out to a plagiarism machine. (Another interesting development: apparently the tech isn’t even ready for primetime. The news has been filled with stories recently about how “AI technology” is turning out to be a front for human labor, mainly from the Global South, purchased at barely-above-slavery wages.) But the podcasters still had to create a fake Carlin to deliver their human-written script. If the suit had gone to trial, I am certain that the discovery process would have revealed that the software was fed videos of Carlin ranging over a wide timespan, including material copyrighted by the record companies that distributed his albums and the TV and home video companies (mainly HBO, a division of Warner Bros. Discovery) that released his comedy specials.

Recent legal decisions in copyright suits against AI software-makers have already begun to chip away at tech’s insistence (which is laughable on its face) that there is no substantive difference between an aspiring art student studying a book of Rembrandt paintings in order to paint in the style of Rembrandt and a soulless digital machine imbibing and digesting millions of works by living artists and vomiting out a zombiefied visual slushy in response to keyboard prompts, while the tech’s creators claim that the artists whose work was used in the training process aren’t owed anything. The more we find out about how the Gen AI sausage is made, the better the chance that this stuff will be properly regulated. I am even starting to think that perhaps, if things keep going this way, the living persons and companies that produced the creative work that AI is being trained on will be able to demand license fees or other payment, as retroactive partial compensation for their stolen labor.

Make no mistake: there is a war going on, waged by the tech sector against individual human creative artists. It’s been going on for over 25 years, in different disguises. 

At first, the goal was to build companies and products on the backs of artists without paying them unless forced to, and when forced, to pay as little as possible. Now, the tactics have shifted into what appears to be an endgame phase. This endgame aims to prevent human-produced music, films, visual art, prose—even images and likenesses tied to popular “brands”—from enjoying any sort of copyright protection, so that the owners of technology that would not exist without the work of legions of unpaid artists can build their fortunes and still go to sleep at night feeling certain that they’ll never be regulated, much less punished, for theft of labor and copyright.

What would George Carlin have said about all this? We can speculate, but we’ll never know, because George Carlin has been dead for 16 years. But we do know this: the next time somebody tries to put words in a digitally resurrected version of his mouth, they’ll end up in court. 

That’s a net gain for the Carlin estate, as well as for humanity and for everyone who understands that stealing is bad. And that labor has a price.
