The very funny writer Simon Rich just wrote a very funny opinion piece for Time Magazine about the serious topic of AI and its threat to the livelihood of writers. As you probably know, this threat is one of the primary reasons hundreds of sunburned screenwriters have been walking around in circles outside movie studios for the last one hundred and one days. Naturally the writers want protections against computer code doing their jobs for them, and they are willing to actually go outside to fight for these protections. It’s a good fight and one worth having. Here’s the problem: we may have already lost.
The knock on AI has been that it generates clunky, mostly worthless creative work. Unfortunately, that’s no longer true. Rich’s article makes clear that some AI programs, like the non-public “code-davinci-002,” already do our jobs as well as we do. Consider a couple of fake Onion headlines it was asked to generate:
“Budget of New Batman Movie Swells to $200M as Director Insists on Using Real Batman”
And
“Rural Town Up in Arms Over Depiction in Summer Blockbuster 'Cowfuckers'”
I’m sorry, but these are great jokes. They are certainly better than anything I would have come up with - and I am a very funny person. More terrifying to me than AI being able to craft a passable college admissions essay is its ability to write a good, subtle joke. I’d always thought AI would never quite capture the nuance that makes comedy click. No more.
OK, we can debate whether or not “Cowfuckers” is a subtle joke, but it requires a complex understanding of human nature to work. I don’t know how AI “understands” anything any more than I understand the workings of my own mind. Nobody does. Not fully. Which makes it an even bigger bummer. We are up against a technology we created, which we don’t understand, doing the job of creativity – which we also don’t understand – at least as well as its current, human practitioners.
When this software becomes more widely available, why wouldn’t people use it?
They would.
Consider the toll taker. When I was growing up, any time we had to cross a bridge, we’d queue up in the toll line and fish around the car for dimes, which we would then give to a surly, unionized toll taker hand to hand. When was the last time you handed a dime to a toll taker? Exactly. Because machines eventually got good enough to replace the human. Goodnight union job. Goodnight pension. Goodnight moon.
When money is involved – and when is it not? – the cheaper and more profitable will almost always win out. If it’s cheaper to write the next Fast and Furious movie, AI will write the next Fast and Furious movie. (Separately, I suspect Vin Diesel was replaced by AI years ago.) Will the audience care who wrote the movie? No. Will AI do at least as good a job at creating outlandish scenarios for people to drive ridiculous cars through for the sake of “family”? It’s hard to believe it could do any worse.
I don’t think AI, no matter how good, will ever entirely replace writers just as mass-produced clothing hasn’t replaced haute couture. But it’s hard to imagine that we’re not in for a revolution in the way creative products get made.
In the short term, the WGA will almost certainly get some concessions in how movies and television shows are written. Perhaps we’ll even get a ban on using AI. But even if we gain those concessions, you’re mistaken if you think it’s game over. The computers are coming and, eventually, they will put many of us out of work.
What does it all mean for us lowly humans? I guess I’d look at chess for the answer. The best computers have been able to beat the best humans at chess for a couple decades now. Yet people haven’t stopped playing chess. If anything, chess has enjoyed a renaissance of late, particularly during the pandemic, when people had time to sit down and learn this beautiful game. Most did this knowing that they would never become good enough at it to defeat even a simple computer chess program. We do it because being the best at something isn’t always the point. We do the thing because doing the thing is its own reward. We’ll still write books and screenplays and cowfucker* jokes because those things are worth doing for their own sakes. I just don’t know whether we’ll be able to make a living at it.
In the coming years, I expect writers will adapt. We’ll learn to work with AI. It’ll make us better, the same way the chess programs have improved the human game. But to say we’ll be able to keep it out of the writers’ room is wishful thinking. Probably we won’t even want to keep it out. It will become an essential tool of the profession, like the word processor. There will be fewer of us. Lunches will be lonelier.
I guess I’m pretty sad about the whole thing. Not because of what it means for me personally as a writer but because of what it means for all of us as people. When I was still a young person giving dimes to toll takers, the fear was that machines would eventually take the jobs of autoworkers and coal miners and construction workers. Never that they would take the jobs of poets and novelists. We believed that whatever divine spark inhabited the human soul could never be replicated. We were wrong. Not only can it be replicated, it can be improved upon. A screenplay that used to take its writer months or years to create can now be “written” in the time it takes to input the parameters: action movie with family motif, car chases, Vin Diesel. We’ve reduced our humanity to a large language model. Maybe that’s all it ever was.
Incidentally, on November 24, 2004, almost two thousand union toll collectors went on strike in Pennsylvania. Among their demands: a three year “no layoffs” guarantee. Today, all motorists pay their tolls electronically. In Pennsylvania, there are no toll collectors at all. I don’t know what happened to them. Maybe they’re all at home playing chess.
*Microsoft Word accepted “cowfucker” as a word without a problem. The machines really are learning.
An article I just read from Futurism (https://futurism.com/ai-trained-ai-generated-data-interview) interviews the people who wrote a scholarly article on what happens when generative AI is trained on the output of generative AI. They liken it to mad cow disease, and the comparison seems apt. Eventually, generative AIs will run out of non-AI-generated text, pictures, videos, etc., on which to train. This can even happen unwittingly, as people post their AI creations, which will then be scraped and used to train the next generation of generative AI.
As far as I understand it—which is admittedly not very much at all—this problem is inevitable and there is no good way of stopping it, especially as generative AI businesses seek to put out more “advanced” models to sell. While this doesn’t get to the heart of what you were writing about, I do think it is important to consider when speaking about AI now. Unless massive changes are made—and again, given my lack of understanding, I couldn’t even begin to guess what those changes might be—generative AI has a decent chance of eating itself whole.
"But today, war is too important to be left to politicians. They have neither the time, the training, nor the inclination for strategic thought." — General Jack D. Ripper, played by Sterling Hayden
"Gee, I wish we had one of them doomsday machines." —General "Buck" Turgidson, played by George C. Scott
"General Turgidson, I find this very difficult to understand. I was under the impression that I was the only one in authority to order the use of nuclear weapons." —President Merkin Muffley
"That's right, sir, you are the only person authorized to do so. And although I, uh, hate to judge before all the facts are in, it's beginning to look like, uh, General Ripper exceeded his authority." —General "Buck" Turgidson