Degenerative AI

An “AI bot in a dystopian wasteland” that I created on Canva, which uses Stable Diffusion, an open-source text-to-image AI engine, to auto-generate images.

My two worlds, technology and writing, have collided recently around the topic of “generative AI.” If that term is unfamiliar to you, I offer the following explanation:

Generative AI refers to a type of artificial intelligence that is capable of generating new content, such as images, text, or even music, that has never been seen or heard before. This is achieved through the use of algorithms that are trained on large datasets, allowing the AI to learn patterns and relationships between different types of data. Once trained, the AI can generate new content by using these patterns and relationships to make predictions about what might come next. Generative AI has a wide range of potential applications, from creating realistic virtual environments to helping artists and designers come up with new ideas. However, it also raises important ethical questions around issues such as ownership and control over the content that is generated.

The preceding paragraph was written, entirely and verbatim, by OpenAI’s ChatGPT tool, based on my prompt: “write me a paragraph description of generative AI.” And, I have to say, it’s a pretty good summary—even acknowledging the “important ethical questions” around this technology.

Within the technology world, the focus is decidedly more on the “wide range of potential applications” than the “important ethical questions.” With the implosion of Silicon Valley’s eponymous bank already in the rearview mirror (that was so mid-March!), excitement is building for this promising next engine of tech economy growth. Billions of venture capital dollars have flowed into hundreds of generative AI startups, headlined by Microsoft’s bet-the-farm $10 billion investment in OpenAI that justifiably has Google shitting its pants, facing an existential threat to its core search business (imagine a world in which a web search returned actually optimal results, rather than pages of hyper-SEO’ed clickbait!).

Within the writing world, the focus is more on the “important ethical questions” (specifically, “will I be replaced by a bot?!”) than the “wide range of potential applications” (including automating an occupation that, for most writers, barely earns enough for rent and ramen noodles). The reaction within the writing community to ChatGPT, Jasper AI, Copy.ai, and the dozens of other AI text writing apps can be summarized with one word: consternation. Why do you think this subject has garnered so much media attention? Journalists—the writers scratching out a living within a tiny-margin, revenue-contracting, rapidly consolidating industry—might be on the front lines of AI replacement.

As we embark on a world in which journalism is further micro-niched to our particular opinions and preexisting biases, students auto-generate everything from term papers to college applications, and marketers are no longer creatively constrained from fully saturating our inboxes and social media feeds with their breathless updates, offers, and sales, I wanted to collect my thoughts here on both the hype and paranoia around this exploding space.

While the term “generative” is apt in the sense that it speaks to the technology’s ability to generate content, the antonym “degenerative” also feels appropriate for what the technology implies for the creative process. Merriam-Webster defines degenerative as “of, relating to, involving, or causing degeneration,” and its definitions of degeneration capture the implication poignantly:

2: a lowering of effective power, vitality, or essential quality to an enfeebled and worsened kind or state

3: intellectual, moral, or artistic decline

The very concept that artificial intelligence can and should reduce human creativity to an algorithm is one that could only be conceived by computer scientists. I’m no Luddite. I see compelling applications for AI in the creative space. After all, who wants to write that one-hundredth engagement-optimized Instagram post or the script for an episode in season 10 of Friends? I already leverage AI technologies myself, for example in proofreading, audiobook generation, and image creation (see my dystopian AI bot expelling a hairball of content above!).

But I see several reasons why both the potential and the threat of generative AI won’t be as significant as predicted by either the technologists or the creatives. And I’ll use a tried-and-true generative AI essay structure of first, second, and third arguments to make my case.

First and foremost, the mass production of endless volumes of AI-generated content doesn’t really solve a need. Sure, an underpaid copywriter dropping a throwaway post into the endless stream of social media content might be more productive. But the publishing industry overall already suffers from too much content, not a lack of it—churning out an estimated 4 million new books every year. The supply has completely overwhelmed the demand. Now we’re going to add AI-generated books to that too? My expectation is that readers will place more value on curated, high-quality, human-generated books, and that AI-generated books will serve the long tail of the market—the reader who wants the 500th Harry Potter spin-off or the Fifty-Fifth Shade of Grey. Crappy writing will be replaced by crappy AI writing.

Second, great content is not created by mimicking “patterns and relationships between different types of data.” That approach to creating content is reductive at best. The gatekeepers of the traditional publishing, entertainment, music, and other creative industries have long tried to produce hits formulaically. That’s why so much content feels like an imitation of something else: since we don’t know what will be a hit, let’s copy something that was a hit. Contrary to industry efforts to churn out blockbusters with the predictability of an assembly line, the content that tends to break through is unique, innovative, bold, and emotion-provoking. Then, when that rare gem of a book, movie, or song comes along, the cycle begins again with dozens of imitators jumping onto the trend. Producing content that is essentially the average of its mathematical inputs can really only ever be that: average.

Third, and perhaps most important, the implicit underlying thesis of generative AI is that one can only derive enjoyment from the consumption of content, not its production. I, like the vast majority of writers, cannot point to any financial ROI for the time I spend writing. If I ever calculated my hourly wage, I would find it depressing. But that’s not why I write. I write because I enjoy it. The production of written content fulfills something for me. Writers, artists, musicians, and other creative people take great joy in the creative process itself—even with all the frustration, fatigue, and disappointment that comes with it. Why would I give that enjoyment away to an AI bot?

For better or worse, and regardless of what we think about it, generative AI is coming, and coming rapidly. That much venture capital demands returns. While the rest of us hasten to set up some cursory guardrails, the best and brightest computer scientists will continue to evolve the technology and push the envelope of what is possible. As always, technology’s superlative advantage is speed—it simply moves faster than regulators, ethicists, or the public at large can keep up. It’s halfway around the world and used by millions before the skeptics get their pants on.

And this reality is why I think it will be degenerative in another sense. Not that it will replace humans, but that it will further marginalize our value, fulfillment, and sense of self-worth. That it will lower our collective “power, vitality, or essential quality” and make us feel “enfeebled.” That it will worsen the sense of “intellectual, moral, or artistic decline” that already feels like a weight on the modern human psyche—particularly for the digital-native generation who grew up with iPhones, Google, and Instagram so deeply entrenched in their lives.

At some level, this is the root fear around artificial intelligence overall—that it will render human intelligence obsolete. With its slow clock speed, undisciplined productivity, and faulty logic, our mammalian brain feels like a Motorola flip phone in desperate need of an upgrade. But maybe generative AI can also show us that “intelligence” isn’t the only thing that differentiates humans from technology. It’s also our unique capacity for emotion. And emotions may never be reduced to data, algorithms, and formulas. Though I’m sure someone will try.