In AI’s Crosshairs

Like everyone who has an email account, I receive a lot of spam. But last summer I received a personalized message about my novel Burner that caught my attention. The message was specific, referring to character arcs, storylines, and themes. It was insightful and flattering, highlighting aspects of my work I’m most proud of, and it appeared to have been written by someone who had not only read but thoroughly enjoyed my novel.

The next day, I received a similar email, also from a Gmail address, offering to “amplify the reach” of my novel with “targeted visibility” and “strategic positioning.” Although the pitch was different, the obsequious wording and specific details sounded familiar. Little did I know this would be the tip of the iceberg.

Soon, famous authors were taking time out of their busy lives to reach out to me personally: Lucy Foley, who would “love to hear more about what you’re working on”; Emily Henry, who found my novels “truly inspiring”; Dave Eggers, hoping to “share and learn from each other.” Even Michelle Obama said it would be a “privilege to connect with a fellow author whose work I admire,” and Agatha Christie came back from the grave to invite me to showcase my book in her posthumous newsletter Author Manuscriptia in order “to reach a wide engaged audience.”

As laughable as these scams quickly became (which I wrote about at the time and which The New York Times recently covered), what hasn’t been funny in the months since is their sheer volume and persistence. To date, I have received upwards of 1,000 of these emails, typically five to ten every day, rarely flagged as spam, all from Gmail addresses, all with highly personalized messages targeting me and my novels, and all obviously AI-generated.

Unsurprisingly, I’m not alone. The Writer Beware website uncovered an entire crime ring of AI-driven marketing scams that target authors. According to the site, the scams typically revolve around four categories:

  1. General marketing/PR offers (e.g. publicity campaigns, Goodreads promotions, Amazon optimization, etc.);

  2. Impersonations of book clubs offering to “spotlight” an author’s work;

  3. Private review “communities” with alleged readers who will provide Amazon reviews for “tips”;

  4. The aforementioned author impersonation scam to engage authors in paid editing or marketing services.

What is perhaps most disconcerting about this attack is how impossible it is to stop. The perpetrator could be a network of people using AI. It could be an individual using AI. Or it could be an army of AI bots acting autonomously based on publicly available information about me and my books. These operations no doubt originate offshore, well out of the reach of U.S. law enforcement—as if U.S. law enforcement would do anything anyway. Furthermore, these scammers operate in plain sight, empowered by the platforms of big tech companies. Amazon fails to prevent the scraping of data on authors’ books, not to mention the proliferation of fake reviews. Meta allows public profile information on Facebook and Instagram to be harvested for targeted emails and AI-generated posts to proliferate. Google enables scammers to create Gmail addresses with names like agathachristieauthor@gmail.com and blast outgoing messages. And, of course, OpenAI’s ChatGPT is used to micro-target personalized messages by the thousands.

Within the publishing world, as in every industry, AI is provoking a lot of anxiety, along with a lot of lawsuits. Much of that concern has centered on the prospect of an onslaught of books authored by generative AI engines, resulting in a tsunami of junk submissions to agents and editors, an even more flooded market for publishers, and, for authors, an encroachment on their livelihood, not to mention a violation of their existing copyrights.

As concerning as the prospect of AI-authored novels is, I believe the real threat to authors, and, frankly, all of us, is the exploitation of AI for spam, scams, and manipulation. Almost two years ago, I wrote an article for Writer’s Digest predicting that the biggest threat to authors from AI would lie not in how content is generated but in how it is discovered, something I labeled Discovery Bias. That future is now unfolding before our eyes.

What makes writing and reading literature such a uniquely human endeavor is being hijacked by AI bots. Whether it’s driven by unscrupulous publishers, deep-pocketed authors, or offshore book marketing scams, AI is being used to tip the publishing scales toward those willing to abuse it. AI-generated messages are filling our inboxes, clogging our social feeds, and flooding reader review sites. The biggest threat to authors is not copyright violation or remuneration, but a world in which the primary vehicle of book discovery—authentic, word-of-mouth reader recommendations—can be faked, monetized, and exploited at a massive scale. Not only are authors the target of scams, but we are also vulnerable to AI-generated extortion, in which funds are demanded not for positive reviews but to avoid negative ones.

Meanwhile, the tech platforms we authors rely on are too busy building out their own AI capabilities to bother putting safeguards in place to prevent such abuses. In this day and age, it’s impossible to sell books without Amazon, discover them without Google, or promote them without Facebook, Instagram, or Goodreads. But the human-to-human online forums we rely on are becoming more compromised by AI every day. The frightening reality for many authors is an impossible choice: participate in this AI-driven madness or risk being left behind.

Michael Trigg