Sunday, 15 June 2025

LIBBY EMMONS: AI collapse, hallucinations mean we need more human-generated content—not less

AI work must be fact-checked more thoroughly than work written by humans: human writers know what they don't know, but they do not know what AI doesn't know.

I'm an editor of news and opinion at Human Events and The Post Millennial, and I look upon the AI takeover of the written word with growing unease.

There are two main reasons for this, beyond my love of humanity. First, the simulation that is AI threatens to cheapen the regard in which we hold human consciousness. Second, an AI that learns from an archive of written words that is increasingly AI-generated will become an entirely untrustworthy regurgitation machine.

In practice, opposing AI generation and not being subjected to its product are two very different things. 

A recent syndicated advertorial that ran in the Chicago Sun-Times, The Philadelphia Inquirer, and other papers featured a list of recommended summer reading, but several of the books were fakes: invented titles attributed to real authors.

Isabel Allende's "Tidewater Dreams" was recommended, but readers who looked for it on bookstore shelves would find that it didn't exist. The same goes for Rebecca Makkai's nonexistent "Boiling Point" and Min Jin Lee's fabricated "Nightshade."

The writer tasked with putting together the summer reading list used AI to do it and didn't fact-check the results before passing them off to his editor, who didn't fact-check them either. I wonder whether any of them even read the list before sending it to print and pawning it off on readers.

A basic rule of thumb for writers is that if you don't want to write it, readers don't want to read it, and the same goes for editors.

It's a breach of trust to pass off as one's own work a bunch of words generated by an AI that spits out lies that look very similar to the truth. 

Speaking on the TWIML podcast, OpenAI's Josh Tobin explained that one of the primary uses for AI is "deep research," and it is "quite good at finding information that is just kind of buried in the corner of the internet somewhere."

This sounds great, but it's fraught with problems, specifically for news writers and editors who stake their reputations on delivering facts. 

"Probably the bulk of what I think most people use deep research for is going broad and then synthesizing," Tobin said, "like, go read about this topic, and come back to me with a summary."

AI's deep research function cannot be trusted by writers and editors who are loath to either fact-check or read the words it generates.


None of the research generated by AI can be trusted enough to present to readers without a fact-check of every single obscure fact, any of which can be wrong.

As AI-generated work populates more and more of the internet, the work that AI draws on will be AI-generated, creating a closed loop without any new information or ideas. That "collapse" means we need more human-generated content, not less.
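To make that mechanism concrete, here is a toy simulation of the closed loop, a minimal hypothetical sketch in Python rather than a model of any real AI system: each "generation" samples its training data from the previous generation's output, so rare ideas that happen to go unsampled vanish and can never return.

import random
from collections import Counter

random.seed(0)

# Toy "corpus" of 20 distinct ideas: a few common, most in a rare tail.
vocab = [f"idea_{i}" for i in range(20)]
weights = [1.0 / (i + 1) for i in range(20)]
corpus = random.choices(vocab, weights=weights, k=200)

for generation in range(1, 11):
    counts = Counter(corpus)
    survivors = list(counts)
    freqs = [counts[w] for w in survivors]
    # The next "model" trains only on the previous model's output,
    # so it can never emit an idea it did not see.
    corpus = random.choices(survivors, weights=freqs, k=200)
    print(f"generation {generation}: {len(set(corpus))} of {len(vocab)} ideas survive")

Run it and the number of surviving ideas only falls, generation after generation; once an idea drops out of one generation's output, no later generation can recover it without fresh human input.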

AI gives the appearance of thinking for itself, but it is a simulation of thought and creativity—only by degrading the concept of human thought can we imagine that what AI is doing is thinking. 

AI cannot distinguish fact from falsehood; it simply remixes existing content into an unvetted simulacrum of research.

Our oldest stories offer a caution about what happens when mankind attempts to simulate creation: Icarus, the Tower of Babel, Frankenstein, the Golem, even the unsentient mechanical man Tik-Tok in L. Frank Baum's Oz stories. In every case, the creation cannot replicate its maker's spark.

It is still up to writers and editors to come up with ideas and to research them with an eye toward fact versus fiction. That job cannot be handed to AI because AI, despite its much-touted intelligence, its massive power drain, and its seemingly limitless ability to scour the internet and cough up sensible-sounding words, is not capable of delivering reader-worthy content.

The Trump administration has gone all in on AI, expanding energy production to power it and infusing technology of every kind with AI, but that doesn't translate to creativity, and it certainly does not apply to editorial content.

As a writer, I don't want to outsource my research to a chatbot; as an editor, I don't want to fact-check non-human research; and as a reader, I do not want to be subjected to AI-regurgitated ideas posing as creative thought—that's just not what any of us tune in for.
