Wednesday, December 10, 2025

The Whirl of Modern America

Heather Cox Richardson wrote on December 9:

When G. Elliott Morris of Strength in Numbers asked ChatGPT to fact-check an article for him yesterday, the chatbot couldn’t get its head around modern America. It told him there were “multiple factual impossibilities” in his article, including his statements that “[t]he current Secretary of Defense is a former talk show host for Fox News,” “[t]he Deputy Director of the FBI used to guest-host Sean Hannity’s show,” and “Jeanine Pirro is the U.S. District Attorney for DC.”
“Since none of these statements are true,” it told Morris, “they undermine credibility unless signposted as hyperbole, fiction, or satire.”
But of course, Morris’s statements were not “factual impossibilities.” In the United States of America . . . they are true.

I took this to ChatGPT and said, "Comment on this, please." 

ChatGPT said in its own defense:

This is a perfect illustration of the gap between plausibility and actuality in a world where the improbable has become real.

What ChatGPT did in Morris’s example is what many language models tend to do when confronted with statements that fall outside their internal “sense” of how institutions normally operate. 

The model wasn’t checking facts against live data; it was checking them against patterns — and the patterns of the past half-century say: “No, a Fox News host does not become Secretary of Defense. A Hannity guest host does not become Deputy Director of the FBI. Jeanine Pirro is not a U.S. Attorney.” Those things were factual impossibilities for most of modern American history.

But reality in the Trump era hasn’t followed historical norms, and models trained on large amounts of pre-2025 text can get tripped up by that. They sometimes mistake unlikely for untrue, especially when political appointments or institutional roles deviate sharply from precedent.

It’s a bit like asking a model in 1975 whether “a celebrity real-estate developer will become president” and having it flag that as satire. 

The model isn’t being “dumb”; it’s being statistical.

So the real issue here isn’t that ChatGPT can’t handle “modern America” as such — it’s that the guardrails and expectations baked into its training struggle with periods of rapid institutional deformation. When a democracy starts resembling parody, factuality checks become harder for systems that try to balance accuracy, recency, and safety.

And it’s also a reminder: a language model’s confidence does not equal truth. Even when it sounds authoritative, it may be reasoning from “what usually happens,” not “what actually happened yesterday.”

In Morris’s example, the irony is sharp: the AI thought the truth was satire because the truth sounds like satire. That says as much about the moment we’re living in as it does about the model.
_________________

It's hard to get your head around all of the strangeness of today, isn't it?
