Hannah Natanson, Jeff Stein, Dan Diamond, and Rachel Siegel of the Washington Post reported today that staff associated with the “Department of Government Efficiency” are using artificial intelligence to eliminate half of the government’s regulations by next January. James Burnham, former chief attorney for DOGE, told the reporters: “Creative deployment of artificial intelligence to advance the president’s regulatory agenda is one logical strategy to make significant progress” during Trump’s term.
How do the technology billionaires walk around so confident that their AI is great when it so obviously is not?
I have been playing with ChatGPT and Microsoft's Copilot for months now. I have pushed ChatGPT with all sorts of questions. I have quizzed it about space, time, and reality. I have asked it stupid questions, not-stupid questions, and everything in between.
About 90 percent of the time, ChatGPT does a good job.
It's the other 10 percent of the time that is the concern. At least that often, maybe more, ChatGPT in particular simply hallucinates and makes things up. It fills in "facts" that aren't even there. It creates fiction out of thin air.
Copilot, which is a lesser AI, generally doesn't do this, and for research or what-have-you it works well. It's not as in-depth as ChatGPT, but I don't expect it to be, because I know it's aimed more at home and public use.
Regardless, these things are not cut out to reform the federal government.
AI models do not think. They parrot, repeat, and possibly anticipate, but they do not think. They cannot perceive that cutting air pollution controls, say, would make asthmatics out of a certain percentage of the population, and outright kill some of us who already walk around with an inhaler.
They are no better than the programmers who build them and the data they are trained on.
This is a marketing issue. The tech billionaires are so sure their product is great that they're trying desperately to sell it for what it is not: a "human" brain.
What they think it can do and what it actually can do are not the same thing, and this is not going to bode well for the population.
These billionaires and the companies they run have a huge financial stake in making AI seem like a revolutionary tool that can do everything from write poetry to manage economies. The more the public and governments believe that narrative, the more funding, stock value, and influence those companies gain. It's not about truth. It's about sales.
The public in general, and even some of the executives in these big companies, do not understand how large language models work. They think it’s intelligent in a human way, but it is really just advanced pattern matching and prediction based on training data. An AI model doesn't "know" anything. It doesn’t "understand" laws or ethics. Not the way I do. And not the way you do, either.
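To see what "pattern matching and prediction" means in practice, here is a toy sketch. This is not how ChatGPT works internally (real models use neural networks over billions of parameters), but the principle is the same: count which words tend to follow which in the training data, then predict the most frequent follower. The training text and function names here are my own invented example.

```python
from collections import Counter, defaultdict

# A toy bigram "language model": it counts which word follows which
# in the training text, then predicts the most likely next word.
# It has no understanding of meaning -- only frequency of patterns.

training_text = (
    "the law protects workers the law protects consumers "
    "the law limits pollution"
)

# Build a table: word -> counts of the words that follow it
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the training data."""
    if word not in follows:
        return None  # never seen in training: the model has nothing to say
    return follows[word].most_common(1)[0][0]

print(predict_next("law"))        # "protects" -- seen twice, vs "limits" once
print(predict_next("pollution"))  # None -- nothing ever followed this word
```

The model doesn't "know" what a law is. It only knows that "protects" followed "law" more often than "limits" did. Scale that up enormously and you get something that sounds fluent, which is exactly why people mistake it for understanding.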
These same people also believe that every complex human or political problem, whether that is poverty, racism, bureaucracy, or inefficiency, can be "solved" with software. This is an incredibly flawed way of thinking, but these self-made "men" see themselves as the smart guys who should be in charge of everything.
And if you're a wanna-be authoritarian in charge of a dying democracy, and you want to rapidly dismantle regulations and other things that your guy pals dislike, then AI offers a convenient tool.
It can also be the scapegoat. The leader can claim efficiency and modernization while gutting environmental, labor, and consumer protections. And if things go wrong, he can blame the AI model.
AI models have no accountability. No one has yet sued OpenAI because ChatGPT told them they had cancer when they didn't, or vice versa, or whatever else it might take to bring such lawsuits into play. Even so, the AI model itself isn't going to go to jail, and most likely neither will the programmers. They'll just say, "oops," and that will be the end of it.
AI models are not accurate or nuanced enough to handle legal, regulatory, or ethical interpretation. I have experienced their flaws in myriad ways over the last several months. They can hallucinate facts, miss tone, misunderstand nuance, or completely misread human intent. Now imagine that happening with laws on toxic waste disposal, disability rights, or air travel safety.
AI is powerful, but it is not magic. Nor is it wise. It has no wisdom except, again, what the programmers give it. Using AI to "reform" the government is dangerous. Not because the technology itself is dangerous, but because the hubris of those who wield it without humility or caution can cause great damage.
When billionaires or government officials use AI as a hammer to smash through democratic safeguards, the public must push back and demand human oversight, transparency, and ethical guardrails.