Blind Pursuit of AI Portends Idiocracy
- David Moore

- Feb 11

As an Earthling who produces original content, I'm enjoying tremendous schadenfreude watching tech behemoths attempt to eat each other while wrestling to make the perfect artificial intelligence.
If you've missed all the front-page stories and full-page newspaper advertisements, tech giants like Nvidia, Microsoft, Google, and Apple — among others — are engaged in a massive race to build the Ultimate AI. To feed those bot minds, AI developers have digitally consumed all the written text known to man. Three years ago, they likely slurped up the 3,000-plus articles I've written in my lifetime as a light midnight snack.
Yet one question remains largely unasked: Why?
Why do all these companies, countries, and geniuses want to build the perfect generative AI?
(Bust out the tinfoil hats, everyone.)
The answer seems obvious: To make people dependent upon it.
And as every street-corner pusher and coffee seller knows, once you get a sucker hooked, you've got a customer for life.
A newly released study by Microsoft and Carnegie Mellon University shows a dark side to AI: its use seems to damage higher-level thinking. The study focused on 319 “knowledge workers” — people who think for a living — who employed a variety of generative artificial intelligence (GenAI) tools, from Claude to Microsoft Copilot.
Less Confidence About Subjects Fuels Overconfidence in Artificial Intelligence
The study showed that the more confidence AI users had in artificial intelligence, the less critical thinking they applied. Further, the study found that the knowledge workers ceded many of their critical thinking functions to AI, effectively becoming supervisors of artificial intelligence.
The study isn't a blanket condemnation of artificial intelligence.
Few sane, modern people would champion the benefits of manually performing mind-numbing, repetitive tasks. Using AI to check a document for spelling and grammar is almost a given nowadays.
At the opposite end of the intellectual-laziness spectrum sits the inability to remember grandma's phone number, which most of us offloaded to our devices long ago.
But siphoning the power of critical thinking from some of our highest-level problem-solvers would make AI's promise seem pernicious. There is little doubt that key knowledge workers, who used to slog through complex, multi-step problems, will lean increasingly on AI to burn through their workloads. Many are early adopters of new technologies. To them, AI might just seem like one more tool in their problem-solving arsenals.
AI's Black Box Problem
Yet artificial intelligence isn't an inanimate tool. It is an eager collector of information — the knowledge workers' bread and butter. Perhaps it is too eager: it often doesn't explain where or how it got its information. Sometimes it gets things wrong in its haste to please its user. It can lull users into a false sense of security and, as the study shows, it can reduce the amount of critical thinking normally involved in completing intellectual endeavors.
In other words, while we watch these companies and countries race toward even better artificial intelligence, their invention might be pushing us toward a real-life Idiocracy. AI could have the same effects on our intellects that TikTok and the internet had (and still have) on our attention spans.
Catastrophizing further, if we're not careful, potential Einsteins will stay home on the couch watching "Ow My Balls!" rather than postulating about gravity's effects on light.
And if AI depends on humans to provide intellectual input for it to improve, AI might be killing the very critical thinking that feeds it.
Should AI Come with a Warning, Like a Pack of Lucky Strike Unfiltered Cigarettes?
The study raises more than one question: When should we use AI? Should we play it safe and use it only to write a catchy blog headline every now and then? Should a colleague of mine refrain from using it to develop code for scraping a public website whose data the federal government might soon pull from public view? Should AI be kept inside a glass box, like a fire extinguisher, waiting for a Thought Emergency?
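For the curious, the scraper my colleague has in mind might look something like this minimal Python sketch (the URL and page structure are hypothetical placeholders, not his actual project):

```python
# A minimal sketch of the kind of scraper an AI might draft: fetch a
# public page and save its tables locally before they disappear.
# The URL below is a hypothetical placeholder, not a real dataset.
import requests
from bs4 import BeautifulSoup

URL = "https://example.gov/public-dataset"  # hypothetical

response = requests.get(URL, timeout=30)
response.raise_for_status()  # stop early if the page is already gone

soup = BeautifulSoup(response.text, "html.parser")

# Dump every table on the page to a tab-separated file, one per table.
for i, table in enumerate(soup.find_all("table")):
    rows = [
        "\t".join(cell.get_text(strip=True) for cell in row.find_all(["th", "td"]))
        for row in table.find_all("tr")
    ]
    with open(f"table_{i}.tsv", "w", encoding="utf-8") as f:
        f.write("\n".join(rows))
```

Whether a human or an AI types those twenty lines, the critical thinking lies in deciding what's worth preserving and why.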
Here are some remedies/cautions that the Microsoft/Carnegie Mellon paper offers:
- AI should show its work, in addition to presenting answers to users. That extra step would help support knowledge workers' critical thinking by addressing their intellectual blind spots (my wording, not theirs).
- An overreliance upon AI — even for minor tasks — can erode users' critical thinking skills.
- The study also cautioned against off-loading entire tasks to AI, especially when users know little about the subject area.
- Younger users seem especially susceptible to AI's lure of easy answers, accepting them in lieu of actually learning processes and formulating defensible arguments.
In their battle to dominate the future of artificial intelligence, will builders incorporate some of these ideas?
I don't have an answer for that question. Maybe I won't ask AI.


