AI, a new bulwark against stupidity and misinformation: optimism or utopia?
As artificial intelligence progresses and chatbots become ever more prevalent, what if it became the ultimate tool for clarifying and structuring the debate of ideas?
We know that misinformation circulates constantly.
Sorting out truth from falsehood requires time, energy, and often an incredible amount of patience. Debunking nonsense always takes longer than stating it. This is Brandolini’s law, for those who are unfamiliar with it.
A phrase thrown out casually can take hours, even years, to deconstruct, with no guarantee that the effort will succeed. Sometimes a falsehood even takes root, giving rise to whole systems of belief that profoundly change society and common practice.
What if AI changed the game?
Beyond the harmful effects we might fear, what if it became an instant clarification tool?
What if it could analyze, contextualize, and refute erroneous ideas within seconds before they even spread?
Of course, such technology would raise a fundamental question: what feeds it, and who decides what is true or false, real or not?
Unless one day it becomes capable of this task in a way that is widely accepted.
Scientific facts are one thing…
- but not everything can be explained or demonstrated by science,
- but ideologies take hold within a particular context and moment in time,
- but interpretations, beliefs, and personal convictions
... are another.
An idea may seem absurd to some and obvious to others. Even our basic definitions, like "common sense," are open to debate.
Our relationship with beliefs shapes our world, our reference points, our identity.
And yet... let's imagine.
"The Earth is flat" → Immediate refutation, with scientific studies to back it up.
"Vaccines contain 5G chips" → Rigorous analysis of the scientific data.
"My son's soccer coach is a reptilian" → Immediate biological verification.
And what about more subtle, nuanced, and controversial ideas—those that require real debate rather than a simple factual correction?
A world where AI played this role of "facilitator of discernment" would push everyone to refine their reasoning, structure their arguments, and be more open to questioning their own beliefs.
An expected consequence? Perhaps people would be more mindful of what they say—not out of fear of being corrected, but out of an emerging concern for honesty and intellectual rigor.
Well, not only that: competition, too, is ingrained in human nature.
The best hunter-gatherer, the best football player (team sports existed even in pre-Columbian America), not to mention the Olympic Games, but let's move on.
AI would not make humanity more intelligent, but it could create an environment that fosters higher standards, rigor, nuance in discourse and thought, and a greater pursuit of excellence.
And not the kind of excellence I recently read about, the kind that pushes you simply to repeat the same thing over and over until you master it. Because to anyone truly demanding, that is laughable.
The impact would be colossal, both personally and collectively.
- The ego would be challenged.
- Lazy thinking would become a social handicap.
- Open-mindedness would become a necessity rather than an option.
But all of this hinges on one essential condition: that AI itself is well-trained, well-calibrated, and does not turn into an automatic factory of approximations.
Because if poorly fed, it could just as easily tell us that penguins eat Tyrell’s chips or that Napoleon invented pizza.
So, are we on the verge of a world where it would be impossible to hold a sloppy discussion without being flagged by built-in fact-checking alerts? Even before clicking "publish"?
Maybe one day, a simple message exchange or a comment under a post would trigger a warning:
"Warning: This statement is incorrect in this form. Would you like to understand why and access a neutral, sourced analysis?"
Maybe solution-oriented coaches will still try to explain that understanding is useless?
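To make the thought experiment concrete, here is a minimal, purely illustrative sketch of what such a pre-publish check might look like, written in Python. Every name in it (check_before_publish, ClaimVerdict, the verify callable) is hypothetical; it simply assumes that some claim-verification backend exists and returns verdicts with a sourced analysis:

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class ClaimVerdict:
    claim: str         # the statement extracted from the draft
    accurate: bool     # whether the (hypothetical) model judges it well-founded
    analysis_url: str  # link to a neutral, sourced analysis

def check_before_publish(
    draft: str,
    verify: Callable[[str], Iterable[ClaimVerdict]],
) -> list[str]:
    """Run a draft through a hypothetical claim-verification backend and
    return the warnings to display before the user clicks 'publish'."""
    warnings = []
    for verdict in verify(draft):  # verify() is assumed, not a real API
        if not verdict.accurate:
            warnings.append(
                f"Warning: this statement is incorrect in this form: '{verdict.claim}'. "
                f"Would you like to understand why? Sourced analysis: {verdict.analysis_url}"
            )
    return warnings
```

The hard part, of course, is everything hidden behind verify: who trains it, on what data, and who audits it, which is precisely the question raised above.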
We could also get excited about the idea that, for those who enjoy adding humor to life, a little pop-up would appear—fully customizable, of course:
"Hey, you might be onto something, but you should probably reread that—there’s a problem!"
Yes, the message shouldn't be entirely negative; that wouldn't be good for morale. But we can't simply ignore the negative either, or no corrections would ever be made, so we are forced to phrase it this way, aren't we?
And who knows… maybe, in time, we’ll manage to reduce misinformation by a factor of 60.