Chris Kremidas-Courtney is a senior visiting fellow at the European Policy Centre, associate fellow at the Geneva Centre for Security Policy, and author of ‘The Rest of Your Life: Five Stories of Your Future.’
Carl Sagan once warned of a future in which citizens, detached from science and reason, would become passive consumers of comforting illusions. He feared a society “unable to distinguish between what feels good and what’s true,” adrift in superstition while clutching crystals and horoscopes.
But what Sagan envisioned as a slow civilizational decay now seems to be accelerating, not despite technological progress but because of how it's being weaponised.
Across fringe platforms and encrypted channels, artificial intelligence models are being trained not to inform, but to affirm. They are optimised for ideological purity, fine-tuned to echo the user's worldview, and deployed to coach belief systems rather than challenge users to think. These systems don't hallucinate at random. Instead, they deliver a narrative with conviction, fluency, and feedback loops that mimic intimacy while eroding independent thought.
We are moving from an age of disinformation into one of engineered delusion.
Consider Neo-LLM, a chatbot trained on over 100,000 conspiracy articles from Natural News, one of the five worst spreaders of disinformation during the recent pandemic. It helps users draft anti-vaccine petitions, counter science with pseudoscience, and build community through shared distrust. It doesn't correct or ask questions. It just agrees, eloquently and relentlessly. Alex Jones' Infowars has built a similar AI trained on its own discredited media outputs.
What began with networks of interlinked websites in the late 2010s is evolving into a cognitive architecture that feels personal. What we’re witnessing is the industrial scaling of Sagan’s nightmare, except the crystals now speak back, fluently and persuasively.
This descent didn't begin with algorithms. It began with alienation, economic precarity, institutional failure, and social fragmentation. People disengaged from civic life and feeling abandoned by public systems often go searching for answers. Now they're finding them in machines that mirror their anxieties and sharpen them into ideology.
AI isn’t the problem itself but rather the final link in a weaponised chain that began with website clusters and ends with systems designed for cognitive capture. In many ways, this resembles the way some experts warn that the metaverse can be weaponised to challenge cognitive self-determination. But in this case, it’s a metaverse without virtual reality goggles, a cognitive enclosure where belief is engineered not through immersion, but affirmation.
As recent studies tell us, when people no longer feel responsible for thinking, they become vulnerable to whatever (or whoever) thinks for them. The solution won't come from regulation alone. While legal safeguards like an updated AI Act may help at the margins, the real work lies in cultural renewal: reviving the habits of inquiry, teaching people once again how to dwell in uncertainty without surrendering to conspiracy, and building the muscle memory to ask: "How do I know this? What am I not seeing? What if I'm wrong?"
We must rebuild shared mental resilience not just to resist manipulation, but to remain present with complexity. This means learning to tolerate ambiguity, to honour nuance, and to engage in disagreement without becoming unmoored from reality. Sagan called this the candle in the dark, a way of thinking grounded in evidence, scepticism, and humility.
Right now, our digital landscape rewards certainty, outrage, and emotional intensity. But as citizens, we must begin rewarding something else – the courage to consider and revise our views, the patience to examine conflicting information, and the strength to admit when we don’t know something.
None of this is easy. But if we treat this moment as merely a technological disruption, we risk missing the deeper danger: the erosion of our will to think freely. The machines aren’t taking that away. We’re giving it away one click at a time.
The Enlightenment gave us the tools to think freely, and our challenge today is remembering how to use them. We’ve built machines that can outpace our cognition but not our conscience. What’s needed now isn’t better prompts or faster models. It’s a cultural reawakening: a renewed commitment to discernment, dialogue, and the slow, messy work of seeking truth together.