Will We Get Dumber?
On the Temptation to Abdicate Our Thinking to AI
The first affirmation of the Humanist Manifesto III reflects a life stance guided by critical reasoning. It offers an epistemic compass—how humanists “know” about anything and frame an understanding of our inner and outer worlds.
“Knowledge of the world is derived by observation, experimentation, and rational analysis. Humanists find that science is the best method for determining this knowledge as well as for solving problems and developing beneficial technologies. We also recognize the value of new departures in thought, the arts, and inner experience—each subject to analysis by critical intelligence.”
(Humanist Manifesto III, 2003)
This statement predates the world we now inhabit—a world in which algorithms mediate our social interactions, shape our news, and increasingly think for us. In 2003, the Manifesto could not have foreseen the speed at which we would weave artificial intelligence into daily life: from the prosaic (proofing documents, choosing playlists) to the profound (offering life advice, diagnosing illness, or proposing moral choices).
So the question arises: What is it to be human if we outsource our critical thinking to machines?
The Temptation of Convenience
Our relationship with AI is evolving swiftly. I’m not sure whether I’m underreacting or overreacting to the prospect of AI doing our thinking for us. It’s already clear that AI has enormous potential to enhance the human condition, from taking over mundane tasks to collaborating in scientific breakthroughs.
Specifically regarding critical thinking, however, is it necessarily bad to accept an AI-generated answer instead of one we create ourselves or conceive through collaboration with others? If the result is valuable, isn’t that what matters?
Perhaps not. The danger isn’t that AI will seize control of our thinking; it’s that we’ll gladly surrender it.
This isn’t a Luddite lament. I value these tools. The problem isn’t (just) the technology; it’s our disposition toward it. We’re tempted by ease, by convenience, by the frictionless promise of instant answers. Yet human flourishing isn’t found in arriving efficiently at conclusions. It lives in the process: that slow, uncertain, sometimes frustrating act of grappling with questions, contradictions, and doubt.
Thinking critically isn’t just how we shape knowledge; it’s how we shape ourselves. To abdicate that process is to forfeit a crucial part of becoming our full selves.
The Difference Between Product and Process
We live in a results-oriented culture that prizes outcomes: grades, metrics, deliverables. Technology mirrors this bias, streamlining everything, and us, toward the “answer.” But a life aimed only at the product risks becoming instrumental, a means to an end.
The harm is not simply in using AI-generated outputs; it’s in becoming passive recipients of meaning rather than active participants in making it. The process of thinking—the wrestling, revising, and reflecting—is how growth occurs. Good answers are not the same as good lives.
Cognitive Offloading and the Self
We already offload memory to our devices, be it phone numbers, directions, or birthdays. Now we’re offloading judgment. We risk becoming intellectually sedentary. And here’s the deeper loss: thinking is not just cognition. It’s identity.
As Descartes famously declared, “Cogito, ergo sum”—I think, therefore I am.
To think is to be. To assert agency, to craft our worldview, to inhabit fully our individuality. When we transfer that task to AI, we don’t just become lazier thinkers; we become fainter selves.
Staying Human
AI can be a remarkable resource, but it must remain just that: a tool, not a surrogate. The goal is not to shun its capacities, but to retain our own: doubt, reflection, analysis, imagination.
“Negative Capability, that is when man is capable of being in uncertainties, Mysteries, doubts, without any irritable reaching after fact & reason.”
~ John Keats, Letter to George and Tom Keats, December 1817
In an age overflowing with answers, we might do well to remember the value of not knowing: the uncertainty Keats memorably described as a necessary negative capability.
To live humanly is to live with ambiguity, curiosity, and wonder; to question and to care. So I maintain my humanist commitment to reason, to inquiry, to the messy and magnificent work of being human. I don’t want to find myself knowing all the answers and yet not knowing who I am.
Sidebar: In considering what happens when humans blindly yield critical thinking to machines, I couldn’t help but be reminded of the Little Britain comedy sketches featuring the character Carol, who inputs data into her computer and unquestioningly goes along with whatever it says, regardless of the obvious, common-sense response and the need for a little humanity.