You directed traffic, not thought
You have a decision to make. A career question, a conversation you need to plan, something you can’t quite frame. So you open an AI assistant and type it out. The response comes back coherent, well-structured, probably better organized than what you’d have written in the same time.
You read it. You nod. You close the tab.
But you never actually thought about the problem. You managed a process. You reviewed a draft. The thinking happened somewhere. Just not in you.
That gap matters. Not because AI is bad (it’s often extraordinary) but because the habit of outsourcing thinking isn’t neutral. Over time, it does something measurable to the mind that does less of it.
What the GPS did to your hippocampus
In 2000, researchers at University College London published a study on London taxi drivers. Drivers who had spent years memorizing 25,000 streets had measurably larger hippocampi than the general population. The brain had physically grown to meet the demand.
GPS arrived. And the reverse turned out to be true. When people stopped navigating from memory, that capacity started to diminish. The brain doesn’t maintain what it no longer needs.
Psychologists call this cognitive offloading: delegating mental work to external systems. Notebooks, calculators, calendars. All do it. Used carefully, offloading frees up attention for harder problems. Used as a default, it quietly erodes the capacity it was supposed to free.
The parallel with AI-assisted reasoning isn’t speculative. It’s already being documented.
What the research shows
A 2025 MIT study looked at output from people who used AI heavily and people who didn’t. The heavy users converged. Same frameworks, same arguments, same conclusions. Coherent, yes. But not particularly original.
A Harvard study found something more specific: students who relied on AI for analysis became less able to find weaknesses in their own reasoning over time. Not because the AI was wrong. Because the students had fewer opportunities to practice the habit of finding the cracks themselves.
The problem isn’t that AI reasons badly. It’s that when AI does the reasoning, your reasoning gets less exercise. Same mechanism as the taxi driver who switched to GPS and stopped building the map. The capacity doesn’t vanish. It just quietly stops being maintained.
This connects to something much older. Your phone already exploits the fast, automatic mind. AI takes it further: instead of hijacking your attention, it replaces the thinking entirely.
The dependency trap
Here’s the part most discussions miss.
The real risk isn’t that AI gives you wrong answers. It’s that, over time, you lose the ability to tell when it does.
Evaluating an argument takes many of the same cognitive skills as constructing one. Stop building, and you stop recognizing what good building looks like. The GPS user doesn’t just forget the route. They lose the ability to notice when they’re headed in the wrong direction.
Most writing about AI and productivity focuses on what it helps you produce. What it’s quietly preventing you from developing is a harder question, and a more interesting one. As we’ve written before, the problem was never the tool. It was always what the tool was replacing.
“The risk isn’t that AI gets things wrong. It’s that you gradually lose the ability to notice when it does.”
The skills AI cannot build in you
Some things AI can simulate. It can’t develop them in you.
Cross-domain pattern recognition, for instance: noticing when something from evolutionary biology explains a business problem, or when a mathematical paradox maps onto a social dynamic. That doesn’t come from using AI efficiently. It comes from regular, wandering exposure to ideas across genuinely different fields.
Tolerance for ambiguity is another. The ability to sit with an unresolved question and not immediately reach for a resolution. There is research on how uncomfortable this is and why people reflexively avoid it. AI is built to resolve. Holding the absence of a clean answer is a muscle that AI, almost by design, doesn’t train.
Then there’s divergent thinking: ideas that are genuinely surprising rather than statistically likely. AI produces something close to the weighted average of everything it’s seen. Original thinking, by definition, is an outlier.
These capacities show up differently depending on how you naturally think. Different thinker types exercise them in different ways, but all of them require raw material that only range can provide.
Range is the antidote
A 2025 MIT Sloan study found that people who regularly encountered ideas from unrelated fields (philosophy, biology, linguistics, mathematics) outperformed subject-matter specialists with equivalent IQ on novel-connection tasks. Not because they knew more. Because they’d seen more kinds of things.
This isn’t new. It’s an old observation that AI has made newly urgent.
When you rely on AI to synthesize and explain, the synthesis happens in the machine. The connections between fields, the unexpected resonances, the moment when something from Tuesday suddenly clarifies Friday. Those require a mind that has held both thoughts. They can’t be retrieved on demand. The raw material has to have been there first.
What you carry matters as much as what you read. Scanned ideas don’t build these connections. The research on retention points to the same thing: passive consumption leaves almost nothing behind.
Put enough ideas in from enough different places, and they start connecting on their own. The range is the point.
The practice
This isn’t an argument against AI. The tools are genuinely powerful, and refusing to use them is its own kind of stubbornness.
The point is narrower. AI works best as an amplifier. It extends what you already know how to think. It can’t substitute for building a mind worth amplifying.
Keeping your thinking sharp isn’t complicated. It means regularly encountering ideas you didn’t ask for, from fields you don’t specialize in, and sitting with them long enough for something to happen. Not scanning them. Carrying them.
This is what a thinking practice actually looks like in daily life. Not elaborate. Just consistent.
One idea, from somewhere unexpected. Every day. Quietly. That’s not a small thing.
Common questions
Does using AI make you a worse thinker?
Research suggests heavy reliance on AI for reasoning tasks can reduce independent critical thinking over time. The mechanism is cognitive offloading: when you delegate thinking to an external system, the mental capacity for that type of thinking receives less practice. This doesn’t mean AI makes everyone a worse thinker, but habitual reliance without deliberate counterbalance is associated with measurable effects on originality and self-assessment.
How does AI affect critical thinking skills?
AI primarily affects critical thinking through two mechanisms. It reduces how often you construct an argument from scratch, which is one of the main ways the skill develops. And it tends to produce convergent, statistically likely answers, which can narrow the range of ideas you engage with through your own reasoning. Forming your own view before consulting AI is one of the most effective counterbalancing habits.
How do you stay mentally sharp while using AI tools?
Form your own view before consulting AI. Expose yourself regularly to ideas from fields you don’t specialize in, since cross-domain pattern recognition is a capacity AI cannot develop in you. Carry ideas through your day rather than scanning them. One thought, from somewhere unexpected, every day is a more meaningful practice than it sounds.
THINKING STYLES
What kind of thinker are you?
12 questions. No right answers. One surprisingly accurate result. Find out whether you're a Lens Shifter, a Still Observer, a Paradox Mind, or one of nine other patterns.
Take the free quiz

Supratim Dam
Marketer turned iOS developer. Built One Good Thing alone in two months from Madrid, using Claude Code and an obsessive amount of research. Previously founded and sold a creative media agency.