As reliance on AI deepens, experts warn of a slow erosion of critical thinking skills. Writing for the Wall Street Journal, reporters Paul Rust and Nina Vasan detail four strategies to preserve cognitive strength while using generative AI tools.
"Like muscles, our cognitive skills weaken when unused," the authors write. Offloading entire tasks to AI may reduce problem-solving and critical reasoning abilities.
A recent study of nearly 1,000 students highlighted this pattern. During practice sessions, "students using ChatGPT to solve math problems initially outperformed their peers by 48%," the authors write. But when tested without AI, "their scores dropped 17% below their unassisted counterparts." What looked like an advantage quickly became a liability.
Other fields show similar risks. For example, research has found that relying too much on autopilot can dull pilots' manual flying skills. To mitigate this issue, the Federal Aviation Administration encourages pilots to fly some segments of their route manually to maintain their proficiency.
Psychologists call the struggle and satisfaction of solving difficult problems "mastery experiences," and say they are "the single strongest predictor of self-efficacy." When ChatGPT clears those hurdles for us, that capacity fades and dependence deepens, the authors write.
They suggest outlining your approach before opening generative AI tools. "Think through the problem and jot down ideas; bullet points suffice," the authors write. "Then ask the model to elaborate or polish."
The ease of AI answers can undermine memory. "Have you noticed," the authors write, "that after asking Perplexity or ChatGPT for answers, you can articulate concepts fluently while viewing the screen, but an hour later, clarity evaporates and you fumble explaining them to colleagues?"
Evidence supports that observation. In a preliminary MIT study, the researchers found that participants who used ChatGPT to write essays had greater difficulties recalling their work. This finding echoes the "Google effect," where easy access to information reduces memory retention.
One way to counter this is to shift from passively receiving answers to actively learning. Instead of asking for finished solutions, "Turn the bot into a Socratic tutor," the authors write. "Instead of 'Give me the answer,' try 'Guide me through the problem so I can solve this on my own.'"
Research backs the approach. In one chemistry study, "a modified version of ChatGPT that withheld direct solutions and supplied only incremental hints fostered greater engagement and learning outcomes than the default model," the authors write.
AI overreliance doesn't just weaken skills — it can also distort judgment. Earlier this year, two separate studies from the Swiss Business School and Microsoft found that heavy reliance on AI was correlated with worse critical thinking skills.
In one behavioral experiment, "volunteers forfeited a larger cash prize simply because an AI warned them not to trust their human partner, even after evidence showed cooperation would pay," the authors write.
AI's blind spots can also reinforce stereotypes. In one study, "participants were shown AI-generated images of 'financial managers'; 85% of the AI-images depicted white men," the authors write. Afterward, participants were more likely to associate the role with that identity, even though in reality fewer than 45% of financial managers are men or white.
Borrowing from medicine and aviation, Rust and Vasan recommend "cognitive forcing tools" such as diagnostic timeouts and mental checklists. When reviewing AI output, they suggest pausing and asking, "Can this be verified? What perspectives might be missing? Could this be biased?"
Research shows such metacognitive strategies pay off. "Workers with stronger metacognitive skills become more creative when using ChatGPT," and students trained to ask reflective questions "demonstrated higher levels of critical thinking," the authors write.
Finally, the most direct safeguard may be abstinence, at least temporarily. "The most direct path to preserving your intellectual faculties is to declare certain periods 'AI-free' zones," the authors write. "This can be one hour, one day, even entire projects."
This has broader implications for schools and companies. "Companies investing heavily in AI productivity tools may inadvertently be undermining their workforce's long-term capabilities," the authors write. For educators, bans risk leaving students "digitally illiterate in an AI-saturated world," while unrestricted access may erode the very abilities education aims to build.
The authors argue for balance. "None of this argues of course for shelving generative AI. Few of us would trade Wikipedia for a paper encyclopedia or swap Excel for an abacus. The challenge is cultivating a mindful relationship with the technology; one that captures its leverage without letting it hollow out our faculties."
Generative AI "can be a partner, muse, and accelerator," the authors write. "But without deliberate boundaries, this omnipresent assistant won't just help us write, it will become the author while we, the humans, merely click 'send.'"
(Rust/Vasan, Wall Street Journal, 9/3)