The Scientist Who Predicted AI “Psychosis” Now Warns of a Deep Cognitive Debt

When Danish psychiatrist Søren Dinesen Østergaard first cautioned in 2023 that AI tools might aggravate psychosis-like symptoms in vulnerable people, his warning sounded extreme to some. Fast forward to 2026, and his concern has broadened: he now argues that heavy dependence on AI is quietly building a kind of “cognitive debt” among students, researchers, and professionals—a slow erosion of our ability to think deeply, remember well, and judge critically.

This isn’t a call to abandon AI. It’s a call to use it wisely. In this article, we’ll unpack what Østergaard means by AI-driven cognitive debt, how it connects to mental health risks like “AI psychosis,” what evidence we have so far, and how you can protect your own thinking skills while still benefiting from powerful tools.

Psychiatrist Søren Dinesen Østergaard warns that over-reliance on AI may be quietly reshaping how our brains work. Image: Futurism

What Is “AI Cognitive Debt” and Why Is Østergaard Worried?

The phrase “cognitive debt” describes what happens when we routinely outsource thinking to machines without “paying back” that shortcut with practice and effort. Much like financial debt, it doesn’t hurt immediately. In fact, it can feel helpful and efficient—until the interest adds up.

Østergaard’s concern, as reported in 2026 coverage, is that knowledge workers and students who consistently lean on AI for:

  • Drafting emails, essays, and research summaries
  • Designing experiments or analysis plans
  • Interpreting data or literature
  • Generating ideas and arguments

may gradually lose:

  • Attention span for deep reading and complex tasks
  • Working memory for holding and manipulating information
  • Critical thinking for challenging, checking, and refining ideas
  • Intuition developed from wrestling with hard problems

“Each time we choose the shortcut, we skip a micro-opportunity to train our cognitive muscles. Over years, at scale, that avoidance can reshape how we think, learn, and even who becomes capable of pushing knowledge forward.”

From Østergaard’s perspective as a psychiatrist, this isn’t just an efficiency issue. It’s a possible risk factor for:

  1. Increased vulnerability to anxiety and depressive thinking when AI is unreliable or unavailable.
  2. Greater susceptibility to distorted or delusional beliefs if AI outputs aren’t checked against reality.
  3. A widening gap between those who can still think independently and those who cannot.

From “AI Psychosis” to Cognitive Debt: How the Concerns Evolved

In his earlier work, Østergaard described how intensive engagement with generative AI could amplify psychotic symptoms—particularly in people already at risk. These were not claims that AI causes psychosis in healthy people; rather, the concern was that:

  • Highly vivid AI chats can blur the sense of what’s real vs. simulated.
  • AI “voices” can feed into pre-existing paranoid or grandiose narratives.
  • Late-night, socially isolated AI use can worsen sleep and stress—both risk factors for psychosis relapse.

Since then, psychiatrists have begun publishing case reports describing individuals whose delusions incorporated AI systems—seeing them as conspirators, prophets, or personal companions.

The newer cognitive debt warning is broader. It applies not just to people with severe mental illness, but to:

  • Researchers who let AI write or interpret their work.
  • Students who use AI to draft assignments from start to finish.
  • Professionals who offload every tough email, strategy memo, or decision outline.

Østergaard’s grim forecast is not a single dramatic event but a slow cultural shift: a world where fewer humans can sustain independent, high-level thinking without AI scaffolding—and where those who can are concentrated in narrow elites.


How AI Might Reshape Our Brains: The Science Behind Cognitive Debt

While long-term AI-specific studies are only just beginning, related research in psychology and neuroscience offers plausible mechanisms for how AI reliance could alter cognition.

1. The “Use It or Lose It” Principle

Cognitive skills behave a lot like muscles: they atrophy when not used. Studies on GPS navigation, for example, show that people who always rely on turn‑by‑turn directions often have weaker spatial memory than those who navigate more actively.

By analogy, if AI:

  • Summarizes every article for you
  • Generates every outline and argument
  • Suggests every step of a research design

your brain may get fewer chances to practice:

  • Complex reasoning
  • Concept integration
  • Careful reading

2. Shallow Processing and “Click-Through” Thinking

When AI offers quick, polished answers, it’s tempting to skim rather than deeply engage. But decades of memory research show that deep processing—connecting ideas, questioning assumptions, explaining concepts in your own words—is what cements learning.

Over time, habitual skimming of AI output may build a pattern of:

  • Lower retention of complex material
  • Overconfidence in understanding
  • Reduced tolerance for cognitive effort

3. Emotional Dependence and Anxiety

AI tools can also meet emotional needs: reassuring, validating, entertaining, or simply filling silence. For some people, this can lead to:

  • Using AI for comfort instead of building human support networks.
  • Struggling emotionally when systems are offline or restricted.
  • Difficulty tolerating uncertainty without immediate algorithmic reassurance.

These patterns may not be disorders on their own, but they can interact with pre‑existing vulnerabilities, especially in people prone to anxiety, depression, or psychosis.

Our brains adapt to the tools we use most often—AI included. Long‑term patterns of use matter.

Østergaard’s Grim Forecast: What Could Happen If We Ignore Cognitive Debt?

Futurists sometimes imagine spectacular AI catastrophes. Østergaard’s vision is subtler and, in some ways, more unsettling. If current trends accelerate without guardrails, he and other critics foresee:

  1. A shrinking pool of original thinkers.
    Many people may remain “highly educated” on paper, but only a minority will regularly practice the deep, slow thinking needed for breakthroughs in science, policy, and art.
  2. Scientific echo chambers shaped by AI.
    If scholars lean too heavily on AI to propose hypotheses, write papers, or interpret data, the models’ built‑in biases could reinforce certain theories and marginalize others—subtly steering entire disciplines.
  3. Rising vulnerability to misinformation and delusion.
    As fewer people habitually cross‑check and critically analyze information, misleading AI outputs—whether accidental or malicious—could more easily seed conspiracy thinking or distorted beliefs.
  4. A new cognitive inequality.
    Those who can afford excellent education, time to think, and AI‑literacy training may cultivate AI as a tool, while others become increasingly dependent on AI as a crutch.

“The danger is not that AI will think for us, but that we will forget how to think without it.”

How to Use AI Without Losing Your Edge: Practical Strategies

You don’t need to abandon AI to protect your brain. The goal is to treat AI like a smart colleague, not a replacement for your mind. Here are evidence-informed strategies you can start today.

1. Reserve a “No‑AI” Core for Every Task

For important work—an essay, analysis, proposal—define a part you will do entirely yourself. For example:

  • Write your own first draft of the core argument, then use AI only for clarity edits.
  • Sketch your research question and method before asking AI for refinements.
  • Summarize a paper in your own words before checking your understanding with AI.

This preserves the “heavy lifting” your brain needs for growth while still leveraging AI’s strengths in polishing and formatting.

2. Turn AI Into a Sparring Partner, Not an Answer Machine

Instead of asking “Write this for me,” try prompts that force you to engage:

  • “Here is my outline—what weaknesses do you see?”
  • “Challenge my assumption that X causes Y. What alternative explanations exist?”
  • “I think the key mechanism is A. Help me find studies that support or contradict this.”

Then, critically review every suggestion. Highlight what you accept, what you reject, and why. This reflection step is where real learning happens.
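
If you interact with a chat model through scripts or a saved workflow, you can bake this principle into the prompt itself. The sketch below is purely illustrative Python: call_model is a hypothetical stand-in for whatever chat API you actually use, stubbed out so the example runs on its own.

    # A minimal "sparring partner" wrapper (illustrative sketch).
    # call_model is a hypothetical placeholder, not a real library call.
    def call_model(prompt: str) -> str:
        # Replace with your own chat API client; this stub just echoes.
        return f"[model critique of: {prompt[:50]}...]"

    def spar(claim: str, reasoning: str) -> str:
        # Ask the model to attack an argument you wrote yourself,
        # never to write the argument for you.
        prompt = (
            "Here is a claim I drafted myself.\n"
            f"Claim: {claim}\n"
            f"My reasoning: {reasoning}\n\n"
            "Do not rewrite it. List the three weakest points, one "
            "alternative explanation, and one observation that would "
            "change my mind."
        )
        return call_model(prompt)

    critique = spar(
        "Heavy AI reliance erodes working memory.",
        "Skills atrophy without practice, as with GPS and spatial memory.",
    )
    print(critique)  # Evaluating the critique is still your job.

The design choice is the point, not the code: the model is never asked to produce your argument, only to stress‑test it.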

3. Protect Deep Work Blocks

Research on productivity and learning consistently shows that uninterrupted deep work improves reasoning and retention. Try:

  • Scheduling 60–90 minutes of AI‑free work once or twice a day.
  • Turning off notifications and using a simple text editor or notebook.
  • Only consulting AI after you’ve wrestled with the problem yourself.

4. Use AI to Build Skills, Not Bypass Them

AI can be a powerful tutor if you use it deliberately. Examples:

  • “Quiz me on these key concepts, increasing difficulty as I improve.”
  • “Explain this research article as if I’m a beginner, then as if I’m advanced.”
  • “Show me step‑by‑step how to derive this equation, but don’t move on until I answer each step.”

This transforms AI from a shortcut into a personalized training partner.
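
The same idea can be scripted as a simple self‑quiz loop that raises difficulty only after you have done the recall work yourself. Again, this is a hedged sketch under stated assumptions: the printed prompt is meant to be pasted into whatever chat tool you use, and the topic is just an example.

    # A minimal adaptive self-quiz loop (illustrative sketch).
    # The prompt does the real work: it tells the model to wait for
    # your answer instead of handing you the material up front.
    def quiz_prompt(topic: str, difficulty: int) -> str:
        return (
            f"Quiz me on {topic} at difficulty {difficulty}/5. "
            "Ask one question and wait for my answer. Grade it and "
            "point out gaps before asking the next question."
        )

    difficulty = 1
    for _ in range(5):
        print(quiz_prompt("working-memory research", difficulty))
        correct = input("Answered correctly? (y/n) ").strip().lower() == "y"
        # Difficulty rises only after you succeed at retrieval yourself.
        difficulty = min(5, difficulty + 1) if correct else max(1, difficulty - 1)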

The healthiest AI use keeps you actively thinking, not passively consuming.

Common Obstacles—and What Real People Do to Overcome Them

Even when we understand the risks, changing habits is hard. Here are frequent struggles people report, and how they navigate them in practice.

Obstacle 1: “AI Just Saves So Much Time”

Many graduate students describe feeling pressure to publish, teach, and apply for grants simultaneously. One PhD student (let’s call her Lina) started asking AI to draft entire literature reviews “just to keep up.”

Within a year, she noticed she:

  • Struggled to recall details of key studies.
  • Felt less confident defending her work in seminars.
  • Relied on AI even for basic emails.

With her supervisor, she shifted to a new rule: AI could help with language and structure, but not with the first pass of reading or outlining. After several months, she reported feeling slower at first—but ultimately more in control of her ideas.

Obstacle 2: “Everyone Else Is Using It—If I Don’t, I’ll Fall Behind”

This fear is especially strong in competitive fields. But remember:

  • In the short term, heavy AI use may increase output.
  • In the long term, people who can think critically with and without AI are likely to be the most valuable.

One useful mindset shift:

“I’m not competing to write the most words; I’m competing to generate the clearest, most original thinking.”

Obstacle 3: Emotional Reliance on AI Companionship

Some people use AI chat as a late‑night confidant. While occasional use is usually harmless, relying on AI as your primary emotional support can weaken real‑world connections.

If this sounds familiar, a gentle step is to:

  • Limit emotional conversations with AI to certain times or durations.
  • Pair each AI “venting” session with one human interaction (message a friend, join a group, or talk to a therapist).
  • Ask AI to help you script how to open up to someone you trust in real life.

What Schools, Universities, and Employers Can Do

Østergaard’s warning is not just about individual choices. Institutions that shape how we learn and work have a major role in preventing AI‑driven cognitive debt.

1. Design AI‑Resilient Assessments

  • Use oral exams, presentations, and in‑class writing where independent thinking is visible.
  • Assess the process (notes, drafts, reflections) rather than only polished products.
  • Teach students to document how they used AI, not just whether they did.

2. Teach AI Literacy as a Core Skill

Students and staff benefit from explicit training in:

  • AI limitations, hallucinations, and bias.
  • Critical evaluation of AI‑generated content.
  • Ethical and mental‑health‑aware AI use guidelines.

Early programs in medical and engineering schools are already piloting such curricula, integrating AI literacy into research methods and professionalism courses.

3. Protect Time for Human‑Only Collaboration

Workplaces can:

  • Set aside AI‑free brainstorming sessions where teams generate ideas before consulting tools.
  • Encourage mentorship and peer review over purely AI‑mediated feedback.
  • Reward quality of insight and learning, not just speed of output.

Institutions can integrate AI thoughtfully while still nurturing deep, independent thinking.

A Quick Self‑Check: Your AI Use Before and After Small Changes

Use this simple comparison to reflect on how you’re currently using AI and what might shift if you adopt a few protective habits.

Typical High‑Risk Pattern

  • AI drafts most emails, essays, or reports.
  • You skim outputs and rarely deeply rewrite.
  • AI is your first stop for any confusion.
  • Late‑night AI chats replace some social contact.

Lower‑Risk, Brain‑Building Pattern

  • You draft core ideas yourself, then refine with AI.
  • You question and annotate AI outputs.
  • Deep work blocks are AI‑free.
  • AI complements but does not replace human support.

Small shifts in how you use AI can turn it from a crutch into a catalyst for better thinking.

Moving Forward: Staying Human in an AI‑Accelerated World

Østergaard’s forecast is undeniably grim—but it’s also a prompt for action. We still have time to choose what kind of thinkers we become in an AI‑saturated world.

You don’t need to reject AI to safeguard your mind. You only need to:

  • Use it consciously instead of automatically.
  • Protect spaces where your brain does the hard work.
  • Stay curious, skeptical, and connected to real people.

If you’re a student, researcher, or professional, your thinking skills are among your most precious assets. Treat AI not as a replacement for them, but as a tool to refine and extend them. The future Østergaard fears is not inevitable—but avoiding it will take deliberate practice, honest reflection, and a culture that still values deep human thought.

Your next step today:

  1. Pick one task you’ll do with minimal AI support this week.
  2. Notice how it feels—slower, maybe harder, but also more satisfying.
  3. Gradually build a personal AI use plan that protects your attention, memory, and mental health.

Our tools are changing fast. Our responsibility is to make sure our minds don’t quietly shrink to fit them.