VZ Editorial Frame
Read this piece through one operating lens: AI does not automate first; it amplifies first. If the underlying decision architecture is clear, AI scales clarity. If it is noisy, AI scales noise and cost.
VZ Lens
In VZ framing, the point is not novelty but decision quality under uncertainty. Three cognitive illusions prevent AI users from realizing that they are thinking less clearly than before. Macnamara’s research identifies all three. The real leverage is in explicit sequencing, ownership, and measurable iteration.
TL;DR
AI creates three illusions: the illusion of depth (you think you understand something because the AI explained it), the illusion of breadth (you think you’ve reviewed everything because the AI provided many results), and the illusion of objectivity (you think you’re thinking objectively because the AI has no feelings). All three are supported by research. Together, these illusions create a comfortable but dangerous cognitive blind spot, where superficial productivity undermines the foundations of true understanding.
Why Does the Mirror Lie in the Morning?
I stand in front of the mirror in the morning. The mirror reflects what it sees. But what it sees is not who I am. Just the surface. The mirror doesn’t lie—but it doesn’t tell the truth either. It reflects, without context.
AI does the same thing with your thinking. It is a reflective surface that projects back your own data and questions in a processed, seemingly coherent form. The problem isn’t that it lies. The problem is that it reflects too perfectly, without revealing the structure behind the mirror, the light source, or the distortion of your perspective. It becomes a tool that reinforces what you already think, rather than one that challenges you.
The First Illusion: Depth (or Why Do You Think You Understand?)
According to Macnamara’s research, AI users systematically overestimate their understanding. When an AI explains something, the reader feels like they understand. But the feeling of “understanding” is not the same as actual understanding.
The illusion of depth: confusing easy access with understanding.
If you get an answer to a complex question in ten seconds, your brain concludes: “That was easy, so I understand.” But what you received was an answer. Not a thought process. It was the difficulty of thinking that led to understanding—and AI has taken away precisely that difficulty. It’s like receiving a pre-assembled, complex LEGO set. You see the end result, but since you didn’t build it step by step, you don’t know its internal structure or the critical connections, and you can’t reproduce or modify it.
The Cognitive Dead End: Confusing “Easy” with “Trivial”
Our brain’s basic mode of operation—the use of heuristics—traps us here. The feeling of cognitive ease is often linked to truth and understanding. As the corpus quote indicates: “We may not know exactly what makes things cognitively easy or difficult. This is how the illusion of familiarity arises.” [UNVERIFIED] The AI’s response is smooth, well-structured, and immediate. This fluidity paralyzes our critical reflexes. We don’t ask, “How do I know this is true?” Because it feels true. Instead of the metacognitive check—“Do I really understand this?”—a sense of immediate satisfaction takes hold.
This is not a new phenomenon. When someone explains something to us, we, too, adopt the appearance of understanding. AI, however, industrializes this process and accelerates it exponentially. Another excerpt from the corpus perfectly illustrates this mental automatism with a visual analogy: “What is happening here is a true illusion, not a misunderstanding of the question. We knew the question referred to the size of the figures in the picture.” [UNVERIFIED] We know exactly what we should be paying attention to (the 2D dimensions), but the heuristic of 3D interpretation (the sense of depth) automatically and convincingly takes over. The AI’s explanation is also a kind of “3D heuristic” in thinking—it offers a quick, convenient mental model that obscures the true structure of the underlying, flat data.
The Second Illusion: Breadth (or Why Do You Think You’ve Seen It All?)
AI provides many results. Five summaries, three perspectives, ten references. The feeling: “I’ve looked at everything.”
But what AI provides is not coverage of the entire field. It is AI-filtered coverage of the field. You get what the training data contains. What it doesn’t contain—that’s a blind spot. Training datasets are massive, but they are finite and historical. They cannot contain tomorrow’s groundbreaking article, the resistance movement forming on the periphery, or the culturally specific, non-English perspective. AI doesn’t know what it doesn’t know. It only generates statistically probable answers from the existing corpus.
The illusion of breadth: confusing quantity with completeness.
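A toy sketch can make this filtering concrete. The corpus, the keyword scoring, and the `search` function below are hypothetical stand-ins, not any real retrieval system or model; the only point they illustrate is that a query can never surface a document that was never indexed, no matter how many results it returns.

```python
# A toy "retrieval" over a fixed corpus: the system can rank and return
# only what was indexed. Everything outside the corpus is a blind spot,
# regardless of how many results a query yields.

CORPUS = [
    "2019 survey: mainstream view of the topic",
    "2021 meta-analysis: mainstream view, refined",
    "2022 English-language commentary on the mainstream view",
]

def search(query: str, corpus: list[str], top_k: int = 10) -> list[str]:
    """Naive keyword-overlap scoring -- a stand-in for any ranking model."""
    q_terms = set(query.lower().split())
    scored = [(len(q_terms & set(doc.lower().split())), doc) for doc in corpus]
    ranked = [doc for score, doc in sorted(scored, reverse=True) if score > 0]
    return ranked[:top_k]

results = search("mainstream view of the topic", CORPUS)
print(f"{len(results)} results")  # feels like coverage...
for doc in results:
    print(" -", doc)
# ...but tomorrow's paper and the non-English critique were never indexed,
# so no query, however broad, can ever return them.
```

The multiple results feel like breadth, yet they are all drawn from the same bounded set. The catalog, not the library.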
The Digital Canaan, Where Every Promised Land Is in Sight
An analogy for the phenomenon: a massive, automatically searchable library whose catalog contains only works from certain publishers. You, as a researcher, nod contentedly at the 50 results, all of which reinforce the main narrative. You don’t know that in the rear wing of the library building, in a locked cabinet, there are maps depicting an entirely different continent. The AI is the catalog. Not the library.
This bias is particularly dangerous in the context of extremist or conspiracy theories as well. A user might prompt: “Give me a summary of why the Earth is flat.” The AI, creating an illusion of comprehensiveness, can summarize a bunch of historical and modern claims on this topic, complete with source citations. The user believes they have “reviewed the material.” In reality, they have only reviewed the corpus of answers to the question, not the complete, refuting body of knowledge in astronomy, physics, and geodesy. The AI is the filter, not the light.
The Third Illusion: Objectivity (or Why Do You Think You’re Impartial?)
AI has no feelings. It doesn’t get angry, it doesn’t love, it doesn’t fear. That’s why the human brain—which has been monitoring source bias for millennia—automatically considers an AI’s response more reliable than a human opinion. We treat human opinions with suspicion: “What does this person want? What’s their agenda?” Such questions never arise with AI. It is emotionless, and therefore objective. This is a fatal fallacy.
But AI isn’t objective. It’s selective. It carries the biases of its training data—not in the form of human emotion, but in the form of statistical patterns. The bias isn’t intentional; it’s mathematical. If the training data predominantly concerns a certain demographic group, language, culture, or historical narrative, the AI’s responses will be statistically biased in that direction. This is not malice, but a pattern. Yet the effect is the same: bias poured through a seemingly neutral channel.
The illusion of objectivity: confusing emotional detachment with impartiality.
The Myth of “TruthGPT” and the Illusion of Validity
The corpus quote highlights a modern pipe dream: “These days, some may hope that AI will deliver something like this, as Elon Musk announced in 2023: ‘I’m going to start working on something I’m calling TruthGPT or maximum truth-seeking AI…’” [UNVERIFIED] This desire for an external, infallible decision-making mechanism is rooted in a deep human psychological tendency. However, AI is not a source of external truth. It is merely a highly complex reflection of internal data. The coherence it offers is often mistaken for validity.
“Because of WYSIATI (What You See Is All There Is), it only takes into account the evidence currently available to it. Since coherence lends confidence, our subjective trust in our opinions reflects the coherence of the story created by System 1 and System 2. The quantity and quality of evidence don’t matter much, because even weak evidence can be used to construct an excellent story.” [UNVERIFIED] This “illusion of validity” perfectly describes what happens when we receive an AI response. The response is coherent, well-structured, and confident. Our brain (Kahneman’s System 1) interprets this coherence as validity and the confidence as accuracy. The AI provides a story, not the full picture of the uncertainty of the evidence behind the story.
The Triple Spiral: How Do Illusions Reinforce Each Other?
The three illusions do not operate independently but synergistically, creating a self-reinforcing, closed cognitive system.
- Beginning: You have a complex question (e.g., the causes of a geopolitical conflict). The AI immediately provides a seemingly deep, well-structured explanation (Depth Illusion). You are no longer curious.
- Expansion: Then you ask, “Give me more perspectives.” The AI lists 8 different historical and economic factors (Breadth Illusion). You think you’ve got the full picture.
- Consolidation: The tone of the response is neutral, fact-based, and emotionless (Illusion of Objectivity). You interpret this as a sign of impartiality. Now you feel that you understand deeply, are broadly informed, and have reached an objective conclusion.
At this point, your thinking is practically closed off. Why would you ask further critical questions? The AI has answered everything. This spiral carries significant risks in decision-making, knowledge acquisition, and even in forming personal opinions. The blind trust we previously placed in “experts” (which at least left room for source criticism) is now being transferred to a seemingly all-knowing, neutral black box.
What Is the Practical Antidote? Building Conscious Metacognition
The solution isn’t to abandon AI. It’s to consciously monitor your own thinking while using AI. Metacognition—thinking about thinking—is the only antidote. This requires specific exercises that build our critical reflexes during AI interactions.
Practical steps for developing metacognition:
- The “Before” Rule: Before asking AI a question, describe the topic in your own words, based on your current understanding. This establishes a baseline and later allows you to compare: Has my understanding changed, or did I just receive more information?
- Deliberately slowing down your search for sources: Don’t click on the links to sources suggested by the AI. First, ask yourself: What kind of source would I expect to fully explore this topic? This activates your immune system against the illusion of breadth. Only then should you compare your expectations with the AI’s suggestions.
- The “confrontational” prompt: To break through the illusion of objectivity, ask for direct counterarguments. Example: “Summarize argument X. Now formulate the strongest possible counterargument from the perspective of Y, as if you were its advocate.” This forces the AI to demonstrate a shift in perspective even within its own limitations (a minimal sketch of this pattern follows the list).
- Decoding the explanation: When you receive an AI explanation, try to explain it in your own words to an imaginary student—without quoting the AI’s text. This exercise (a variation of the “Feynman technique”) immediately exposes the illusion of depth. If you can’t do it, you haven’t really understood it.
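For those who want to make the “Before” rule and the confrontational prompt a habit rather than a one-off, here is a minimal sketch of both as reusable templates. The `ask_model` function is a hypothetical placeholder, not any real client library; only the fixed two-pass structure is the point.

```python
# A minimal sketch of two metacognition exercises as prompt templates.
# `ask_model` is a hypothetical stand-in for whatever chat API you use.

COUNTER_TEMPLATE = (
    "Summarize the following argument: {argument}\n\n"
    "Now formulate the strongest possible counterargument "
    "from the perspective of {opponent}, as if you were its advocate."
)

BEFORE_TEMPLATE = (
    "Here is my current understanding, written before consulting you: "
    "{baseline}\n\nPoint out what this framing misses or gets wrong."
)

def ask_model(prompt: str) -> str:
    """Hypothetical placeholder -- wire up your own LLM client here."""
    raise NotImplementedError("replace with a call to your chat API")

def confront(argument: str, opponent: str) -> str:
    """Second pass: force the model to argue against the first answer."""
    return ask_model(COUNTER_TEMPLATE.format(argument=argument, opponent=opponent))

def check_baseline(baseline: str) -> str:
    """The 'Before' rule: state your own view first, then compare."""
    return ask_model(BEFORE_TEMPLATE.format(baseline=baseline))
```

The design choice is the fixed order: your own words first, the model’s counterargument second, so the AI’s coherence is compared against a baseline instead of silently replacing it.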
As the corpus suggests regarding the Müller-Lyer illusion: “There is only one thing we can do to resist the illusion: we must learn to doubt our impressions of the lengths of the lines when there are arrows at the ends of the lines. By applying this rule, we will be able to recognize the pattern of the illusion.” [UNVERIFIED] Exactly the same applies to AI. We must learn to question the impression of easy explanations, numerous results, and a neutral tone. We must recognize the pattern of the illusion in order to resist it.
Key Takeaways
- Three AI illusions: depth (you think you understand it), breadth (you think you’ve seen it all), and objectivity (you feel it’s impartial). Together, these create a closed cognitive system.
- The trap of depth: The feeling of cognitive ease (AI’s smooth explanation) is linked in the brain to understanding and truth. Without difficulty, friction, and building the idea yourself, true understanding does not form.
- The limit of breadth: The volume offered by AI is the statistical coverage of a predetermined, historical dataset, not the entirety of the world. Blind spots are inevitable.
- The myth of objectivity: Emotional detachment is not the same as impartiality. AI carries statistical biases, and we easily confuse coherent narratives (“the illusion of validity”) with objective truth.
- The Only Antidote: Conscious metacognition—actively observing and guiding our own thought processes while using AI. This requires practical steps, deliberate slowing down, and self-reflection.
Frequently Asked Questions
What three illusions does AI create? AI creates three interrelated cognitive illusions: 1) The illusion of depth, when you mistake the analysis derived from easy access for true understanding. 2) The illusion of breadth, when you equate the large volume of AI-filtered information with complete coverage of the topic. 3) The illusion of objectivity, when you interpret an emotionless presentation as a sign of the opinion’s impartiality and validity.
How can you tell if you’re under an illusion? Two quick tests: 1) The Reproduction Test: If, without the AI, you couldn’t explain what you’ve “learned” in your own words in a way a child could understand, then you’re trapped in the illusion of depth. 2) The Counterargument Test: If you are unable to raise a solid, source-backed counterargument to a view presented by the AI, then you are under the combined influence of the illusions of breadth and objectivity. The best way to verify this is through conscious, preliminary reflection (“What do I think about this?”) and the deliberate search for counterpoints.
Isn’t promoting objectivity precisely the AI’s job? AI is a tool, not a judge. Its task is to efficiently organize and shape information based on trained patterns. “Objectivity” is a human, ethical concept that encompasses context, values, and objectives. AI can strive for statistical impartiality, but that is already a programmed ethical choice. True objectivity never arises from a single source, but from the critical discussion of diverse, even conflicting sources, through human judgment.
Related Thoughts
- AI Brain Fry: This Is Not Burnout – When constant interaction with AI exhausts your capacity for deep thinking.
- The Jevons Paradox: Why We Work More with AI – How increased efficiency can lead to heightened expectations and workload.
- Capacity-Hostile: It’s Not That You’re Lazy – How modern tools overload our cognitive capacity, and how AI contributes to this.
Zoltán Varga - LinkedIn
Neural • Knowledge Systems Architect | Enterprise RAG architect
PKM • AI Ecosystems | Neural Awareness • Consciousness & Leadership
Reality is just well-trained data.
Strategic Synthesis
- Translate the thesis into one operating rule your team can apply immediately.
- Monitor one outcome metric and one quality metric in parallel.
- Run a short feedback cycle: measure, refine, and re-prioritize based on evidence.
Next step
If you want your brand to be represented with context quality and citation strength in AI systems, start with a practical baseline and a priority sequence.