VZ editorial frame
Read this piece through one operating lens: AI does not automate first; it amplifies first. If the underlying decision architecture is clear, AI scales clarity. If it is noisy, AI scales noise and cost.
VZ Lens
From the VZ perspective, this topic matters only when translated into execution architecture. As DeMarco and Lister wrote in 1987, projects fail not because of technology but because of a lack of honest communication. In 2025, the industry is repeating the same mistake with AI. The business impact starts when honest communication becomes a weekly operating discipline.
TL;DR
Most projects fail not for technical reasons but because honest conversations never happen. Flow is not a luxury but a prerequisite for productivity: the cost of restarting after an interruption is measurable, and it accumulates. The prompt culture of the AI era repeats the same pattern: there is a gap between virtuosity in prompting and actual expertise, and technology cannot bridge it. True discipline, whether in Zen or in programming, is built up over decades, and it is what lies behind the apparent magic.
The Long Night of Bits
Most software projects fail not for technical reasons, but for human ones: the avoidance of honest conversation, the regular disruption of the flow state, and the apparent substitution of expertise with tools. Tom DeMarco and Timothy Lister’s 1987 book Peopleware documented this first—and the prompt culture of the AI era repeats the same pattern in a more subtle form.
Thirty years ago, during one week in December, I spent four nights at the office. I didn’t go home. Coffee, Coke, cigarettes, code. The financial closing deadline was approaching, the system was unstable, and I was the one who would “fix it.”
I remember the glow of the monitor at four in the morning. The empty room. The clacking of the keyboard. The flow—the joy when a module finally came together. The feeling that I was a hero.
Then I read Peopleware. And I realized: I wasn’t a hero. I was a victim of system failure.
The “long night of bits” is not a heroic feat. It’s a symptom. A sign that something has gone seriously wrong in the organization, in the processes, in human communication. It’s like when, in Hannu Rajaniemi’s The Quantum Thief, the characters wake up in an empty, geometric forest, their memories incomplete, and no matter how they reach for their exomemory, they find only a blank wall. Overtime is the same kind of blank wall: an illusion that we are working while the system that would enable real work has collapsed.
The Fundamental Paradox of Peopleware
Tom DeMarco and Timothy Lister described the tech industry’s most brutal truth in 1987: most of our projects fail not because of technological problems, but because of human factors. Not because the architecture is bad. Not because the framework is slow. It’s because we avoid honest conversations.
“People over technology” isn’t a motivational poster. It’s the fundamental principle of a production system.
Flow is a production requirement, not a luxury. If you interrupt a developer, you don’t lose 5 minutes; you lose 15–20 minutes while they rebuild the mental state they were in. The cost of context switching is measurable. And it adds up. With 6–8 such interruptions per day, a developer spends 2–3 hours a day reloading. Not writing code. Rebooting. Think of a pilot: air traffic control can’t divert their attention in the middle of a landing with a non-urgent question and expect them to simply resume the approach. The mental demands of software development require a similarly focused state.
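To see how quickly this compounds, here is a minimal back-of-the-envelope sketch in Python. It uses only the ranges quoted above; the 15–20 minute recovery time and the 6–8 daily interruptions are illustrative figures from the text, not measurements:

```python
# Back-of-the-envelope cost of context switching, using the ranges above.
# All numbers are illustrative assumptions, not measured values.

RECOVERY_MINUTES = (15, 20)      # minutes to rebuild flow after one interruption
INTERRUPTIONS_PER_DAY = (6, 8)   # interruptions in a typical day

def daily_reload_hours(recovery_min: int, interruptions: int) -> float:
    """Hours per day spent reloading mental context instead of producing."""
    return recovery_min * interruptions / 60

low = daily_reload_hours(RECOVERY_MINUTES[0], INTERRUPTIONS_PER_DAY[0])
high = daily_reload_hours(RECOVERY_MINUTES[1], INTERRUPTIONS_PER_DAY[1])
print(f"Daily reload cost: {low:.1f} to {high:.1f} hours")  # 1.5 to 2.7 hours
```

Even at the low end, roughly a fifth of an eight-hour day disappears into reloading; at the high end, a third.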
Overtime is usually not a sign of dedication, but an escape. When someone stays late at night, it’s often not because they love their work so much. It’s because they can’t work during the day: meetings, interruptions, chaos. The night is a refuge. This isn’t heroism. It’s a failure of the system. It’s like when, in Arthur C. Clarke’s Childhood’s End, Rupert places his glass on an invisible tray and presses an invisible button to get his beer on time: the technology is magical, but behind it lie the organization, order, and discipline that we don’t see. Overtime is a visible symptom of the absence of an invisible system.
Avoiding honest conversation is more costly than any poor architectural decision. When we don’t say that the deadline is unrealistic, that the specification is contradictory, that the team is burned out—we lose weeks, months. We aren’t accumulating technical debt. We’re accumulating human debt. This communication debt accrues interest in the most brutal way, because it isn’t hidden in a piece of code, but in the team’s dynamics, in the lack of trust, in the culture of fear.
Why Doesn’t AI Replace Expertise?
It’s 2025 now. AI has arrived. Agents have arrived. Prompt engineering has arrived. And it’s the same old story: there’s a new tool that’s going to solve everything.
“You don’t need to understand the system, just write a good prompt.” “You don’t need to know SQL; GPT will write it.” “You don’t need to know the domain; Claude will figure it out.”
This is exactly the same illusion that Peopleware debunked thirty years ago: we believe that technology can replace human communication, deep expertise, and honest conversation.
It doesn’t.
AI is an amplifier. If you understand the domain, if you’re familiar with the structures, if you know what you’re looking for, then AI is a brutal accelerator. But if you don’t understand what you’re doing, then AI just produces more sophisticated bullshit that’s harder to spot. A scene from The Quantum Thief illustrates this perfectly: “Don’t let them quantum-type you,” the driver warns. Quantum-typing here refers to a kind of transformation, a rewriting of reality. AI can similarly rewrite the reality of our shortcomings into an image that is seemingly flawless but fundamentally false.
What happens when AI smooths over the gaps?
The danger of AI isn’t that it gives wrong answers. The danger is that it smooths over the gaps.
You write a prompt. It’s vague. You can’t ask a precise question because you don’t understand the domain well enough. But the AI responds. Structured. Detailed. Convincing. And you accept it—because you have no framework to verify it. It’s like asking for directions in a foreign city, and even though the description sounds good, you don’t know that locals never go there because the neighborhood is dangerous. The surface is smooth, the content is empty.
You ask for an SQL query. You don’t understand the semantics of JOIN. The AI writes a LEFT JOIN. You run it. It works. But you don’t realize it isn’t filtering out the records it’s supposed to; the problem simply doesn’t show up in the test data. It goes into production. Two months later, the trouble starts. The cost of the missing expertise is hidden now, but it grows exponentially. Another scene from Childhood’s End is instructive here as well: when Karellen, Earth’s overseer, takes off his dark glasses to adjust to different lighting conditions, the lesson is that true vision requires adaptation, not mere appearance. AI is like those glasses: it shields us from the harsh light, but it does not address the fundamental requirements of vision.
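To make the JOIN trap concrete, here is a minimal, self-contained sketch. The records/consent tables, the names, and the scenario are hypothetical, invented only for illustration; the point is the semantics: a LEFT JOIN keeps every row from the left table even when the right table has no match, so on test data where every owner has a matching row it behaves exactly like the correct INNER JOIN:

```python
import sqlite3

# Hypothetical scenario: an export should only include records whose
# owner appears in the consent table.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE records (id INTEGER, owner TEXT);
    CREATE TABLE consent (owner TEXT);
    INSERT INTO records VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO consent VALUES ('alice'), ('bob');  -- test data: everyone consented
""")

# LEFT JOIN keeps every record even without a matching consent row, so with
# this test data it returns the same rows as an INNER JOIN and the bug is
# invisible until production data contains a non-consenting owner.
leaky = "SELECT r.id, r.owner FROM records r LEFT JOIN consent c ON r.owner = c.owner"
print(con.execute(leaky).fetchall())   # [(1, 'alice'), (2, 'bob')] -- looks correct

con.execute("INSERT INTO records VALUES (3, 'mallory')")  # production: no consent row
print(con.execute(leaky).fetchall())   # mallory leaks through the LEFT JOIN

strict = "SELECT r.id, r.owner FROM records r INNER JOIN consent c ON r.owner = c.owner"
print(con.execute(strict).fetchall())  # mallory correctly filtered out
```

This is the “works on the test data” failure mode described above: the missing expertise is the difference between LEFT and INNER JOIN semantics, and no amount of prompt polish will flag it for you.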
And now we believe that if we write enough prompts, this gap will disappear. It won’t disappear. It will deepen. The virtuosity of prompting is just another layer masking the gaps, another invisible tray on which we place the glass of our problems.
The Ingrained Discipline
I learned to program on the ABC80. Then the HT1080Z. Then the Spectrum. Then the C64. Then the Jackson method. Then taxonomy. Then NLP. Then statistics. Then machine learning.
Why does this matter? Because every layer instills discipline. Not technique. Discipline.
ABC80 taught me that memory is finite, and you are responsible for it. The Jackson method taught me that structure is not an option, but a requirement. Taxonomy taught me that categories aren’t arbitrary—there’s a logical order behind them. Statistics taught me that randomness isn’t chaotic—there are patterns, you just have to find them.
These aren’t facts. They’re ingrained reflexes. When I write a prompt, I’m not just writing words; I’m building a question structure that I have practiced for decades. Behind the prompt lies an awareness of memory management, a sense of structure, the ability to categorize, and pattern recognition. This is the “quantum” knowledge that cannot be prompted, because AI doesn’t build it in; it merely mimics its manifestations.
And when someone says that “prompt engineering is a new profession”—what I see is this: they believe the surface is enough. That if you phrase things well enough, the AI will bring out from the depths what you don’t know. It doesn’t bring it out. It hallucinates it. The real profession is acquiring the discipline that lies behind it.
Zen as Discipline
Zen is not mysticism. It is the practice of discipline. When you sit in zazen and a thought arises, you don’t suppress it. You return to your breath. Again. Again. A thousand times. This is not relaxation. This is discipline. The point is the return, not perfect emptiness. Returning to flow after an interruption is exactly the same thing: the disciplined practice of realigning yourself.
It’s the same with code. When you’re designing a complex system and the idea of a “quick fix” comes up, you return to the structure. What is the goal? What is the compromise? Again. Again. This return to the structure is the core of discipline. Peopleware described it too: the best developers aren’t the fastest. They’re the most disciplined. Those who return to the structure when it would be easier to skip it. Those who ask questions when it would be easier to make assumptions. Those who, like the characters in The Quantum Thief standing before a cathedral-like building made of glass and light, know that behind the seemingly chaotic, organic arches there is structure and intent; it just needs to be understood.
Key Takeaways
- The primary cause of project failure is human, not technological. The cost of avoiding honest, difficult conversations (about deadlines, burnout, and conflicts) outweighs any technical debt. This is the fundamental truth of Peopleware, which has not changed even in the age of AI.
- The flow state is a prerequisite for productivity, and interruptions to it cause measurable economic damage. Every interruption represents not only wasted time but also the cost of a 15–20-minute mental reset. The accumulation of these can mean several hours of lost capacity each day.
- AI is an amplifier, not a substitute. It can amplify existing expertise and discipline, but it cannot make up for missing fundamentals. There is a gap between the art of prompting and deep domain knowledge that technology alone cannot bridge.
- The greatest risk of AI is glossing over gaps. When we mask knowledge gaps or structural deficiencies with convincing but superficial answers, we build a layer that appears smooth on the surface but is rotten from the bottom up—a structure that will eventually collapse at a much higher cost.
- True expertise is ingrained discipline that develops over decades. From early computers to modern methodologies, every layer instills a kind of mental discipline and reflex. This “quantum” knowledge cannot be replicated with a prompt; trying to do so only generates hallucinations.
- The solution lies in cultivating discipline, not in the magic of tools. Whether it’s the practice of zazen or returning to structure during coding, the key is deliberate practice and the ability to make difficult choices. The teams described in Peopleware demonstrated exactly this discipline.
Frequently Asked Questions
If AI doesn’t replace expertise, how does it actually help?
AI is an enhancer, not a replacement. If you understand the domain, if you’re familiar with the structures and know what you’re looking for, AI is a brutal accelerator: you search faster, filter faster, synthesize faster. It’s like having an excellent assistant who knows your methods and your vocabulary. But without real expertise behind it, AI just produces more sophisticated bullshit that’s harder to spot. Peopleware teaches the same thing: the tool doesn’t replace human communication and deep understanding; it merely opens a new channel for circulating existing knowledge (or masking a lack of knowledge).
Why does interruption cause such a significant loss of productivity?
The flow state cannot be switched on and off like a light switch. When you interrupt a developer or knowledge engineer, they don’t lose 5 minutes, but 15–20 minutes, until they return to the mental state they were in. This isn’t laziness; it’s cognitive neurobiology. Working memory has to reload the context, the logical chain, and the planned next steps. If there are 6–8 such interruptions per day, that’s 2–3 hours of wasted capacity daily. This is not a subjective feeling, but a measurable, documented phenomenon that DeMarco and Lister described as early as 1987, and which has since been confirmed by numerous studies.
How can you tell if AI in an organization is smoothing over deficiencies rather than strengthening capabilities?
The surest sign: the artistry of prompt engineering receives more attention than the verification of responses. If a team takes pride in its prompts but lacks a framework for validating outputs (e.g., independent testing, peer review, domain expert approval), that is a classic example of covering up shortcomings. Another sign: when the act of asking the question becomes more important than the decision that should be made based on the answer. If everyone is preoccupied with the perfect prompt but no one takes responsibility for the consequences of the generated content, the organization isn’t strengthening its capabilities—it’s hiding its lack of expertise behind a technological veil.
Can this “internalized discipline” be learned quickly?
No, just as Zen meditation or virtuoso-level playing of a musical instrument cannot be learned quickly. Discipline is the product of practice and time. However, it can be practiced consciously. You can start by spending a minute before writing your next prompt to sketch out the logical structure of the problem on paper. Or by having a colleague review every AI-generated code snippet—not for syntax, but for intent and boundaries. The key is to intentionally slow down and seek feedback, resisting the immediate magic of the prompt.
Related Thoughts
Zoltán Varga (LinkedIn) • Knowledge Systems Architect | Enterprise RAG Architect • PKM & AI Ecosystems | Neural Awareness • Consciousness & Leadership
“The code compiles. The people don’t.”
Strategic Synthesis
- Map the key risk assumptions before scaling further.
- Monitor one outcome metric and one quality metric in parallel.
- Run a short feedback cycle: measure, refine, and re-prioritize based on evidence.
Next step
If you want your brand to be represented in AI systems with contextual quality and citation strength, start with a practical baseline and a prioritized sequence.