VZ editorial frame
Read this piece through one operating lens: AI does not automate first, it amplifies first. If the underlying decision architecture is clear, AI scales clarity. If it is noisy, AI scales noise and cost.
VZ Lens
From the VZ perspective, this topic matters only when translated into execution architecture. The Peter Principle, Parkinson’s Law, the Gestalt field: 60 years of organizational ailments that AI doesn’t cure but accelerates. Ten rules for survival. The real leverage is in explicit sequencing, ownership, and measurable iteration.
TL;DR
TL;DR: AI doesn’t put a tool in your hands; it reorganizes the organizational landscape—and shatters the illusion that you were the system. The old gatekeeper roles didn’t signify value; they filled a gap in the system, which is now becoming a source of friction. The key to survival isn’t protecting the figure, but consciously designing the field.
Snow in Prague, six in the morning
I’m sitting on the stairs, the cold stone seeping through my jeans. A street stretches out before my eyes, quiet and white. The snow muffles the sounds, but not completely—something is still rumbling deep within the city. The smell of chimney smoke and fresh pastries hangs in the air, mingled together. On the wall of the house across the street, the remains of a spotlight hang, burned out, its bracket rusted. It once highlighted something, illuminated it. Now it’s just a dark spot in the snowy light. I watch as a piece of ice melts from it, slowly, drop by drop. I wrap myself in my warmth and think about how much that spotlight must have seen—and how different it looks now, when the object itself has become the source of light instead.
The Reflector That Turns Back
The figure-ground logic of Gestalt psychology shows why AI doesn’t simply “take away jobs,” but rearranges the organizational field: those who were gatekeepers become friction; those who design the field survive. These ten rules are the Gestalt map of organizational survival—from the Peter Principle to Parkinson’s Law, from reward hacking to unfinished Gestalts.
There is a moment in every organization’s life when the set comes to life. When what was previously the background—the system, the process, the infrastructure—suddenly steps onto the stage and begins to speak. It doesn’t ask for the floor. It doesn’t ask for permission. It simply starts making decisions, and in the process, it doesn’t apologize. The audience is surprised, because until now they thought they were the protagonist.
In 2026, artificial intelligence does exactly this. It is no longer a tool, but a casting director. It assigns who will be the character and who will be the background—and in the process, it doesn’t care that you’re still searching for your old leading role. You could call this “automation,” but anyone who sees it as a tool is watching the new casting of reality while still holding onto the old script.
The essence of the story is simpler and more uncomfortable than we’d like. The system that was once a backdrop is now becoming the protagonist. And you have two options: cling to your old role and slowly turn into friction—or start designing the field and set the spotlight yourself.
And now for the survival part. It won’t be a walk in the park.
1. Recognize when you were the figure—because if you don’t name it, the system will, and that’s more painful
One of the most fundamental concepts in Gestalt psychology is figure and ground. The figure is what you’re currently in contact with—the thing, person, or decision point that stands out from the background and captures your attention. That’s why you focus your energy on it, and why it seems “important.” The ground is the entire context that makes this possible: rules, access points, tools, unspoken norms, rewards, fears, and that subtle organizational gravity that determines where the figure emerges. The figure, then, is not necessarily a person—but a focal point of attention that sometimes happens to be a person.
Think of it as a theatrical performance. The spotlight isn’t shining on you because you’re the best actor; it’s shining on you because someone directed it there. In old organizations, that direction was often unconscious—the system’s gaps determined who ended up in a “power-granting” position.
Status typically came from four sources:
| Source | Meaning | Example |
|---|---|---|
| You put it together | Information was concentrated with you | “Only I know how this board works” |
| You were the gatekeeper | Everything had to go through you | “Without me, the request won’t reach the decision-maker” |
| You were the interpreter | The translation depended on you | “Only I understand what the boss wants” |
| You were the bottleneck | You set the pace | “If I don’t do it, no one will” |
These are all contact boundary roles. The contact boundary is the point where a question becomes a decision, a complaint becomes a task, an exception becomes a rule—and where someone decides what matters. Whoever held this power was considered a “valuable person,” a notion many confused with actual value.
In the old world, roles of the “I have the spreadsheet” and “it won’t work without me” variety were figures produced by the field because the system was incomplete. The gap created a contact boundary, and the guardian of that boundary became a figure. Artificial intelligence attacks precisely this: it lowers the point of contact and makes the question-answer path more direct. From this point on, the gatekeeper is not a hero, but friction—and the field begins to eliminate that friction.
Laurence J. Peter—author of the Peter Principle (The Peter Principle, 1969)—described precisely this dynamic in the fundamental law of hierarchiology: “Supercompetence is often more unacceptable than incompetence… because it disrupts the hierarchy and thereby violates the first commandment of hierarchical life: the hierarchy must be preserved.” In 2026, the reverse is also true: the gatekeeper competence that once protected you is now becoming disruptive—because the system no longer requires the kind of slowing down that you called “expertise.”
If you’ve been living off the fact that “it wouldn’t work without you,” then you’re not losing a job now, but a source of status. This is an identity shift, not task optimization—and if you don’t say this out loud, your body will say it for you: through stress, irritability, and cynicism.
Retroflection (in Gestalt therapy, the phenomenon where you turn your energy against yourself instead of toward your environment) is the default defensive stance in such cases: if I can no longer be a figure, at least I’ll blame myself, because that’s still something. The organization likes this because it’s cheap. Burnout is cheap too—as long as it doesn’t show up in the next quarter.
The tragicomedy begins when many people confuse their own value with the prominence granted by the field. When the figure disappears, they don’t experience it as “the way things work has changed,” but as “I am ceasing to exist.” That’s why they flee back to the old background, because there they can become a figure again—only the company pays for the nostalgia.
2. Why does every AI implementation bring a new layer of errors?
Every new system or technology creates a new layer of errors. This is system dynamics; it has nothing to do with poetry. Some of the old errors disappear, but in exchange there is a new class of errors, a new interface, new misunderstandings, new “that’s not what we meant,” new “you can’t see it, but it’s there.” The volume of problems is transformed at most—it does not decrease.
C. Northcote Parkinson articulated as early as the 1950s what we must rediscover today: “Work expands so as to fill the time available for its completion.” (Parkinson’s Law, 1957.) In the age of machine intelligence, this is modified such that work expands to fill the available intelligence simulation. More decision points, faster cycles, greater output—and exponentially more opportunities for systematic misunderstanding.
With machine intelligence, this happens even faster, because it adds a new cognitive layer to the operation. It provides not just a process but a simulation of thought—and this gives rise to new types of errors. The system is capable of misinterpreting with great confidence. It can generate a perfectly plausible justification for a bad decision. It can optimize a “good” metric toward the wrong goal if the objective itself was poorly defined. In such cases, the system isn’t broken; it’s doing exactly what it was configured to do—while you’ve been telling yourself a different story about what’s happening.
This is what’s called reward hacking in engineering thinking: optimization isn’t fulfilling your goal, but its own metric—and the gap between the two becomes the organization’s new blind spot.
This is the point where “the system doesn’t do what it says it does” types of misconceptions come into play, and the organization begins to celebrate its own misunderstanding with the same enthusiasm it previously used to celebrate its hands-on heroes.
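For readers who prefer mechanism to metaphor, here is a minimal, deliberately toy sketch of reward hacking. The metric names and numbers are invented for illustration; the point is only that an optimizer which sees nothing but a proxy metric will cheerfully pick the option that degrades the real goal.

```python
# Toy illustration of reward hacking: the optimizer maximizes a proxy metric
# (visible output volume) while the true goal (decision quality) gets worse.
# All names and numbers are hypothetical.

def true_goal(option: dict) -> float:
    """What the organization actually wants: well-grounded decisions."""
    return option["decision_quality"]

def proxy_metric(option: dict) -> float:
    """What the system is asked to maximize: how much output it produces."""
    return option["report_count"]

options = [
    {"name": "few, well-grounded decisions", "decision_quality": 0.9, "report_count": 3},
    {"name": "many plausible-looking reports", "decision_quality": 0.4, "report_count": 25},
]

chosen = max(options, key=proxy_metric)        # the optimizer only ever sees the proxy
print("Chosen:", chosen["name"])               # -> many plausible-looking reports
print("True goal score:", true_goal(chosen))   # -> 0.4, worse than the ignored option
```

The gap between proxy_metric and true_goal is exactly the blind spot the organization stops measuring.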
3. Shift from figure to field—because in 2026, the greatest power lies in deciding what can become a figure
In the old world, the figure was the hero. In the new world, the hero quickly becomes noise if there is no field behind them. The field designer doesn’t stand in the spotlight; instead, they set up the spotlight. Less spectacular from the outside, much more stable from the inside—because the system works even when you don’t feel like being a hero.
There is a third trap between the figure-ground dilemma and field design: confluence (in Gestalt therapy, the phenomenon where the boundary between the individual and their environment blurs). When someone “uses” machine intelligence so well that they no longer know which thoughts are their own. The question and the answer merge, the boundaries between decision and suggestion blur, and suddenly there sits someone in a meeting who no longer represents their own judgment, but rather the end result of a well-formulated question-and-answer chain. This is not efficiency—it is a loss of identity in an elegant package.
Sound familiar? The same thing happens when a student “works” with ChatGPT: they don’t learn the material, but rather master the technique of asking questions. The question itself becomes the product, not the answer.
The question, therefore, is not “what should I do,” but rather in what operational space I allow machine intelligence to operate: where should human decision-making occur, where should automatic execution take place, where should feedback be provided, and where should verifiable accountability exist. Anyone who understands this is not a “user” but an operational architect—and this role is rarely eliminated, because there is nothing to replace it: the architect is the field itself.
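To make the idea of an operational space concrete, here is a minimal sketch with entirely hypothetical decision types: an explicit routing table that records where the machine may act alone, where a human decides, and what happens to cases nobody has classified yet. It does not describe any specific product; it is one possible shape of the architect’s field.

```python
# A hypothetical routing table for machine-assisted decisions.
# Decision types, modes, and the escalation rule are illustrative only.

OPERATING_SPACE = {
    "routine_report":    {"mode": "automatic", "human_decides": False, "audited": True},
    "pricing_exception": {"mode": "proposal",  "human_decides": True,  "audited": True},
    "contract_change":   {"mode": "blocked",   "human_decides": True,  "audited": True},
}

def route(decision_type: str) -> str:
    rule = OPERATING_SPACE.get(decision_type)
    if rule is None:
        # Unknown cases are escalated and turned into a rule, never silently automated.
        return "escalate to a named human owner, then add an explicit rule"
    if rule["human_decides"]:
        return "machine prepares context, a human decides, the decision is logged"
    return "machine executes automatically, the run is logged for review"

print(route("routine_report"))
print(route("pricing_exception"))
print(route("something_nobody_classified"))
```

The table itself is trivial; the leverage is that it exists, is visible, and can be argued with.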
4. Where do errors migrate when AI takes over the process?
The error does not disappear, but rather shifts—and it tends to move to places where sensors are weak, where accountability can be obscured, where the logic is “too complicated,” where the control panel is merely for show, and where the organization prefers to remain in the shadows.
Parkinson’s classic observation on bike-shedding (also known as Parkinson’s Law of Triviality) is given new life here. The classic example: a committee approves a 6,000 billion forint nuclear reactor in two and a half minutes, but spends forty-five minutes debating a 680,000 forint bike rack. Why? Because no one can speak knowledgeably about the reactor, but everyone can about the bike rack.
By 2026, decisions on nuclear reactors will be made by machine intelligence: they’re too complex to debate, too fast to stop—and the organization will vent its need for control on bike rack-level problems, while the real risks and costs pile up unseen.
The classic scene: at the executive meeting, everyone praises the new machine report. The CFO nods silently. That afternoon, he compiles his own spreadsheet by hand, because “that’s the sure thing.” The fault isn’t in the system—the fault is that no one dares to say that the dashboard isn’t measuring what we want to believe.
That’s why, after the introduction of machine intelligence, most “surprises” aren’t model errors but field errors. Bad boundaries, bad feedback, bad permissions, bad exception handling, bad measurements—and the classic: the team “upstairs” praises the system, while “downstairs” they work around it manually. The error has moved; it’s just that the reports look nicer now.
5. The myth of the safety net—and why the “safety layer” will show up at the worst possible moment
“One more check, just to be safe” is the favorite phrase of 2026, and it mostly serves the same function as the spreadsheet used to: it brings back the familiar gatekeeper figure, which is why it’s reassuring. The problem is that backup systems typically fail in a way that isn’t “backup-like.”
In practice it looks like this: you build in an extra approval loop to make things “safe”—then it turns out that this very loop becomes the bottleneck, this very loop becomes the blind spot, and this is exactly where the wish-fulfilling feedback appears: the kind of response that shows you what you want to believe, not what actually is.
This mechanism is deeper than it appears at first glance. In cognitive science, this is called confirmation bias, but the Gestalt interpretation is more accurate: the organism organizes the field so that it reflects the old figure. Control isn’t there to protect—it’s there to reassure. The two are not the same.
The machine doesn’t lie—but if you ask the wrong question, it will give you exactly the answer that lets you sleep soundly. In 2026, transparency means that the system does not protect you from your own questions.
6. Don’t protect manual labor—protect judgment, because pattern work already belongs to the machine, but responsibility is still human
Protecting manual knowledge is often an internalized rule (introjection—in Gestalt therapy, the phenomenon where someone uncritically internalizes an external belief and treats it as their own): the internalized belief that “work” is work precisely because it hurts. Machine intelligence does not respect this creed—and won’t even smile at it. At most, it quietly comes up with a better solution, all the while documenting its steps.
Peter used the concept of professional automatism to describe the state in which “administrative paperwork becomes more important than the goal for which it was originally intended.” In 2026, professional automatism takes on a new form: asking questions becomes more important than the decision that should have been made. The process becomes the product. The workflow becomes the result.
It’s like when a restaurant’s menu is more appealing than the dinner itself. The chef is enthusiastic about the concept, but the meat remains raw. It’s the same in an organization: prompt engineering (the technique of asking questions) is more exciting than the decision that should be made based on the answer.
What remains is judgment. The ability to tell when something is true, when it’s essential, when it’s risky, when it’s immoral, when it’s too expensive, when it’s too early. You can tell where the line is. The ability to “not use” is a true leadership quality here, not pickiness. If you can’t articulate this, then you’re actually outsourcing your own decisions—only in the meantime, you feel like you’re in control because you pressed the button.
7. In 2026, speed is often deflection—which is why slowness is valuable when you’re in genuine contact
In Gestalt, rushing is often deflection (a defense mechanism where someone avoids genuine contact by moving on quickly). People are fast because they don’t want to face the real decision point. In 2026, “quick decision” often means we’ve quickly avoided responsibility, then celebrated how nimble and awesome we are.
One of Parkinson’s axioms holds that “an official wants to multiply subordinates, not rivals.” In the age of machine intelligence, this shifts so that the leader wants to multiply outputs, not decisions. More reports, more dashboards, more “insights”—and meanwhile, fewer and fewer genuine stances.
This is the latest act in the theater of productivity. In the Excel era, the output was the spreadsheet. In the AI era, the output is the prompt chain. The leader proudly shows that they generated twenty reports by eight in the morning—but if you ask which one they based their decision on, there is silence. Output is not value. Decision is value.
Value is the slow, well-formulated question that gets to the heart of the matter. It is the rare perspective that doesn’t offer the obvious solution. It is the uncertainty that is framed—not fragmentation, but precision. Here you can compete with the machine, because the machine gives answers quickly, but you can tell which questions are worth answering and which are just noise.
Meanwhile, with machine intelligence, everything is faster, so errors also run through the system more quickly. Speed is not something to boast about, but a burden—and that burden brings forth new types of errors. At this point, the organization either panics and retreats to manual slowing down to make the old layer of errors “stable” again—or it matures and learns that slowness is a value, because here, slowness means you’re consciously holding the field, not that you’re dragging your feet.
8. What is the difference between an AI user and an operational architect?
The figure-human “does” workflows. The survivor “maintains” workflows. These are not the same thing. Maintaining does not mean you do everything yourself, but rather that the boundaries of contact are clear, decision points are visible, feedback works, and exceptions are not secret sanctuaries but managed phenomena.
Parkinson described the phenomenon of talent destruction (injelititis)—the organizational disease in which mediocrity reproduces itself: “If the head of an organization is second-rate, he will ensure that his immediate colleagues are all third-rate; and they, in turn, will ensure that their subordinates are fourth-rate. Soon a veritable competition in stupidity will develop.”
In the age of machine intelligence, this dynamic accelerates: the system generates its own blind spots more quickly and rewards those who do not disrupt the process more rapidly. The algorithmic version of talent destruction: AI doesn’t reward the best, but those most easily automated—and whoever doesn’t stick out from the pattern moves forward faster. This isn’t a conspiracy. It’s the natural gravity of the field.
And here comes the punchline. Returning to spreadsheets is often not about cost reduction, but restoring the gatekeeper. We have to bring back the gatekeeper because without them, someone doesn’t feel like they matter. This is humanly understandable, organizationally expensive, and strategically an own goal—because the trend in the field isn’t toward more gates, but fewer. In fact, the goal should be to have no gatekeeper at all.
9. In 2026, transparency is not an attack, but clarity—and clarity always takes away someone’s “magic”
Ambiguity used to confer status. In the shadows, the gatekeeper appears greater, and exception handling seems like a heroic act. In 2026, the shadows are more like suspicion. The logic of the field shifts: what was previously “complex expertise” is now “opaque risk.”
Peter said of this: “Competence—like truth, beauty, and contact lenses—is in the eye of the beholder.” In 2026, the beholder is increasingly often an algorithm, and the algorithm does not accept ambiguity. The algorithm does not respect authority—only patterns. If your “expertise” consisted of no one else understanding what you were doing, then the algorithm does not expose your knowledge. It exposes the fact that your knowledge was not shareable.
Unfinished business is the hidden engine of burnout in organizations. Decisions left open, “we’ll do it later,” undocumented exceptions, “we don’t know, but it works”—these all generate background noise that tires the attention. Machine intelligence turns this background noise into a figure, and this makes many people feel that “now everything is visible.” Yes—that’s the point. The operation is visible. The pose, less so.
10. “No” will be the new guiding principle—and one final thought that will either set you free or drag you back into the old pattern
With machine intelligence, “let’s do one more” is cheap. This will fill the world with solutions, and in the process, there will be far fewer good decisions. A good decision requires boundaries: where we don’t use machine intelligence, why not, what risks are involved, what legal reasons, what ethical reasons, what human reasons. This is careful planning, not resistance.
In Gestalt theory, the boundary is the quality of contact. A good boundary does not close off, but clarifies—and this results in less noise, less projection, less scapegoating, and fewer “system errors,” when in reality the lack of a boundary is the error.
John Gall — author of Systemantics (1975) — said: “Nurture your mistakes.” In 2026, this translates into practical language: you don’t feel ashamed of your mistakes, but rather systematically incorporate them into the field’s knowledge. You don’t erase them, but document them. You don’t cover them up, but plan for the next layer of errors—because whoever acknowledges in advance that new errors will come with the new system becomes the architect. Those who deny it will later play the hero amidst the errors they themselves have created—and they will call this “experience.”
The real positional advantage in 2026 lies in knowing where the errors will form, which error layers are coming, which errors are worth squashing quickly, which are worth studying, and where you need to say no.
The closing statement that cannot be skipped
Machine intelligence does not take your job away, but rearranges the field—and in doing so, it takes away the illusion that the system was you. The old persona was not a “value,” but a byproduct of a field configuration, and when the configuration changes, the byproduct changes too.
If you can handle this, then suddenly you don’t have to be a hero—it’s enough to build a good field. If you cling to the lukewarm security of the old figure, then you’re left with manual firefighting, heroic exhaustion, and that empty stare on Monday morning when the machine has long known what you’re still explaining—because your old identity resides in the explanation.
And the machine is patient. It waits for you to finish.
Why is the Gestalt approach the best framework for organizational changes involving AI?
Gestalt is strong where most AI discussions fall short. It doesn’t ask “what is true,” but rather what suddenly becomes important in a given situation, what gets the spotlight, where energy flows, and what remains in the background while still guiding things. In other words: it doesn’t argue about the map, but looks at where you are right now.
This is figure-and-field logic. The figure is what you look at, around which your day and your self-image are organized. The field is what produces this: rules, accesses, rewards, fears, habits—that silent infrastructure you don’t notice until it’s rewritten. Artificial intelligence rewrites precisely this—which is why it doesn’t simply speed things up, but redefines what matters. That’s why it hurts as if it were personal, even though it’s mostly a field-level realignment.
The logic of contact boundaries provides the map for survival. It shows where the human “gate” used to be, where status emerged from obscurity, and what happens when the question–answer pathway slides further down into the system. This is where the classic misunderstanding arises: we believe we are losing our jobs, when in fact we are often losing a source of status—that is, a leading role assigned by a field. And if we do not voice this, the body speaks for us: in the form of irritability, cynicism, and fatigue.
Gestalt is cheerful because it strips away the unnecessary tragedy from personal drama. It doesn’t say that “someone is malicious,” but rather that “under these conditions, such figures tend to emerge,” and if the conditions change, the old formation falls apart, and the organization instinctively wants to bring it back. In this language, a crisis is not an apocalypse, but a long-delayed cleanup emerging from the background. It’s like when they finally turn on the lights in the warehouse. It’s not pretty—but at least you can see.
Key Ideas
- Artificial intelligence is not a tool, but a casting director — it redefines the field in which the organization operates, thereby changing who takes center stage and who remains in the background.
- The “it won’t work without me” mindset is not a value, but a byproduct of a gap in the system — and what was once a solution to a gap now becomes a source of friction, because the system lowers the threshold for interaction.
- Every new system brings a new layer of errors — errors do not disappear, but move to where sensors are weak and responsibility can be shifted. Those who take ownership of the new layer of errors upfront are the architects; those who deny it will later play the hero.
- Speed is often a distraction — true value lies in the slow, well-formed question that gets to the heart of the matter, and in the ability to know which questions aren’t worth answering.
- Transparency is not an attack, but hygiene — unfinished gestalts generate background noise, and machine intelligence turns this background noise into figures. What previously operated in the shadows is now visible.
- “No” will be the new language of leadership — a good boundary does not close off, but clarifies, and this results in less noise, less projection, and less scapegoating.
- Heroes are replaceable. The field designer remains.
FAQ
Our organization is now introducing AI systems. What is the first step recommended by the Gestalt approach?
You shouldn’t start with the technology, but with mapping the field. We need to look at where the current boundaries of contact lie—who the gatekeepers are, where the “I’m in charge” types are, and where status has emerged from the shadows. This mapping isn’t done to shame people, but so the organization can see which roles will be disrupted as the question-and-answer pathway shifts further down into the system. If you don’t do this in advance, then AI implementation won’t be a technical project, but an identity crisis—only no one will call it that; instead, they’ll label it “resistance” or a “change management problem.”
What is the difference between an AI user and an operational architect?
The AI user “performs” workflows—writes prompts, generates reports, requests solutions. The operational architect “maintains” workflows—determining where human decision points should be, where automatic execution should occur, where feedback should be provided, and where verifiable accountability should lie. The difference is not technical but positional: the user can be replaced, because their capability lies in the prompt, which can be taught. The architect cannot, because their capability lies in the structure of the field, which is a matter of experience and judgment. In Parkinson’s terms: the user is a subordinate, the operational architect is a designer—and the system cannot replace the designer, because the designer determines what needs to be replaced.
How can we distinguish true safety control from figure restoration?
A simple test: if the system works the same way after the control is removed, but someone feels uncomfortable, it was figure restoration. If, after removing the control, the system actually functions worse—more errors, more risk, measurable deterioration—it is a genuine security layer. In practice, most “just one more approval loop for safety’s sake” fall into the first category: they do not protect the system, but the gatekeeper’s identity. The Gestalt approach doesn’t say to eliminate all controls—it says to be honest about which controls protect and which ones merely reassure.
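If you want to operationalize that test, a minimal sketch with invented numbers looks like this: remove the control in a limited scope, measure outcomes with and without it, and let a threshold agreed in advance decide, not anyone’s discomfort.

```python
# Hypothetical ablation test: does removing the approval loop measurably hurt?
# Error rates and the materiality threshold are illustrative placeholders.

error_rate_with_control = 0.021     # incidents per decision, measured during the trial
error_rate_without_control = 0.023  # same metric, control removed in a limited scope
materiality_threshold = 0.010       # agreed in advance, before anyone sees the numbers

delta = error_rate_without_control - error_rate_with_control

if delta > materiality_threshold:
    print("Genuine safety layer: keep it.")
else:
    print("Figure restoration: it protects an identity, not the system.")
```

The numbers are placeholders; the discipline is fixing the threshold before the measurement.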
Related Thoughts
- AI Burnout: Brain Fry — when the structure of attention collapses due to a shift in the field
- FOBO: Fear of Becoming Obsolete — a closer look at the identity crisis of losing one’s place
- Zuboff and the smart machine — when automation is not a question, but system logic
Key Takeaways
- AI doesn’t just automate work; it fundamentally reorganizes the organizational landscape: those who previously added value by serving as information gatekeepers or interpreters may now become sources of friction. The key to survival is conscious organizational redesign, not clinging to old roles.
- Recognize and name the source of your past role (e.g., “only I know,” “it won’t work without me”). If you fail to do this, the change will cause an identity crisis and stress, because you’ll lose a source of status, not just a task.
- In old organizations, gatekeeper positions often compensated for the system’s shortcomings rather than creating real value. As Laurence J. Peter points out regarding the functioning of hierarchies, AI renders these artificially created barriers obsolete.
- The introduction of AI always generates new layers of errors and misunderstandings—this is system dynamics. The task is not to achieve a perfect system, but to continuously plan how these new challenges can be managed in the field.
- During change, retroflection (turning stress against oneself) is a common trap. Instead, the focus should be directed toward redesigning contact boundaries: how to make question-and-answer processes more direct and efficient with the help of AI.
Zoltán Varga - LinkedIn
Neural • Knowledge Systems Architect | Enterprise RAG architect
PKM • AI Ecosystems | Neural Awareness • Consciousness & Leadership
The system was never the backdrop. You were.
Strategic Synthesis
- Translate the thesis into one operating rule your team can apply immediately.
- Use explicit criteria for success, not only output volume.
- Use a two-week cadence to update priorities from real outcomes.
Next step
If you want your brand to be represented with context quality and citation strength in AI systems, start with a practical baseline and a priority sequence.