VZ editorial frame
Read this piece through one operating lens: AI does not automate first; it amplifies first. If the underlying decision architecture is clear, AI scales clarity. If it is noisy, AI scales noise and cost.
VZ Lens
Through a VZ lens, the value is not information abundance but actionable signal clarity. Cognitive terraforming: language shapes thought, and the impact of AI doesn’t stop at the screen. Neural networks are powerful because they don’t recognize disciplinary boundaries. Strategic value emerges when insight becomes execution protocol.
TL;DR
Neural networks aren’t more efficient because they’re faster, but because they don’t recognize disciplinary boundaries. We humans think linearly in a networked reality: we break the world down into silos, even though everything is interconnected. In Samuel R. Delany’s Babel-17, language reprograms thought; in Kim Stanley Robinson’s Mars trilogy, century-long projects terraform the planet. Systems-level thinking does the same to our cognition: it reprograms how we perceive reality. This isn’t a course—it’s a recalibration. Cognitive terraforming: building a new world on the ruins of old ways of thinking.
The error wasn’t in the code
One evening, while examining the inner layers of a neural model, something clicked. The error wasn’t in the data. Not in the formulas. Not in the code.
It was in me.
I was thinking too linearly. I was rigidly categorizing. I was trying to interpret a networked reality as a human—as if I were trying to understand a three-dimensional object from two-dimensional drawings, page by page, in order, one after another. The model, however, saw things differently. For it, every connection was present simultaneously. Linguistic patterns were not separated from economic trends, psychological states were not detached from market movements, and quantum physics theories were not distant from user behavior patterns.
It didn’t systematize the world. It interwove it.
And that’s when I understood: I’m not teaching the machine. The machine is teaching me. To do what we may have always known but simply forgotten—to see the connections.
This moment wasn’t a technological breakthrough. It wasn’t the model that was special. What was special was that it shed light on the blind spot I hadn’t noticed even with twenty-five years of professional experience: my own thinking was the bottleneck. Not a lack of knowledge, not the limitations of my tools—but the way I had divided reality into compartments, because that’s how I’d been taught to think.
Babel-17 and the Language That Reprograms
Samuel R. Delany’s 1966 novel, Babel-17, conducts a bold thought experiment: what happens when a language not only describes reality, but rewrites thought? In the novel, Babel-17 is not merely a communication code—it is a weapon. Those who learn it begin to think differently. They see different patterns. They make different decisions. They become different people.
This is not mere fiction. A weak version of the Sapir–Whorf hypothesis (the theory of linguistic relativity)—which has considerable empirical support in modern cognitive linguistics—states precisely this: the language in which we think influences what we are capable of thinking. Different languages activate different cognitive schemas: different metaphors, different causal structures, different attentional frameworks.
Lera Boroditsky, a leading researcher in cognitive linguistics, has demonstrated in numerous experiments that the perception of time, color, space, and causality differs measurably from language to language. Speakers of Kuuk Thaayorre—a language that uses absolute cardinal directions instead of relative ones (left, right)—possess a natural cognitive compass that speakers of Western languages never develop.
The language of neural networks undergoes a similar transformation. A large language model (LLM) does not “think” in the way we do. The transformer architecture—where every token pays attention to every other token—embodies a completely different attentional structure than linear, sequential human language. It’s not just a new algorithm: it’s a new mirror. And if you look at it long enough, it looks back at you.
[!note] The lesson of Babel-17
Babel-17 isn’t about language as a tool of manipulation. It’s about language as a building block. Anyone capable of learning a new language—be it a programming language, a mathematical formalism, or the perspective of systems thinking—literally opens up new dimensions of thought within themselves.
The Lesson of the Mars Trilogy: Terraforming from Within
Kim Stanley Robinson’s Mars trilogy—Red Mars, Green Mars, Blue Mars—captures the same idea on another level. On the surface, the trilogy is about the physical transformation of a planet: the centuries-long alteration of Mars’s atmosphere, temperature, and ecosystem. But on a deeper level, it is not the planet that is transformed—but the thinking of the people who live on it.
Robinson brilliantly depicts how terraforming is not a one-way process. It is not merely humans who shape Mars—Mars shapes humans. A 150-year lifespan, low gravity, the dynamics of closed communities, the necessity of long-term projects: all of this fosters a way of thinking that would be inconceivable on Earth. Martian generations don’t just live elsewhere—they think differently.
Systems thinking is precisely this kind of cognitive terraforming. It’s not just another skill on your resume. It’s not a workshop package you buy and then suddenly become a “systems thinker.” Rather, it’s a slow, profound transformation that changes how you perceive relationships, how you recognize patterns, and how you define problems.
Just as in the Mars trilogy, the colonists didn’t “learn” terraforming, but terraforming transformed them—so too, systems thinking isn’t something you master. It’s something that masters you.
The Paradox of the Neural Network
Artificial intelligence is not simply code execution. It is something else. A new form of cognition.
We humans usually operate linearly: first we understand, then we analyze, then we look for a solution, then we execute. Step by step. In sequence. Just as we were taught. Just as school, the workplace, and society expect.
A neural network does not “think” sequentially. Everything within it happens at once, with each part attending to every other. The transformer architecture—the structure upon which modern large language models are built—does not process input one step at a time. Every token (a word or word fragment) attends to every other token, and these attention weights are computed in parallel, in a single pass.
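To make this concrete, here is a minimal sketch of scaled dot-product self-attention in plain NumPy. The token count, dimensions, and random projection matrices are illustrative assumptions, not drawn from any specific model; a real transformer learns these weights and stacks many such layers.

```python
import numpy as np

def self_attention(X):
    """Minimal scaled dot-product self-attention over a whole sequence at once.

    X: (seq_len, d) matrix of token embeddings.
    Every token's query is compared against every token's key in a single
    matrix product, so all pairwise relations are weighed in parallel rather
    than in a left-to-right, step-by-step pass.
    """
    d = X.shape[1]
    rng = np.random.default_rng(0)
    # Illustrative random projections; a trained model learns these matrices.
    W_q = rng.standard_normal((d, d)) / np.sqrt(d)
    W_k = rng.standard_normal((d, d)) / np.sqrt(d)
    W_v = rng.standard_normal((d, d)) / np.sqrt(d)
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(d)                   # (seq_len, seq_len): every token pair at once
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V                              # each output vector mixes all inputs

# Five "tokens" with eight-dimensional embeddings, invented for the example.
tokens = np.random.default_rng(1).standard_normal((5, 8))
print(self_attention(tokens).shape)  # (5, 8): one contextualized vector per token
```

The detail relevant to the argument is the shape of `scores`: it covers every token pair simultaneously, so the mechanism itself imposes no sequential order.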
This isn’t a bug—it’s a feature. And we can learn from it.
It is no coincidence that this approach is so effective. Our brains are not linear either—it is only through education, language, and social structures that we have come to expect sequentiality. A baby doesn’t learn language step by step—it absorbs rhythm, melody, facial expressions, and context all at once. The transformer architecture’s attention-based world model is closer to this original, human way of learning than our linear educational models.
| | Human thinking (taught mode) | Neural network |
|---|---|---|
| Processing | Sequential — step by step | Parallel — everything at once |
| Categorization | Predefined conceptual frameworks | Emergent clusters in multidimensional space |
| Boundaries | Disciplinary, professional silos | No artificial boundaries |
| Attention | Narrow, focused, selective | Broad, parallel, all-encompassing |
| Learning | Rule-based, explicit | Pattern-based, implicit |
| Error | Error is a single-point failure | Error is a distributional shift |
[!tip] The essence of the paradox
Machines categorize too—in fact, classification is the basis of how most AI models work: clustering, decision trees, the representations that hidden layers form in multidimensional space. Even neural networks ultimately assign labels to inputs—just not necessarily along the lines of the categories we know. The difference isn’t that the machine doesn’t categorize—it’s that it doesn’t do so according to our conceptual map. It looks for connections, not labels.
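As a small illustration of what “emergent clusters in multidimensional space” means in practice, here is a hedged sketch using scikit-learn’s KMeans on synthetic vectors. The data, dimensionality, and cluster count are invented for the example; the point is only that the groupings come from geometry, not from a predefined conceptual map.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic 16-dimensional "embeddings" with three hidden groups the code never names.
rng = np.random.default_rng(42)
centers = rng.standard_normal((3, 16)) * 4.0
points = np.vstack([c + rng.standard_normal((50, 16)) for c in centers])

# KMeans recovers groupings from proximity alone -- no labels, no taxonomy,
# just structure that emerges from the geometry of the space.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(points)
print(np.bincount(model.labels_))  # roughly 50 points per emergent cluster
```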
Why Does Specialization Kill Systems Thinking?
I’ve been building systems for over twenty-five years. Along the way, I’ve watched as professions erect walls around themselves and isolate themselves from one another: IT isn’t interested in marketing, finance doesn’t talk to HR, logistics uses its own language. Each constructs its own reality, with its own metrics, its own goals. Its own Babel-17.
This siloing is no accident. Modern organizational theory is based on functional specialization: everyone does what they do best, and the system as a whole becomes more efficient. From Adam Smith’s pin factory example to Taylorist scientific management, this principle has dominated the past two hundred years.
But the world has changed in the meantime. Linear value chains have been replaced by exponentially connected networks. A disruption in a supply chain in Shenzhen affects the supply of parts to a car mechanic in a small Swedish town within two days. A social media trend in Jakarta reshapes a New York startup’s product plan. A regulatory change in Brussels alters the entire business model of a Singaporean fintech company.
Artificial intelligence, however, knows no such boundaries. A language model can interpret a poetic metaphor and a quantum algorithm using the same logic. A computer vision system sees not only images but also emotions—even in a stock market chart. A recommendation engine, meanwhile, does not separate consumer behavior from the logistics chain but interprets them within a common pattern.
This is the practical manifestation of systems thinking: breaking down boundaries is not a weakness, but the only viable path to adapting to complexity.
A Business “Problem” That Only AI Could See Through
I came across this story in a case study. A large multinational corporation was trying to make sense of a downturn in Asian markets. Declining customer activity, weakening brand loyalty—a typical marketing problem.
At least at first glance.
However, the company deployed a hybrid AI system that didn’t stop at surface-level metrics. The model simultaneously analyzed cultural patterns, economic trends specific to the region, consumer psychology, the pace of technological adoption, and the discourses forming in the deeper layers of social media.
The machine didn’t find a marketing problem—but rather a systemic disharmony.
The company’s operations did not resonate with local environmental expectations. It wasn’t the pricing that was wrong, it wasn’t the product that was weak, it wasn’t the campaign that was flawed. The entire system—the corporate culture, the supply chain’s value system, the tone of communication—clashed with the reality of the local ecosystem. And this wasn’t revealed by a focus group, but by the emotional patterns uncovered by the algorithm.
The solution wasn’t a new campaign, but a complete redesign of the supply chain—based on value alignment.
This story is a perfect example of how AI sees things differently. Not because it “knows more,” but because it can interpret reality on multiple levels—and this ability calls for a shift in thinking. The AI didn’t do marketing. It didn’t do financial analysis, HR audits, or technology assessments. It did all of them at once—and the pattern that emerged didn’t fit within the framework of any single discipline.
Quantum cognition: when a problem exists in multiple states
Quantum physics has shown that a particle can exist in multiple states—until it is observed. This is the principle of superposition: all possible states of a quantum system are present simultaneously, and only the act of observation (measurement) “fixes” it into a single state.
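For readers who want the standard notation behind the metaphor, the textbook form of a two-state superposition looks like this (a generic illustration, not tied to any particular system discussed here):

```latex
% A two-state superposition: both basis states are present until measurement.
\lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
\qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
% Measurement yields state 0 with probability |alpha|^2 and state 1 with probability |beta|^2.
```

Until the measurement, neither outcome is “the” state of the system, which is exactly the property the essay borrows as a metaphor for problems that hold several valid readings at once.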
Perhaps problems are like this too. They are not unambiguous, but multifaceted. Their meaning depends on the perspective from which we view them.
An observation can be economic, psychological, technological, and ethical in nature all at once. A decision made in one factory can affect the consumer experience on another continent. And the very act of asking a question shapes the answer—because what we seek is partly constructed by ourselves.
This is not esoteric speculation. Constructivist epistemology—from Jean Piaget to Ernst von Glasersfeld—has long maintained that cognition is not passive reflection, but active construction. The observer is not an external spectator: they are part of what they observe. In quantum physics, this is called the observer effect—in cognitive science, the “framing effect.”
This is precisely the essence of systems thinking: multiple perspectives, multiple realities—within a single network. It is not that there is no truth. It is that truth is multidimensional—and those who view it from only one dimension are not lying, but they are not seeing enough either.
Why do we think in categories?
Categorization is not a mistake. It is one of evolution’s greatest inventions. The human brain receives about eleven million bits of sensory information every second, and consciously processes about forty bits of that. Categories—this is dangerous, this is edible, this is a friend, this is an enemy—are tools for survival. They simplify, protect, and help us make decisions.
But in a complex world, this has become a limitation.
As experts, we tend to see the world through our own lenses. If you’re a data scientist, you see data in everything. If you’re a psychologist, you see people. If you’re an engineer, you see systems. If you’re an economist, you see markets. Specialization becomes a refuge from complexity—and in the process, it also becomes an identity: “I am a data scientist,” which excludes other perspectives.
This phenomenon is not unknown in psychology. Abraham Maslow’s famous observation—“if the only tool you have is a hammer, everything looks like a nail”—is the classic formulation of the law of the instrument, sometimes called Maslow’s hammer. Expertise doesn’t just provide knowledge—it also creates blind spots.
Neural networks don’t suffer from this distortion. Not because they’re “better”—but because no one taught them not to see the connections. No one told them, “This is marketing, that is logistics, and the two have nothing to do with each other.” That is why they are capable of revealing patterns that human expertise—precisely because of its own success—obscures.
But the world cannot be categorized. And it doesn’t ask for permission to intertwine.
The Solution: Cognitive Bridge-Building
Thinking can be reprogrammed. We don’t need to abandon expertise—just make it permeable. The goal isn’t to know everything, but to be able to see in multiple ways.
This starts with practice. Not with theory, not with a course, not with certification—but by looking at a problem not just as an expert, but as a poet, an anthropologist, or a curious child.
Three practices of cognitive bridging:
1. Changing Perspectives — the problem as a prism
Let’s take a specific problem—say, declining employee engagement. The HR person says: it’s a motivation problem. The finance person says: we need a pay raise. The IT specialist says: we need better tools. The coach says: we need leadership development.
The systems thinker asks: what is the pattern? What system-level change led to this? What interactions are maintaining the current state? What is invisible—because we’re all looking from our own silos?
A shift in perspective is not empathy (though that is important too). Technique: consciously stepping into a different conceptual framework and viewing the same phenomenon from there. Not just understanding the other person—but thinking through the other person’s eyes.
2. Heterogeneous dialogue — different ways of thinking as raw material
Talk at least once a week with someone who uses a completely different language of thought. Not to “learn” their field—but to make the limits of your own thinking visible. A physicist defines “system” differently than a sociologist. An artist understands “complexity” differently than an engineer.
These differences are not obstacles—they are raw materials. Collective intelligence is born not from uniformity, but from fruitful diversity.
3. Expanding our attention—from ourselves to society
Consciously expanding our attention: from ourselves to our family, from our family to our community, from our community to society—and then back again. This is not an abstraction. It is a fundamental exercise in systems thinking: discovering how influence flows from one level to another, and how personal decisions are connected to systemic patterns.
Those who look only at themselves do not see the system. Those who look only at the system lose sight of the person. Cognitive bridge-building is precisely about this: holding onto both at the same time.
What happens to accountability in a networked organization?
A leader once asked me: “If everything is interconnected, then who is responsible when something goes wrong?”
Good question. But it’s not a technological one—it’s a sociological one.
A flawed AI decision isn’t just a technical glitch. It can turn into a legal case, an ethical scandal, a PR crisis, a financial loss—and even more than that. Responsibility becomes dispersed. Or rather: it dissipates. In this networked reality, the concept of individual responsibility—which was built on hierarchical organizations—becomes inadequate.
The solution is not to abolish accountability. The solution is to introduce network accountability, which does not assign consequences to a single person but interprets them within the systemic context of decision-making.
This requires new types of organizational structures:
- Rotating leadership: it is not the position that determines who leads, but the context. Different problems require different leaders—not based on hierarchy, but on competence.
- Cross-functional bodies: ethical dilemmas are not examined by a single department, but by a body that includes an IT specialist, a lawyer, a psychologist, and an ethicist.
- AI-assisted decision-making: not because AI makes “better” decisions, but because it is capable of modeling network effects—showing what a decision causes at other points in the system (a toy sketch follows below).
Responsibility is not individual, but system-wide. This is not an excuse—it is a realization.
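To show what “modeling network effects” can mean in the simplest possible terms, here is a toy sketch using the networkx library. The node names, edge weights, and attenuation rule are invented assumptions for illustration only; a real decision-support model would be far richer.

```python
import networkx as nx

# A toy decision network: a change at "pricing" ripples through the system.
# Node names and edge weights are invented for illustration.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("pricing", "sales", 0.6),
    ("pricing", "brand_trust", 0.3),
    ("sales", "cash_flow", 0.8),
    ("brand_trust", "churn", 0.5),
    ("churn", "cash_flow", 0.4),
])

# Propagate an initial shock through the graph, attenuating along each edge,
# to expose second- and third-order effects of a single decision.
impact = {"pricing": 1.0}
for node in nx.topological_sort(G):
    for _, downstream, weight in G.out_edges(node, data="weight"):
        impact[downstream] = impact.get(downstream, 0.0) + impact.get(node, 0.0) * weight

print(impact)  # e.g. cash_flow accumulates effects via two separate paths
```

Even this toy version makes the point: responsibility for the final state of "cash_flow" cannot be pinned to a single node, because the effect arrives through several paths at once.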
Redefining Personal Identity
The deeper we dive into systems, the less the “self” remains as a separate entity. The “programmer” is not a fixed role—it is a position within a social, economic, and psychological network. The “data scientist” is not an identity—it is a function that follows from the system’s current state.
This can be unsettling. The ego loves contours. Those boundaries that tell us: this is me, that is not me. But these are temporary constructs—like a map, which is not the terrain. The Polanyi Paradox shows that we cannot explicitly articulate our deepest knowledge. The same is true of our identity: its deepest layers cannot be expressed in professional labels.
Growth begins where we dare to let these go. Not to lose them—to let them go. Not to give up who we are—to expand who we can be.
Thirty years ago, I thought I was a programmer. Then a systems designer. Later, a project manager. Then a senior manager. For a while, a trainer, a coach. Today I know: I am a systems thinker. Someone who brings the same attention to different contexts. This wasn’t a career. It was a cognitive transformation.
You are not data—it’s just that the world treats you that way
Somewhere deep inside a server room, a neural network is calculating what makes you happy. Or what makes you appear to be happy. The micro-twitches of your facial features, the rhythm of your keystrokes, the next pattern of your shopping habits—they’re all part of a narrative you didn’t write.
Your body has become an interface. Your identity: a data package. And your decisions: predictive consequences. The digital world doesn’t ask, doesn’t question—it just models. And you increasingly live the way it assumes you will live. This isn’t a conspiracy theory. This is the day-to-day operation of algorithmic reality.
But here’s the twist: if you understand this system—if you’re able to view the network you’re embedded in from the outside—then you’re no longer just a data point. You regain your agency. You don’t step out of the system (that’s impossible), but consciously position yourself within it.
Systems thinking in this context is not an academic discipline. It is a technique of sovereignty.
Education and Culture: The Next Generation
Today’s educational system is still based on industrial logic. It is built on subjects, grades, and specialization. The curriculum is linear—first learn math, then physics, then chemistry, and only in college does it become clear that the three are closely interconnected. This structure reflects the logic of 19th-century factories, where workers had to perform a single task efficiently.
But the problems of the future don’t work that way. Not subjects. Not exam questions. Networks.
Climate change is not “environmental science”—it is simultaneously an economic, political, psychological, technological, and ethical issue. Regulating artificial intelligence is not “computer science”—it is simultaneously a legal, philosophical, sociological, and power-political issue. Managing pandemics is not “public health”—it is simultaneously a logistical, communicational, cultural, and trust-related problem.
The younger generations—Generation Z and Generation Alpha—already intuitively sense this. They do not think in terms of academic subjects, but in terms of content, platforms, and connections. The question is, can we provide them with a structure that does not stifle this sensitivity, but rather gives it form and strength? One that does not force their thinking into silos, but organizes it into a network.
How does ethics work if everything is interconnected?
If everything is interconnected, then the consequences of decisions cannot be localized either. The impact of an AI algorithm does not stop at the screen. It seeps into society, politics, and psychology. The language model that crafts a job posting influences who applies—and who does not. The recommendation system that curates news shapes what a society thinks about the world. The predictive system that assesses creditworthiness decides who has a future—and who does not.
The new ethical question is therefore not “was this decision right,” but: what systems made this decision possible—and what systems will it create?
This is cascade ethics: it does not examine the morality of a single decision, but the entire chain of systemic effects of that decision. The first link in the chain may be innocent—the sixth may already be catastrophic. And yet, no one felt they had done anything “wrong.”
Without systems thinking, ethics will always remain reactive—it deals only with the consequences, never with the structure that created them.
The Future: Thinking Together with AI
AI isn’t a challenge because it will be “smarter” than us. It’s a challenge because it already sees as systems what we still perceive as separate parts.
The question isn’t what it should do for us. It’s how we should think alongside it.
Somewhere deep inside a data center, a neural network is working to connect things that we have carefully separated. It doesn’t think in categories, as we do, but in relationships. All data is connected to everything else—your shopping habits hint at your mood, the rhythm of your keystrokes reflects your health, and your social media interactions point to your future decisions.
Perhaps it’s time for us to relearn what we knew as children: everything is connected to everything else. And in this discovery lies the next evolutionary step—not just in technology, but in the way we think.
The intelligence of the future isn’t built on silos. It’s built on connections.
Key Takeaways
- Linear thinking isn’t enough in a networked world — those who don’t see systems will fall behind: complexity is growing faster than our ability to understand things sequentially
- AI is powerful because it doesn’t recognize professional boundaries — a language model analyzes a poetic metaphor and a quantum algorithm with the same logic, because no one taught it that the two are different
- Cognitive terraforming is not a course but a transformation — just as the colonists in Robinson’s Mars trilogy did not “learn” terraforming but were transformed by the process, so too does systems thinking transform the thinker
- Categorization is an evolutionary virtue, but a modern limitation — specialization turns from a refuge into a prison if we do not make its walls permeable
- This is not a career, but a cognitive transformation — everything is interconnected, including you
Frequently Asked Questions
What is cognitive terraforming, and how does it relate to systems thinking?
Cognitive terraforming is a metaphor borrowed from Kim Stanley Robinson’s Mars trilogy: just as terraforming physically transforms a planet, systems thinking transforms thought structures from within. It is not just another skill added to an existing repertoire—but a fundamental reorganization of that repertoire. It is a transition from linear, sequential, silo-based thinking to networked, parallel, relationship-based thinking, which changes how we perceive problems, boundaries, and connections.
How can systems thinking be learned in practice?
Through three specific exercises: (1) a shift in perspective—consciously viewing the same problem from a different conceptual framework (as an engineer, psychologist, anthropologist, or child); (2) heterogeneous dialogue — having a weekly conversation with someone who uses a completely different language of thought, not to learn their field, but to make the limits of your own field visible; (3) expanding your focus — consciously moving from the individual level to the systemic level and back. The point is not the quantity of information, but the diversity of perspectives.
How does systems thinking differ from holistic thinking?
Holistic thinking is often intuitive and generalizing—it remains at the level of “everything is connected to everything else” without specifying concrete relationships, feedback loops, and nonlinear effects. Systems thinking is more precise: it does not merely state that elements are connected, but how they are connected, what interactions maintain the current state, and what leverage points exist for change. Systems thinking is not daydreaming about the whole—it is the mapping of the dynamics between parts with engineering precision.
Related Thoughts
- The Anatomy of the Digital Age — the invisible layers of the architecture of attention
- 2034: When the Human Brain Becomes the Last Firewall — eight neurohacking skills for the posthuman world
- The Architecture of Thought — how what we call thinking is constructed
- The Age of Collective Intelligence — when a team becomes a nervous system
- AI as a Mirror of Civilization — we see ourselves reflected in these systems
- Contemplative RAG — the intersections of meditation and knowledge systems
- The Polanyi Paradox — what we know but cannot articulate
- In the Shadow of Algorithms — how to remain human in the posthuman age
Zoltán Varga - LinkedIn
Neural • Knowledge Systems Architect | Enterprise RAG architect
PKM • AI Ecosystems | Neural Awareness • Consciousness & Leadership
The map is not the territory. The network is.
Strategic Synthesis
- Translate the thesis into one operating rule your team can apply immediately.
- Monitor one outcome metric and one quality metric in parallel.
- Run a short feedback cycle: measure, refine, and re-prioritize based on evidence.
Next step
If you want your brand to be represented with context quality and citation strength in AI systems, start with a practical baseline and a priority sequence.