
English edition

In the Shadow of Algorithms: How to Stay Human

A woman in the market doesn’t make decisions—she follows the algorithm’s recommendations. Three digital castes are emerging, and metacognition is the only means of defense.

VZ editorial frame

Read this piece through one operating lens: AI does not automate first; it amplifies first. If the underlying decision architecture is clear, AI scales clarity. If it is noisy, AI scales noise and cost.

VZ Lens

Through a VZ lens, the value is not information abundance but actionable signal clarity. The business impact begins when that clarity becomes a weekly operating discipline.

TL;DR

A new caste system is taking shape: those who understand algorithms make the decisions—those who don’t are subject to them. The solution isn’t to reject technology, but to consciously practice metacognition (thinking about thinking). The most radical human act today is not pressing the off button, but the ability to ask questions.


Early Morning Market at Fővám Square

Algorithmic influence is not a conspiracy theory, but a systemic feature: the business model of surveillance capitalism is behavioral prediction. Today, three digital castes are taking shape—the algorithm creators, the algorithm understanders, and the algorithm-driven—and the dividing line between them depends not on technological access, but on understanding.

Saturday morning at six o’clock, on the lower level of the Fővám Square Market Hall. The vendors are still arranging their crates; the scent of fresh bread and peppers mingles in the air. The woman standing next to me is looking at her phone—the Lidl app is recommending something she “will surely love” because she bought it last week. She doesn’t look up. She doesn’t hesitate. She puts it in her basket.

At this moment, no purchase has taken place. A prediction has come true.

For months, an algorithm has been tracking what she buys, when she buys it, and the order in which she moves through the aisles. And now—at a Saturday morning market, where spontaneous discovery is supposedly the whole point—she has done exactly what a model predicted.

In Frederik Pohl and C. M. Kornbluth’s 1953 novel The Space Merchants, advertising ruled the world—companies didn’t sell products, but manufactured desires. In Connie Willis’s Doomsday Book, time travelers never arrived where they intended to go—the system always intervened. Today, algorithms represent the perfect synthesis of these two dystopias: advertising has become so precise that before you even know you need something, you’ve already bought it. And the system always intervenes—we just don’t notice it, because the intervention disguises itself as a personalized recommendation.

We’ve never been fully autonomous—but this is different

Humans have never been completely free agents. Culture has shaped us, social norms have guided us, and linguistic structures—as Benjamin Lee Whorf and Edward Sapir described as early as the 1930s—have regulated the frameworks of our thinking. School, family, religion: all were filters through which the world reached us.

But there was one crucial difference: these filters were slow.

Tradition shaped us over decades. Language changed over generations. Social norms evolved slowly, through debates, revolutions, and compromises. You had time to realize they were shaping you. You had time to resist, if you wanted to.

In the digital age, this shaping has taken on a new quality. It is not tradition that defines who we are—but real-time predictions. The algorithm does not wait decades, as tradition does. It wants to decide for you right now. And most of the time, it does.

Shoshana Zuboff, a Harvard professor, has dubbed this surveillance capitalism: an economic model in which human behavior is not merely observed, but is raw material. The behavioral surplus—every click you make, every moment you spend on a page, every purchase you make—becomes a data point in a profile that often predicts your decisions more accurately than your own intuition.

Zuboff’s key statement: “Human experience is unilaterally declared a free raw material, which is transformed into behavioral data.”

This is not science fiction. It is the reality of our everyday lives. And in this reality, something is gradually disappearing that we may not even have known existed: the possibility of spontaneous decision-making. Freedom is being transformed: into menu items, settings, personalized recommendations. The illusion of choice is perfectly simulated—while the options are pre-filtered by an invisible hand.

The Birth of the Silicon Soul

Artificial intelligence is no longer just a tool. It is an interpretive framework.

Think about it: GPS doesn’t just navigate—it transforms our ability to orient ourselves in space. A translation app doesn’t just translate—it transforms the motivation for language learning. ChatGPT doesn’t just answer—it transforms how we phrase our questions.

In the world of deep learning, models have emerged that are capable of anticipating human decisions. AI doesn’t “think”—it patterns. It recognizes patterns in your behavior faster than you do yourself. At the intersection of data science and psychology, a new, unnamed discipline is taking shape: let’s call it algorithmic psychology. In this framework, the subconscious is no longer a hidden depth, but a data model. Not Freud’s couch, but a preference map represented in a vector space.
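The “preference map represented in a vector space” can be made concrete with a toy sketch. Everything below is invented for illustration (the taste dimensions, item names, and weights are not from any real system): a user’s taste is a vector, each item is a vector, and the “recommendation” is simply the item most similar to the user by cosine similarity.

```python
import math

# Hypothetical taste dimensions: [salty, sweet, fresh, convenience]
user_profile = [0.9, 0.1, 0.6, 0.8]  # "learned" from past purchases

items = {
    "pickled peppers": [0.8, 0.0, 0.5, 0.3],
    "chocolate bar":   [0.1, 0.9, 0.0, 0.9],
    "fresh bread":     [0.2, 0.1, 0.9, 0.4],
}

def cosine(a, b):
    """Cosine similarity: how closely two preference vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Rank items by similarity to the user's preference vector
ranking = sorted(items, key=lambda name: cosine(user_profile, items[name]),
                 reverse=True)
print(ranking[0])  # the "you will surely love this" recommendation
```

Nothing in this sketch “understands” the shopper; it only measures geometric closeness in a space built from her past behavior, which is exactly the point.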

```mermaid
graph LR
    A["Human behavior<br/>(clicks, scrolling, purchases)"] --> B["Data point<br/>(behavioral surplus)"]
    B --> C["Prediction model<br/>(machine intelligence)"]
    C --> D["Personalized recommendation<br/>(nudge)"]
    D --> A
    style A fill:#4a5568,stroke:#e2e8f0,color:#e2e8f0
    style B fill:#2d3748,stroke:#e2e8f0,color:#e2e8f0
    style C fill:#1a202c,stroke:#e2e8f0,color:#e2e8f0
    style D fill:#2d3748,stroke:#e2e8f0,color:#e2e8f0
```

This is a closed loop. Your behavior generates data, the data feeds the model, the model influences your behavior, and the new behavior generates more data. There is no way out—unless you are aware of the loop.
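The loop can be sketched in a few lines, under deliberately crude assumptions: the “model” is just a click counter, the “nudge” is recommending the most-clicked item, and the simulated user accepts every nudge. The topic names and starting counts are invented.

```python
from collections import Counter

# Toy feedback loop: behavior -> data -> model -> nudge -> behavior
clicks = Counter({"news": 1, "cats": 1, "sport": 2})

def recommend(model):
    # The nudge: show whatever the model predicts you will click
    return model.most_common(1)[0][0]

for _ in range(10):
    item = recommend(clicks)  # the model influences behavior
    clicks[item] += 1         # the behavior feeds the model

print(clicks)  # "sport" dominates: the loop amplified its own prediction
```

The point of the toy: an initial two-click lead is enough for one item to capture every subsequent recommendation; the prediction manufactures its own confirmation.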

What happens to the brain when we outsource our thinking?

Technology helps us while simultaneously robbing us. This isn’t cynicism—it’s a neurological fact.

Research on neuroplasticity (the brain’s ability to adapt) shows that the brain physically rewires itself to its environment. If you regularly navigate without a map, the hippocampus (the center of spatial memory) grows—this was demonstrated by MRI scans of London taxi drivers’ brains. Rely entirely on GPS, and studies suggest the opposite: the unused spatial circuitry weakens.

Now think about what happens when you outsource your thinking.

It’s not about arithmetic—we outsourced that to calculators long ago, and no one shed a tear. It’s not about spelling—spell-checkers have been around for a long time, and they do a good job. But rather the structure of thought: how you formulate a question, how you weigh arguments against each other, how you get from “I don’t know” to “maybe that’s right.”

If the environment in which your brain operates is becoming increasingly machine-like—what does the brain do? It adapts. It becomes a machine. Not metaphorically. Functionally.

| The spectrum of outsourcing | What we lose | Example |
| --- | --- | --- |
| Outsourcing memory | Intentional recall | “I went to the store, but I don’t know why” |
| Outsourcing navigation | Spatial orientation | We get lost in our own city without GPS |
| Outsourcing decision-making | Ability to evaluate | “The algorithm recommended this, so it must be good” |
| Outsourcing questioning | Curiosity | “Ask ChatGPT” instead of thinking it through |
| Outsourcing judgment | Moral reasoning | Automatic moderation decides what is acceptable |

The last two rows are critical. When we outsource decision-making and judgment, we don’t lose a skill—we lose what makes us human.

How does the algorithm reshape social stratification?

Society isn’t simply being digitized. It’s being redistributed.

UNESCO’s 2024 report on AI literacy warned that a lack of AI literacy creates a new type of digital divide, one that is not about access but about understanding. The question isn’t whether you have a smartphone. The question is whether you understand what your smartphone is doing to you.

Three castes are emerging:

| Caste | Characteristics | Position of power |
| --- | --- | --- |
| Algorithm creators | They design, build, and fine-tune the systems | They write the rules |
| Algorithm understanders | They recognize the mechanisms and navigate them consciously | They can resist and adapt |
| Algorithm-driven | They do not recognize the influence | The system is invisible to them, and that is precisely why it is effective |

Predictive policing (where algorithms “predict” where crimes will occur) is already in use in the United States. Automated HR systems (ATS—Applicant Tracking Systems) are already screening resumes before a human eye sees them. Targeted ads no longer just want to sell you something—they want to shape your opinion.
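How such pre-human screening works can be illustrated with a deliberately simplified sketch. It is not modeled on any real ATS product, and the keyword list is invented; the mechanism it shows is only the general idea that a resume lacking the exact expected tokens never reaches a human reader.

```python
import re

# Hypothetical required keywords; in a real ATS these come from the job posting
REQUIRED = {"python", "sql"}

def passes_screen(resume_text: str) -> bool:
    """Reject a resume before any human sees it if the keywords are missing."""
    words = set(re.findall(r"[a-z+#]+", resume_text.lower()))
    return REQUIRED.issubset(words)

print(passes_screen("10 years of Python and SQL experience"))  # True
print(passes_screen("Decade of database and scripting work"))  # False
```

Note what the second call shows: the same competence, described in different words, is filtered out. The rule regulates who gets seen, invisibly.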

These are not visions of the future. These are the structures of the present. They invisibly regulate who can do what—and more importantly: who can think what.

Philosophical Resistance: The Question as a Weapon

Artificial intelligence does not ask questions. It estimates.

This is the deepest difference between human and machine intelligence. AI optimizes for an objective function: how to keep you on the screen longer, how to make you buy more, how to get you to click on the next video. There is no doubt in it. There is no moment when you stop and say, “But why?”

Human existence, on the other hand, is built on questioning and doubt. Socrates wasn’t looking for answers; he was looking for questions. Philosophy isn’t the accumulation of knowledge, but the conscious mapping of not-knowing. And this is precisely what algorithms cannot model: deliberate uncertainty. That moment when someone forgoes an effective answer because a good question is more important to them.

Freedom doesn’t disappear in this world. It simply becomes a feature. A menu option in the settings. “Turn off personalization”—as if that would solve the problem. True freedom isn’t a click. True freedom is the ability to recognize that your options have already been filtered, and to act consciously against it.

Why is metacognition the best defense against algorithmic influence?

If there is one skill that protects you in the algorithmic age, it is metacognition: thinking about thinking.

It’s not a mystical concept. It’s a practical skill. It means that while you’re making a decision, you’re able to take a step back and ask: “Why am I deciding this way? Based on what? Who or what steered my attention here?”

Psychological research consistently shows that metacognitive ability—the ability to reflect on one’s own thought processes—correlates with resistance to manipulation. Not because people with metacognition are smarter, but because they are slower. They pause before reacting. They ask questions before accepting.

In an algorithmic context, this is a lifesaver. Because algorithms are optimized for quick, automatic reactions. For not thinking—just clicking, liking, buying, scrolling. Metacognition breaks exactly this: it introduces a deliberate pause between the stimulus and the reaction.

Stimulus (recommendation, news, notification)
        |
        v
+-----------------------------+
|     Automatic reaction      |  <- This is what the algorithm wants
|       (click, scroll)       |
+-----------------------------+
        |
  METACOGNITIVE PAUSE
        |
        v
+-----------------------------+
|   Conscious deliberation    |  <- This is what you can do
|   "Why do I want this?"     |
|   "Who benefits from this?" |
|   "Is this my decision?"    |
+-----------------------------+
        |
        v
  Intentional action
  (or intentional inaction)
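The diagram can be restated as code. This is a minimal sketch, not a real tool; the function name and delay are illustrative, and only the three questions come from the article. The gate simply refuses to act before the questions have been surfaced and a pause has elapsed.

```python
import time

# The article's three questions, inserted between stimulus and reaction
QUESTIONS = (
    "Why do I want this?",
    "Who benefits from this?",
    "Is this my decision?",
)

def metacognitive_pause(stimulus, act, pause_seconds=0):
    """Gate an action behind deliberate questions and a pause."""
    for question in QUESTIONS:
        print(f"{stimulus!r}: {question}")
    time.sleep(pause_seconds)  # the pause the algorithm is optimized to remove
    return act(stimulus)       # intentional action, or None for intentional inaction

# Deliberate inaction: the stimulus arrives, the questions fire, nothing is bought
decision = metacognitive_pause("Buy now!", lambda stimulus: None)
```

The design choice matters more than the code: the slow path is the default, and acting at all requires passing through it.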

This isn’t Luddite anti-technology. It’s algorithmic self-defense.

Hybrid identity: between human and machine

The future isn’t anti-human. It’s just different.

The new human identity isn’t purely biological, nor purely mechanical—it’s a partnership. But as with any partnership, the question remains: who’s in charge? Who sets the terms?

Art has been searching for the answer for decades. Brian Eno and his generative music, where humans set the rules but machines create the specific composition. The net art movement, where creators deliberately subverted the logic of algorithms. Digital literature, where the text is non-linear—the reader’s choices shape the narrative.

Education, design, ethics: all are searching for new languages. Languages in which there is still room for questions, doubt, a sense of humor, and pain. These qualities cannot be scaled—and that is precisely why they are human. An algorithm can optimize for happiness, but it cannot suffer. It can generate jokes, but it has no sense of humor. It can simulate empathy, but it cannot be hurt.

A posthuman existence does not mean that humans will disappear. It means that humans will redefine themselves—not in opposition to machines, but in relation to them.

The question is not whether you are human or machine. The question is what kind of human you want to be in a machine-driven environment.

The Last Hour of Free Thought—or the First

The title is intentionally dramatic. But the stakes are real.

It’s not that algorithms are evil. It’s not that technology is bad. It’s that we live in a system that isn’t optimized for awareness—but for engagement, conversion, and retention. And if you’re not aware of this, the system will optimize you.

Four things you can do — today, right now, right here:

Learn to interpret the systems that influence you. You don’t need to know how to code. You need to understand the logic: why do you see the content you see? Why do they recommend what they recommend?

Build critical awareness. When a recommendation seems “natural,” that’s the most dangerous moment. That sense of naturalness means the algorithm is working perfectly.

Question the “personalized” experience. Personalization doesn’t serve your interests—it serves the platform’s interests. The two sometimes coincide. But not always.

Be present when you make a decision. Not on autopilot. Not based on your phone’s recommendations. But with the awareness that the decision is an act—not a reflex.
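The third point above, that personalization serves the platform’s interests and only sometimes yours, can be shown with invented numbers: ranking the same items by an engagement objective and by user value produces different winners. The items and scores are illustrative, not measured data.

```python
# Same feed, two objective functions
items = {
    "outrage clip":  {"engagement": 0.95, "user_value": 0.2},
    "how-to video":  {"engagement": 0.60, "user_value": 0.9},
    "friend update": {"engagement": 0.70, "user_value": 0.8},
}

# What the platform's ranker surfaces vs. what would actually serve the user
by_platform = max(items, key=lambda name: items[name]["engagement"])
by_user = max(items, key=lambda name: items[name]["user_value"])

print(by_platform)  # outrage clip
print(by_user)      # how-to video: the two objectives do not coincide
```

When the two winners happen to be the same item, personalization feels like a service. The sketch shows the case where they are not, and the platform’s objective is the one that ships.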

Key Takeaways

  • Algorithmic influence is not a conspiracy—it is a systemic feature. The business model of surveillance capitalism is behavioral prediction
  • The digital caste system is not about access, but about understanding: those who understand algorithms can decide; those who don’t, have decisions made for them
  • Metacognition—thinking about thinking—is the most effective tool for algorithmic self-defense
  • The most radical human act is not rejecting technology, but consciously asking questions

Frequently Asked Questions

What is surveillance capitalism, and how does it affect my daily life?

Surveillance capitalism is a term coined by Harvard professor Shoshana Zuboff to describe an economic model in which tech companies use human behavior as a raw material. All your online activity—clicks, scroll time, purchasing patterns—becomes “behavioral surplus,” which is used to build predictive models. These models predict what you will do, and this prediction itself is the product sold to advertisers. In your daily life, this means that recommendation systems, news feeds, and targeted ads don’t hit the mark by chance—but rather, based on models of your behavior, deliberately.

How is the digital caste system taking shape?

The traditional digital divide was about access: do you have an internet connection, do you have a device? The new divide is about understanding. According to a 2024 UNESCO report, the lack of AI literacy creates a new type of inequality. Three tiers are forming: those who design the algorithms (and thus the rules), those who understand how they work (and navigate them consciously), and those who have no idea they are being influenced by them. For the third group, the algorithm is invisible—and it is precisely this invisibility that makes it effective. The issue is not technical: it is ethical and educational.

Does metacognition really help against algorithmic influence?

Yes, but not as a magic shield. Metacognition—thinking about thinking—means you’re able to reflect on your own decision-making process: why am I deciding this way, what’s influencing me, where did this impulse come from. This doesn’t make you immune to manipulation, but it introduces a deliberate pause between the stimulus and the reaction. Algorithmic systems are optimized for quick, automatic reactions—metacognition breaks precisely this pattern. Practical step: before accepting a “personalized” recommendation, ask yourself: “Is this my decision, or the algorithm’s?”



Zoltán Varga - LinkedIn
Neural • Knowledge Systems Architect | Enterprise RAG architect
PKM • AI Ecosystems | Neural Awareness • Consciousness & Leadership
Your autonomy is a feature, not a bug.

Strategic Synthesis

  • Identify which current workflow this insight should upgrade first.
  • Set a lightweight review loop to detect drift early.
  • Review results after one cycle and tighten the next decision sequence.
