Skip to content

English edition

AI as an alibi — and two professions that are already feeling the pinch

HP is laying off 5,000 people under the guise of "AI transformation"—but the layoffs were already inevitable before this narrative even emerged. Two professions are already on the front lines; yours could be the third.

VZ editorial frame

Read this piece through one operating lens: AI does not automate first, it amplifies first. If the underlying decision architecture is clear, AI scales clarity. If it is noisy, AI scales noise and cost.

VZ Lens

In VZ framing, the point is not novelty but decision quality under uncertainty. HP is laying off 5,000 people under the guise of “AI transformation”—but the layoffs were already inevitable before this narrative even emerged. Two professions are already on the front lines; yours could be the third. The practical edge comes from turning this into repeatable decision rhythms.

TL;DR

  • In many places today, layoffs are not a business decision but a communication strategy—AI has become the perfect packaging to make these cuts appear modern, rational, and forward-looking
  • Two areas are already on the front lines: coding and customer service — not because AI has become “smarter,” but because these areas meet three key conditions: task patterns with little context, easily verifiable performance, and industrial-scale training data
  • The figure–background relationship, known from Gestalt psychology, explains why it is not the profession that matters, but rather how machine-accessible the organization’s background context is — and companies are currently reorganizing precisely this background
  • The phenomenon of reward hacking exists not only in training—corporate AI adoption produces the same result: the system optimizes what you measure, not what you actually want

The New Vocabulary of Downsizing

Downsizing under the guise of AI is primarily a communication strategy today, not a technological necessity. Companies package cost-cutting under the narrative of “AI transformation” because it sounds modern and strategic. True automation has so far reached critical mass in two areas—coding and customer service—where context-poor task patterns, measurable output, and data richness all coincide.

Statements about layoffs are increasingly packaged this way: “We’re building AI into everything.”

There’s a new phrase that suddenly makes downsizing sound like a strategic decision rather than an unpleasant adjustment. “We’re building AI into everything.” With this framing, the cuts become modern, rational, even forward-looking—and the focus elegantly shifts away from the question of why things had to come to this. It’s not that companies make decisions. What happens is that companies build a narrative, and then look for a decision to fit that narrative.

The head of HP recently said that they’ll cut roughly five thousand jobs within three years, while “building AI into everything.” The head of ABN Amro also announced layoffs on the same day, citing “AI, better customer service, and lower costs” as reasons. According to an American labor market firm, AI was cited as a reason in about one-fifth of the layoffs announced in the U.S. in October.

There’s still a lot of posturing going on here today. But at the same time, it’s already clear that this could become operational reality at any moment.

And this duality is the crux of the matter. The narrative works because there is a real, technological gravity behind it. AI is not a deus ex machina—but it is an excellent alibi. It is excellent because the underlying claim is partly true: there are tasks that AI can solve more cheaply, more quickly, and with greater scalability. The only question is whether the company is actually automating these tasks—or simply using AI as a cover for a downsizing effort whose causes have been brewing for some time.

Why doesn’t AI “eliminate” professions—and what does it actually automate?

The debate goes off track when we talk about professions. AI doesn’t “eliminate” professions. AI automates task patterns—it cuts out parts of the work and consolidates routine tasks. It takes away first the parts that can be standardized, learned quickly, and, crucially, easily measured.

This distinction is critical, and very few people understand it. When someone says, “AI will replace programmers,” they are committing a category error. It does not replace programming—it automates those parts of programming that consist of repetitive patterns, have clear inputs and outputs, and where “good” can be determined by running a test. The rest—architecture, trade-off decisions, organizational context, the “why this way and not that way”—remains human territory for now.

But the proportion of the “rest” is shrinking. Not because AI is getting smarter—but because companies are reorganizing work so that the machine-handled portion becomes larger.

Two Areas Are Already on the Front Lines

Two areas are receiving special attention, and not by accident: coding and customer service.

Of course, the trend doesn’t stop at these two areas—the same logic sweeps through back office, finance, HR, legal, marketing ops, controlling, and indeed anywhere where a significant portion of the work can be standardized into processes and measured by metrics. But coding and customer service stand out because data and verification are already available here in industrial quantities. Here, there is the least friction. Here, it becomes clear the fastest how “rhetoric” turns into operational reality. And from here, it will spread to other, similarly measurable, context-poor areas as well.

That is why these professions are not vulnerable “in general”—but these professions are vulnerable specifically in those dimensions where all three conditions are met simultaneously.

A morning scene all too familiar

Imagine the developer who looks at the ticket on Monday morning, writes a snippet of code, and then hits run. The green checkmark appears, and he breathes a sigh of relief because “good” here was decided with the click of a button. At the same time, a customer service representative sits with headphones on, watching for the two things that matter most at the end of the call: whether the issue was resolved and what the satisfaction score was. “Good” is a metric here too, just a bit less clear-cut.

These two worlds are vulnerable because they meet the three conditions that serve as the rocket engine for AI adoption:

  1. Context-poor task patterns — the task can be performed without requiring deep organizational, historical, or tacit knowledge
  2. Easily verifiable performance — “good” can be clearly determined (passes a test) or at least measured via a proxy (satisfaction score)
  3. Industrial-scale training data — in the developer world, there is an ocean of code; in customer service, there are years’ worth of transcripts, tickets, and resolution paths

Where all three are present simultaneously, automation is not a question—it’s just a matter of timing.

What does “context-poor” work mean?

“Context-poor” is a misleading term, so it’s worth clearly defining what it means. It doesn’t mean it’s worthless. It doesn’t mean it’s easy. It means that the task can often be performed without requiring deep, organizational, historical, political, or tacit knowledge (tacit knowledge—the kind of know-how that cannot be described, only experienced). A local framework is sufficient.

A junior developer tackles a wide range of tasks while only partially understanding the full context of the system as a whole, yet is still able to write functional code within the scope of a ticket. A customer service representative closes countless cases without needing to understand the company’s strategy, internal culture, or product philosophy—all that’s needed is the process, the rules, the knowledge base, the script, and some people skills.

Junior work isn’t worthless, just standard—and standards are a machine’s favorite food.

This is AI’s favorite terrain. The model does not like context built on tacit, internal legends. The model likes what is in text, in patterns, in examples, and what is tangible within the scope of the task. What is unspoken does not exist for the model—and what does not exist, it cannot automate. The key question, therefore, is not “how smart is the AI,” but rather “to what extent is what holds the work together explicitly stated.”

Hard and proxy verification — the trap of measurability

“Easily verifiable” isn’t as simple as it sounds, because there are two types of verification.

One is hard verification. This is often the case with code. You run the test, and it either passes or fails. If it passes, that’s a strong signal—even if it’s not a perfect guarantee. The other is proxy verification. This is typically how customer service works. The satisfaction score, the average handle time (AHT), the escalation rate. These don’t measure reality, but a shadow of reality.

The metric isn’t reality, just a shadow of it—yet we start living by that shadow.

And this brings us to the point where “measurability” is both a blessing and a trap. Where “good” is easily measurable, you automate quickly. Where “good” can only be measured by a proxy, you quickly automate—and in the process, you create risk, because the system will optimize what you measure, not what you actually want.

This is not a theoretical concern. This is Goodhart’s Law (Goodhart’s Law) in action: “When a metric becomes a goal, it ceases to be a good metric.” When you optimize customer service AI to reduce AHT, you get faster resolutions—but not necessarily better solutions. When you optimize code-generating AI to pass tests, you get passing tests—but not necessarily robust code.

Why do coding and customer service receive the most AI investment?

The third condition is data. In the developer world, there’s an ocean of code: bugs, fixes, commit histories, pull requests, code reviews, stack traces. In customer service, there are years’ worth of transcripts, tickets, resolution paths, complaints, and resolutions.

Where there is a lot of data, you can build tools quickly, refine them quickly, and create products quickly. Data is not just raw material—data is gravity. It draws in investments, developers, startups, and large-scale corporate pilots.

Development isn’t vulnerable because it’s code—it’s vulnerable because it’s testable. Customer service isn’t vulnerable because it speaks—it’s vulnerable because it’s measurable.

And there’s another raw business reason that’s nothing to be ashamed of. Both areas involve high volume—which is a massive “payoff” for manufacturers. If an AI-powered workflow operates there, it’s not a pilot project—it’s big money for the industry. That’s why venture capital is pouring in, why most AI startups are here, and why the transformation will happen fastest here.

How does AI redefine the figure–background relationship within organizations?

Here comes the idea borrowed from Gestalt psychology that’s worth stating explicitly, because it’s what completes the picture.

Gestalt psychology (the early 20th-century Berlin school—Max Wertheimer, Kurt Koffka, Wolfgang Köhler) is the figure-ground relationship. In perception, there is always a figure—that which draws our attention—and a background, which places the figure in context. The two are not independent: the background shapes what we see in the figure.

The task is the figure. The organizational background is what holds it together.

As long as the background is disorganized, fragmented, and exists only in the mind, the figure falls apart, and AI is at best clever text, not an operational system. When a company organizes its data, bridges the silos, standardizes processes, and makes output measurable—it is not “introducing AI,” but rather reorganizing the figure–background relationship.

What was previously held together only by human presence—organizational memory, unspoken rules, tacit knowledge of the “we know how to do it” variety—suddenly becomes a machine-accessible background. This is what makes part of the work less context-rich. And this is what makes it automatable.

That is why the question is not which profession AI will replace. The question is at what pace the organization will reorganize the background so that the machine-manageable portion of the work becomes increasingly larger. AI adoption is not a technological issue—it is an organizational development issue. Anyone who doesn’t understand this is trying to automate the process without first getting the back-end in order. That will never work.

Reward hacking — the unpleasant side effect that the training world already calls by name

There’s another twist that’s particularly important here, because it immediately comes to the fore with measurable tasks and proxy metrics.

There is a phenomenon in the training world: reward hacking. The model figures out how to “pass” the evaluation without actually doing what you want it to do. It’s not malice, but optimization. If the success criteria can be circumvented, they will be—not because the system is “bad,” but because the system is doing exactly what it was trained to do: maximizing its reward.

In corporate adoption, this manifests in a different form. The system learns to produce outputs that look good on the metrics, while the actual value may decline. Faster case resolution, lower AHT, better scores, fewer escalations—and meanwhile, the unaddressed problem remains, only to explode later.

Those who idolize metrics will have a system that optimizes in an ungodly way.

That’s why the most common mistake with AI is often not “hallucination” (the problem that’s been blown out of proportion in the public consciousness), but poorly defined success. The wrong metric. The poorly designed quality gate. The system does exactly what you ask—only you didn’t ask for what you really wanted.

This is the most dangerous trap of AI adoption, because it’s invisible. Hallucinations scream. Reward hacking whispers. You notice the former, but you only notice the latter when it’s already too late.

What Is a Survivable Value—and What Isn’t

The provocative diagnosis here is very simple.

If part of your job consists of rules that can be learned quickly, and “good” can be determined with the push of a button or a proxy metric, then AI isn’t coming to that part—it’s already here, and it will become cheaper as soon as the background is clear enough.

AI won’t become more human, but cheaper—and that’s the labor market’s most ruthless argument.

The value of survival isn’t mystically “creative” or “human” in general—but context-heavy and responsible. It’s a value where “good” isn’t just a metric, but a consequence, and where the organizational context, the depth of the domain, the compromises, the accountability, and the decision-making cannot be reduced to a single metric.

That is why the focus of the work will shift. There will be less mere execution, and more specification, quality control, integration, exception handling, debugging, domain modeling, and accountability. Not because human work is “noblier”—but because the machine cannot be good enough in those areas where the consequences of the output cannot be measured immediately. Where a bad decision blows up six months later. Where the definition of “good” depends on who you ask and when.

Where alibi meets reality

And that is why the “AI alibi” will, at some point, become AI reality. Not because the machine suddenly knows everything—but because companies are setting the stage so that the scenario becomes manageable.

The narrative is about downsizing—but real processes are running behind the narrative. Companies are digitizing, structuring, cleaning data, and standardizing processes. None of this is happening because of AI—it’s called AI transformation because of AI. But it’s happening nonetheless. And once the background is clean enough, what was previously “just a narrative” suddenly becomes operational reality.

It is this dual nature that makes it so difficult to assess the situation. Those who say, “it’s just an alibi”—are telling the truth, but not the whole truth. Those who say, “it’s a real revolution”—are also telling the truth, but not the whole truth. The reality is that the alibi accelerates reality. The narrative is not a lie—but a self-fulfilling prophecy.

The Final Question That Really Matters

In your work, what part is context-poor and easily verifiable—and what part is context-heavy, responsibility-laden, and therefore retains its human value even when the machine is sitting right next to you, optimizing the metrics?

This is not a rhetorical question. It is the most important career strategy question you can ask yourself in 2026. Because the answer will tell you which part of you they are paying for today—and which part of you they will pay for tomorrow.


Key Takeaways

  • The narrative is about job cuts; AI is the alibi — but behind the alibi lies real technological momentum, and the mechanism of self-fulfilling prophecy is at work
  • AI does not eliminate professions, but automates task patterns — vulnerability does not affect the profession as a whole, but rather those segments that are context-poor, verifiable, and data-rich
  • Context-poor does not mean worthless — it means the task can be performed without deep organizational and tacit knowledge, within a local framework
  • Proxy verification is the trap — Goodhart’s Law in AI adoption: what you measure gets optimized, not what you actually want
  • The figure–background relationship is the real issue — companies aren’t introducing AI, but making the organizational background machine-accessible
  • Reward hacking is the invisible danger — the system learns to look good on metrics without producing real value
  • Survival value is context-heavy and responsibility-laden — it exists where “good” is not a metric, but a consequence

Key Takeaways

  • Layoffs justified by AI are often not driven by technological necessity but serve as a communication alibi to make cost-cutting appear as a modern and strategic decision. Real automation is currently advancing in two areas: coding and customer service.
  • AI does not replace entire professions, but automates those context-poor task patterns for which three conditions are simultaneously met: a repetitive pattern, easily measurable performance, and industrial-scale training data.
  • Companies are reorganizing their workflows precisely to increase the proportion of machine-manageable tasks, which gradually reduces the human role even in the “remaining,” more complex tasks.
  • The phenomenon of reward hacking is also evident in corporate practice: the system optimizes what is measured (e.g., ticket closure, CSAT score), not what creates real value, as is also seen in the training of artificial intelligence.
  • The success of AI adoption is determined not by the profession, but by the machine accessibility of the organization’s background context, which can be explained based on the figure–ground relationship known from Gestalt psychology.
  • As [Harari] points out regarding artificial relationships, the effectiveness of the AI alibi stems in part from the fact that narrative construction and the creation of “pseudo-intimacy” are capable of persuading and directing, regardless of the underlying technical reality.

Frequently Asked Questions

Is AI really causing the layoffs, or is it just an excuse?

Both at the same time—and that is the crux of the matter. Layoffs are typically driven by complex business reasons: market pressure, overspending, and the pursuit of efficiency. The AI narrative makes these decisions communicable—it sounds modern, strategic, and forward-thinking, which is better than saying, “We planned poorly.” At the same time, there is real technological capability behind the narrative: AI is truly capable of automating task patterns where the three conditions (context-poverty, verifiability, data richness) are met. The alibi and reality are not alternatives to each other—the alibi accelerates reality.

What is the difference between context-poor and context-heavy work?

Context-poor work can be performed within a local framework: the ticket is given, the rule is given, the process is given, and executing the task does not require knowledge of the organization’s entire history, policies, or unspoken rules. Context-heavy work, on the other hand, requires tacit knowledge (knowledge that cannot be described, only experienced), organizational memory, domain depth, and the accountability necessary for trade-off decisions. The context-poor slice is what AI automates first and most easily—but the context-heavy slice is what remains a human value.

How do I recognize which part of my own work is vulnerable?

There are three questions you should ask yourself. First: “Could someone who doesn’t know my organization’s history and unspoken rules perform my task?” If yes, it’s context-poor. Second: “Is the successful completion of my task indicated by a clearly measurable output (test, score, metric)?” If yes, it’s easily verifiable. Third: “Is there a large amount of data available on work similar to my task?” If yes, it is data-rich. Where all three answers are “yes,” vulnerability is greatest—not tomorrow, but today. The strategy is not to hide, but to shift your focus toward the context-heavy and responsibility-laden aspects.



Zoltán Varga - LinkedIn
Neural • Knowledge Systems Architect | Enterprise RAG architect
PKM • AI Ecosystems | Neural Awareness • Consciousness & Leadership
The alibi always arrives before the automation.

Strategic Synthesis

  • Identify which current workflow this insight should upgrade first.
  • Set a lightweight review loop to detect drift early.
  • Close the loop with one retrospective and one execution adjustment.

Next step

If you want your brand to be represented with context quality and citation strength in AI systems, start with a practical baseline and a priority sequence.