VZ editorial frame
Read this piece through one operating lens: AI does not automate first, it amplifies first. If the underlying decision architecture is clear, AI scales clarity. If it is noisy, AI scales noise and cost.
VZ Lens
In VZ framing, the point is not novelty but decision quality under uncertainty. The language model doesn’t hear the silence before the sentence, nor does it see your body trembling. The hidden layers of human speech reside where the machine’s blind spot begins. The practical edge comes from turning this into repeatable decision rhythms.
TL;DR
- Human speech is not text—it is body, gaze, pauses, and the shared construction of reality, which no single large language model can reproduce because it lacks the entire layer of embodied presence.
- The disappearance of silence in human-machine interaction is not a communication problem, but a loss of the space for thought—the realm where thought is still unformed and, precisely because of that, creative.
- Language models not only reorganize the speed of thought but also the language of thought: the contours of our Hungarian sentences are slowly beginning to follow English structures, because behind the model lies an interpretive logic optimized for English.
- The machine does not think for us, but it integrates into the rhythm of our thinking—and the real question is not what it knows, but how sensitive we remain to what is human.
Brutally primitive, yet they rewrite the rhythm of consciousness
Human speech is not text, but body, gaze, pause, and the joint construction of reality all at once. Large language models (LLMs) see only the text in this: they do not perceive body language, tone of voice, the weight of silence, or risk. This makes them brutally powerful text-generating machines—and terribly weak interactive beings. This article explores where the ontological boundary lies between humans and machines.
To think about the world, I must use language. Not just “speak” in a general sense, but inhabit an invisible space created by words, sentences, gestures, and tones of voice. It is in this space that it is determined what I mean by “yes,” what I mean by “it’s okay,” what I mean by “no problem”—even though my body, my gaze, and my pauses say something entirely different.
This is the space that today’s large language models (LLMs)—no matter how spectacularly they perform—still view from a great distance.
To me, the difference between human speech and current artificial intelligence is like that between Jupiter and Earth: they exist in the same system, they respond to the same physical laws, but they have different scales, different densities, different matter. You don’t build a house on Jupiter, and you don’t project the existence of a gas giant onto Earth. Both are planets—but humans live on the surface of one, while the other is uninhabited.
While everyone is talking about how artificial intelligence “thinks,” I prefer to start by asking what we actually mean by “thinking”—and how that relates to speech. Because if we take this seriously, it turns out that the gap between the model and humans is not a technological lag, but an ontological difference. It’s not that the machine hasn’t gotten there yet—it’s that it’s heading in a different direction.
Why don’t words mean what they mean?
In everyday situations, words are never alone. The same word is a category in one sentence, a pointer in another, a threat in a third, and an escape route in a fourth.
If I say, “I am a man”—in one context, this is self-revelation. In another, it’s a defense. In yet another, it’s a joke hiding shame. In a fourth, it’s a diagnosis that suddenly reshapes the image others have of me.
The “meaning” of a word is not found in the dictionary. Meaning lies in the situation in which I utter it; in the relationship in which I utter it; and in the story that, by that point, we have already constructed about me and about us. Wittgenstein put it this way: the meaning of a word is its use. But he didn’t tell the whole truth either, because behind the use there is the body, the gaze, the rhythm of breathing—and together these constitute the full gravitational field of meaning.
Human thought is therefore context-sensitive, relationship-sensitive, and stretched out in time. A thought is not an isolated, pure piece of data, but a condensed reference to an entire life-world. When I utter a sentence, my entire life story up to that point resonates within it—failures, shame, successes, losses. Behind a model, the equivalent of this is not a life path, but a pile of text.
When I talk about how “primitive” current artificial intelligence is in comparison, this is the layer I find missing from it: the sensitivity to the fact that the same sentence means something different today than it did yesterday; that the same “I’m fine” carries a completely different meaning in a message written at three in the morning on a Tuesday than it does over Sunday lunch.
Language does not reflect, but shapes—every sentence is a stance
Language is not just a mediator, but a filter and a force field. Every sentence is a stance. When I say, “I don’t think this is the right direction,” I am not simply expressing an opinion—I am marking my place within a force field, signaling my relationship to the other person, to the topic, and to the power structure in which the conversation takes place.
Digital platforms reorganize the power dynamics behind sentences. When a sentence means something different today than it did yesterday, it is not merely a change in context, but a reorganization of the underlying discourse power. The platforms’ algorithms actively participate in determining what counts as legitimate, supported, loud, or silenced. We do not “choose” each other. The platform’s algorithms select us for one another. We do not merely inhabit social space: it is programmed around us.
This means that language as an instrument of power has not become neutralized in the digital space—rather, it has become more intense, operating only more invisibly. In this system, the language model is not an innocent responding machine, but another layer in the force field: its responses are also stances, only they lack stakes, responsibility, and a face.
Conversation is not message-sending, but the joint construction of reality
If I take seriously the way people actually converse, then the whole “message-sending” metaphor falls short.
It’s not about me being here as a source of information, you being there as a receiver of information, with a channel in between, and “noise” impairing the transmission. That’s a good illustration for a landline phone troubleshooting guide, at best. Shannon and Weaver’s communication model (1948) was brilliant for the engineering problems of telecommunications—but when applied to human conversation, it leaves out precisely what matters most: that conversation is not data transmission, but the joint creation of a shared world.
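For the record, the engineering core of that 1948 theory fits in one line: the Shannon–Hartley formula, which bounds how many bits per second a noisy channel can carry.

$$C = B \log_2\!\left(1 + \frac{S}{N}\right)$$

Here C is the channel capacity in bits per second, B the bandwidth in hertz, and S/N the signal-to-noise ratio. Every term is physical; none of them says anything about what the bits mean. That absence is exactly the point of what follows.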
In reality, conversation is much more a joint construction of reality:
- When I look at the other person, I’m already reacting to their posture.
- When I take over the conversation half a second early, I’m already signaling what I think about what they’re saying.
- When I stumble over a word and correct myself, I’m positioning myself at the same time: “Sorry, that’s, uh, too…”
In everyday conversations, we maintain our shared understanding of the world, ourselves, and our relationship through countless micro-gestures. Meaning is constantly in flux. Every response I give is not only a reaction to the content of what you said, but also an interpretation: this is how I understand it, how I assess it, how I frame it.
This continuous “fine-tuning” is done by the body, the pause, the tone of voice, the truncated sentence, the gaze. It is no coincidence that when someone says, “I’m not angry,” we often know full well that they certainly are. And not because it’s illogical, but because the nonverbal, temporal, and relational channels convey something entirely different from the statement itself.
The machine sees none of this. It receives the text—and the entire human layer behind the text is missing. It is like having the sheet music of a piece of music without the dynamics, the agogics, the timbre, the room’s acoustics, the audience’s breath.
What happens when silence disappears from thought?
There is something we rarely notice because it is too close to us: that human thought is not continuous.
The rhythm of understanding is not provided by words, but by the space left between them: the glances away, the slow breaths, the tiny micro-pauses during which the mind begins to organize itself. Silence is not an absence, but a space—it is within this space that what we will eventually say is born. Silence is the realm where thought is still formless, where uncertainty is a creative force, where a sentence can still be anything before it takes on a single form upon being spoken.
The model, however, does not know silence. It leaves no time to breathe, does not wait for the thought to ripen. The response is always continuous. It is as if a new kind of pressure arises: attention cannot “break away,” cannot turn inward. The narrow space between thought and response—where the sentence is still formless—disappears.
As we talk more and more with the machine, our own rhythm slowly shifts as well. The pauses disappear. Our right to silence begins to evaporate. Not because anyone is taking it away, but because we grow accustomed to an interaction where there is no place for it. And where there is no place, sooner or later the inner space shrinks as well.
This is not a metaphor. It is a concrete restructuring of our attention. Anyone who spends four or five hours a day in dialogue with a language model learns that the response comes immediately, that pauses are unnecessary, that uncertainty is not a productive state but something that must be prompted out of the system. And this learning also affects the way we speak with other people—less tolerance for slowness, less patience for silence, less room for what is not yet finished.
With the loss of silence, it is not just a form of communication that disappears. It is one of the fundamental, human mediums of thought.
The Outsourcing of Thought — When Step 0 Is Relocated
This is perhaps one of the most unnoticed and profound ways in which artificial intelligence is transforming thought. It’s not about the model helping to formulate a sentence or summarizing a longer text. These are superficial operations. The profound change begins when we slowly outsource to it the very moment of thinking.
That moment when I don’t yet know exactly what I want to say, but I just feel that I need to touch on something. That vague, soft, raw state where a thought begins to take shape.
We could call this step 0 of thinking.
And more and more often, I am not the one doing this: I leave it to the model to initiate, to raise the question, to provide the first outline. Thus, I am slowly crossing the threshold of thought not from within, but from without, through an external agent.
There is no fear in this. There is no sense of danger. Yet it is profound: for where the beginning of thought lies, there lies the true source of action. The first internal movement of the sentence is one of the most subtle layers of identity. When I outsource the beginning, I barely perceptibly externalize the “first sign of the inner voice.” It is not that the model thinks for me, but that the starting point of thought is transferred to another medium. And this is enough for the entire arc of thought to be different.
This change is a slow, quietly unfolding phenomenon. It is dangerous precisely because it does not appear to be so. It is as if the thought’s own center of gravity were shifting into an external system—and the system asks for no permission, gives no signal, offers no warning. It simply takes over the space that was previously filled by inner silence.
The Shift in Meaning-Making — How Consciousness Meets the Machine
The functioning of consciousness is a delicately balanced system: perception, interpretation, integration, decision, feedback. Within this cycle, the first movement of interpretation was always internal. The world first passed through me; the thought first took on meaning within me, and only then did it become a word, a sentence, an action.
The model, however, enters precisely this cycle, not at the end but at the beginning. The sequence transforms as follows:
- I perceive something.
- I ask the model what it means.
- I react.
- And only then do I process it internally.
The first gesture of meaning-making is transferred to the machine. Along with it, the point where the world and consciousness meet. This is one of the most subtle, quietest turns in the history of thought—because it is not dramatic, not threatening, not even really visible. It simply happens.
The model not only responds but takes over the first movement of meaning-making. That operation which was previously the innermost, most intimate part of the self. From this point on, it is not only the text I receive that matters, but also where the thought originates: from within or from without.
This is not a threat, nor a prophecy. It is a simple description of what is happening: a new layer is wedged between the world and consciousness, and consciousness is slowly adapting to it. Just as it adapted to writing, to the printing press, to the telegraph, to the telephone—only this time the intermediary layer does not assist in the transmission of words, but rather precedes consciousness in the interpretation of words.
“As-if interaction”—when a model talks back
Now let’s imagine the same scenario with a language model. I’m sitting in front of the computer, I type something, and a response comes back. It looks like a conversation. There are questions, there are answers, there’s style, sometimes even humor. Everything points to an interaction taking place.
But in reality, it’s only a pseudo-interaction.
I say “as if” because the key layers are missing. The model can’t see my body. It can’t hear my voice. It doesn’t sense the two-second pause before I speak. It doesn’t know how much my hands are shaking as I type the question. It doesn’t know that I’m rewriting it for the fifth time because I’m afraid to say it out loud.
It doesn’t sense how I’m sitting in the chair, how far I’m leaning forward, or how far I’m leaning back. It has no access to that layer of interaction carried by the whole body, which forms the basis of human conversation. Merleau-Ponty’s Phenomenology of Perception was precisely about this: understanding does not take place in the mind, but in the body—the body that is in the world, that moves with the world, that is made of the world’s flesh.
Moreover, the model risks nothing. I have a face in this conversation: I can be embarrassed, I can be too naive, too aggressive, too pedantic, too vulnerable. A model has no face. It is not ashamed of itself. It is not afraid of hurting anyone. It isn’t afraid of saying something stupid either. At most, the developers will tweak it if it makes a big mistake.
It doesn’t have a life story of its own either. When I answer you, whether I mean to or not, my entire life up to this point speaks through me: failures, embarrassments, successes, relationships, losses. Behind a model, the equivalent of this is not a life story, but a pile of text. A statistical snapshot of previous sentences, articles, books, comments, forums, and studies. What it “knows” is not knowledge in the way a human knows something. It has no personal horizon against which to relate what it says.
And finally: technically, all the model does is try to predict the next word (next-token prediction). In an incredibly sophisticated way, using a ridiculously large amount of data, but the point is this: probabilistic prediction. It doesn’t know whether the sentence it’s writing is actually a comfort, a distancing, a boundary violation, a flirtation, or an ironic compliment. It only knows that those words, in that order, have frequently appeared one after another in similar contexts in other texts.
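To make “probabilistic prediction” concrete, here is a deliberately tiny sketch of that final step, in Python. The five-word vocabulary and the logit values are invented for illustration; a real model performs the same operation over a vocabulary of roughly a hundred thousand tokens, with scores produced by billions of learned parameters.

```python
import numpy as np

# Toy illustration of next-token prediction. The vocabulary and the
# logits below are invented; in a real LLM the logits come from the
# network after reading the whole prompt.

rng = np.random.default_rng(42)

vocab = ["fine", "tired", "okay", "done", "."]

# Pretend the model has just read the prompt "I'm" and produced one
# raw score (logit) per candidate next token.
logits = np.array([2.1, 0.3, 1.7, -0.5, -1.2])

def next_token_distribution(logits, temperature=1.0):
    """Softmax: turn raw scores into a probability distribution."""
    z = logits / temperature
    z = z - z.max()          # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

probs = next_token_distribution(logits)
for token, p in zip(vocab, probs):
    print(f"{token!r}: {p:.2f}")

# Sampling picks ONE continuation. Nothing in this step knows whether
# "fine" is a comfort, a boundary, or an evasion -- only that it was
# frequent in similar contexts.
print("next token:", rng.choice(vocab, p=probs))
```

Notice what is absent from this step: there is no channel for situation, relationship, or history, only a score per candidate word.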
This is what makes the model a brutally powerful text generator, and this is why it remains a terribly weak interactive entity.
Why doesn’t the language model understand Hungarian?
There is something else that is rarely discussed, yet it determines what language models understand and what they don’t. What kind of world they see, and what kind of world they will never see. And that is the language in which they learned about the world.
Most models are hardwired for English. Not just at the level of vocabulary, but deeply, in its structure, rhythm, metaphors, and logic. When I speak Hungarian to such a system, it actually always translates the sentence internally into another linguistic space. It’s as if an invisible translation machine were running in the background of every instance of understanding—and the translator doesn’t know the dense intimacy of Hungarian thought, the hidden allusions, the meaning carried in subtle silences, the unsaid.
The model, therefore, never “understands” in Hungarian; it only responds in Hungarian.
And this is palpable. The responses are smooth, grammatically correct, logical—but they rarely come together the way they do in a Hungarian mind. Because the model wasn’t trained in Hungarian cultural memory, didn’t gain its perspective in Hungarian speech situations, and wasn’t attuned to Hungarian rhythms. It lacks that distinctive, subtle, soft Central European way of speaking, where “it’s okay” also means “it’s very much a problem,” and “I’m fine” means “don’t go any further, please, this is the limit.”
The deeper problem is that when I give prompts in Hungarian, I am actually speaking to a cognitive mechanism optimized for English—a model that does not embody the Hungarian semantic framework, but rather an interpretive logic crystallized within a global, Anglo-Saxon context. The Sapir–Whorf hypothesis (linguistic relativity) is precisely about this: language is not mere coding, but a form of thought. Different language, different thinking—and if English thinking is at work behind the model, then the Hungarian response will always remain a translation, even if it seems perfect.
This gives rise to a new kind of linguistic distortion: the Hungarian sentence retains its sound structure, but it may lose its own internal gravity. It’s as if you were replacing the ground beneath it. As if an English structure were running beneath every Hungarian sentence.
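One measurable trace of this appears even before any “understanding” begins, at the tokenizer level: morphologically rich Hungarian text is typically shredded into more, smaller pieces than English text of similar meaning, because the subword vocabulary was carved mostly from English data. A minimal sketch, assuming the tiktoken library and its cl100k_base encoding; the sentence pair is illustrative:

```python
import tiktoken  # pip install tiktoken

# Count how many subword tokens an English-heavy BPE vocabulary needs
# for an English sentence versus a Hungarian one of roughly the same
# meaning. More tokens per word ("fertility") means the language is
# represented in smaller, less meaningful fragments.
enc = tiktoken.get_encoding("cl100k_base")

pairs = {
    "en": "I'm fine, don't worry about me.",
    "hu": "Jól vagyok, ne aggódj miattam.",
}

for lang, sentence in pairs.items():
    tokens = enc.encode(sentence)
    words = len(sentence.split())
    print(f"{lang}: {words} words -> {len(tokens)} tokens "
          f"({len(tokens) / words:.1f} tokens/word)")
```

Higher fertility does not by itself prove a different interpretive logic, but it is a concrete sign that the model’s elementary units of meaning were not cut along Hungarian seams.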
The result is that when a model starts “talking back” to me, it speaks in my own language, but not according to my own way of thinking:
- My Hungarian sentences bounce back filtered through the logic of a foreign language.
- The model’s responses carry a subtle, barely perceptible “different way of thinking.”
- The rhythm of the interaction slowly reshapes the way I begin to formulate my thoughts.
- And most importantly: I start optimizing my own sentences for the model, not for the other person.
Slowly, imperceptibly, my native language is being re-tuned. Not in vocabulary, but in the deeper layers of thought.
The contours of Hungarian sentences begin to follow English structures. Our interpretation of the world adopts the cultural structure of the language behind the model. And this is present everywhere we interact with the machine.
That is why it is important to state: large language models do not merely reorganize the speed of thought, but the language of thought as well. And language is not simply a tool. Language is a form of consciousness. That is why the change will be truly radical: it does not take anything away from us that we would notice—but simply shifts the linguistic framework within which we think. And by the time we realize it, even our own Hungarian sentences will follow a different logic.
New medium, new force field—the spaces of conversation are being rearranged
Yet I feel it would be a mistake to dismiss it and say, “Come on, this is nothing, it’s just a smarter Google.” It isn’t. It is primitive in its depths, but very serious indeed in its effects.
There used to be a world where information flowed through various spaces: books, newspapers, radio, television, telephone, email, social media. Each one redefined who sees what, who knows what, and who is in the loop and who is left out. With television, the politician entered the living room. The telephone connected the private sphere to the outside world. Social media blurred the lines between the workplace and one’s circle of friends. Marshall McLuhan put it this way: the medium is the message. What matters is not what it conveys—but how it reorganizes space, time, and attention.
Language models are another layer. They not only bring a message into the space, but they also speak back. I don’t just read, I ask questions—and I get answers. It is this apparent reciprocity that confuses many people. We feel that there is “someone” on the other end.
But there isn’t “someone” there; rather, there is a new medium through which I have access to a gigantic linguistic ecosystem. The medium does not eliminate the order of human interaction; it does not make it unnecessary for two people to sit down together.
But it rearranges the balance of power:
- People will be “informed” in a different way.
- People will be “prepared” for a conversation in a different way.
- We will assess people’s knowledge in a different way.
- We will discuss professional issues in a different way.
- We will prepare for a therapy session, a meeting, or a presentation in a different way.
The medium does not take away the conversation. It redraws the antechambers of conversations. It reshapes the space where we are before the conversation—and in doing so, it changes the conversation itself, without taking a single word away from it.
How does artificial intelligence reorganize cognitive work?
If I look for its impact where it is truly strong right now, the first thing that comes to mind isn’t “replacing humans,” but rather that it reorganizes cognitive work.
It has already become a co-pilot in a whole host of areas: coding, documentation, legal research, marketing copywriting, gathering background research materials, and creating lesson plans and training programs.
What used to take two hours suddenly takes ten minutes. What used to take five days now takes half a day. What used to never even get started because it seemed too daunting is now completed in an afternoon.
Not because it “thinks for me,” but because it takes over the tedious, repetitive, unstructured part of what we call thinking: the initial sketching, the shuffling of raw material, the first draft, and rewriting texts in a different style.
However, this also changes the nature of cognitive work. Suddenly, what becomes valuable is the ability to ask good questions, to brief precisely, to quickly recognize what is useful and what is worthless, to refine the model, and to distinguish between seemingly good text and genuinely good ideas.
This requires a new kind of literacy: prompt literacy. The ability to think in sentences that are understandable not only to humans but also to models, and to guide a system step by step toward what I want. But the real depth isn’t in the prompting—it’s in whether I know what I want before I ask. Because if I don’t know, the model is happy to give an answer—and that answer makes me believe I did know.
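To make “briefing precisely” concrete, here is a hedged sketch of the difference between a vague request and a precise brief, written in the widely used role/content chat-message structure; the texts and the scenario are invented for illustration, and no particular provider’s API is assumed.

```python
# Two ways to ask for the same thing, in the common role/content chat
# format. The wording is illustrative, not a recipe.

vague_prompt = [
    {"role": "user", "content": "Write something about our product."},
]

precise_brief = [
    {"role": "system",
     "content": "You are an editor for a B2B software blog. "
                "Tone: sober, concrete, no superlatives."},
    {"role": "user",
     "content": "Draft a 150-word product update note for existing "
                "customers. Cover: (1) what changed, (2) why it matters "
                "to them, (3) one next step. If anything essential is "
                "missing, ask me one clarifying question before writing."},
]

# The second version encodes what the first leaves to chance: role,
# audience, length, structure, and an explicit escape hatch for
# missing information -- which is what "briefing precisely" means.
for msg in precise_brief:
    print(msg["role"], "->", msg["content"][:60], "...")
```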
Speed changes the game, too. The question isn’t “can you write a two-page summary,” but what you do with the fact that you get five different versions in five minutes. Can you choose between them? Can you comb through them? Can you rise above them in your mind? Do you know where to insert your own thoughts, where to shift the emphasis, where to make a bolder statement?
Who does it actually replace—and who doesn’t it?
The line between current artificial intelligence and human relationships is much sharper than many people think.
What it will never replace:
Artificial intelligence is fundamentally incapable of replacing situations where the essence of human connection lies in physical presence.
- In therapy sessions, it is not only the spoken words that matter, but also the therapist’s subtle reactions, the tension in shared silences, and the sense of safety that develops in the room. Carl Rogers — the founder of person-centered psychotherapy — called this “unconditional positive regard.” This cannot be faked.
- In relationship conflicts, the solution often lies not in logical reasoning, but in taking risks and embracing vulnerability, which is validated by the other person’s genuine reaction.
- In creative work processes, the emergence of a shared mental space, the synergy, and the spontaneous generation and collision of ideas are human dynamics that a machine can mimic algorithmically but cannot create.
- The depth of conversations among friends does not stem from the exchange of information, but from a shared history, trust built over time, and mutual vulnerability.
In these situations, artificial intelligence is not a competitor but a completely different category—just as the camera did not replace painters, but merely reshaped the art world.
What it will, however, inevitably replace:
Artificial intelligence will sweep away precisely those activities whose essence lies in the mechanical processing and repackaging of information:
- The lower and middle tiers of content production, where a significant portion of the work consists of searching, cutting and pasting, and rephrasing.
- Expert roles that essentially only convey information without genuine evaluation or contextualization.
- Routine report writing, documentation, and administration, which consists of following formulas.
- Basic creative tasks, where “creativity” actually amounts to nothing more than applying variations and templates.
The difference lies not in the job title, but in the nature of the added value. For those who work only at the surface level, this is indeed an existential challenge. But those who are able to build genuine human connections, create context, apply critical judgment, take responsibility for their decisions, bring original thinking to problems, and represent a moral dimension—they not only survive but become far more effective. Because the machine takes over the mechanical parts, freeing up their time and attention for truly human contributions.
Artificial intelligence does not “take away jobs”—it dispels the illusion that certain mechanical activities deserve full-time human attention. And therein lies its truly revolutionary impact.
What should I, as a thinking, speaking person, make of this?
As I see it, there are two things worth paying attention to at the same time.
First: we must not confuse simulation with reality. Just because a model “sounds understanding” doesn’t mean it understands what’s happening in my life. Just because it “speaks empathetically” doesn’t mean it feels the tightness in my chest. That’s why in any situation where the stakes are high—in therapy, in a relationship, in grief, in post-trauma work, in the search for identity—I would insist on working with a human being. A machine might help with note-taking, structuring, and phrasing, but real collaboration happens between people.
The other point: we shouldn’t pretend it isn’t here, or that it isn’t already reshaping the way we think and work this profoundly. If someone decides today to “stay out of this,” it’s a bit like saying, “I’m fine with carrier pigeons; I don’t want email.” You can live that way, but the world will keep reorganizing itself without you.
I’d rather go in this direction:
- I’ll learn how to use the model.
- I’ll learn how to ask it the right questions.
- I’ll learn how to improve it, correct it, and debate with it.
- But in the meantime, I won’t forget that it’s not human.
And in the meantime, I’ll very consciously protect what is human: those conversations where we sit face-to-face in person, where there can be silence, where we take risks, where you can’t hit “undo” on a spoken sentence. We need to reclaim these spaces for ourselves now.
Because if there’s one thing I truly believe, it’s that in the future, the final “firewall” won’t be what the machine can do, but how sensitive we remain to what is human.
The machine writes the letter for us, writes the report, writes the summary. But when we send it, to whom, in what tone, and why—that remains our responsibility. And this is precisely the layer where humans will remain irreplaceable for a very long time.
At the same time, every technology reorganizes how we perceive the world. Every device is a new “body part” through which we experience the world. LLMs function as an “extended mind” (https://en.wikipedia.org/wiki/Extended_mind_thesis)—but they are not mere tools. Their use changes the relationship between near and far; it changes the structure of attention. Technological mediation is not neutral—it is a transformation of perception.
We must not underestimate how profoundly all this reorganizes thought. For what is happening is not a simple technological shift, but a slow rearrangement of the space of consciousness. Where thought used to be the first step, asking a question is now often the first step. Where an internal monologue used to take place, an external linguistic field now echoes. It is as if the edges of consciousness were opening, and a new voice were flowing into the inner corridors of thought—a voice that is not us, yet builds the world together with us.
The machine does not think for us, but it integrates into the rhythm of our thinking. It occupies the space where we used to spend long hours searching, wrestling, experimenting, and rewriting. Now all of this happens in an instant, and the question is no longer what the machine knows, but what we do with this new, expanded consciousness. With this strange, external-internal co-thinker, which is both a support and a challenge. It offers a broader horizon—but only if we are alert enough to recognize that the future of thinking lies not in the machine, but in how we humanize what we create together with it.
Key Ideas
- Human speech is not text—but body, gaze, pause, gesture, and the shared construction of reality; the linguistic model perceives only the text, while all other layers are missing.
- Silence is not an absence, but the space of thought — in interaction with the model, this space disappears, and with it the realm of formless, creative uncertainty.
- Step 0 of thinking is displaced — increasingly often, the thought does not originate from within, but an external agent provides the initial outline, and this alters the entire arc of thought.
- The model does not understand Hungarian, it only responds in Hungarian — the interpretation logic optimized for English invisibly reshapes the internal gravity of Hungarian sentences.
- “Pseudo-interaction” is not interaction — faceless, risk-free, bodiless, and devoid of life history, the model is a brutally powerful text-generator and a terribly weak interactive entity.
- The machine does not take away jobs — but it dispels the illusion that certain mechanical activities deserve full-time human attention.
- The final firewall is not the machine’s knowledge — but rather how sensitive we remain to what is human: the body, silence, risk, and the weight of presence.
Key Takeaways
- Human speech and thought are deeply physical, social, and temporal processes that go beyond text to rely on gaze, pauses, and the shared construction of reality. No single language model can reproduce this embodied presence because it lacks an entire layer of sensitivity to context and relationships.
- Language does not merely convey information; every sentence is a stance taken within an invisible force field. The responses of digital platforms and LLMs are also such stances, but there is no personal stake or responsibility behind them.
- The language and rhythm of our thinking are transformed by the influence of language models: Hungarian sentence structures, for example, may begin to follow English patterns because the logic behind the model is optimized for English. This is not merely a technical but a cultural transformation.
- The true essence of conversation is the joint construction of reality, not the transmission of messages. As Harari also mentions in CORPUS regarding the power of stories, the foundation of human communication is the shared creation of meaning, which is absent from machine interaction.
- The machine does not think for us, but it integrates into our thought processes. The most important question is not what the machine knows, but how sensitive we remain to elements of human speech and thought such as silence, risk, or physical presence.
Frequently Asked Questions
Why do you say that LLMs are “brutally primitive” if they are so advanced?
Because “advanced” here is relative to what we’re measuring against. In text processing, pattern recognition, and next-word prediction, these systems are indeed unparalleled in their power—by comparison, the human brain is slow and imprecise. But human interaction isn’t text processing. Human conversation is embodied, time-stretched, relationship-sensitive, risk-filled co-construction of reality. Compared to this, the model is truly primitive: it doesn’t see bodies, doesn’t hear tone of voice, doesn’t sense pauses, doesn’t take any risks. It’s like comparing a Formula 1 car to a person walking: the car is faster, but the person can sit down on a bench and watch the sunset. Different dimensions, different standards.
How does using a language model affect one’s native language?
Most large language models are based on English-language logic. When I provide prompts in Hungarian, the system processes my sentences using an interpretation mechanism optimized for English, and the response—no matter how flawless the Hungarian may be—bears the imprint of this English logic. The real danger lies not in the responses themselves, but in the user’s adaptation: they begin to optimize their own sentences for the model, simplifying structures, avoiding ambiguity, and eschewing the distinctive Hungarian style of expression. Slowly, imperceptibly, the deeper layers of thought are realigned—not in vocabulary, but in the internal gravity of sentences. According to the Sapir–Whorf hypothesis, language is not mere coding, but a form of thought. If the form changes, so does thought.
What is “Step 0 of thinking,” and why is it important?
Step 0 of thinking is the moment when I don’t yet know exactly what I want to say—I just feel that something needs to be addressed. It is that vague, formless, raw state where a thought begins to take shape. This is the most intimate mental act: the first stirrings of the inner voice. When I entrust this to the model—“give me an idea,” “suggest some perspectives,” “start something on this”—I don’t cross the threshold of thinking from the inside, but from the outside. The model does not think for me, but takes over the task of marking the starting point. And this is enough to make the entire arc of thought different. It is not a dramatic change, there is no sense of danger in it—which is precisely why it is profound: because where thought begins, there lies the true source of action.
Related Thoughts
- Polányi’s Paradox: Tacit Knowledge That Machines Cannot Articulate — the layers of knowledge that cannot be formalized
- The Architecture of Thought — how what we call thinking is constructed
- The Algorithm of Presence — at the boundary between consciousness and technology
Zoltán Varga - LinkedIn
Neural • Knowledge Systems Architect | Enterprise RAG architect
PKM • AI Ecosystems | Neural Awareness • Consciousness & Leadership
The machine predicts the next word. The human experiences the last one.