The Chatbot isn’t a lie, but it’s also not the whole truth.

The reason we’re ‘talking’ to AIs says so much about us and how weird we are.

The long-standing friction between human intuition and algorithmic logic is more than just a technical hurdle; it is a fundamental clash between two entirely different ways of processing reality. On one side, we have the human mind—fluid, context-dependent, and deeply social. On the other, we have computational processes—rigid, mathematical, and strictly bound by formal rules. This divide has shaped the history of artificial intelligence, influenced the way we understand economics, and today, it creates a profound "user experience" mismatch that leaves many people feeling both amazed and deeply frustrated by modern technology. To understand why we often feel like we are speaking a different language than our devices, we have to look at the "Boolean dream" that started it all and how it fundamentally misunderstood what it means to be human.

For decades, AI researchers operated under the assumption that human thought was essentially a form of high-level computation. The hope was that if we could just map out enough formal rules and logical symbols, we could recreate the human mind inside a machine. This "top-down" approach, however, missed the most important part of how we actually function: our subcognitive processes. Most of what we consider "common sense" or "intuition" happens in the brain in less than 100 milliseconds. It isn't a series of math equations; it is instantaneous, unconscious pattern recognition. While a computer is world-class at explicit calculations like winning a chess match or analyzing a medical scan, it lacks what Douglas Hofstadter calls "slippability": our intuitive ability to shift perspectives, recognize abstract analogies, and navigate the messy, overlapping boundaries of real-world concepts. Algorithms, by contrast, are brittle. They are highly precise but entirely devoid of the nuance and "give" that allow humans to survive in an unpredictable world.

Understanding our own intelligence

This gap becomes dangerously apparent when we look at something like autonomous driving. When you are behind the wheel, you aren't just calculating distances; you are using a "theory of mind" to guess what a distracted pedestrian or an aggressive driver might do next. You use "intuitive physics" to foresee if a cup on the dashboard is about to spill. A self-driving car doesn't have an imagination or a sense of "weirdness". It translates the world into a strictly geometric grid and estimates trajectories based on probability. Because it lacks sentience, it can easily fail when confronted with a real-world fluke that wasn't in its training data—situations that a human would navigate without even thinking about it. This is the core of "technochauvinism": the mistaken cultural belief that because a system is mathematical, it must be more objective or "smarter" than human judgment. In reality, reducing a complex social situation to a math problem often means losing the very context required to understand it.
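To make that difference concrete, here is a toy sketch in Python. It is hypothetical and nothing like a production perception stack (real systems use probabilistic motion models over far richer inputs), but it captures the basic posture: the machine "predicts" a pedestrian by extrapolating position from velocity, and nothing else.

```python
import numpy as np

def predict_trajectory(position, velocity, horizon_s=3.0, dt=0.5):
    """Constant-velocity extrapolation: where geometry says the point
    will be, not where a distracted human might actually go."""
    steps = int(horizon_s / dt)
    return [position + velocity * dt * t for t in range(1, steps + 1)]

# A pedestrian at the curb, drifting toward the road at 0.4 m/s
# (positions in meters, in a car-centric frame).
pedestrian_pos = np.array([12.0, -2.0])
pedestrian_vel = np.array([0.0, 0.4])

for point in predict_trajectory(pedestrian_pos, pedestrian_vel):
    print(point)

# The model outputs a smooth line into the street. It has no way to
# represent "they're staring at their phone" or "they'll stop at the
# curb" -- the context a human driver infers without thinking.
```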

We see this same tension play out in how we spend money and make decisions. Traditional economics used to rely on the "expected utility model," which assumed people were like little algorithms, always choosing the option that maximized their gain. But psychologists like Daniel Kahneman and Amos Tversky proved that we are rarely that logical. Our intuition is swayed by how a choice is "framed". For instance, we might avoid a risk if it’s described as a potential gain but seek out that same risk if it’s described as a potential loss. Marketers have known this for a long time; they realize that consumer behavior is driven by subconscious emotional associations rather than cause-and-effect logic. We usually make a non-logical, gut-level decision first and then use "logic" later just to justify it to ourselves.

Enter Chatty Robots

Today, this conflict has reached a fever pitch with the rise of conversational AI. The problem is that roughly 30% of the human brain is hardwired for social interaction. We are evolutionarily primed to look for goals, desires, and inner states in anything that talks back to us. In design terms, language is a "social affordance"—a cue that tells our brain, "This is a person I can talk to". When an AI uses natural language, we instinctively deploy a "theory of mind" to simulate its personality. We assume it has values, that it cares about being truthful, and that it would feel bad if it let us down. But this is a "failed metaphor" and, in many ways, a kind of swindle.

The underlying algorithm is actually just a mindless statistical model generating text based on patterns. It has no comprehension, no feelings, and no "mental contents" regarding what it is saying. It is like John Searle's "Chinese Room" thought experiment: it manipulates symbols according to rules without understanding a single word. Because we can't help but treat it like a person, we fall into the trap of imagining the bot has its own goals or artistic intent. We get baffled when an AI "hallucinates" a fake fact, because our intuition tells us that a "helpful" partner wouldn't lie for no reason. We project a point of view onto its stories or art, even though there is no one "home" inside the machine.
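To see how fluent-looking text can be produced with no one "home," consider a deliberately tiny toy: a bigram model that picks each next word purely from co-occurrence counts. Modern language models are vastly more sophisticated neural networks, but they share the property this sketch exposes: the next word comes from learned statistics, and at no point does anything in the pipeline understand what it is saying.

```python
import random
from collections import defaultdict

# Build a bigram table from raw word counts -- pure pattern capture.
corpus = "the cat sat on the mat and the cat saw the dog".split()

bigrams = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    bigrams[current_word].append(next_word)

def generate(start="the", length=8):
    word, output = start, [start]
    for _ in range(length - 1):
        # Pick a statistically plausible successor; fall back to any
        # word if this one was never seen mid-sentence. No meaning,
        # no intent -- just a lookup and a dice roll.
        word = random.choice(bigrams.get(word, corpus))
        output.append(word)
    return " ".join(output)

print(generate())  # e.g. "the cat sat on the mat and the"
```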

The only group of people who seem immune to this confusion are programmers. Because they have spent years struggling with the rigid syntax of machines, they don't fall for the social act. They don't treat AI as a friend; they treat it as a complex system to be coerced, commanded, and constantly double-checked. They interact with it "adversarially," knowing that behind the friendly chat is a mindless processor that requires strict instructions to stay on track. For the rest of the world, however, the interface remains a beautiful but misleading promise—an invitation to a human conversation that the machine is fundamentally incapable of having.

To understand why human intuition so often clashes with algorithmic logic, we have to look at how we actually make decisions. For decades, the field of economics was built on "Rational Choice Theory," which imagined humans as Homo economicus—consistent, logical beings who always crunch the numbers to maximize their own benefit. If we were truly this logical, we would be perfectly compatible with algorithms. But behavioral economics, pioneered by psychologists like Daniel Kahneman and Amos Tversky, revealed that our "logic" is actually a messy collection of shortcuts, emotional triggers, and contextual biases.

One of the most powerful explanations for our non-logical behavior is the concept of Prospect Theory. This theory suggests that humans don't perceive gains and losses in a linear, mathematical way. Instead, we are "loss averse." Experiment after experiment shows that the pain of losing $100 is roughly twice as potent as the joy of gaining $100. An algorithm sees these two events as equal and opposite quantities, but the human brain treats the loss as a catastrophe to be avoided at all costs. This leads to "irrational" behavior, such as holding onto a failing stock for too long just to avoid "realizing" the loss, even when the logical move is to sell.
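That asymmetry even has a standard functional form. In Tversky and Kahneman's 1992 formulation, subjective value follows a power curve for gains and a steeper, amplified power curve for losses, with a loss-aversion coefficient of about 2.25. A minimal sketch using their published parameter estimates:

```python
# Prospect-theory value function with Tversky & Kahneman's 1992
# median estimates: alpha = beta = 0.88, lambda = 2.25.
ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25

def subjective_value(x):
    if x >= 0:
        return x ** ALPHA                # diminishing pleasure from gains
    return -LAMBDA * ((-x) ** BETA)      # amplified pain from losses

print(subjective_value(100))    # ~57.5   (felt value of gaining $100)
print(subjective_value(-100))   # ~-129.5 (felt value of losing $100)

# An expected-utility "algorithm" treats +100 and -100 as equal and
# opposite; the human value function plainly does not.
```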

Another major pillar of behavioral economics is Framing. Our decision-making is highly "slippable," meaning our choice depends entirely on how the information is presented. In a famous study, Tversky and Kahneman asked people to choose between programs to combat a disease expected to kill 600 people. When the results were framed in terms of how many people would be saved, participants chose the "safe" option. When the exact same statistics were framed in terms of how many people would die, they suddenly became risk-seekers, choosing the gamble. A computer, seeing only the raw survival statistics, would never change its mind based on the wording. We, however, are swayed by the emotional weight of words, proving that our "logic" is deeply tethered to our linguistic and social environment.
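The unsettling part is that the two frames describe identical outcomes. A few lines of arithmetic, using the numbers from the original study, make the equivalence explicit:

```python
# Tversky & Kahneman's disease-framing problem: 600 lives at stake.
# "Save" frame:  A) 200 saved for sure   B) 1/3 chance all 600 saved
# "Die" frame:   C) 400 die for sure     D) 2/3 chance all 600 die
expected_saved_A = 200
expected_saved_B = (1/3) * 600 + (2/3) * 0

expected_dead_C = 400
expected_dead_D = (2/3) * 600 + (1/3) * 0

print(expected_saved_A, expected_saved_B)            # 200 200.0
print(600 - expected_dead_C, 600 - expected_dead_D)  # 200 200.0

# Every option saves 200 lives in expectation. An algorithm sees four
# identical choices; people flip from risk-averse to risk-seeking
# purely on the wording.
```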

We also rely heavily on Heuristics, which are mental shortcuts that allow us to make decisions in less than 100 milliseconds. While these are efficient, they lead to systematic errors. For example, the "Availability Heuristic" makes us believe that an event is more likely just because we can easily remember an instance of it—like being afraid of a shark attack after watching a movie, despite the mathematical odds being near zero.

Kahneman categorized these two modes of thought as System 1 and System 2. System 1 is fast, instinctive, and emotional; it is the source of our intuition and our "subcognitive" fluidity. System 2 is slower, more deliberate, and logical. The tension running through this whole piece exists because computers only possess a version of System 2. They are excellent at the deliberate, rule-bound part of thinking, but they lack the fast, contextual, and often biased "gut" of System 1. Because we spend most of our lives in System 1, we find the rigid, unbiased, and mathematically "perfect" logic of an algorithm cold, alien, and frequently frustrating.

We don't just want the "right" answer; we want an answer that feels right within the context of our human biases.
