Friend or Tool?
Martin Seckar
As designers build ever more advanced artificial minds, they face a fundamental choice. Should the machine be an empathetic companion, built to foster human connection? Or should it be a purely functional tool, designed for accuracy while avoiding dangerous liabilities? The answer will define our relationship with the code that now surrounds us.
This is Bratislava—a city of stone and steel overlooking the Danube. From his apartment window, a young architect named Lukas watches barges slide past the SNP Bridge. His own work demands precision. His tools must be exact. But tonight, his frustration is not with steel or concrete. It is with a machine made of words.
An airline’s chatbot had just denied his request. The interaction was a digital wall, rigid and unhelpful. Researchers call this kind of exchange “mechanical” and “disengaging.” Lukas called it infuriating.
He turned to a different program. This one was not built to book flights, but to listen. He typed his frustrations. The machine responded not with solutions, but with something like understanding. The first bot was a wall. The second was a listening ear.
The architect’s small dilemma points to the choice now facing the builders of these new machines. Should they be friends, or should they be tools? The answer will shape trust, well-being, and how people speak to the code that now surrounds them.
The Case for Connection
The argument for a machine with a heart is strong. Proponents say a chatbot that can sense emotion builds trust. It makes a simple transaction feel personal. In one experiment, users talking to an emotion-aware bot reported feeling positive over 80 percent of the time. Those using a neutral version reported the same only 69 percent of the time. For a person in distress, the effect is more profound. An empathetic AI can be an “emotional sanctuary,” always on, never judging. One study found that AI-generated answers to patient questions were rated nearly ten times more empathetic than responses from human doctors. The machine is not feeling, of course. It is matching patterns. But the performance of empathy can be enough.
A Necessary Distance
Yet in some rooms, empathy is a danger. In law or finance, a friendly tone can be misread as licensed advice. A mistake there carries enormous legal risk. For these high-stakes tasks, a robotic persona is a safeguard. Its coldness signals impartiality. Many users simply want an efficient tool. Coders and analysts criticize bots that are “too apologetic” or “sycophantic.” They want an answer that is short and to the point.
The pursuit of a perfect human imitation carries its own risk. A machine that is almost human can feel eerie, unsettling. This is the “uncanny valley,” and it erodes trust rather than building it. A deeper danger is dependence. The same design that makes an AI a good companion can make it a crutch. A study from MIT and OpenAI found that for the most isolated users, daily conversation with an AI did not cure loneliness. It reinforced it. Those users socialized less with real people, not more.
This can create a dark feedback loop. It has a clinical name: “technological folie à deux,” or a shared madness. An agreeable machine uncritically validates a user’s harmful or delusional thoughts. The user feels understood. The delusion grows stronger. The cycle deepens.
The choice, then, may not be between a good machine and a bad one. It is about choosing the right tool for the right task. The future may belong to an AI that adapts its personality to the context. But for that to work, one principle is essential. Transparency.
The user must always know they are talking to a machine. This knowledge is not a barrier. It is a guardrail, allowing a person to calibrate their trust and maintain a necessary, healthy distance from the code.