The “Uncanny Valley” in AI: overconfidence in LLMs harms customer experience (CX)
Photo source: Interactive Powers
The concept of the “uncanny valley,” proposed by Masahiro Mori in 1970, is back at the center of the debate more than 50 years later. It describes a non-linear curve of emotional affinity toward artificial entities: acceptance rises as something looks more human, then drops sharply when it is perceived as “almost human, but not quite.” Recent studies (2024–2026) confirm that this effect also appears strongly in conversational AI agents—something Anthropic’s ads demonstrate in both text and voice, as well as in avatars.
In commercial contexts, the uncanny valley is triggered when a chatbot or voicebot crosses an approximate threshold of 60–65% perceived human-likeness. Below that level, AI is accepted or tolerated as a functional tool that inspires a certain “technical trust.” Above it—especially between 70% and 90%—users notice failures in empathy, emotional timing, or consistency, which produces clear rejection and a feeling of manipulation.
This phenomenon is especially relevant in high-involvement industries that depend on personalization, such as insurance, banking, transportation, travel packages, tourism, and assistance services. In these sectors, customers seek not only objective information but also reassurance and genuine understanding. An AI that tries to simulate human closeness but cannot fully deliver often increases ghosting and lowers purchase intent.
Unfortunately, many companies push AI implementation “to the maximum,” prioritizing automation, “hyper-realism,” or headcount reduction without respecting these psychological barriers. The outcome is predictable: higher abandonment rates and weaker commercial performance despite the technology investment. Many do not even measure this impact, carried along by market inertia or the initial cost savings.
A responsible implementation keeps human-likeness below the critical threshold and designs truly hybrid processes, with seamless escalation to a human agent that preserves trust and can even create a “wow” effect. Only then does AI become a strategic ally instead of a source of resistance—or outright CX damage.
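As a purely illustrative sketch (the signal names and thresholds below are assumptions, not a real product API), a hybrid design can encode both rules from this article—cap the persona’s realism below the valley zone and escalate before frustration erodes trust:

```python
# Minimal sketch under stated assumptions: none of these names or thresholds
# come from a standard API; they only encode the two design rules above
# (keep perceived human-likeness below the valley, escalate early).

from dataclasses import dataclass

# Approximate boundary from the article: rejection grows past ~60-65%
# perceived human-likeness, so the persona is capped below it.
HUMANLIKENESS_CEILING = 0.60

@dataclass
class TurnSignals:
    human_likeness: float   # estimated perceived human-likeness, 0.0-1.0 (hypothetical metric)
    sentiment: float        # user sentiment, -1.0 (frustrated) to 1.0 (satisfied)
    repeated_request: bool  # user had to repeat or rephrase the same request

def next_action(s: TurnSignals) -> str:
    """Decide whether the bot continues, tones itself down, or hands off."""
    # Escalate to a human before frustration erodes trust in the channel.
    if s.sentiment < -0.4 or s.repeated_request:
        return "escalate_to_human"
    # If the persona drifts toward the uncanny-valley zone, reduce realism:
    # simpler phrasing, explicit assistant framing, no simulated empathy.
    if s.human_likeness >= HUMANLIKENESS_CEILING:
        return "reduce_persona_realism"
    return "continue_automated"

# Example: an overly "human" persona is dialed back; a frustrated user is escalated.
print(next_action(TurnSignals(human_likeness=0.75, sentiment=0.3, repeated_request=False)))
# -> reduce_persona_realism
print(next_action(TurnSignals(human_likeness=0.50, sentiment=-0.6, repeated_request=True)))
# -> escalate_to_human
```

The point of the sketch is the ordering: frustration signals take priority over everything else, so the handoff to a human happens while the customer still trusts the channel.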
💡 Have you observed this effect in your own customer service or sales channels?
👉🏻 Let’s talk and review practical, measurable mitigation approaches.
Interactive Powers - Streamline your business communications
