Why does artificial intelligence seem uncanny to so many people, and how can this feeling be overcome? This article explores these questions, highlights psychological mechanisms, looks at the history of AI, and offers ideas for a more level-headed approach to the technology.
Artificial Intelligence, Viewed Objectively
The public debate on AI is highly emotional, and fearmongering and myths often receive more attention than technical facts [Note: a long-read on this topic is available at WissKomm.de]. Yet when the boundary between fantasy and reality becomes blurred, as Freud (1919, p. 258) rightly noted, the result can be an uncanny impression. This burdens the discourse on AI, because it becomes harder to distinguish real risks and opportunities from speculative scenarios.
To break down the uncanny image of AI and allow for a more objective perspective, two steps seem necessary to me:
- Building an understanding of why AI feels uncanny and of the range of human reactions this can trigger.
- Learning to see AI, as we know it today, as the result of decades of research and development.
Sounds interesting? Then read on!
Why Artificial Intelligence Feels So Uncanny
It is human nature to approach the unfamiliar with a certain degree of skepticism. In social psychology, this reaction is well documented. The so-called mere-exposure effect, studied by Robert B. Zajonc, shows that repeated contact with something new increases our liking of it: initial rejection or ridicule typically gives way to curiosity and, eventually, to acceptance or even affection.
This very dynamic can be observed in the public perception of AI, especially since the breakthrough of generative systems over the past two years. There is the recurring fear of job loss. At the same time, mocking memes circulate that poke fun at AI fails. Stories of “wow” moments spread, too: users trying an LLM or image generator for the first time. And increasingly, people report that ChatGPT has become indispensable in their daily lives, whether for learning, for programming, or even for low-threshold advice in crisis situations.
Beyond that, the AI hype is also monetized. In high-profile podcasts such as All-In, for example, speculation about AI is a regular feature, often in dramatic tones and with little technical precision. Co-host and investor David Sacks recently warned of a “globalist agenda” and spoke of an “AI Existential Risk Industrial Complex” and a “woke government ideology.” At times there is talk of looming “enslavement by machines,” at others of governments allegedly using AI to establish an “Orwellian control regime” (Zvi Mowshowitz has documented and commented on the episode in detail). Such dramatizations generate attention but blur the line between science fiction, political framing, and scientific research.
To understand AI and talk about it without drifting into fear, hype, or conspiracy theories, a look at the past helps. Because artificial intelligence is anything but new.
A Short History of AI
As early as the 1950s, pioneers such as Alan Turing and John McCarthy were working on the idea of building machines that could “think.” At the famous Dartmouth Conference in 1956, the term artificial intelligence was coined, accompanied by the bold expectation that human intelligence could be technically replicated within a short time. This optimism was echoed in the following years by researchers such as Marvin Minsky, who predicted in 1967 that the problem of artificial intelligence would be “substantially solved” within a generation. These expectations were not fulfilled: many challenges turned out to be far more complex than initially assumed. Instead of autonomous thinking machines, early AI produced chess programs and other specialized systems that performed well only in narrow domains.
The following decades alternated between phases of euphoria and disappointment. After the grand announcements of the 1960s came the so-called “AI winter” of the 1970s and 80s, when disappointing results dried up funding. Machine learning then gave the field fresh momentum in the 1990s and 2000s. In 1997, when IBM’s Deep Blue defeated reigning world chess champion Garry Kasparov, AI caught the attention of the general public for the first time.
In parallel, AI made its way into everyday life, slowly and mostly unnoticed. Spam filters, translation services, navigation systems, image recognition in smartphones, and personalized recommendations in online shops have all been based on machine learning for years. Since the 2000s, many of these applications have become significantly more powerful and widely deployed thanks to better data availability and faster processors, long before ChatGPT & Co. took the spotlight. AI has long been established in other domains too: to support diagnoses in medicine, for fraud detection in finance, or for optimizing industrial production processes. Artificial intelligence did not suddenly descend upon us; it seeped gradually into our lives.
The visible leap of recent years is tied to new technical foundations: more powerful processors, massive amounts of data, and above all new model architectures. On this basis, generative language and image models such as GPT or Stable Diffusion emerged. They may look like a radical break, but in reality they are the result of a decades-long process in which theory, hardware, and data slowly converged.
It is understandable that this also causes unease. For many, AI seems to have appeared “out of nowhere,” even though its development has been gradual and continuous. This contrast between perception and reality amplifies the feeling of uncanniness, and it is precisely this feeling that must be overcome for a more objective discussion.
Overcoming the Uncanny Valley
The “uncanny valley” describes the unsettling feeling when something looks almost, but not quite, human—such as a humanoid robot or an AI avatar. A similar unease shapes the broader debate on AI: it often appears more powerful and alien than it actually is.
The only way to overcome this is through objective framing. AI is neither magic nor an imminent threat, but the product of decades of research with clear strengths and equally clear limitations. Those who separate myth from fact, and discuss both opportunities and risks in equal measure, create the basis for a balanced discourse—and strip AI of its uncanny aura.
References
Freud, S. (1919). Das Unheimliche. Imago, 5(5/6), 297–324.
Minsky, M. L. (1967). Computation: Finite and Infinite Machines. Englewood Cliffs (NJ): Prentice-Hall.
Zajonc, R. B. (1968). Attitudinal Effects of Mere Exposure. Journal of Personality and Social Psychology Monograph Supplement, 9(2, Pt. 2), 1–27.