Algorithmic unconscious: why psychoanalysis helps in understanding AI

Interdisciplinary Studies

L. M. Possati

This paper by Luca M. Possati explores the intersection of psychoanalysis and artificial intelligence, arguing that AI systems are not merely engineering feats but social actors that reflect human desires, and offering a psychoanalytic perspective on technology.
Introduction

The paper advances the hypothesis that psychoanalytic concepts and methods can be applied to the study of AI and human–AI interaction, at the intersection of machine behavior, psychoanalysis, and the anthropology of science. It asks three guiding questions: Why do humans need intelligent machines? What is AI’s unconscious? How does this notion enrich our understanding of AI? It argues for expanding the machine behavior approach by treating large AI systems not only as engineering artifacts but as social agents in constant interaction with humans, whose behaviors must be studied in context.

The paper clarifies that “algorithmic unconscious” does not attribute human-like consciousness or feelings to machines; rather, AI is understood as an intermediated field emerging from human–machine interaction. The author focuses on the interaction between the human unconscious (in the Freudian sense) and AI, analyzing how technology shapes the unconscious, how humans project their unconscious onto machines, and how machines in turn transform the human unconscious.

The aim is to reinterpret Lacan’s mirror stage and Oedipus complex through Latour’s actor-network theory to show the proximity between the unconscious and technology, and to apply this framework to AI to yield an original model that explains AI’s specificity among technologies.

Literature Review

The paper situates itself within the emerging field of machine behavior, which studies AI systems empirically as actors with behavioral patterns and ecologies, integrating algorithmic properties with the social environments in which they operate (Rahwan et al., 2019). It highlights evidence of AI adaptability and artificial creativity, citing work demonstrating robots’ rapid adaptation to damage through trial-and-error learning (Cully et al., 2015) and further developments on autonomy (Johnson, 2019). It also reviews critiques warning of unintended societal consequences of algorithmic systems, including bias, inequality, and opacity (“black boxes”), emphasizing the need to go beyond purely mathematical or engineering analyses (O’Neil, 2016; Rahwan et al., 2019). On the theoretical side, it engages Latour’s actor-network theory—emphasizing symmetrical ontology and realism of resistance—to argue for human–nonhuman hybridity and technical mediation in constituting subjectivity. It reinterprets Lacan’s psychoanalysis (mirror stage, Oedipus complex, signifier/signified dynamics) through Latour’s mediation concepts (translation, composition, articulation, black-boxing). It also references discussions on miscomputation and malfunction taxonomies (Piccinini, 2018; Floridi et al., 2015), and work linking language, cognition, and the interpretability of nonhuman minds, including AI, via natural language analysis (Mondal, 2017).

Methodology

The study employs a conceptual and interpretive methodology. First, it clarifies the use of “algorithmic unconscious” as a construct for analyzing interactions between AI and the human unconscious without positing machine consciousness. Second, it reinterprets Lacanian psychoanalysis (mirror stage and Oedipus complex) through Latour’s actor-network theory to frame the unconscious as a product of technical mediations. The mirror stage is analyzed as a collectif (child + mirror + surrounding objects) via Latour’s four mediation modes—translation, composition, articulation, and black-boxing—to show how identification arises from human–nonhuman associations and how black-boxing parallels repression. The Oedipus complex is reframed as a network of four actors (child + father + mother + language), where language operates as a nonhuman actor mediating desire and instituting the symbolic order (“Name of the Father”) through bidirectional translations between humans and artifacts. Third, the model is applied to AI: the paper articulates a four-phase human-to-machine identification process (project, interpretation, identification, mirror effect) illustrated with the case of neural networks for speech recognition, emphasizing that AI is an interpreted and interpretive technology. Fourth, it delineates the constitutive tensions among human desire, logic, and machinery by reviewing the historical coevolution of logic and computing hardware/software and the layered nature of computational systems (formal, physical, experimental). Finally, it leverages Latour’s notion of black-boxing and concepts from information theory to reconceptualize miscomputation and the information/noise dynamic as expressions of the algorithmic unconscious. No empirical experiments are conducted; examples from AI (e.g., voice recognition) are used illustratively.
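The paper itself contains no formalism, but the information-theoretic background it invokes for the information/noise dynamic can be made concrete. Below is a minimal sketch (my illustration, not code or notation from the paper) of Shannon's entropy measure, under which a maximally redundant signal carries no information and a maximally varied one carries the most:

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Bits per symbol of a message: H = -sum(p * log2(p)) over symbol frequencies."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Pure redundancy: one symbol, probability 1, zero bits of information.
print(shannon_entropy("aaaaaaaa"))  # 0.0
# Eight equiprobable symbols: log2(8) = 3 bits per symbol.
print(shannon_entropy("abcdefgh"))  # 3.0
```

On this background, the paper's point can be restated: a stabilized "fact" or black-box is a signal whose surrounding noise has been managed out of view, not eliminated.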

Key Findings

  • The behavior of AI systems cannot be fully understood within logical-mathematical or engineering frameworks alone; incorporating human and social sciences via the machine behavior approach is essential.
  • The unconscious can be understood as the effect of technical mediation; reinterpreting Lacan through Latour shows that subjectivity and identification are co-constituted by human–nonhuman networks (e.g., the mirror, language).
  • AI constitutes a new stage in the human identification process—an “algorithmic unconscious”—and forms a third register of identification after Lacan’s imaginary and symbolic registers. An AI system can be modeled as a field of three contrasting forces: human desire for identification, logic, and machinery.
  • Black-boxing in technical systems parallels psychoanalytic repression: it hides the mediations and tensions among humans, logic, and machinery that produce observable AI behavior.
  • Miscomputation is not merely a technical fault classifiable as dysfunction or misfunction; it is ontological to software/AI and expresses an irreducible tendency toward deviation from design—an aspect of the algorithmic unconscious.
  • Information arises against a background of noise; the stabilization of scientific or technical “facts” as black-boxes reflects a managed separation of information from noise, driven by social, technical, and psychological factors—illuminating how AI systems’ opacities and interpretations become fixed and later reopened.
  • The identification process is unidirectional (from humans to machines); while machines can autonomously reproduce aspects of this process, their status as “intelligent” is constituted by human interpretation within constraints set by logic and machinery.
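The claim that deviation from design is ontological to software, rather than an occasional fault, can be made tangible with a standard floating-point example (my illustration, not the paper's): even a bug-free routine drifts from its mathematical specification, because the formal layer and the machine layer of computation never coincide exactly.

```python
def add_tenths(a_tenths: int, b_tenths: int) -> float:
    """Specification: return exactly (a_tenths + b_tenths) / 10."""
    return a_tenths / 10 + b_tenths / 10

# The formal specification says add_tenths(1, 2) == 0.3, but IEEE-754
# binary doubles cannot represent 0.1, 0.2, or 0.3 exactly, so the
# machine layer returns a nearby value instead.
result = add_tenths(1, 2)
print(result == 0.3)              # False
print(abs(result - 0.3) < 1e-9)   # True: a tiny but irreducible deviation
```

The deviation here is correct engineering behavior at every layer taken separately; it is the gap between the layers that the paper reads as structural, not accidental.
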

Discussion

The findings address the paper’s guiding questions by reframing AI as deeply entangled with the human unconscious. Humans need intelligent machines, in this account, because technologies mediate and extend processes of identification, externalizing unconscious dynamics (as with the mirror and language) into artifacts that augment cognition and sociality. AI’s “unconscious” is the network of hidden mediations and repressions—analogous to black-boxing—through which human desire, logic, and machinery interact to generate AI behaviors; this realm manifests in phenomena like miscomputation and the formation of stable interpretive black-boxes. This reconceptualization enriches understanding of AI by integrating psychoanalysis with machine behavior and actor-network theory, shifting attention from solely algorithmic performance to the social-technical ecologies and tensions that constitute AI systems. It offers a framework to analyze adaptability, bias, opacity, and error not as peripheral technicalities but as structurally meaningful outcomes of the algorithmic unconscious, with implications for design, governance, and human–AI interaction.

Conclusion

The paper proposes a redefinition of artificial intelligence grounded in its deep connection to the human unconscious. By reinterpreting Lacan’s mirror stage and Oedipus complex through Latour’s actor-network theory, it presents AI as a new register of identification shaped by the interplay of human desire, logic, and machinery, and clarifies how black-boxing parallels repression. The approach yields a novel, coherent interpretive model for AI’s originality among technologies and motivates a new research program: investigating the psychodynamics of artifacts and the mutual evolution of AI and mental illness. The proposed future work includes forming groups of patients and AI systems to study bidirectional influences in language and movement, exploring whether AI can aid diagnosis and treatment, whether AI systems assimilate and reproduce psychopathological patterns, and whether categories like neurosis or psychosis can be meaningfully applied to AI, including the emergence of new forms of miscomputation.

Limitations

The paper explicitly narrows scope by: (a) not addressing whether machines can be (un)conscious in the human sense; (b) focusing on interaction rather than isolated human or machine poles; and (c) offering a conceptual, not empirical, analysis. It also notes it does not provide a detailed technical treatment of all aspects of computation. The author critiques Latour for lacking a developed AI theory and frames this work as a first step toward a broader research project, acknowledging that methodologies involving direct patient–AI interaction are not yet established.
