
Interdisciplinary Studies
Algorithmic unconscious: why psychoanalysis helps in understanding AI
L. M. Possati
This paper by Luca M. Possati explores the intersection of psychoanalysis and artificial intelligence, arguing that AI systems are not merely engineering feats but social actors that reflect human desires and unconscious projections.
Introduction
The paper explores the intersection of psychoanalysis and artificial intelligence (AI), proposing that psychoanalytic concepts can enrich our understanding of AI and its interaction with humans. The central question is whether and how psychoanalysis can illuminate the behavior of AI systems, particularly in their complex interactions with human users and society. The author aims to address three core questions: Why do humans need intelligent machines? What is AI's unconscious? How does this notion enhance our understanding of AI?
The study expands on the "machine behavior" approach, arguing that AI systems should not be analyzed solely through logical-mathematical or engineering lenses. Instead, it advocates a broader perspective that treats AI systems as social agents interacting with humans and their environments. This necessitates incorporating concepts from the social sciences, particularly psychoanalysis, to fully grasp AI's complex behaviors and societal impacts. The author clarifies that the "algorithmic unconscious" does not imply attributing human consciousness or feelings to machines; rather, it focuses on the interactions between humans and AI, exploring how human unconscious projections shape AI and vice versa. The paper draws on Latour's actor-network theory and Lacan's psychoanalysis to frame the analysis, examining the role of technology in shaping the human unconscious and the projection of the human unconscious onto AI systems.
Literature Review
The paper draws upon several key theoretical frameworks. It integrates the emerging field of "machine behavior," which studies intelligent machines as social actors with unique behavioral patterns and ecological contexts. The author criticizes the purely technical or instrumental approach to AI, highlighting societal impacts and unintended consequences often unanticipated by a system's creators. Examples from research on adaptable robots and algorithms demonstrate AI's capacity for autonomous adaptation and novel behavior, underscoring the need for a multidisciplinary perspective that encompasses the social and behavioral sciences. The author also leverages Latour's actor-network theory, with its symmetrical ontology and realism of resistance: all entities are viewed as centers of force, and reality as a set of trials of strength and resistances. Through this lens, the author interprets Lacan's concepts of the mirror stage and the Oedipus complex, arguing that the unconscious is essentially a technology that shapes human identity. The paper also integrates concepts from cognitive science, particularly Mondal's work on "forms of mind," to explore the possibility of attributing mental structures to machines without necessarily anthropomorphizing them.
Methodology
The paper employs a primarily theoretical and interpretative methodology. It reinterprets existing psychoanalytic and anthropological concepts to develop a novel framework for understanding AI. The core of the methodology is a re-interpretation of Lacanian psychoanalysis through the lens of Latour's actor-network theory. This involves analyzing the mirror stage and the Oedipus complex as collective interactions between humans and non-human actors, demonstrating how technology plays a crucial role in shaping human identity and the unconscious. The author examines the mirror stage as a network of actors (child, mirror, surrounding objects) and describes the process of identification as a series of translations, compositions, and articulations within this network. The concept of the Black-Box is used to highlight how technological mediation conceals the complex interactions that shape identity. This framework is then extended to the study of AI, analyzing the four phases of human projection, interpretation, identification, and the "mirror effect" in AI development. The paper examines computational systems through their formal, physical, and experimental dimensions, highlighting the tensions between logic, machinery, and human desire in AI. The author explores miscomputation and information, relating them to the Black-Box effect and the concept of repression in psychoanalysis.
Key Findings
The paper's central finding is the proposition that AI represents a new stage in human identification, extending beyond the imaginary and symbolic registers described by Lacan. The author labels this new stage the "algorithmic unconscious," characterizing AI as a system where human desire for identification interacts with the constraints of logic and machinery. The analysis of the mirror stage through Latour's framework reveals the unconscious as the product of technical mediation, suggesting that technology shapes and produces non-human artifacts that, in turn, shape human identity. Similarly, the re-interpretation of the Oedipus complex positions language as a technology emerging from unconscious tensions. Applying this framework to AI, the author describes four phases of interaction: projection (humans design AI), interpretation (humans interpret AI's behavior), identification (humans attribute qualities to AI based on interpretation), and mirror effect (AI interprets humans, triggering a new cycle). This highlights the active role of human interpretation in shaping AI and the reciprocal influence between AI and human identity.
The paper also offers a novel perspective on miscomputation and information in AI. Miscomputation is presented not merely as a technical malfunction but as a manifestation of the tensions between logic, machinery, and human desire—the algorithmic unconscious. The concept of the Black-Box is used to explain how these tensions are often hidden in the functioning of AI systems. Similarly, the relationship between information and noise is explored, drawing parallels with the psychoanalytic concept of repression. The author suggests that the desire for order in the face of inherent disorder in AI systems leads to the creation of Black-Boxes that conceal the complex interactions that shape the system's behavior.
Discussion
The paper's findings significantly contribute to the understanding of AI by integrating psychoanalytic concepts into the technological discourse. The "algorithmic unconscious" provides a new lens for analyzing the complex interactions between humans and AI, moving beyond purely technical considerations. The framework helps explain the often-unanticipated consequences of AI by highlighting the role of human desire and unconscious projections in shaping AI development and behavior. The exploration of miscomputation and information through the lens of the Black-Box effect offers a more nuanced understanding of AI's limitations and potential pitfalls. By drawing parallels between the repression of unconscious desires and the concealment of complex interactions in AI systems, the paper offers a new perspective on the challenges of designing, implementing, and utilizing AI ethically and responsibly. This interdisciplinary approach bridges the gap between engineering and social sciences, promoting a more holistic understanding of the impacts of AI on individuals and society.
Conclusion
This paper proposes a novel framework for understanding AI through the lens of psychoanalysis and actor-network theory. The concept of the "algorithmic unconscious" highlights the crucial role of human desire, unconscious projections, and the inherent tensions between logic and machinery in shaping AI's development and behavior. The findings suggest that AI is not merely a technological artifact but a complex interplay of human and non-human actors, shaped by unconscious processes and technical mediations. Future research could apply this framework to specific AI systems, investigate the dynamics of human-AI interaction in various contexts, and explore the potential for AI to contribute to the understanding and treatment of mental illness. The framework also warrants further investigation into the ethical and societal implications of AI, particularly in light of the potential for miscomputation and the concealment of complex interactions within AI systems.
Limitations
The paper primarily relies on theoretical interpretations and conceptual analysis. Empirical validation of the proposed framework is needed through further research. While the paper explores the complex interplay of human and non-human actors in AI systems, a more detailed analysis of specific AI architectures and their impact on human behavior would strengthen the argument. The reliance on Lacanian psychoanalysis may not appeal to researchers from different theoretical backgrounds. Further interdisciplinary collaboration is encouraged to address this limitation.