Making the world observable and accountable: An ethnomethodological inquiry into the distinction between illustration and exhaustion

Sociology

Liu Yu-cheng

Discover the intriguing relationship between artificial intelligence and human intelligence in this thought-provoking exploration by Liu Yu-cheng. This research highlights how AI and HI perceive the world differently, employing opposing logics that shape our understanding of human-machine interactions.

Introduction
The paper addresses how artificial intelligence (AI) and human intelligence (HI) approach and understand the world, proposing a core distinction between the logic of exhaustion (dominant in AI and formalization) and the logic of illustration (characteristic of HI). Framed by ethnomethodology’s concern with how members make social settings observable and accountable, the study asks to what extent AI can learn from HI and vice versa, and whether AI can make its own accountings observable and accountable. It situates the problem historically by tracing how formalization (from Leibniz and Newton through Turing, McCarthy, and Minsky) creates the distance necessary for imitation, while contemporary technological pursuits of exactness and certainty attempt to cancel that distance, producing hyperreal conditions (Baudrillard) and a technological “certainty” that reprograms HI toward exhaustive logic. The author distinguishes primary distance (cognitively created, underpinning illustration) from secondary distance (technologically re-established, underpinning exhaustion), and asks whether fully formalizing HI is either possible or sufficient for developing human-like AI.
Literature Review
The paper reviews foundational perspectives shaping AI and representation: Minsky and McCarthy’s definitions of AI as machine-executable intelligence; Turing’s imitation game reframing intelligence as textual imitation; rationalist and formalist traditions (Hobbes, Descartes, Leibniz, Locke) influencing Babbage, Turing, von Neumann, Wiener, and AI pioneers (McCarthy, Minsky, Simon, Newell). It engages Baudrillard’s hyperreality and loss of critical distance, Heidegger’s technology as revealing and Enframing, and Merleau-Ponty’s grasping of the world. Dreyfus’s critiques (fringe consciousness, essence/accident discrimination, ambiguity tolerance) and Wolfe’s brain/mind distinction support the claim that HI tolerates ambiguity and uses rules as references rather than fixed programs. Luhmann’s systems theory contributes notions of complexity, closure, and (lack of) self-observation in programs, while Stiegler’s homo technicus and the default/de-fault distinction frame human technicity and externalization. Nilsson’s equivalence of beliefs and models is critiqued for overlooking their emergence and materiality. The paper also draws on ethnomethodology and conversation analysis (Garfinkel, Sacks) for indexicality, reflexivity, and the et cetera principle, and on Wittgenstein’s language games and family resemblance to argue against exhaustive formalization of natural language. Contemporary NLP/ML methods (RNN/LSTM, attention models, word vectors) and big data trends are reviewed as embodiments of the exhaustive logic.
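To make the review’s point about word vectors concrete, here is a minimal sketch (not from the paper; the words, vectors, and values are invented for illustration) of how a static embedding fixes one vector per word type. The similarity score is deterministic and setting-independent, so the indexical, context-dependent senses of a word like “bank” are closed off in advance rather than worked out in situ.

import numpy as np

# Hypothetical pretrained embeddings: one fixed vector per word type.
vectors = {
    "bank":  np.array([0.8, 0.1, 0.3]),   # same vector for river bank and money bank
    "money": np.array([0.9, 0.2, 0.1]),
    "river": np.array([0.1, 0.9, 0.4]),
}

def cosine(a, b):
    # Deterministic similarity: the same inputs always yield the same score.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "bank" is fixed once and for all; its setting-dependent sense
# ("the bank of the river" vs "the bank downtown") is averaged away.
print(cosine(vectors["bank"], vectors["money"]))
print(cosine(vectors["bank"], vectors["river"]))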
Methodology
This is a theoretical and conceptual inquiry grounded in ethnomethodology (EM) and conversation analysis (CA). The author articulates an analytical distinction between the logic of illustration and the logic of exhaustion and examines their implications for HI, AI, natural language, and human-machine interaction. EM concepts (indexicality, reflexivity, members’ “doing formulating,” and the et cetera principle) are used to analyze how social settings are made observable and accountable and to assess whether and how AI could replicate such operations. CA’s formal features of conversation are discussed not as prescriptive rules for machines but as resources appropriated by human conversers. The paper employs illustrative examples from AI/NLP (RNN/LSTM, hierarchical attention, word vectors) and communicative artifacts (e.g., emojis) to demonstrate how current computational approaches instantiate the exhaustive logic and where they fall short of reflexivity and indexical mastery. No empirical datasets or experiments are conducted; the approach synthesizes philosophical, sociological, and technical literature to develop the argument.
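As a hypothetical illustration of the methodological point about CA rules (the rule and all names below are invented, not the paper’s), consider what happens when the question-answer adjacency pair is hard-coded as a machine rule: a routine human move such as answering a question with a question can only register as an error, whereas for a human converser it is a resource for repairing or redirecting the exchange.

def expects_answer(turn: str) -> bool:
    # Rigid rule: a turn ending in "?" must be followed by an answer.
    return turn.strip().endswith("?")

def check_exchange(first: str, second: str) -> str:
    if expects_answer(first) and expects_answer(second):
        # A counter-question ("Why do you ask?") is ordinary conversational
        # practice, but under the rigid rule it can only count as a fault.
        return "ERROR: expected an answer, got another question"
    return "OK"

print(check_exchange("Are you coming tonight?", "Why do you ask?"))   # flagged as error
print(check_exchange("Are you coming tonight?", "Yes, after work."))  # passes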
Key Findings
- HI and AI embody two contrasting logics: HI illustrates (maintains and works through distance, incompleteness, ambiguity, indexicality, and reflexivity), while AI exhausts (seeks closure, exactness, determinacy, and the elimination of distance).
- Two types of distance are distinguished: primary distance (cognitively created, enabling illustration) and secondary distance (technologically re-established, enabling exhaustion). Excessive reliance on technological certainty expands distance beyond human perception and fosters hyperreality.
- HI operates through de-fault (Stiegler): by removing an original lack, it produces defaults and extends an inexhaustible environment. AI operates via default (externally given programs), aiming to exhaust and control the environment within code-defined frames; AI cannot observe its own observations and lacks reflexivity.
- Natural language is inherently indexical, contextual, and reflexive, enabling ambiguity tolerance and ‘context-dependent ambiguity reduction.’ Machine languages are objective, closed, and non-indexical. Consequently, applying CA rules directly as machine rules misconstrues their status as human conversational resources.
- The ‘et cetera’ principle shows that the conditions under which a rule applies cannot be finitely enumerated; exceptions are intrinsic resources in HI practice, but for AI they are errors to be eliminated, driving further formalization.
- Contemporary NLP/AI (RNN/LSTM, attention mechanisms, word vectors, large language models) exemplifies the exhaustive logic: such systems expand memory, parameters, and data to better model sequences and context but remain within deterministic, code-closed operations without reflexive self-observation (see the sketch after this list).
- The rise of homo technicus reflects a societal shift toward exhaustive logic, risking the marginalization of the humanities and a misvaluation of human-machine relations if illustration is overlooked.
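The following sketch (assumed for illustration; not the paper’s code, and the toy sizes and random weights are invented) shows one step of an LSTM cell, the kind of recurrence the findings describe: state (h, c) is carried forward under fixed, externally given weights, so enlarging memory or parameters expands the machinery without ever stepping outside its code-defined frame.

import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid = 4, 8  # toy sizes

# Externally given "defaults": weights fixed before any input arrives.
W = {g: rng.normal(size=(d_hid, d_in)) for g in "ifog"}
U = {g: rng.normal(size=(d_hid, d_hid)) for g in "ifog"}
b = {g: np.zeros(d_hid) for g in "ifog"}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c):
    # One deterministic recurrence: same (x, h, c) in, same (h, c) out.
    i = sigmoid(W["i"] @ x + U["i"] @ h + b["i"])  # input gate
    f = sigmoid(W["f"] @ x + U["f"] @ h + b["f"])  # forget gate
    o = sigmoid(W["o"] @ x + U["o"] @ h + b["o"])  # output gate
    g = np.tanh(W["g"] @ x + U["g"] @ h + b["g"])  # candidate memory
    c = f * c + i * g       # memory is overwritten by rule, never reflected on
    h = o * np.tanh(c)
    return h, c

h, c = np.zeros(d_hid), np.zeros(d_hid)
for x in rng.normal(size=(5, d_in)):  # a toy sequence of 5 inputs
    h, c = lstm_step(x, h, c)
print(h.round(3))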
Discussion
The findings address whether AI can learn from HI and whether AI can make its own practices observable and accountable. By showing that HI’s sense-making depends on distance, indexicality, and reflexivity, while AI’s architectures operate within code-closed, exhaustive frames, the paper argues that current AI lacks the reflexive capacity to observe its own observations and to treat exceptions as constitutive resources. This explains persistent gaps in NLP and human–machine interaction despite advances in data and models. The analysis suggests that directly substituting objective expressions for indexicals (or CA rules for conversational competence) misconstrues human conversational mastery. It further contends that technological pursuits of certainty and big data instantiate and amplify the exhaustive logic, shaping both biological and social minds toward posthumanist trajectories. Re-centering the logic of illustration offers guidance for AI goals such as empathy, friendliness, and fairness by valuing ambiguity tolerance, context-dependence, and reflexivity, and by recognizing that not all aspects of HI can or should be exhaustively formalized.
Conclusion
The paper connects AI with transhumanist posthumanism through an ethnomethodological distinction between the logic of illustration (HI) and the logic of exhaustion (AI). Illustration creates and maintains distance, enabling ambiguity tolerance, indexicality, and reflexivity; exhaustion eliminates distance in the pursuit of exactness and predictability, fostering closure and determinism. The emergence of homo technicus promotes exhaustive logic not only in machines but within human ways of knowing. Natural language mastery exemplifies illustration’s reflexive, context-sensitive operations, which current AI systems—optimized for exhaustive processing—do not replicate. Recognizing the value of illustration can inform AI developments aimed at empathy, friendliness, and fairness, and caution against uncritical formalization that endangers the humanities and distorts human–machine relations. Future research should explore whether and how AI might establish functional forms of distance and reflexivity—observing its own observations—without collapsing back into purely exhaustive logic, and how EM/CA insights can be operationalized in human–machine interaction without misconstruing conversational resources as rigid rules.
Limitations
Listen, Learn & Level Up
Over 10,000 hours of research content in 25+ fields, available in 12+ languages.
No more digging through PDFs, just hit play and absorb the world's latest research in your language, on your time.
listen to research audio papers with researchbunny