The political and social contradictions of the human and online environment in the context of artificial intelligence applications

Interdisciplinary Studies

R. Rakowski and P. Kowaliková

This study by Roman Rakowski and Petra Kowaliková examines the societal impacts of artificial intelligence, focusing on the intersection of AI with democratic values and social justice. It argues that interdisciplinary collaboration is necessary to address regulatory and ethical considerations, with the aim of building a just society that embraces technological innovation while safeguarding individual rights.

~3 min • Beginner • English
Introduction
The paper situates AI within an ongoing digital transformation (Society/Industry 4.0) where information and communication technologies, big data, and AI permeate all spheres of life. It frames emerging ethical, political, and social dilemmas as technology reshapes human experience and traditional social dichotomies, shifting capitalism’s center of gravity from material production to data production (digital capitalism). The authors highlight datafication and commodification of user-generated data, ownership ambiguities, and the need to reconceptualize work. They stress the importance of critically examining externalities such as bias, privacy erosion, surveillance, manipulation, cybersecurity risks, and the digital divide, while acknowledging technology’s benefits. The overarching research problem asks how AI relates to society and which scientific approaches best reveal this relationship so that democratic values, justice, and rights can be preserved in a rapidly evolving technological landscape.
Literature Review
The article draws on Critical Theory, philosophy of technology, and sociology to interpret AI’s societal embedding. Key strands include: (1) Digital capitalism and datafication (Fuchs & Mosco, 2016; Mayer-Schönberger & Cukier, 2014), with attention to data as commodity and user-generated data’s role in capital accumulation and power asymmetries; (2) Critical analyses of technology, rationality, and alienation (Horkheimer & Adorno, 2007; Marcuse; Feenberg, 2009, 2014; Allmer, 2017), rejecting technological neutrality and emphasizing politics embedded in design; (3) Risks and imaginaries around AI/big data: technochauvinism (Broussard, 2018), opacity and epistemic challenges (Bridle, 2019; Greenfield, 2017), existential and societal risks (Ord, 2020), and contemporary risk profiles (Harari, 2018); (4) Inequality and bias: algorithmic oppression (Noble, 2018), automation of social services and marginalization (Eubanks, 2018), digital divide and uneven capabilities; (5) Governance and ethics: policy/regulatory lags vs. rapid technological change (Allmer, 2017; Ashok et al., 2022), cybersecurity, and surveillance capitalism (Zuboff & Schwandt, 2019). This literature frames the need for interdisciplinary, critical, and political-economic analysis of AI’s social impacts.
Methodology
The study is conceptual and integrative, combining:
(1) a comprehensive literature review synthesizing current scholarship on AI’s social impacts;
(2) policy and legal analysis assessing existing regulatory frameworks, identifying gaps, and proposing improvements aligned with democratic values and social justice;
(3) a critical-theoretical framework combining the Critical Theory of Technology (Feenberg, Allmer, Fuchs), the philosophy of technology, and the philosophy of information (Floridi) to interpret how power relations and values are embedded in technological design and deployment;
(4) interdisciplinary analysis drawing on sociology, anthropology, political science, and economics to examine social structures, user behavior, norms and values, political economy, cybersecurity, and market dynamics;
(5) a focus on datafication and commodification as organizing concepts, using original analytical tools the authors developed for contemporary ICTs;
(6) a non-deterministic, dialectical approach in which technology is analyzed as socially constructed, value-laden, and shaped by institutions, norms, and public discourse, while also transforming social subjects.
The methodology is further specified through three integrated strands: (a) analysis of the political dimension of technologies through critical theory; (b) application of the philosophy of information to big data and knowledge; (c) investigation of the datafication of knowledge and of computational thinking as a didactic and epistemic counterbalance. The paper also delineates analytical challenges and tasks, including identifying relevant elements of critical theory, analyzing divisions arising from AI, reflecting on how AI constructs reality, clarifying AI’s societal roles, and mapping social risks and their interrelations.
Key Findings
- AI and digital technologies transfer classic social and political contradictions into the online realm, reconfiguring power, culture, and social structure through the datafication and commodification of user activity.
- Ownership and control of data are concentrated in large corporations, creating asymmetries in which user-generated data become private property and vehicles for capital accumulation, leading to digital exploitation and a politicization of privacy.
- Technology design encodes social power relations and values; technology is not neutral. Asymmetries of power are incorporated into design choices, which shape social relations and can entrench bias and inequality.
- The opacity of algorithms and the reduction of knowledge to data reshape individuals’ epistemic positions, potentially fostering bias, mistrust, and loss of control while privileging those with access to data and analytical capabilities (algorithms/AI).
- Social risks include bias and discrimination in employment, credit, and criminal justice; privacy erosion and surveillance; manipulation of public opinion; cybersecurity threats; and the marginalization of groups lacking access or digital skills.
- A “digital class” is emerging that produces data without access to or control over those data and their interpretations, intensifying social stratification in the digital economy.
- A purely instrumental or purely substantive view of technology is insufficient; a critical, dialectical, and democratic approach is needed to analyze and redirect AI development.
- Computational thinking is proposed as an educational strategy to mitigate epistemic asymmetries, enabling individuals to interpret, engage with, and modify technological systems rather than treating them as opaque black boxes.
Discussion
The findings address the core question—how AI relates to society—by showing that AI systems are embedded within, and constitutive of, social power relations, values, and political economies. The commodification of data, algorithmic opacity, and design-embedded biases demonstrate that technology co-produces social reality rather than neutrally mediating it. This dynamic yields epistemic and distributive inequalities, where access to data and interpretive tools (algorithms/AI) confers advantage. The paper argues for democratizing technology through ethical-by-design approaches, regulatory frameworks, and public participation, coupled with education (computational thinking) to empower users as knowledgeable agents. By integrating Critical Theory with the philosophy of information, the paper provides a cognitive map to interpret the “black box” sublime and to surface latent contradictions between individuals, technology, and society, thereby informing governance, ethics, and social justice strategies.
Conclusion
The study concludes that AI’s integration into society reproduces and transforms political and social contradictions in the digital sphere, reshaping power, culture, and social structures and contributing to the emergence of a data-producing yet data-excluded digital class. These dynamics generate conflicts between democratic values and data collection, market imperatives and data sharing, and individual rights and public welfare. The authors recommend strengthening democratic values and human rights through improved regulation of digital technologies, support for civil society, and public education on the consequences of digital transformation, alongside efforts to democratize technology and embed ethics into algorithms and AI systems. Future research should: (1) analyze differential impacts of digital transformation on specific social groups (e.g., minorities, women, economically disadvantaged populations); (2) investigate political and social mechanisms that generate conflicts in human and online environments; (3) develop new solutions to political and social conflicts across both environments.
Limitations
The article is a conceptual, theoretical analysis; no empirical datasets were generated or analyzed, which limits empirical generalizability. Given the rapid and ongoing evolution of AI and digital technologies, regulatory and ethical frameworks—and the phenomena analyzed—are in flux, potentially constraining the temporal scope of conclusions. Coverage is necessarily selective across vast interdisciplinary literatures and technologies.