Impact of artificial intelligence on human loss in decision making, laziness and safety in education

S. F. Ahmad, H. Han, et al.

This study examines the effects of artificial intelligence on decision-making, human laziness, and security and privacy among university students in Pakistan and China. The findings point to a pressing need for caution as AI becomes more prevalent in education.

Introduction
The paper investigates ethical and behavioral concerns associated with AI adoption in education, focusing on how AI affects security and privacy, human laziness, and the loss of human decision-making among university students. The context is the rapid proliferation of AI in educational technologies (e.g., exam integrity, chatbots, analytics, affective/emotional AI) and the concurrent lack of agreed frameworks, policies, and regulations to address ethical issues. The study highlights debates around what constitutes “ethical” AI in education and underscores concerns such as consent, data misuse, bias, equity, privacy, and trust. It notes that while AI offers organizational benefits (e.g., information security, competitive advantage), it also raises significant risks. The research centers on three focal areas in education: (1) security and privacy, (2) issues of human decision-making, and (3) impact on human behavior (including laziness and bias). The hypotheses tested are that AI significantly impacts: (a) security and privacy issues, (b) human laziness, and (c) the loss of human decision-making capabilities.
Literature Review
The theoretical discussion reviews extensive literature on AI’s expanding role across sectors, including education (tutoring, feedback, grading, analytics, VR, social robots). Ethical concerns span data analysis, interpretation, sharing, prevention of bias (gender, race, socio-economic status), privacy, data access, accountability, and data security. The review synthesizes principles for educational AI design emphasizing user encouragement, safe human–machine collaboration, fairness, ergonomics, cultural respect, inclusivity, and preserving teachers’ roles.

Security and privacy: As AI tools permeate classrooms, they create potential privacy and security risks due to data collection, storage, and algorithmic decisions. Institutions often lack the expertise and resources to secure large, sensitive student datasets, and remote learning expands the attack surface. Interconnections between AI and cybersecurity amplify these risks, and breaches can have serious consequences. The review references high-profile data-misuse cases to illustrate vulnerabilities and calls for integrating security into AI design and deployment. Hypothesis: AI has a significant impact on security and privacy issues.

Human laziness: Increasing reliance on AI can reduce human cognitive engagement, fostering dependency and diminishing motivation to perform tasks, potentially degrading professional skills and autonomy. In educational settings, automation may lead students and teachers to avoid effortful tasks. Hypothesis: AI significantly increases human laziness.

Loss of human decision-making: AI’s efficiency in processing data can crowd out human cognitive capabilities such as critical thinking and creativity. In universities, AI-driven processes (e.g., admissions decisions, record analysis) may shift authority to systems, reducing staff and teachers’ active participation and reasoning and thereby eroding decision-making skills. Hypothesis: AI significantly contributes to the loss of human decision-making.
Methodology
Research design: The study adopts a positivist philosophy and a quantitative approach. Hypotheses are developed from existing theory and tested on survey data using partial least squares structural equation modeling (PLS-SEM) in SmartPLS.

Sample and sampling: Purposive (non-probability) sampling targeted university students in Pakistan and China. A total of 285 respondents participated, with data collected in July–August 2022 under informed consent.

Measures: Constructs covered perceptions of artificial intelligence, decision making, human laziness, and security/privacy issues; items and codes are reported in Table 1. During scale refinement, items with low loadings (e.g., DM5, SP2, SP5) were removed.

Common method bias: Assessed via variance inflation factors (VIFs); all item VIFs were below 3.3, indicating minimal common method bias.

Reliability and validity: Outer loadings were mostly above 0.7, and Cronbach’s alpha and composite reliability exceeded 0.7 for all constructs. Convergent validity was supported (AVE ≥ 0.5). Discriminant validity was confirmed by the Fornell–Larcker criterion, HTMT ratios (< 0.85), and cross-loadings (see the sketch below for how these statistics are computed).

Model evaluation: Structural relationships were tested in SmartPLS. Model fit was acceptable (estimated-model SRMR = 0.068), and predictive relevance was assessed via Q² for the endogenous constructs. Multi-group analysis (MGA) examined moderation by gender and country.
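As a concrete illustration of the reliability and convergent-validity statistics above, the Python sketch below computes Cronbach’s alpha, composite reliability, and AVE from raw item responses and standardized outer loadings. This is a minimal sketch, not the authors’ SmartPLS workflow; the responses and loadings are synthetic stand-ins rather than the paper’s data.

import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = (k / (k - 1)) * (1 - sum of item variances / variance of the sum score)
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

def composite_reliability(loadings: np.ndarray) -> float:
    # rho_c = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2)), standardized loadings
    s = loadings.sum() ** 2
    return s / (s + (1 - loadings ** 2).sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    # AVE = mean of squared standardized loadings
    return float((loadings ** 2).mean())

# Synthetic stand-ins: four items driven by one common factor (not the paper's data).
rng = np.random.default_rng(42)
factor = rng.normal(size=(285, 1))
items = pd.DataFrame(0.8 * factor + 0.6 * rng.normal(size=(285, 4)))
loadings = np.array([0.78, 0.81, 0.74, 0.72])

print(f"alpha = {cronbach_alpha(items):.3f}")                    # threshold: > 0.7
print(f"CR    = {composite_reliability(loadings):.3f}")          # threshold: > 0.7
print(f"AVE   = {average_variance_extracted(loadings):.3f}")     # threshold: >= 0.5

The thresholds in the comments are the ones the paper reports meeting: alpha and composite reliability above 0.7, and AVE at or above 0.5.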
Key Findings
- Sample profile (N = 285): 57.5% male, 42.5% female; China 49.8% (n = 142), Pakistan 50.2% (n = 143); age: under 20 (9.1%), 20–25 (49.1%), 26 or older (41.8%); study level: undergraduate 52.3%, graduate 41.8%, postgraduate 6.0%.
- Structural paths (Table 8), all significant (see the sketch after this list for how bootstrap t-values are obtained):
  - AI → Loss in Decision Making: β = 0.277, t = 5.040, p < 0.001.
  - AI → Human Laziness: β = 0.689, t = 23.257, p < 0.001.
  - AI → Safety and Privacy: β = 0.686, t = 17.105, p < 0.001.
- Importance–Performance Map Analysis (IPMA, Table 9): the performance of AI is ≈ 68.78 (on a 0–100 scale) across all three maps; the importance of AI is 0.689 for human laziness, 0.251 for decision making, and 0.746 for safety and privacy.
- Predictive relevance (Table 12): Q² = 0.033 for decision making, 0.338 for human laziness, and 0.314 for safety and privacy.
- Model fit (Table 11): estimated-model SRMR = 0.068; NFI ≈ 0.809.
- Multicollinearity/common method bias: all VIFs < 3.3.
- Multi-group analysis: no significant moderation by gender or country (all path-difference p-values > 0.05).
- Authors’ summary: 68.9% of human laziness, 68.6% of safety/privacy issues, and 27.7% of the loss in decision-making are associated with AI (figures mirroring the standardized path coefficients), indicating that laziness is the most affected outcome.
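To make the Table 8 statistics concrete: in PLS-SEM, a path’s t-value is the original coefficient divided by its bootstrap standard error, and a t-value above roughly 1.96 corresponds to two-tailed p < 0.05. The Python sketch below reproduces that logic on synthetic data, using a single standardized path as a stand-in for AI → Human Laziness; the sample size matches the study’s N = 285, but the effect size and printed numbers are illustrative assumptions, not the study’s estimates.

import numpy as np

rng = np.random.default_rng(0)
n = 285  # matches the study's sample size
ai = rng.normal(size=n)
laziness = 0.7 * ai + rng.normal(scale=0.7, size=n)  # synthetic relationship

def path_coefficient(x: np.ndarray, y: np.ndarray) -> float:
    # Standardized slope of y on x; with one predictor this equals their correlation.
    return float(np.corrcoef(x, y)[0, 1])

beta = path_coefficient(ai, laziness)

# Bootstrap: resample respondents with replacement and re-estimate the path each time.
boot = np.array([
    path_coefficient(ai[idx], laziness[idx])
    for idx in (rng.integers(0, n, size=n) for _ in range(5000))
])
t_stat = beta / boot.std(ddof=1)  # original estimate over bootstrap standard error
print(f"beta = {beta:.3f}, t = {t_stat:.2f}")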
Discussion
The findings support the hypotheses that AI adoption in education is associated with increased human laziness, elevated safety and privacy concerns, and a measurable contribution to the loss of human decision-making capacity. The results align with prior literature citing security/privacy vulnerabilities due to extensive data collection and algorithmic processing, as well as concerns that automation diminishes users’ motivation and cognitive engagement. In educational contexts, greater reliance on AI-driven systems can reduce physical and social interactions that nurture critical thinking and decision-making skills. Algorithmic errors may be repeated consistently without human oversight, exacerbating risks. Cultural variability in privacy perceptions further complicates policy responses, underscoring the need for ethical, transparent design, robust security practices, adequate user training, and governance frameworks to mitigate these harms while leveraging AI’s benefits.
Conclusion
The study contributes empirical evidence that AI in educational settings correlates with increased human laziness, higher security and privacy risks, and a reduction in human decision-making capabilities among students. While AI offers substantial benefits for academic and administrative tasks, unchecked reliance can erode cognition and autonomy. The authors recommend prioritizing ethical-by-design principles, strengthening algorithms and security controls, minimizing bias, reducing dependency on AI in decision processes to preserve human cognition, and providing training for teachers and students. Future research should broaden ethical dimensions beyond the three examined, apply diverse methodologies, and replicate across different geographies to enhance generalizability.
Limitations
The study focuses on three ethical concerns—loss of decision-making, human laziness, and security/privacy—excluding other relevant issues. The non-probability (purposive) sampling of students in Pakistan and China limits generalizability. Alternative research designs and broader contexts are needed to validate and extend the findings.