Impact of artificial intelligence on human loss in decision making, laziness and safety in education

S. F. Ahmad, H. Han, et al.

This study examines the troubling effects of artificial intelligence on human decision-making, laziness, and security and privacy among university students in Pakistan and China. Drawing on survey responses from 285 students, the research by Sayed Fayaz Ahmad, Heesup Han, Muhammad Mansoor Alam, Mohd. Khairul Rehmat, Muhammad Irshad, Marcelo Arraño-Muñoz, and Antonio Ariza-Montes highlights the critical need for preventive measures before AI is widely adopted in education.

Introduction
The paper investigates ethical concerns arising from the adoption of AI in education, focusing on three major areas: (1) security and privacy, (2) loss of human decision-making, and (3) increased human laziness. While AI technologies such as plagiarism detection, exam-integrity tools, chatbots, learning management systems, analytics, and emotional AI offer benefits, they also raise issues of bias, consent, autonomy, and data misuse. The authors argue that despite wide interest and policy discussion, robust frameworks, guidelines, and regulations for AI ethics in education are lacking. The study posits and tests three hypotheses among university students in Pakistan and China:
- H1: AI has a significant impact on security and privacy issues.
- H2: AI has a significant impact on human laziness.
- H3: AI has a significant impact on the loss of human decision-making.
The purpose is to quantify these impacts and inform safer, more ethical deployment of AI in educational contexts.
Literature Review
The theoretical discussion reviews AI in education and its ethical dimensions. AI applications span tutoring, feedback, robots, admissions, grading, analytics, VR, and personalized learning. Ethical concerns include data analysis and sharing practices; algorithmic bias across gender, race, income, and social status; privacy; responsibility for right or wrong outcomes; and student records. The paper highlights guidelines stressing well-being, safety, trustworthiness, fairness, IP rights, privacy, and confidentiality, along with principles such as safe human-machine interaction, supporting teacher roles, cultural respect, diversity accommodation, and avoiding overreliance on systems.

Security and privacy: AI integration in classrooms and e-learning increases exposure to data breaches, hacking, and misuse, exacerbated by limited technical staff and budgets, remote learning, and interconnected systems. Large-scale data collection entails risks of bias and discrimination; notable incidents (e.g., Cambridge Analytica) underscore these vulnerabilities.

Making humans lazy: Growing reliance on AI may diminish cognitive effort, patience, and professional skills, fostering dependency and reduced motivation to perform tasks without AI assistance.

Loss of human decision-making: As AI supports or automates decisions, human cognitive processes such as critical thinking, intuition, and creative problem-solving risk attenuation. The expansion of AI into strategic and administrative decisions in education (e.g., admissions, student services) may lead to overtrust and decreased human scrutiny. The review suggests a hybrid human-AI collaboration model while recognizing risks of bias and reduced autonomy.
Methodology
Research Design: Positivist philosophy with a quantitative approach, using survey data analyzed with Partial Least Squares Structural Equation Modeling (PLS-SEM) in SmartPLS. Hypotheses were developed from existing theory and tested on measurable constructs.

Sampling and Data Collection: Purposive sampling of university students in Pakistan and China; N = 285 respondents. Data were collected from July 4, 2022, to August 31, 2022, and ethical consent procedures were followed.

Measures: The instrument covered demographics (gender, age, country, education level) and 1-5 Likert-scale items for four latent variables: AI (7 items; Suh & Ahn, 2022), Loss in Decision-Making (5 items; Niese, 2019), Safety & Privacy (5 items; Youn, 2009), and Human Laziness (4 items; Dautov, 2020). Items with low outer loadings (e.g., DM5, SP2, SP5) were removed.

Common Method Bias: Full collinearity VIFs for items were all below 3.3, indicating minimal common method bias.

Reliability and Validity: Item loadings were mostly above 0.7 (two items fell slightly below but were retained as acceptable); Cronbach's alpha and composite reliability exceeded 0.7 and AVE exceeded 0.5 for all constructs, establishing reliability and convergent validity. Discriminant validity was supported by the Fornell-Larcker criterion, HTMT ratios (< 0.85), and cross-loadings.

Model Fit and Predictive Relevance: SRMR of approximately 0.065-0.068 (< 0.08) indicated acceptable fit. Q² predictive relevance: Human Laziness = 0.338 (moderate), Safety & Privacy = 0.314 (moderate), Decision Making = 0.033 (low).

Multi-Group Analysis: No significant moderating effects were found for gender (male vs. female) or country (China vs. Pakistan).

Importance-Performance Map Analysis: AI performance was about 68.78% for all three outcomes; importance of AI was about 25.1% for Decision Making, 68.9% for Human Laziness, and 74.6% for Safety & Privacy.
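To make the reported thresholds concrete, here is a minimal Python sketch of how the measurement-model statistics named above (Cronbach's alpha, composite reliability, AVE, and full collinearity VIFs) are conventionally computed. This is not the authors' SmartPLS workflow; the item matrix, column names (HL1-HL4), and the loading values passed in are hypothetical placeholders, not values from the study.

```python
# Minimal sketch of PLS-SEM measurement-model checks; all inputs are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical 1-5 Likert responses: 285 respondents x 4 items of one construct
# (e.g., a Human Laziness scale); real values would come from the survey data.
items = pd.DataFrame(
    rng.integers(1, 6, size=(285, 4)), columns=["HL1", "HL2", "HL3", "HL4"]
)

def cronbach_alpha(df: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = df.shape[1]
    item_var = df.var(axis=0, ddof=1).sum()
    total_var = df.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def composite_reliability(loadings) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam**2).sum())

def average_variance_extracted(loadings) -> float:
    """AVE = mean squared standardized loading; > 0.5 supports convergent validity."""
    lam = np.asarray(loadings)
    return float((lam**2).mean())

def full_collinearity_vifs(df: pd.DataFrame) -> pd.Series:
    """Regress each item on all others; VIF = 1/(1-R^2). Values < 3.3 suggest low CMB."""
    vifs = {}
    X = df.to_numpy(dtype=float)
    for j, col in enumerate(df.columns):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(y)), others])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - resid.var() / y.var()
        vifs[col] = 1 / (1 - r2)
    return pd.Series(vifs)

hypothetical_loadings = [0.78, 0.81, 0.74, 0.69]  # illustrative, not the paper's
print("alpha:", round(cronbach_alpha(items), 3))
print("CR:   ", round(composite_reliability(hypothetical_loadings), 3))
print("AVE:  ", round(average_variance_extracted(hypothetical_loadings), 3))
print(full_collinearity_vifs(items).round(2))
```

Note that SmartPLS computes full collinearity VIFs from latent variable scores rather than raw items, so this item-level version is only an approximation of that check.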
Key Findings
- AI had significant positive effects on all three outcomes among university students in Pakistan and China (sketch of how such statistics are derived follows this list):
  • AI → Loss in Decision-Making: β = 0.277, t = 5.040, p < 0.001.
  • AI → Human Laziness: β = 0.689, t = 23.257, p < 0.001.
  • AI → Safety & Privacy: β = 0.686, t = 17.105, p < 0.001.
- Correlational and structural results consistently show the strongest associations with human laziness and safety/privacy, and a smaller but significant association with loss of human decision-making.
- Predictive relevance (Q²): Human Laziness = 0.338 (moderate), Safety & Privacy = 0.314 (moderate), Decision Making = 0.033 (low).
- Importance-Performance Map Analysis (IPMA): AI performance was about 68.78% across outcomes; importance weights were about 68.9% for laziness, 74.6% for safety/privacy, and 25.1% for decision-making, indicating that laziness and safety/privacy are the most affected areas.
- Model fit was acceptable (SRMR ≈ 0.065-0.068).
- Multi-group analyses found no significant differences in path coefficients by gender or country.
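The β, t, and p values above come from bootstrapping in SmartPLS, where the original path estimate is divided by its bootstrap standard error. As an illustration only, the sketch below bootstraps a standardized slope between two synthetic composite scores, a simplified single-predictor analog of that procedure; the variable names and all numbers here are synthetic, not the study's data.

```python
# Illustrative only: bootstrap a standardized slope and its t-statistic,
# a simplified analog of the PLS-SEM bootstrapping behind the beta/t/p above.
import numpy as np

rng = np.random.default_rng(1)
n = 285  # sample size matching the study

# Synthetic composite scores; in the study these would be latent variable
# scores for AI (exogenous) and, e.g., Human Laziness (endogenous).
ai = rng.normal(size=n)
laziness = 0.7 * ai + rng.normal(scale=0.7, size=n)

def std_beta(x: np.ndarray, y: np.ndarray) -> float:
    """Standardized OLS slope, which equals Pearson r for a single predictor."""
    return float(np.corrcoef(x, y)[0, 1])

beta_hat = std_beta(ai, laziness)

# Resample respondents with replacement and re-estimate the path each time.
boots = np.empty(5000)
for b in range(boots.size):
    idx = rng.integers(0, n, size=n)
    boots[b] = std_beta(ai[idx], laziness[idx])

se = boots.std(ddof=1)
t_stat = beta_hat / se  # ratio reported as the t value in PLS-SEM output
print(f"beta = {beta_hat:.3f}, bootstrap SE = {se:.3f}, t = {t_stat:.2f}")
```

A t value above roughly 1.96 corresponds to p < 0.05 (two-tailed), which is the criterion behind the significance claims in the findings.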
Discussion
Findings support the hypotheses that AI use in educational contexts is associated with increased human laziness, heightened security and privacy concerns, and a measurable reduction in human decision-making capabilities. The strongest effects on laziness and security/privacy suggest that automation and data-centric AI tools may reduce users’ motivation and cognitive engagement while increasing exposure to data misuse, breaches, and algorithmic bias. Reduced human decision-making capacity may result from overreliance on AI systems for routine and strategic tasks, leading to diminished critical thinking and intuition over time. The discussion links these results to prior literature on data vulnerabilities, skills gaps in educational institutions, and algorithmic bias. It highlights that while AI can streamline operations and enhance learning support, unchecked deployment risks undermining autonomy and ethics. The authors argue for human–AI collaboration models, robust governance, security-by-design, bias mitigation, user training, and cultural sensitivity to ensure AI augments rather than replaces human judgment in education.
Conclusion
AI significantly affects education by increasing user laziness, elevating security and privacy risks, and contributing to the erosion of human decision-making capacities. While AI assists in academic and administrative tasks and supports decision-making, growing dependence can introduce ethical and practical challenges. The study recommends designing transparent, secure, and bias-minimized AI systems; limiting overdependence to preserve human cognition; and providing training for teachers and students. Overall, careful implementation and governance are essential to realize AI’s benefits while mitigating its adverse impacts.
Limitations
The study focuses on only three ethical concerns (security and privacy, human laziness, loss of human decision-making), excluding other potential issues. It relies on purposive sampling of students in Pakistan and China, which may limit generalizability. The cross-sectional survey design constrains causal inference. Alternative methodologies and broader geographic settings are suggested for future research.