From challenges to opportunities: navigating the human response to automated agents in the workplace

I. Dula, T. Berberena, et al.

Explore the intriguing dynamics of workers' interactions with automated agents in the workplace, revealing how trust impacts adoption and usage. Research conducted by Ivan Dula, Tabea Berberena, Ksenia Keplinger, and Maria Wirzberger uncovers surprising insights into the emotional responses that shape our experiences with AI.

Introduction
The integration of Artificial Intelligence (AI) into the workplace is rapidly transforming job roles and tasks, creating both opportunities and challenges. While AI promises improved productivity and cost savings, potential downsides include increased managerial control and amplified discrimination. Existing research reveals a wide spectrum of worker perceptions of AI, ranging from positive to extremely negative, with some professionals even questioning their own expertise after interacting with AI. This highlights the significant challenge of successfully integrating AI into the work environment. This paper addresses that challenge by investigating how the interplay of workload, effort, and trust shapes workers' willingness to adopt AI, focusing specifically on automated agents (AAs) such as algorithms, large language models (LLMs), and chatbots. AAs are defined as adaptable, independent computational systems that fulfill designated goals and are increasingly used across diverse sectors. The study concentrates on digital AAs engaged in intellective tasks (tasks with demonstrably correct answers), which are common in knowledge-based work. The paper contributes a system dynamics simulation model to quantify and predict workers' experiences with AAs. By combining a systems approach with management theories and psychological concepts, the model helps bridge gaps in existing research and advances the fields of human-AI interaction and Collective Human-Machine Intelligence (COHUMAIN). It aims to provide actionable recommendations for managers to implement AAs effectively and achieve positive outcomes.
Literature Review
Several established theoretical models explain technology acceptance, including the Theory of Reasoned Action (TRA), the Technology Acceptance Model (TAM) and its extensions, the Motivation Model (MM), the Theory of Planned Behaviour (TPB), the Combined TAM and TPB (C-TAM-TPB), the Model of PC Utilization (MPCU), Innovation Diffusion Theory (IDT), and Social Cognitive Theory (SCT). The Unified Theory of Acceptance and Use of Technology (UTAUT) integrates these, identifying performance expectancy, effort expectancy, social influence, and facilitating conditions as key predictors of technology use. However, these models lack a detailed investigation of the dynamic behavioral consequences of technology use, including changes in cognitive and affective states and undesirable behaviors. This paper addresses this gap by building a conceptual model incorporating workload, effort, performance, emotional and social response, trust in AAs, and undesirable behavior. The model leverages the system dynamics approach, focusing on the interplay of reinforcing and balancing feedback loops to capture these complex dynamics.
Methodology
The research uses a system dynamics simulation model developed and tested in Vensim Professional 9.4.0. The model incorporates nine variables and seven feedback loops (five balancing, two reinforcing) representing the relationships between workload, effort, performance, use of AAs, emotional and social response, trust in AAs, and undesirable behavior. Workload is defined as the accumulation of tasks, effort as the cognitive/physical load allocated, and performance as the task completion rate. The model quantifies these using a stock-and-flow approach similar to Homer's model of worker burnout. The use of AAs is also modeled as a stock, representing the time spent using AAs, and productivity with and without AAs is captured through 'time per task' variables. The emotional and social response, modeled as a stock ranging from 0 (complete disengagement) to 1 (complete engagement), influences the propensity for undesirable behavior (also a 0-1 stock). Trust in AAs, likewise a 0-1 stock, is driven by perceived versus expected performance. The model considers intellective tasks in knowledge-based work contexts. To validate the model, a base run (Scenario 0) demonstrating equilibrium is conducted. Four additional scenarios are simulated: (1) a step increase in workload; (2) increased AA efficiency; (3) varying initial trust levels; and (4) a reduced emotional and social response adjustment time. Each scenario is evaluated by observing backlog, workweek, AA workweek, and trust in AAs. Sensitivity analyses assess the influence of parameters such as AA time per task, trust adjustment time, and expected performance on the main variables.
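To make the stock-and-flow logic concrete, the sketch below implements a heavily simplified version of such a model in Python. It is not the authors' Vensim model: the variable names, equations, and parameter values are illustrative assumptions, and only a subset of the paper's nine variables (backlog, workweek, AA hours, trust) is included. The point is simply to show how stocks accumulate inflows and outflows over discrete time steps.

```python
# Minimal stock-and-flow sketch of a worker/automated-agent (AA) loop.
# NOT the authors' Vensim model: equations, names, and parameter values
# are assumptions chosen only to illustrate the simulation mechanics.

from dataclasses import dataclass


@dataclass
class Params:
    dt: float = 0.25               # Euler time step in weeks (assumed)
    task_arrival: float = 40.0     # tasks arriving per week (assumed)
    time_per_task: float = 1.0     # hours per task without the AA (assumed)
    aa_time_per_task: float = 0.5  # hours per task when delegated to the AA (assumed)
    standard_workweek: float = 40.0
    trust_adjust_time: float = 4.0  # weeks over which trust adjusts (assumed)
    expected_gain: float = 0.3      # performance gain the worker expects (assumed)


def simulate(p: Params, weeks: float = 52.0, initial_trust: float = 0.5):
    """Integrate the stocks with simple Euler steps and return their trajectories."""
    backlog = 40.0          # stock: tasks waiting to be completed
    trust = initial_trust   # stock bounded to [0, 1]
    history = []

    for _ in range(int(weeks / p.dt)):
        # Effort (workweek) responds to backlog pressure, capped at 60 hours.
        workweek = min(60.0, p.standard_workweek * backlog / 40.0)

        # Trust determines the share of hours delegated to the AA.
        aa_hours = workweek * trust
        own_hours = workweek - aa_hours

        # Performance: task completion rate from both channels (tasks per week).
        completion = own_hours / p.time_per_task + aa_hours / p.aa_time_per_task

        # Perceived gain from AA use versus working unaided, compared
        # against the worker's expectation; the gap drives trust up or down.
        baseline = workweek / p.time_per_task
        perceived_gain = (completion - baseline) / baseline if baseline > 0 else 0.0
        trust_gap = perceived_gain - p.expected_gain

        # Stock updates (inflow minus outflow, Euler integration).
        backlog = max(0.0, backlog + (p.task_arrival - completion) * p.dt)
        trust = min(1.0, max(0.0, trust + (trust_gap / p.trust_adjust_time) * p.dt))

        history.append({"backlog": backlog, "workweek": workweek,
                        "aa_hours": aa_hours, "trust": trust})
    return history
```

In a full system dynamics treatment, the emotional and social response and undesirable-behavior stocks described above would be added in the same way, each with its own adjustment time and feedback into effort and AA use.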
Key Findings
Scenario 0 confirmed that the model remains in equilibrium when no external factors influence AA use. Scenario 1, with a step increase in workload, showed increased AA usage, growing trust, and a slight decrease in the workweek as the worker successfully managed the additional workload with AA support. Scenario 2, featuring a more efficient AA, surprisingly resulted in a higher backlog, lower trust, an increased workweek, and reduced AA usage compared to Scenario 1; sensitivity analysis confirmed that greater AA efficiency consistently produced this pattern. This counter-intuitive result is attributed to the reduced need for frequent AA interaction, which slows trust building. Scenario 3, varying initial trust, demonstrated that high initial trust led to quicker AA adoption and a lower backlog, while low initial trust resulted in a higher backlog, an increased workweek, and slower AA adoption. Sensitivity analyses showed that even low initial trust could lead to high adoption when trust adjusted quickly. Worker expectations strongly influenced trust: when expectations exceeded the perceived performance gains, trust declined. Scenario 4, with a reduced emotional and social response adjustment time, showed quicker trust growth but lower AA usage compared to Scenario 1, suggesting that highly responsive workers can achieve high productivity with less reliance on AAs. Across scenarios, the findings highlight the dynamic interplay between workload, AA efficiency, initial trust, emotional responses, and the overall use and impact of AAs in the workplace.
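For illustration only, the scenario design described above could be exercised on the toy sketch from the Methodology section roughly as follows. The parameter values are assumptions and the toy model is not calibrated to the paper, so it will not reproduce the reported results (in particular the counter-intuitive Scenario 2 outcome); Scenario 4 is omitted because the sketch lacks the emotional and social response stock. The snippet only shows how the scenario levers (workload, AA efficiency, initial trust) map onto model inputs.

```python
# Illustrative scenario runs on the toy sketch above (assumed values, not the
# paper's calibration); a workload "step" is approximated by a higher arrival rate.

# Scenario 1: step increase in workload.
s1 = simulate(Params(task_arrival=50.0), initial_trust=0.5)

# Scenario 2: same workload increase, but a more efficient AA.
s2 = simulate(Params(task_arrival=50.0, aa_time_per_task=0.3), initial_trust=0.5)

# Scenario 3: high versus low initial trust under the baseline workload.
s3_high = simulate(Params(), initial_trust=0.9)
s3_low = simulate(Params(), initial_trust=0.1)

# Inspect the end state of each run (backlog, trust, workweek).
for name, run in [("S1", s1), ("S2", s2), ("S3 high", s3_high), ("S3 low", s3_low)]:
    last = run[-1]
    print(f"{name}: backlog={last['backlog']:.1f}, trust={last['trust']:.2f}, "
          f"workweek={last['workweek']:.1f}")
```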
Discussion
The findings address the research question by demonstrating the complex interplay of factors influencing AA adoption and worker experience. The counter-intuitive result that lower-efficiency AAs can outperform higher-efficiency ones highlights the crucial role of trust in AA adoption, underscoring the need for managers to consider not just AA efficiency but also the dynamics of trust-building through interaction. The impact of initial trust emphasizes the importance of pre-implementation strategies that address worker perceptions and expectations. The findings contribute to understanding the dynamic complexity of human-AI interaction, providing nuanced insights beyond simple technology acceptance models, and the model bridges the gap between theoretical frameworks and practical implementation by offering actionable recommendations for managing human-AI integration. Overall, the results highlight the importance of considering both efficiency and the human element when implementing AAs.
Conclusion
This study contributes significantly to understanding the dynamic interplay between human workers and automated agents in the workplace. The system dynamics model provides valuable insights into the complex feedback loops influencing AA adoption and usage. Future research could focus on empirical validation of the model's predictions using real-world data collected during human-AI interactions. Further investigation into the role of organizational culture and specific task types on AA adoption is also warranted. The model’s potential application to diverse workplace settings should also be explored. Ultimately, a deeper understanding of these dynamics will allow us to create more human-centered AI systems that foster productivity and well-being in the workplace.
Limitations
The model is based on a specific scenario (software developers and intellective tasks), potentially limiting generalizability to other contexts. The model uses simplifying assumptions for certain variables, and the parameters may need further refinement based on empirical data. The lack of empirical data to validate the model's predictions is another limitation. Future work should focus on collecting and analyzing data from real work environments to validate the model and further refine its parameters.