Business
From challenges to opportunities: navigating the human response to automated agents in the workplace
I. Dula, T. Berberena, et al.
Explore the intriguing dynamics of workers' interactions with automated agents in the workplace, revealing how trust impacts adoption and usage. Research conducted by Ivan Dula, Tabea Berberena, Ksenia Keplinger, and Maria Wirzberger uncovers surprising insights into the emotional responses that shape our experiences with AI.
~3 min • Beginner • English
Introduction
AI is rapidly transforming the nature of work by altering individuals' workloads and everyday tasks, and by stimulating new processes and practices. In this rapidly changing environment, leaders must balance new opportunities (e.g., improved productivity, time and cost savings) against unintended challenges of AI use, such as increased managerial control or exacerbated discrimination. Previous research suggests that workers' perceptions of AI range from positive to very negative, and that after interacting with AI some professionals even start doubting their own expertise. Successfully using newly implemented AI in the work environment can therefore be a substantial challenge. The research question addressed is: how does the interplay of workload, effort and trust affect workers' willingness to adopt AI in their work environment?
The paper aims to foster a better understanding of workers' interactions with digital automated agents (AAs, such as algorithms, LLMs and chatbots) in work settings, focusing on intellective tasks (tasks with demonstrably correct answers, common in knowledge work). The study responds to calls for computational modelling in organisational science by building a system dynamics simulation (Vensim Professional 9.4.0) to quantify, evaluate and predict workers' experience with AAs. The model captures balancing and reinforcing feedback loops among workload, effort, performance, trust, emotional/social response and AA use. Findings suggest that lower-efficiency AAs may outperform higher-efficiency AAs because trust constrains adoption; that low initial trust can sometimes lead to higher adoption; and that greater emotional/social responsiveness can increase trust yet reduce AA utilisation. The authors also derive managerial recommendations and highlight the interdisciplinary contribution to human-AI interaction and COHUMAIN research.
Literature Review
The Background and Conceptual model draw on established technology acceptance and behaviour theories: Theory of Reasoned Action (TRA), Technology Acceptance Model (TAM) and extensions, Theory of Planned Behaviour (TPB), Combined TAM-TPB, Model of PC Utilization (MPCU), Innovation Diffusion Theory (IDT), Social Cognitive Theory (SCT), and the Unified Theory of Acceptance and Use of Technology (UTAUT) and its extensions. These frameworks identify performance expectancy, effort expectancy, social influence, facilitating conditions, hedonic motivation, price value and habit as predictors of intention and use, moderated by demographics and voluntariness. However, prior work lacks investigation into dynamic behavioural consequences (e.g., changes in workload, frustration, undesirable behaviours) and simulation modelling of these dynamics.
The authors integrate literature on workload, effort and performance; trust in automation and AI; emotional and social responses to automation; and undesirable behaviours. They propose a conceptual model with nine variables and seven feedback loops (five balancing, two reinforcing). Key links include: higher workload increases effort and/or AA use; AA use can reduce emotional and social response, affecting rationality of interactions and performance; trust grows with reliable AA performance and declines with poor performance; reduced emotional/social response can either increase rationality and performance in contexts where emotions hinder performance or increase propensity for undesirable behaviour in other contexts. This synthesis motivates a system dynamics approach to capture endogenous feedback-driven behaviours in human–AA interaction for intellective tasks within knowledge-work contexts.
Methodology
The study employs system dynamics modelling (Forrester; Sterman) implemented in Vensim Professional 9.4.0. The model is constructed iteratively to reflect the conceptual feedback structure while restricting scope to intellective tasks in knowledge work (e.g., software development). Core constructs are represented as stocks with flows and delays: Backlog (workload), Workweek (human effort), AA Workweek (effort using AA), Emotional and Social Response, Propensity for Undesirable Behaviour (kept inactive in scenarios), and Trust in AA. Performance emerges via completion rate determined by time per task (human) and AA time per task.
Quantifications and key assumptions:
- Workload as Backlog (stock of unresolved tasks). Tasks arrive and are completed; base scenario has 5 tasks/week arrival, target 1-week completion time.
- Human effort (Workweek) is a stock (initial 40 h/week, adjustable up to 48 h). Desired workweek depends on schedule pressure (derived from desired vs. standard completion rate) and undesirable behaviour effect.
- AA Workweek is a stock (initial 0 h/week) driven by desired AA workweek, which depends on desired completion rate and trust. Delays capture adjustment time.
- Productivity parameters: Standard time per task (human) = 8 h; AA time per task baseline = 6.4 h (from Peng et al., 2023), varied in scenarios (down to 3.2 h or sensitivity 0.1–8 h).
- Emotional and Social Response is a stock (initial 1) that decreases with higher AA usage share (AA workweek / total workweek), with a 4-week adjustment time (reduced to 1 week in scenario 4). It multiplicatively reduces time per task (lower response → faster task time, modelling increased rationality of interactions).
- Propensity for Undesirable Behaviour (initial 0) is modelled but kept inactive for the intellective-task context to focus on the beneficial B3 loop.
- Trust in AA is a stock (initial 0.5 unless varied). Trust updates when AA is used, based on perceived vs. expected performance. Perceived performance is the ratio of actual completion rate (current time per task) to standard potential completion rate; expected performance initially set to 1 and varied in sensitivity (0.85–1.15). Trust adjustment time default 2 weeks (varied 0.1–10 weeks in sensitivity). Trust scales desired AA workweek.
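The trust dynamics described above can be sketched as a single Euler-integrated stock. This is an illustrative reconstruction under the stated parameters, not the authors' published Vensim equations; the function name and the linear adjustment form are assumptions.

```python
def trust_step(trust, aa_workweek, time_per_task, dt,
               standard_time_per_task=8.0,   # h/task, human baseline
               expected_performance=1.0,     # varied 0.85-1.15 in sensitivity runs
               trust_adjustment_time=2.0):   # weeks, varied 0.1-10 in sensitivity runs
    """One Euler step of the Trust in AA stock (illustrative sketch).

    Trust only updates while the AA is actually used; it moves in
    proportion to the gap between perceived and expected performance.
    """
    if aa_workweek <= 0:
        return trust  # no interaction, no trust update
    # Perceived performance: actual vs. standard potential completion rate.
    perceived = standard_time_per_task / time_per_task
    change = (perceived - expected_performance) / trust_adjustment_time
    # Keep trust bounded in [0, 1].
    return min(1.0, max(0.0, trust + change * dt))
```

With the base parameters, a task time below the 8 h standard (perceived performance above expectations) nudges trust upward each step, which is how sustained interaction accumulates trust in the model.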
Equations follow standard system dynamics formulations (stocks integrate net flows; multipliers via table functions for non-linearities). Key stock initialisations include: Backlog(0)=5; Workweek(0)=40; AA Workweek(0)=0; Emotional and Social Response(0)=1; Propensity for Undesirable Behaviour(0)=0; Trust(0)=0.5. The complete equation listing is provided in supplementary materials.
Simulation design:
- Continuous simulation over 20 weeks; time step 0.0625 weeks; Euler integration. Focus indicators: Backlog, Workweek, AA Workweek, Trust in AA.
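The overall simulation loop can be sketched as a much-simplified Euler integration of the five active stocks. The parameter values (dt = 0.0625 weeks, 20-week horizon, 8 h/task, 6.4 h AA time per task, 2-week trust adjustment) follow the study's base scenario, but the flow formulations below are simplifying assumptions, not the published model; in particular the emotional/social-response multiplier and the workweek adjustment are guessed linear forms.

```python
def simulate(weeks=20.0, dt=0.0625, arrivals_shock_week=5.0,
             aa_time_per_task=6.4, trust0=0.5, esr_adjust=4.0):
    """Illustrative sketch of the stock-and-flow structure (not the Vensim model)."""
    std_time = 8.0            # h/task, human standard
    target_time = 1.0         # weeks, target backlog completion time
    ww_min, ww_max = 40.0, 48.0
    ww_adjust = 1.0           # weeks, assumed workweek adjustment delay
    aa_ww_adjust = 1.0        # weeks, assumed AA workweek adjustment delay
    trust_adjust = 2.0        # weeks
    expected_perf = 1.0

    backlog, ww, aa_ww, esr, trust = 5.0, 40.0, 0.0, 1.0, trust0
    t = 0.0
    while t < weeks:
        arrivals = 5.0 if t < arrivals_shock_week else 7.0
        # Lower emotional/social response -> more rational interaction -> faster tasks.
        time_per_task = std_time * (0.5 + 0.5 * esr)
        completion = ww / time_per_task + aa_ww / aa_time_per_task
        desired_rate = backlog / target_time
        shortfall = max(0.0, desired_rate - ww / time_per_task)
        desired_aa_ww = trust * shortfall * aa_time_per_task  # trust scales AA use
        desired_ww = min(ww_max, max(ww_min, desired_rate * time_per_task))
        # Euler integration of the stocks.
        backlog += (arrivals - completion) * dt
        ww += (desired_ww - ww) / ww_adjust * dt
        aa_ww += (desired_aa_ww - aa_ww) / aa_ww_adjust * dt
        usage_share = aa_ww / max(ww + aa_ww, 1e-9)
        esr += ((1.0 - usage_share) - esr) / esr_adjust * dt
        if aa_ww > 0:  # trust only updates while the AA is used
            perceived = std_time / time_per_task
            trust = min(1.0, max(0.0, trust + (perceived - expected_perf) / trust_adjust * dt))
        t += dt
    return backlog, ww, aa_ww, trust
```

Run without the shock, the sketch reproduces the Scenario 0 equilibrium (backlog flat at 5, no AA use, trust unchanged); with the week-5 shock, backlog rises, AA use ramps up and trust grows, qualitatively matching Scenario 1.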
Scenarios:
- Scenario 0 (baseline): Equilibrium with 5 tasks/week; no AA use.
- Scenario 1: External shock increasing task arrivals from 5 to 7/week at week 5.
- Scenario 2: Same workload shock as Scenario 1 plus increased AA efficiency (AA time per task from 6.4 to 3.2 h). Sensitivity on AA time per task 0.1–8 h.
- Scenario 3: Vary initial trust: high (0.75; 3a) and low (0.25; 3b) under workload shock; sensitivity on trust adjustment time (0.1–10 weeks) and expected performance (0.85–1.15) for high (3c) and low (3d) trust cases.
- Scenario 4: Increase emotional/social response sensitivity by reducing its adjustment time from 4 to 1 week under workload shock.
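The scenario design can be summarised as parameter overrides relative to the base run. The dictionary keys and parameter names below are illustrative labels for a hypothetical simulation harness, not identifiers from the paper; only the numeric values come from the scenario descriptions above.

```python
# Scenario table: overrides applied to base-run parameters (illustrative names).
SCENARIOS = {
    "0_baseline":       {"arrivals_shock_week": float("inf")},  # stay at 5 tasks/week
    "1_workload_shock": {},                                     # 5 -> 7 tasks/week at week 5
    "2_efficient_aa":   {"aa_time_per_task": 3.2},              # AA time per task halved
    "3a_high_trust":    {"trust0": 0.75},                       # high initial trust
    "3b_low_trust":     {"trust0": 0.25},                       # low initial trust
    "4_responsive_esr": {"esr_adjust": 1.0},                    # 4 -> 1 week adjustment
}
```

Scenarios 1-4 all include the workload shock; sensitivity runs (AA time per task 0.1-8 h, trust adjustment 0.1-10 weeks, expected performance 0.85-1.15) would sweep the corresponding parameters around these values.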
Key Findings
- Scenario 0 (equilibrium): With 5 tasks/week arriving and 5 completed weekly, Backlog remains stable; no change in Workweek or AA use; Trust unchanged without AA use.
- Scenario 1 (workload step-up to 7 tasks/week at week 5): Backlog rises then stabilises (M=6.77, SD=1.15); Workweek increases then settles slightly below its initial level as the AA takes on load (M=38.60, SD=1.56); AA Workweek increases and stabilises at roughly 7 h/week by week 20 (M=6.42, SD=4.33). Trust in AA increases over time due to effective, error-free AA usage; performance improves via faster task completion through the B3 loop.
- Scenario 2 (more efficient AA: 6.4 → 3.2 h/task): Counterintuitively, Backlog is similar or slightly higher than Scenario 1 (M=6.87, SD=1.18); Trust increases more slowly and ends slightly lower than Scenario 1; Workweek is higher (M=42.44, SD=1.64); AA Workweek is lower (M=2.72, SD=1.74). Sensitivity (AA time per task 0.1–8 h) shows that greater AA efficiency consistently yields higher Backlog, lower Trust, higher human Workweek, and reduced AA usage. Explanation: higher efficiency reduces interaction frequency, slowing trust build-up and constraining adoption, leading workers to rely more on own effort.
- Scenario 3 (initial trust): High initial trust (0.75) yields faster AA adoption, lower Backlog (M=6.50, SD=0.92), higher early AA use (M=6.76, SD=4.73), and lower Workweek (M=37.75, SD=2.31), converging with Scenario 1 over time except Trust reaches high levels faster. Low initial trust (0.25) shows higher Backlog (M=7.74, SD=1.88), higher Workweek (M=39.66, SD=2.20), and slower, gradual AA adoption (M=6.10, SD=4.57); avoids initial overshoot in AA use but may later achieve similar or higher AA usage while maintaining higher effort. Sensitivity: With low initial trust, very short trust adjustment times can drive high early AA adoption akin to high-trust scenarios. Expectations strongly modulate trust trajectory: lower expected performance (0.85) accelerates trust growth; higher expectations (1.15) can reduce trust over time, even from a high-trust start.
- Scenario 4 (faster emotional/social response adjustment 4 → 1 week): Backlog slightly lower (M=6.60, SD=1.00) and Workweek slightly higher initially (M=39.30, SD=1.16) versus Scenario 1, converging by week 20. Trust increases faster than Scenario 1; however, AA use is significantly lower (M=5.49, SD=3.52). Mechanism: faster decrease in emotional/social response reduces time per task more quickly (greater rationality of interactions), improving productivity and reducing the need for AA usage.
Overarching insights:
- Lower-efficiency AA may outperform higher-efficiency AA in long-run adoption and overall outcomes due to more sustained interactions building trust that facilitates usage.
- Low initial trust can, in certain conditions (e.g., rapid trust adjustment, low expectations vs. perceived gains), lead to higher adoption trajectories.
- Greater emotional/social responsiveness can foster trust growth but reduce AA utilisation, as productivity gains from increased rationality offset the need for AA assistance.
Discussion
The simulations indicate that workers deploy AA primarily in response to externally driven workload increases. Effective AA support stabilises backlog and reduces pressure. However, persistent reliance on AA can progressively build trust, encouraging continued AA use. Managers should anticipate these dynamics and consider policies to prevent over-reliance (e.g., restoring workloads, calibrating expectations, monitoring for undesirable behaviours).
Contrary to intuition, more efficient AAs do not necessarily reduce workload more than less efficient ones, and can lead to lower trust, lower AA usage, and higher human effort. Trust is built through interaction frequency; an overly efficient AA reduces exposure and slows trust growth, constraining adoption. This has implications for both managerial deployment and AA design: balancing capability to ensure sufficient user engagement may promote sustainable adoption. Developers aiming for maximal use may prefer AAs that are better than humans but not so superior that they curtail interaction frequency; product strategies that limit capabilities in widely available versions could inadvertently (or strategically) increase long-run use by fostering gradual trust-building.
Initial trust and expectations critically shape trajectories. High initial trust accelerates adoption and performance gains. Low initial trust can create a vicious cycle of avoidance, limited exposure, and further distrust, though rapid trust updating or favourable expectation–performance calibration can break this cycle. Managers should address trust and expectation calibration proactively before deployment, via training, demonstrations, and transparent communication about capabilities.
Individual differences in emotional/social responsiveness also matter. Workers who quickly adapt their emotional/social responses to AA use can achieve productivity gains (via increased rationality of interactions) with lower AA usage and similar performance, but may also reduce personal effort over time by delegating more. Organisations should leverage these strengths while mitigating risks of over-delegation or disengagement.
Overall, the system dynamics perspective clarifies how reinforcing and balancing feedbacks generate non-intuitive outcomes in human–AA interaction. The findings inform policies for sustainable, effective AA implementation that consider workload dynamics, trust development, expectation management, and individual differences.
Conclusion
This study applies system dynamics modelling to examine human–AA interactions in workplaces performing intellective tasks, focusing on feedback loops among workload, effort, performance, trust, AA use, and emotional/social responses. The simulations reveal: (1) lower-efficiency AA can outperform higher-efficiency AA in adoption and outcomes due to trust-constrained adoption of highly efficient AA; (2) low initial trust may, under certain conditions (fast trust adjustment, calibrated expectations), accelerate adoption; and (3) stronger emotional/social responsiveness can increase trust while reducing AA usage through productivity gains from more rational interactions. By formalising these dynamics, the work provides fine-grained insights to guide researchers, managers, and developers toward productive human–AI partnerships and healthier work environments. Future research should empirically validate these mechanisms and extend the model to diverse task contexts and organisational settings.
Limitations
The model is an idealised, conceptual representation that captures only selected aspects of real-world human–AA interactions. Variables like emotional/social response, undesirable behaviour, and trust are simplified and parameterised without direct empirical measurement in this study. The simulations are not validated with behavioural data from actual workplaces; therefore, generalisability is limited. Results may vary across task types beyond intellective tasks, organisational cultures, or AA error characteristics. The undesirable behaviour loop was kept inactive, which may understate risks in contexts where emotional/social reduction could increase unethical behaviour. Empirical studies are needed to validate and refine parameters, structures, and boundary conditions.