
Interdisciplinary Studies
Machines that feel: behavioral determinants of attitude towards affect recognition technology—upgrading technology acceptance theory with the mindsponge model
P. Mantello, M. Ho, et al.
Dive into the intriguing world of affect recognition technology (ART) with this research by Peter Mantello, Manh-Tung Ho, Minh-Hoang Nguyen, and Quan-Hoang Vuong. This study sheds light on the behavioral factors influencing attitudes towards emotional AI, revealing how familiarity with technology and social media habits can ease apprehension toward ART. Discover cultural insights and implications for policymakers in this rapidly evolving field!
~3 min • Beginner • English
Introduction
Artificial intelligence, particularly emotional AI, is increasingly embedded across domains such as in-cabin systems, education, chatbots, toys, assistants, advertising, robotics, and security. Emotional AI is distinct in harvesting non-conscious, psychophysiological data (e.g., heart rate, respiration, skin conductance, gaze) and is moving toward multimodal fusion and transfer learning. While marketed as enhancing daily life, affect-recognition technologies raise significant legal, ethical, cultural, and scientific tensions that can influence acceptance: covert non-consensual data capture and misuse; regulatory opacity and cross-cultural privacy norms; generic Western-centric models and algorithmic bias; practical ethical framework shortcomings and organizational readiness; and scientific validity concerns regarding the computability of emotions.
These issues suggest adoption is contingent on cultural, social, and political contexts, challenging traditional acceptance models. This study aims to understand attitudes toward an era where machines both feel and feed on emotions to influence behavior. The authors integrate TAM with the mindsponge information-filtering framework and hierarchical Bayesian modeling, incorporate demographic and contextual factors (region, religiosity, income, political regime), and examine whether acceptance differs by data harvester (government vs. private sector).
Research questions:
RQ1) Does extending TAM by incorporating core personal values and environmental factors yield more reliable indicators for acceptance of affect recognition technology?
RQ2) Do correlations between acceptance and behavioral factors change when the data collector is the government versus the private sector?
RQ3) How do perceived utility and perceived familiarity with AI predict attitudes toward non-conscious emotional data harvesting?
RQ4) How do different social media uses associate with attitudes toward non-conscious emotional data harvesting?
Literature Review
Multiple theories explain technology adoption: Innovation Diffusion Theory, Theory of Reasoned Action, Theory of Planned Behavior, TAM, UTAUT, and the mindsponge theory. TAM posits perceived usefulness and ease of use as key predictors; TAM2 adds subjective norms. TAM has strong empirical support but has limitations for affect-recognition technologies: it under-addresses cross-cultural variance in forming perceptions and presumes conscious, direct user-technology interaction. Emerging evidence shows cultural values shape TAM-relevant perceptions, with differing acceptance across countries (e.g., social credit systems, social media screening). Emotional AI often operates ambiently and covertly, complicating the subject-object interaction assumed by TAM.
The mindsponge framework models the mind as a filter for new inputs (values, ideas, technologies), where usefulness and ease of use serve as trust evaluators but are not solely determinative. Acceptance depends on core values and external environments (culture, society) that amplify or constrain adoption. The framework has been applied to adoption in education, entrepreneurial finance, and vaccine production but has not been integrated with TAM for attitudes toward non-conscious data harvesting. The study addresses this gap by combining TAM with mindsponge insights and hierarchical Bayesian modeling to capture cross-cultural and contextual influences on acceptance.
Methodology
Design: Three-step approach. (1) Identify TAM-based variables relevant to acceptance of non-conscious emotional data harvesting: perceived utility, perceived familiarity with technologies, and attitude toward AI systems. (2) Extend TAM using mindsponge factors: core values (religiosity), environmental cultural factors (region of home country), political regime of home country, and income level. (3) Build Bayesian multi-level regression models employing these factors as varying intercepts to compare effects across contexts and evaluate model plausibility and fit.
Participants and data collection: Online survey via Google Forms administered in more than 10 classes at Ritsumeikan Asia Pacific University (APU) from late 2020 to early 2021. Sample: N=1015 international and domestic students (ages 18–27) from nearly 100 countries spanning 8 world regions. Ethical procedures followed the guidelines of the Science Council of Japan and APU's Research Code; participation was voluntary and anonymized, with implied consent.
Measures: Socio-demographic and contextual variables: income, school year, major, region (derived from nationality; 8 regions), religiosity (not/mildly/very religious), and political regime category (Economist Intelligence Unit Democracy Index 2020). Predictors: social media usage duration (SocialMedia); PostingSM (public vs. private posting tendency); AngrySM (frequency of feeling angry in social media debates, coded so that higher values mean less often); DiversifySM (active diversification of feeds); Familiarity (mean of self-rated familiarity with AI, smart cities, emotional AI, and coding); Perceived Utility (perceived utility of AI applications); Attitude toward AI (optimism about AI's benefits). Outcomes: worry/acceptance regarding non-conscious emotional data harvesting by (a) the private sector and (b) the government. A sketch of how the composite predictors could be derived appears below.
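As a concrete illustration, the composite predictors could be derived as in the following Python sketch. The column names (fam_ai, angry_in_sm_debates, etc.) and the 1–5 response scale are hypothetical stand-ins, not the questionnaire's actual items.

```python
import pandas as pd

# Hypothetical survey export; item names and scales are illustrative only.
df = pd.read_csv("survey_responses.csv")

# Familiarity: mean of the four self-rated familiarity items
familiarity_items = ["fam_ai", "fam_smart_city", "fam_emotional_ai", "fam_coding"]
df["Familiarity"] = df[familiarity_items].mean(axis=1)

# AngrySM: reverse-coded so that higher values mean feeling angry less often,
# assuming a 1-5 frequency scale on the raw item
df["AngrySM"] = 6 - df["angry_in_sm_debates"]
```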
Model construction: Two model classes, one for each data collector (private vs. public sector). For each class, a baseline TAM-style model plus four multi-level models with varying intercepts by Region, Religiosity, Political Regime, and Income, respectively. Example structure (private sector): outcome ~ SocialMedia + PostingSM + AngrySM + DiversifySM + Attitude + Familiarity + PerUtility, with a varying intercept by the grouping factor; the public-sector models are analogous. A minimal sketch of one such specification follows.
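The sketch below writes the region-varying-intercept specification (the Model 2_Private analogue) in Python with the bambi library, reusing the data frame `df` from the previous sketch. The outcome label WorryPrivate and the choice of bambi are assumptions for illustration; the paper's exact software and variable labels are not reproduced here.

```python
import bambi as bmb

# Varying-intercept model for the private-sector outcome (Model 2_Private analogue).
# "WorryPrivate" is a hypothetical label for the worry/acceptance outcome.
model_private_region = bmb.Model(
    "WorryPrivate ~ SocialMedia + PostingSM + AngrySM + DiversifySM"
    " + Attitude + Familiarity + PerUtility + (1|Region)",
    data=df,
)

# 4 chains with 2000 warm-up draws per chain (roughly matching the reported settings);
# pointwise log-likelihood is stored for the LOO-based comparison below.
idata_private_region = model_private_region.fit(
    draws=3000,
    tune=2000,
    chains=4,
    idata_kwargs={"log_likelihood": True},
)
```

Swapping (1|Region) for (1|Religiosity), (1|PoliticalRegime), or (1|Income) yields the other varying-intercept models, and the public-sector models follow the same structure with the government-harvesting outcome.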
Estimation and evaluation: Bayesian MCMC (4 chains, 5000 iterations each, 2000 warm-up). Diagnostics: Rhat = 1.0 for all parameters and effective sample sizes above 1000. Model fit and comparison: PSIS-LOO, with Pareto k values below 0.5 for most observations (a few values up to 0.7 are acceptable), and model plausibility assessed via weights (Pseudo-BMA with and without bootstrap, Bayesian stacking). Partial pooling enables robust estimation from non-random, heterogeneous online survey data, in line with the mindsponge-informed stratification by cultural and value-related factors. A sketch of the diagnostic and comparison workflow follows.
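The diagnostics and model comparison could be run along the following lines with ArviZ. The model names and the baseline object `idata_baseline` are placeholders; this is a sketch of the general workflow under the assumptions above, not the authors' code.

```python
import arviz as az

# Convergence diagnostics: r_hat should be ~1.0 and effective sample sizes large
print(az.summary(idata_private_region))

# PSIS-LOO for a single model; pointwise Pareto k values flag problematic observations
print(az.loo(idata_private_region, pointwise=True))

# Model plausibility via stacking weights; pseudo-BMA variants are available
# through method="pseudo-BMA" or method="BB-pseudo-BMA"
comparison = az.compare(
    {
        "baseline_private": idata_baseline,       # hypothetical baseline TAM-style model
        "region_private": idata_private_region,   # varying intercepts by Region
    },
    ic="loo",
    method="stacking",
)
print(comparison)
```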
Key Findings
Model performance and cultural factors:
- In both domains (private and public sector data harvesting), the best-performing models use Region as varying intercepts (Model 2_Private and Model 2_Public). Religiosity-varying models (Model 3) are second-best. Models with Political Regime or Income as varying intercepts have negligible weights (<1%).
- Model weights (Table 3): for the private sector, Model 2 (stacking weight 0.639) > Model 3 (0.330) >> others; for the public sector, Model 2 (0.638) > Model 3 (0.362) >> others. This indicates that cultural segmentation (region, religiosity) yields better estimates than segmentation by political regime or income.
Determinants for private sector acceptance:
- Consistently positive predictors: public posting on social media (PostingSM), lower engagement in heated debates (AngrySM; higher values indicate less frequent anger), higher familiarity with AI (Familiarity), and higher perceived utility (PerUtility). The effect of Attitude toward AI is more ambiguous.
Determinants for public sector acceptance:
- Positive predictors: PostingSM, AngrySM, Attitude toward AI, Familiarity, PerUtility.
- Time spent on social media (SocialMedia) shows an overall negative association with acceptance of government harvesting (the posterior mass is largely negative; the 89% HPDI lies predominantly below zero). DiversifySM has unclear effects.
Comparative insights (public vs. private):
- Familiarity and perceived utility positively correlate with acceptance in both sectors.
- Social media dwell time negatively associates with acceptance of government harvesting but is ambiguous for private sector cases.
- Attitude toward AI positively predicts acceptance in the public sector but is less certain for the private sector.
Overall pattern:
- Individuals who feel more familiar with AI, perceive higher utility, post more publicly on social media, and refrain from heated online arguments report less worry about non-conscious emotional data harvesting by both governments and private companies.
- Cultural factors (region, religiosity) are key antecedents of perceived risks/rewards and acceptance of emotional AI.
Discussion
Addressing RQ1, integrating mindsponge with TAM significantly improves explanatory power, as shown by model plausibility weights favoring models with cultural/value-based varying intercepts (Region, Religiosity) over baseline TAM models. This supports the premise that acceptance is filtered through core values and environmental contexts.
Addressing RQ2, acceptance determinants shift by data-harvesting actor: social media dwell time reduces acceptance for government practices but is ambiguous for private sector practices, indicating lower trust in government data handling. Attitude toward AI more strongly predicts acceptance for public-sector harvesting than private-sector harvesting.
Addressing RQ3, perceived utility and familiarity with AI are robust, positive correlates of acceptance across both sectors, aligning with TAM’s evaluators and mindsponge’s trust-evaluation mechanism.
Addressing RQ4, social media behavior indicators of agency/self-regulation—public messaging and avoidance of heated debates—associate with greater acceptance. Conversely, higher overall time on social media corresponds to lower acceptance of government data harvesting. These findings suggest self-efficacy and perceived control relate to reduced privacy concern, whereas exposure/intensity (time) may heighten concern toward public-sector dataveillance.
The results align with literature on surveillance capitalism and public distrust of government AI use, and with Bandura’s self-efficacy theory: vicarious and mastery experiences with AI, affective/physiological feedback via wellness tech, and verbal persuasion (hype/bandwagon effects) shape efficacy and acceptance. The mindsponge framework also suggests future Bayesian updating of beliefs as emotional AI normalizes, potentially balancing attitudes toward different actors.
Conclusion
This study upgrades TAM by incorporating mindsponge-based cultural and value filters and employing hierarchical Bayesian modeling. Across an international student sample, models accounting for region and religiosity outperform those based solely on TAM or on political regime/income segmentation. Key determinants of lower concern toward non-conscious emotional data harvesting include higher AI familiarity, higher perceived utility, more public-oriented social media posting, and lower engagement in heated debates. Cultural context is pivotal, and there is a relative deficit of trust toward government harvesting compared to private-sector practices. The findings inform policymakers and designers on the need for culturally sensitive, transparent governance of emotional AI and highlight the importance of user autonomy and self-efficacy. Future research should extend the framework to broader populations, incorporate awareness of non-conscious data collection practices, and continue refining mindsponge-informed, multi-level models.
Limitations
- Sample restrictions: Participants were 18–27-year-old international and domestic students at a single Japanese university (APU), limiting generalizability beyond youth/student populations.
- Data collection context: Conducted via Zoom-based remote lectures; potential selection and context effects.
- Non-random sampling: Some ethnic or regional groups may be over-represented. Although hierarchical Bayesian partial pooling mitigates small-cell issues, representativeness remains limited.
- Measurement scope: Future studies should include measures of awareness of non-conscious data harvesting and potentially additional contextual moderators.