Averse to what: Consumer aversion to algorithmic labels, but not their outputs?

Interdisciplinary Studies


Shwetha Mariadassou, Anne-Kathrin Klesse, and Johannes Boegershausen

This research by Shwetha Mariadassou, Anne-Kathrin Klesse, and Johannes Boegershausen reveals a third possibility: people may be averse to AI labels yet appreciative of algorithmic output. The authors call for careful labeling, broader study of real interactions with algorithmic tools, and attention to technical configuration to better explain public reactions to AI.
Introduction

Advances in artificial intelligence have made algorithmic tools pervasive in domains such as medicine, law, consumption, dating, and careers. Existing research largely finds that people are reluctant to rely on algorithmic advice and prefer human guidance, but the paper suggests a third interpretation of this phenomenon, beyond blanket aversion or appreciation: people may be averse to the AI or algorithm label, but not necessarily to the outputs these systems produce. Evidence includes cases where algorithmically recommended content or AI-generated messages are evaluated positively when unlabeled, as well as the widespread adoption and strong performance of tools like ChatGPT. This perspective motivates rethinking how scientists study reactions to algorithms and AI.

Literature Review

The article reviews streams documenting algorithm aversion and preference for humans, alongside findings of algorithm appreciation in certain tasks. It synthesizes results showing that labeling outputs as algorithmic often reduces their perceived value, despite outputs frequently matching or exceeding human performance in advice quality, emotional awareness, and content generation. It integrates branding research on the power of labels and logo descriptiveness, and explainable AI work emphasizing the benefits of transparency. The review also covers studies on consumer preferences for algorithm adaptivity, symbiotic human-algorithm relationships, conversational AI design features that mimic human interaction to build trust, and dynamics in recommender systems (e.g., the cold-start problem, in which early recommendations fit poorly because little user data exists). Finally, it discusses the need to align machine learning's focus on technical optimization with psychology's focus on human preferences and values, including challenges from psychological biases and from algorithmic curation affecting the organic web data used in behavioral research.
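
The cold-start dynamic can be made concrete with a toy model. The sketch below is only an illustration: TRUE_TASTE, PRIOR_MEAN, PRIOR_WEIGHT, and the noise level are invented values, not estimates from the reviewed papers.

```python
# Toy illustration of the cold-start dynamic in a recommender system.
# All parameters below are invented for illustration.
import random

random.seed(7)
TRUE_TASTE = 4.0   # the new user's true mean rating (unknown to the system)
PRIOR_MEAN = 2.5   # generic prior applied before any user data exists
PRIOR_WEIGHT = 10  # how strongly the prior dominates early estimates

ratings = []
for n in range(1, 31):
    ratings.append(random.gauss(TRUE_TASTE, 0.8))  # user rates a recommended item
    # Shrinkage estimate: prior-dominated at first, data-dominated later,
    # so fit is poor initially and improves as interactions accumulate.
    estimate = (PRIOR_WEIGHT * PRIOR_MEAN + sum(ratings)) / (PRIOR_WEIGHT + n)
    if n in (1, 5, 15, 30):
        print(f"after {n:>2} ratings: estimated taste = {estimate:.2f}")
```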

Methodology

This is a conceptual, narrative review synthesizing prior research on consumer reactions to AI and algorithms. The authors develop a perspective that distinguishes aversion to algorithmic labels from appreciation of algorithmic outputs, and they derive three research implications: more precise and informative labeling; expanding study beyond adoption decisions to sustained interactions; and aligning psychological and machine-learning approaches by incorporating technical specifications and objective functions. No new empirical data were collected (the paper's data availability statement notes that no data were used). The paper integrates evidence across marketing, consumer behavior, psychology, and machine learning, and it proposes practical steps such as stimulus sampling and ecologically valid experimental infrastructures.
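
As one illustration of the stimulus-sampling step the authors propose, the Python sketch below draws a random stimulus subset and a label condition per participant. STIMULUS_POOL, CONDITIONS, and assign_participant are hypothetical names; the paper recommends the general technique, not this specific design.

```python
# Hypothetical stimulus-sampling helper for a labeling experiment.
import random

STIMULUS_POOL = [f"advice_text_{i}" for i in range(50)]  # invented pool
CONDITIONS = ["labeled_ai", "unlabeled"]

def assign_participant(participant_id: int, n_stimuli: int = 8) -> dict:
    """Draw a random stimulus subset and a label condition per participant,
    so any label effect generalizes beyond one hand-picked stimulus."""
    rng = random.Random(participant_id)  # reproducible per participant
    return {
        "participant": participant_id,
        "condition": rng.choice(CONDITIONS),
        "stimuli": rng.sample(STIMULUS_POOL, n_stimuli),
    }

if __name__ == "__main__":
    for pid in range(3):
        print(assign_participant(pid))
```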

Key Findings

• Consumers often appreciate algorithmic outputs but react negatively when those outputs are explicitly labeled as coming from an algorithm or AI.
• Three core insights for studying reactions to algorithms: (1) labels matter, so use specific, descriptive, and transparent labeling of algorithm type and features (a toy simulation of this labeling paradigm follows this list); (2) broaden the focus from adoption decisions to real interactions over time; (3) align psychology and machine learning by accounting for technical configurations and objective functions.
• Evidence includes: algorithmically recommended jokes and AI-generated emotional support rated higher when unlabeled; widespread adoption of ChatGPT (~180 million users shortly after launch) and superior performance versus humans in advice quality, emotional awareness, and persuasive ad-content tasks; labeling output as AI reduces its favorability in some cases.
• Descriptive and suggestive labels (from branding research) and explainable AI can improve recall, fluency, and evaluations, and can reduce aversion by increasing understanding.
• Consumers prefer algorithms described as highly adaptive for creativity-demanding products; highlighting human input can increase the perceived helpfulness of AI advice.
• Interaction design matters: conversational AI that mimics human dialogue (turn-taking, dynamic language) improves trust and evaluations.
• Interactions with recommender systems evolve as data accumulate (cold-start issues initially reduce fit, which improves over time).
• Psychological biases and inconsistent preferences can misalign algorithmic optimization with users' normative preferences; people may prefer algorithms that replicate the events they predict over objectively more accurate ones.
• Practical implications include stimulus sampling to enhance generalizability and open-source, realistic platforms for studying social media personalization and recommender systems.
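
The labeling paradigm referenced in the list above can be simulated in a few lines. This is a minimal sketch, not the authors' actual design: N, TRUE_QUALITY, and LABEL_PENALTY are invented values chosen only to show how identical outputs can diverge in ratings once one carries an "AI" label.

```python
# Toy simulation of the labeling paradigm: identical outputs, rated with
# or without an "AI-generated" label. All parameters are invented and
# are not estimates from the paper.
import random
import statistics

random.seed(42)
N = 200                # hypothetical participants per cell
TRUE_QUALITY = 5.2     # same underlying output quality in both cells
LABEL_PENALTY = 0.6    # hypothetical aversion triggered by the label alone

unlabeled = [random.gauss(TRUE_QUALITY, 1.0) for _ in range(N)]
labeled = [random.gauss(TRUE_QUALITY - LABEL_PENALTY, 1.0) for _ in range(N)]

# Ratings diverge even though the outputs are identical, which is the
# signature of label aversion rather than output aversion.
print(f"mean rating, unlabeled: {statistics.mean(unlabeled):.2f}")
print(f"mean rating, labeled:   {statistics.mean(labeled):.2f}")
```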

Discussion

The findings address the core question by distinguishing between aversion to algorithmic labels and receptiveness to algorithmic outputs. Recognizing this distinction reshapes how we interpret prior evidence of algorithm aversion and suggests that careful labeling, transparency, and human-AI symbiosis can improve acceptance. Expanding research beyond binary adoption toward sustained interactions captures dynamic user-algorithm relationships and helps identify design features that foster trust and better outcomes. Aligning psychological insights with technical configurations enables more precise hypotheses about which algorithms, objective functions, and error trade-offs drive user reactions, and how to structure systems to reflect normative preferences rather than mere engagement. This integrated approach is highly relevant to AI’s growing role in consumer decision-making, platform design, and societal well-being.
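
The objective-function point lends itself to a compact illustration. In the sketch below, a single candidate set is ranked under two objectives, predicted engagement versus stated preference; the item names and scores are hypothetical and serve only to show how the two rankings can diverge.

```python
# Toy contrast of two objective functions over one candidate set.
# Item names and scores are invented for illustration.
items = {
    # item: (predicted_engagement, stated_preference)
    "outrage_post": (0.92, 0.20),
    "clickbait_quiz": (0.80, 0.30),
    "friend_update": (0.60, 0.75),
    "news_digest": (0.55, 0.85),
}

by_engagement = sorted(items, key=lambda k: items[k][0], reverse=True)
by_preference = sorted(items, key=lambda k: items[k][1], reverse=True)

# Optimizing engagement surfaces content users click on but say they do
# not value, illustrating how an objective function can drift from
# users' normative preferences.
print("ranked by engagement:       ", by_engagement)
print("ranked by stated preference:", by_preference)
```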

Conclusion

AI is transforming daily life, necessitating research that combines technical sophistication with psychological richness. The paper argues that people may be averse to algorithmic labels but appreciative of outputs, and it offers three research directions: carefully specify and explain labels for algorithms and AI; move beyond adoption to study real, longitudinal interactions; and align machine learning and psychology to account for objective functions, errors, and human values. Future work should develop ecologically valid, open-source platforms and apps to study interactions, employ stimulus sampling for generalizable insights, detect and account for algorithmic interference in organic data, and design systems that learn and optimize for users’ normative preferences. These steps can help leverage AI to improve welfare while maintaining scientific rigor and transparency.

Limitations

As a narrative review and conceptual piece, the article presents no new empirical data and does not test its propositions experimentally. The authors note practical challenges for studying interactions, including complexity, cost, and the need for ecologically valid platforms that mimic real-world environments. Detecting and accounting for algorithmic curation, sorting, and filtering in organic web data is difficult and requires recording more input variables. Generalizability of insights depends on diverse stimulus sampling and may vary across algorithm types and contexts.
