Introduction
The proliferation of artificial intelligence (AI) technologies, particularly personalization algorithms, increasingly shapes online experiences. These algorithms use personal data to customize online services, from social media feeds to targeted advertising. While many applications are innocuous, concerns arise regarding their impact on democratic discourse and the spread of misinformation. Personalized political messaging has been implicated in major events such as the Brexit referendum and the 2016 US presidential election. Further concerns center on the amplification of conspiracy theories and extremist content, which may contribute to radicalization and media distrust. Data privacy is another critical issue: service providers monetize behavioral data, and AI algorithms can use that data to infer sensitive information about users. Targeted advertising built on such inferences can also enable discrimination, as in documented attempts to influence voter turnout through targeted messaging.

This research addresses the urgent need to understand public awareness of algorithmic influence and the acceptability of data use for personalization, especially given the widespread reliance on social media and search engines for news consumption. Calls to regulate online political advertising and align it with offline standards underscore this need. Although some regulatory efforts are underway (e.g., the European Union's Digital Services Act), the public has had little input in shaping algorithms and data-collection practices, which makes a better understanding of public attitudes essential. Previous studies have found context-dependent attitudes toward personalization, with more positive views of commercial applications than of political ones. The "privacy paradox" (the discrepancy between expressed privacy concerns and actual behavior) also needs addressing. This study aims to bridge these gaps by examining public attitudes toward personalized online services, the use of personal data and information for these services, and people's privacy concerns and behavior, in order to inform future regulation and platform self-regulation.
Literature Review
Existing research in the US and UK indicates context-dependent attitudes toward personalization, with greater acceptance of commercial than of political applications. Studies also reveal widespread privacy concerns among US and European populations alongside inconsistent privacy-protective behavior; the "privacy paradox" describes this disconnect between stated concerns and actual actions. Whereas prior studies examine attitudes toward personalized services or data privacy separately, this study integrates both aspects, recognizing that personal data is the essential input to personalized services. A comprehensive understanding of these interwoven attitudes is crucial for effective regulatory interventions and platform self-regulation.
Methodology
This study employed an online survey, conducted by Dalia Research, with representative quota sampling in Germany (N=1065), Great Britain (N=1092), and the United States (N=1059). Post-stratification weights for age, gender, and education were applied to ensure representativeness. The Institutional Review Board of the Max Planck Institute for Human Development approved the study. The survey covered three key areas:

1. Public awareness of AI and personalization algorithms online, assessed through questions about familiarity with AI-related terms and awareness of AI usage across various online contexts.
2. Attitudes toward algorithmic personalization, measured on a four-point Likert scale and covering the acceptability of personalized services (e.g., recommendations, political campaigning), the use of personal information (e.g., gender, political views), and the collection of personal data (e.g., location history, browsing history).
3. Public attitudes and behavior regarding online privacy, measuring levels of data privacy concern and self-reported privacy-protective behaviors (e.g., adjusting privacy settings, using privacy tools).

Demographic data (age, gender, education, location) and political leaning were also collected. Analyses incorporated the post-stratification weights, and margins of error were considered for binary responses; a simplified sketch of both steps is given below. Anonymized data and R code are available on the Open Science Framework.
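The summary above mentions weighted analyses and margins of error but does not reproduce the authors' code. The following is a minimal R sketch, not the published OSF code, of how a weighted proportion and an approximate 95% margin of error for a binary item might be computed; the `responses` and `weights` vectors are simulated placeholders.

```r
# Minimal sketch, not the authors' published OSF code: weighted proportion
# and approximate 95% margin of error for a binary survey item.
# 'responses' and 'weights' are simulated placeholders.

set.seed(1)
n         <- 1065                  # sample size, matching the German sample
responses <- rbinom(n, 1, 0.6)     # 1 = "unacceptable", 0 = otherwise
weights   <- runif(n, 0.5, 1.5)    # stand-in post-stratification weights

# Weighted share of respondents answering "unacceptable"
p_hat <- weighted.mean(responses, weights)

# Approximate 95% margin of error, using the simple-random-sampling formula
moe <- qnorm(0.975) * sqrt(p_hat * (1 - p_hat) / n)

sprintf("%.1f%% +/- %.1f percentage points", 100 * p_hat, 100 * moe)
```

With samples of roughly 1,000 per country, this formula yields a margin of error of about three percentage points, which is why small between-country differences in the reported percentages should be interpreted cautiously.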
Key Findings
The study revealed significant heterogeneity in public attitudes. While personalized commercial services (e.g., entertainment, shopping) enjoyed relatively high acceptance, personalization of political campaigning, and of news sources in Germany and Great Britain, faced substantial opposition. Specifically, majorities in Germany and Great Britain (61%) and a narrow majority in the US (51%) found personalized political advertising unacceptable. Across all three countries, substantial majorities opposed the use of most types of personal data and sensitive information for personalization. An "acceptability gap" emerged: respondents found personalized services more acceptable than the data collection underpinning those services. This gap was present at both the aggregate and the individual level, with 64-75% of respondents across countries exhibiting it (see the sketch below). High levels of data privacy concern were reported in all three nations (81-82%), yet self-reported privacy-protective behavior remained low; still, higher privacy concern correlated with greater use of privacy settings and tools. Notably, attitudes and concerns regarding data privacy and personalization were consistent across the political spectrum, suggesting broad support for potential regulations.
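To make the individual-level "acceptability gap" concrete, here is an illustrative R sketch with simulated data and hypothetical column names, not the study's actual items or analysis code. It flags respondents whose average acceptability rating for personalized services exceeds their average rating for the underlying data collection.

```r
# Illustrative sketch with simulated data and hypothetical column names,
# not the study's actual items or analysis code: flag respondents whose
# average acceptability of personalized services exceeds their average
# acceptability of the underlying data collection.

set.seed(2)
n  <- 1000
df <- data.frame(
  # 4-point Likert ratings (1 = not acceptable at all, 4 = very acceptable)
  service_shopping  = sample(1:4, n, replace = TRUE),
  service_entertain = sample(1:4, n, replace = TRUE),
  data_location     = sample(1:4, n, replace = TRUE),
  data_browsing     = sample(1:4, n, replace = TRUE)
)

service_mean <- rowMeans(df[, c("service_shopping", "service_entertain")])
data_mean    <- rowMeans(df[, c("data_location", "data_browsing")])

# Share of respondents exhibiting the individual-level acceptability gap
mean(service_mean > data_mean)
```

Under this definition, the reported 64-75% corresponds to the share of respondents for whom `service_mean > data_mean` within each country's weighted sample.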
Discussion
The findings reveal clear ethical boundaries in public perceptions of algorithmic personalization. While commercial personalization is relatively acceptable, the use of personal data and sensitive information for personalization meets strong resistance, especially in political and news contexts. The consistent opposition to personalization based on sensitive information, together with the preference for commercial over political and informational personalization, highlights important ethical considerations. The absence of political polarization in these attitudes suggests potential for bipartisan support for data privacy protections and for regulation of political advertising and news personalization. The observed "acceptability gap" (higher acceptance of services than of the data collection they require) invites explanation. Possible accounts include incommensurable values (people value both privacy and convenient services) and a lack of transparency about the data collection that personalized services require. The positive correlation between privacy concerns and protective behaviors indicates that concerned users do act when tools are available, supporting the idea that the acceptability gap and the privacy paradox stem from an online environment lacking user-friendly privacy tools. Improving the transparency and accessibility of data privacy functions could better align attitudes and behaviors.
Conclusion
This study reveals a clear public preference for transparent algorithmic personalization that minimizes data use, respects user preferences, and avoids political advertising and, in Europe, news personalization. The cross-political consensus on these issues suggests broad support for regulations promoting data privacy and responsible personalization. Future research should explore the drivers of the acceptability gap and the privacy paradox, investigating how people perceive and weigh the trade-offs between personalized services and data privacy, using both self-reports and behavioral data.
Limitations
While the study used large, representative samples in three countries, its reliance on self-reported data may introduce response biases. The cross-sectional design precludes causal inference, and further research is needed on the psychological mechanisms underlying the observed attitudes and behaviors. The wording of specific questions and response options may also have influenced the results.