Women's online opinions are still not as influential as those of their male peers in buying decisions


O. Fan-Osuala

This research by Onochie Fan-Osuala explores the intriguing question of whether women's online product reviews hold as much sway as men's in shaping consumer decisions. Findings reveal a bias against women's opinions, especially for male-gender-typed products, raising fascinating implications about online influence.

Introduction
Gender inequality in opinion sharing persists: women face challenges and interruptions more frequently than men across many settings. This raises the question of whether women's opinions are valued less, particularly in decision-making contexts such as purchasing. This study focuses on online product reviews, an increasingly important source of purchase information, to examine whether gender biases affect the perceived value of opinions. Previous research documents biases in the evaluation of works by female authors, entrepreneurial pitches by women, and even computer-generated speech with female voices. Online reviews are an ideal context because of their prevalence and significance in online shopping. The study asks (1) whether women's product opinions are valued less than men's and (2) whether evaluations favor a given gender for products traditionally associated with that gender. Three studies were conducted: one experimental and two using field data from Yelp.com and Amazon.com.
Literature Review
Existing research indicates a persistent gender bias in evaluating opinions and contributions. Studies have shown that writings by male authors are rated higher than those by female authors, entrepreneurial pitches by men receive more investment, and lectures are rated higher when attributed to male professors. Similarly, computer-generated speech with male voices is deemed more credible and influential. This suggests a possible gender bias in how opinions are perceived and evaluated.
Methodology
The study employed three methods.

**Study 1 (Experimental):** 216 participants (108 men, 108 women) recruited on Amazon Mechanical Turk evaluated online product reviews. Each review was manipulated to display either a male or a female avatar and name. Products were categorized as gender-neutral, male-typed (a toolkit), or female-typed (a baby care kit). Participants rated review helpfulness and the likelihood that the review would influence their purchase decision on 9-point Likert scales. Data were analyzed with repeated-measures ANOVA.

**Study 2 (Field Data - Yelp.com):** 7626 Yelp.com reviews in the nightlife category, written by 3854 unique individuals between January 2015 and May 2015, were collected. A machine learning tool ('genderizer') inferred each writer's gender from their first name; reviews with less than a 99% gender probability were excluded, leaving 2399 reviews. A negative binomial regression model examined the effect of gender on the number of 'useful' votes, controlling for other review characteristics, and the analysis was repeated on a matched sample.

**Study 3 (Field Data - Amazon.com):** 15948 Amazon.com reviews in the beauty and home improvement categories, posted between January 1, 2014 and February 28, 2014, were collected; reviews with zero votes were excluded. As in Study 2, gender was inferred from first names via machine learning, and reviews with less than a 99% gender probability were excluded, leaving 3262 reviews. A binomial regression model with a logit link analyzed the effect of gender on the proportion of helpful votes, controlling for other factors. Male-typed and female-typed products were also analyzed separately.
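The name-based filtering step in Studies 2 and 3 (keep a review only when the inferred gender probability is at least 99%) can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the lookup table stands in for the output of a gender-inference tool like 'genderizer', and the field names are hypothetical.

```python
# Hypothetical (name -> (gender, probability)) lookup, shaped like the
# output a gender-inference service might return for first names.
GENDER_LOOKUP = {
    "james": ("male", 0.997),
    "mary": ("female", 0.996),
    "taylor": ("female", 0.62),   # ambiguous name -> will be excluded
    "jordan": ("male", 0.71),     # ambiguous name -> will be excluded
}

def infer_gender(first_name, threshold=0.99):
    """Return the inferred gender, or None when confidence < threshold."""
    gender, prob = GENDER_LOOKUP.get(first_name.lower(), (None, 0.0))
    return gender if prob >= threshold else None

def filter_reviews(reviews, threshold=0.99):
    """Keep only reviews whose writer's gender is inferred confidently."""
    kept = []
    for review in reviews:
        gender = infer_gender(review["first_name"], threshold)
        if gender is not None:
            kept.append({**review, "gender": gender})
    return kept

reviews = [
    {"first_name": "James", "useful_votes": 3},
    {"first_name": "Taylor", "useful_votes": 5},
    {"first_name": "Mary", "useful_votes": 2},
]
confident = filter_reviews(reviews)
# Only reviews with unambiguous first names survive the 99% cutoff.
```

A strict cutoff like this trades sample size for measurement accuracy: ambiguous names are dropped rather than risk misclassifying a writer's gender, which is why Study 2 retained 2399 of 7626 reviews.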
Key Findings
**Study 1:** Reviews attributed to women received significantly lower helpfulness ratings and were rated as less likely to influence purchase decisions than reviews attributed to men. The gap was most pronounced for male-typed products; for female-typed products, no significant difference emerged between men's and women's reviews.

**Study 2:** Reviews written by women received significantly fewer 'useful' votes than those written by men. This held even after controlling for review characteristics and within the matched sample, suggesting that women's opinions are less valued even in a seemingly gender-neutral context.

**Study 3:** Consistent with Study 1, reviews written by women received lower helpfulness ratings than those written by men. The effect was stronger for male-typed products, with no significant difference found for female-typed products.
Discussion
The consistent findings across three diverse methodologies strongly suggest the presence of implicit gender bias in the evaluation of online product reviews. Women's opinions are consistently undervalued compared to those of their male counterparts, irrespective of the product type. This reinforces the notion that women's experiences and perspectives are often discounted, even when relating to products commonly associated with women. The bias may stem from lingering stereotypes about men's analytical abilities and a subtle bias against women's competence, leading to the discounting of their opinions. The underrepresentation of women in expert panels might further contribute to this bias. The fact that the effect appears stronger among female participants is unexpected and merits further investigation.
Conclusion
This study provides robust evidence of implicit gender bias in the evaluation of online opinions, demonstrating that women's opinions are consistently undervalued compared to men's in buying decisions. This bias holds across different product types and methodologies. Future research could explore interventions to mitigate this bias, such as highlighting female reviewers' expertise, or examining whether algorithmic adjustments could help level the playing field. Further research could also focus on the role of individual differences (e.g., personal beliefs about gender equality) in moderating this bias.
Limitations
While the studies used robust methodologies and large datasets, several limitations should be acknowledged. The inference of gender based on names in studies 2 and 3 might have introduced some error, although attempts were made to mitigate this. The focus on US online review platforms limits generalizability to other cultures or contexts. Future research could address these limitations by using more comprehensive gender identification methods and expanding to diverse cultural settings.