Gender and feminist considerations in artificial intelligence from a developing-world perspective, with India as a case study

Interdisciplinary Studies

S. Kumar and S. Choudhury

This research explores the critical relationship between women and technology in developing nations, with a focus on how AI and robotics could shape women's futures in India. Conducted by Shailendra Kumar and Sanghamitra Choudhury, the study tackles the pressing question: will technology empower or further marginalize women?

Introduction
Women in some South Asian countries, such as India, Pakistan, Bangladesh, and Afghanistan, face significant hardships, ranging from human trafficking to gender discrimination. Compared with their counterparts in developed countries, women in the developing world encounter a more biased atmosphere. Many South Asian societies are patriarchal and male-dominated, with a strong cultural preference for male offspring. These countries, particularly India, have experienced instances where technology has been used to create gender bias. For example, sonography (ultrasound), intended to assess fetal health, has been misused for sex selection and female feticide, contributing to deteriorating child sex ratios. Although India enacted the Pre-conception and Pre-Natal Diagnostic Techniques (Prohibition of Sex Selection) Act in 1994, enforcement challenges have persisted. In artificial intelligence, gender imbalance is a critical issue. Biases in training data and developer teams risk embedding and amplifying gender stereotypes (e.g., feminized service/sex robots versus masculinized security robots), potentially disadvantaging women in applications like risk assessment. Female gendering can increase bots' perceived humanity and marketability, yet women remain underrepresented in AI roles (e.g., women hold about 22% of AI positions globally, and only ~12% of machine-learning researchers are women). This manuscript explores whether AI will exacerbate or ameliorate women's precarious position in South Asia, using India as a case study. Hypotheses tested include: (H01/HA1) whether increasing AI use threatens existing human relationships; (H02/HA2) whether perceptions of AI robots differ by respondent gender; (H03/HA3) whether requirements/use of AI robots differ by gender; and (H04/HA4) whether preferences regarding robot gender differ by respondent gender.
Literature Review
The review traces AI’s foundations, from Turing’s mechanistic view of intelligence to Picard’s affective computing, noting that many human features are now instantiated in machines. Social robotics leverages humans’ tendency to anthropomorphize (e.g., designing robots to resemble women, children, pets). Studies indicate growing human-robot companionship and the prospect of physical and emotional relationships, raising new questions about perceptions of intimacy and competition with robots. Concerns are documented that as AI learns from human-generated data, it may absorb racial and misogynistic norms, producing gender bias in systems. The field of Critical Algorithm Studies highlights concealed discrimination via ostensibly neutral rationales. Haraway’s Cyborg Manifesto challenges identity politics and suggests technology-enabled hybrid identities, though real-world systems still exhibit bias. Empirical findings show that gendered and lifelike robots can be perceived as more sentient and acceptable, with some evidence that consumers prefer female AI for perceived warmth and trustworthiness and that male robots may be seen as more intimidating in-home contexts. Trust may increase when users’ gender matches an AI assistant’s voice. HRI research often finds men more positive toward robots than women and calls for scrutiny of whether gendering is necessary for HRI. As robots become more sapient, they may become more gendered. Commentators advocate for more women in robotics rather than more feminized robots. Despite substantial work on gender bias in AI in developed contexts, there is limited research on impacts in developing countries; this study addresses that gap.
Methodology
Exploratory study employing a vignette-based online survey to examine gender and feminist issues in AI from a developing-world perspective, focusing on India. Due to the lack of commercial availability of advanced social robots, a vignette portrayed a near-future scenario featuring various AI robots (domestic, sex, medical, assistant, etc.) and everyday interactions (e.g., Alexa, Google Assistant), followed by a questionnaire. Sample: N=225 (Female=125; Male=100), majority (76%) university students in India; ages 16–60 (Mean=27.25, SD=7.722). Recruitment via WhatsApp groups and email during the COVID-19 period; participation voluntary, with informed consent and confidentiality. Sampling approach described as purposive (simple random). Instruments: demographic data sheet and a self-developed questionnaire assessing four constructs related to AI: perspective, gender, requirements/use, and perceived threat. Analyses conducted in IBM SPSS 23 included descriptive statistics, item-total correlation, reliability, correlation analyses, t-tests, and regression; alpha set at 0.05 (two-tailed). Ethical approval obtained; GDPR and related ethical standards observed.
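The authors ran their analyses in IBM SPSS 23. Purely as an illustration of the two workhorse statistics reported below, a Pearson correlation and an independent-samples t-test (pooled-variance form, SPSS's "equal variances assumed" row) can be sketched in plain Python. The toy scores are hypothetical, not the study's data.

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def students_t(x, y):
    """Independent-samples t statistic with pooled variance (equal variances assumed)."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2) / (nx + ny - 2)
    return (mean(x) - mean(y)) / (sp2 * (1 / nx + 1 / ny)) ** 0.5

# Hypothetical construct scores for six respondents (not the study's data)
perspective = [6, 7, 8, 5, 9, 7]
threat = [3, 4, 4, 2, 5, 3]
print(round(pearson_r(perspective, threat), 3))  # → 0.944
```

The resulting t or r value would then be compared against the relevant distribution to obtain the two-tailed p-values reported in the findings.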
Key Findings
- Significant positive correlations among constructs:
  - Perspective–Gender: r=0.424, p<0.001
  - Perspective–Requirements/Use: r=0.235, p<0.001
  - Perspective–Threat: r=0.397, p<0.001
  - Gender–Threat: r=0.317, p<0.001
  - Requirements/Use–Threat: r=0.147, p=0.028
- Regression predicting perceived Threat from AI robots (dependent variable):
  - Perspective: B=0.204, SE=0.045, β=0.311, t=4.554, p<0.001 (significant)
  - Gender: B=0.056, SE=0.021, β=0.179, t=2.662, p=0.008 (significant)
  - Requirements/Use: B=0.030, SE=0.041, β=0.045, t=0.727, p=0.468 (ns)
  - Conclusion: HA1 accepted; increasing AI use and associated lifestyle changes may threaten existing human relationships.
- t-tests (male vs. female):
  - Perspective: male mean=6.490 (SD=2.047), female mean=7.152 (SD=2.051), p=0.017 (significant difference)
  - Gender scale: male mean=16.680 (SD=4.208), female mean=16.552 (SD=4.383), p=0.825 (ns)
  - Requirements/Use: male mean=16.370 (SD=2.268), female mean=16.640 (SD=1.944), p=0.338 (ns)
  - Threat: male mean=3.290 (SD=1.335), female mean=3.408 (SD=1.380), p=0.519 (ns)
- Adoption and societal impact perceptions:
  - Many respondents believe living with AI robots will soon be a reality and that increased reliance on AI may lead to more prejudiced, discriminatory, and solitary societies.
  - A majority believe AI may significantly affect gender balance in developing countries like India, echoing concerns from prior technology misuse (e.g., sonography).
  - Buyers' preferences may be influenced by robots' racial/ethnic appearance.
- Gendered robots and ethics:
  - Respondents indicated that gender plays a role in AI robot development/production and supported distinct ethical programming for male vs. female robots.
  - Most disagreed that female robots are inherently more humane than male robots, yet agreed that female robots are perceived differently.
  - Feminization is seen as increasing marketability and acceptance, but 55.11% believed it heightens the risk of stereotyping women.
- Intimacy/sex robots:
  - Overall, 15.56% were open to sex and love robots; demand showed a gender disparity: 24% of males vs. 8.8% of females expressed interest.
  - Males showed more favorable attitudes toward sex/love robots; females showed greater reluctance, linked to feelings of envy/insecurity in the narrative context.
  - Interest across other robot categories (assistant, education, entertainment, chores/errands, care) was relatively similar by gender.
- Self-referential impact perceptions: both men and women believed AI would impact both genders, but when asked which gender would be most impacted, respondents tended to select their own gender.
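The regression table above reports both unstandardized coefficients (B, in raw scale units) and standardized betas (β, in standard-deviation units). For a single predictor the two are related by the ratio of the variables' standard deviations; a minimal sketch with made-up, perfectly linear toy numbers (not the study's data) shows the distinction:

```python
from statistics import mean, stdev

def simple_ols(x, y):
    """Slope B, intercept, and standardized beta for the fit y = B*x + intercept."""
    mx, my = mean(x), mean(y)
    b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    intercept = my - b * mx
    beta = b * stdev(x) / stdev(y)  # standardized coefficient: B rescaled by SD ratio
    return b, intercept, beta

# Toy data following threat = 2*perspective + 1 exactly
b, a, beta = simple_ols([1, 2, 3, 4], [3, 5, 7, 9])
print(b, a, beta)  # → 2.0 1.0 1.0
```

With a perfect linear relationship, β reaches 1.0 even though B is 2.0, which is why β (like the 0.311 for Perspective above) allows predictors on different scales to be compared.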
Discussion
Findings indicate that in a developing-world context like India, respondents anticipate both increased integration of AI robots into daily life and substantial societal risks. Perceptions and gendered views about AI significantly predict perceived threats to human relationships, supporting the hypothesis that rising AI presence may exacerbate relational strains. While men and women differ in general perspectives toward AI, they do not significantly differ in stated requirements/use or perceived threat scales, suggesting broadly similar functional expectations but divergent attitudinal framing. The pronounced gender gap in openness to sex/love robots underscores potential gendered market dynamics and ethical concerns, with possible implications for intimate relationships, norms, and vulnerability to stereotyping. Respondents’ agreement that feminization boosts marketability but also increases stereotyping risk highlights a tension: practices that enhance acceptance may inadvertently reinforce harmful gender norms. Concerns that AI could affect gender balance resonate with India’s history of technology misuse (e.g., sex selection via ultrasound), amplifying the paper’s central question: AI may serve less as a liberating force and more as a vector for reproducing existing inequalities unless governance, diversity in development, and ethical design are prioritized. Overall, the results suggest that without deliberate intervention, AI may entrench gender biases, yet broad acceptance of many robot roles hints at opportunities for inclusive, equitable design to mitigate harm.
Conclusion
The gendering of robots in AI is problematic and does not inherently improve acceptability or functionality. Contrary to a common belief that women in developing nations are particularly wary, both men and women anticipate living with AI robots in the near future and believe AI will have strong impacts on both genders. Most respondents did not view female robots as more compassionate than male robots but acknowledged that female robots are perceived differently. Many felt AI could significantly affect gender balance in countries like India, paralleling past misuse of sonography for sex selection. At the same time, changing lifestyles and relational challenges may motivate people to adopt AI robots, potentially to avoid the costs and complications associated with human relationships. Participants also expressed an innate fear that robots might one day outwit humans. The study underscores the need for careful, gender-sensitive AI design and policy to avoid reinforcing stereotypes and exacerbating inequalities, while acknowledging the likely growth of human-robot cohabitation. Future research should expand beyond student-heavy samples, examine longitudinal effects, and test interventions (e.g., ethics-by-design, diversity in development teams) to reduce gender bias in AI.
Limitations
- Sampling constraints: the majority (76%) were university students in India; purposive ("simple random") recruitment via online channels during COVID-19 limits representativeness and generalizability beyond educated, digitally connected populations.
- Context and scope: single-country focus (India) within a developing-world framing; cultural specificity may limit applicability to other contexts.
- Measurement: self-developed questionnaire without detailed psychometric validation reported; reliance on self-report may introduce bias.
- Design: cross-sectional survey; cannot infer causality. Vignette-based, hypothetical scenarios (due to the lack of commercially available robots) may not fully capture real-world behavior.
- Language/administration: survey primarily in English with optional translation; online administration may exclude less digitally literate participants.