The growing presence of artificial intelligence in everyday life is raising new questions about how technology encodes social values. Media and technology scholar Belinda Amartey has warned that the increasing tendency to give AI systems feminine voices and personalities risks entrenching long-standing gender stereotypes.
From virtual assistants to customer service chatbots, many of the most familiar AI systems are designed to sound warm, deferential, and perpetually available. Amartey argues that these design choices carry cultural weight. “These systems are presented as friendly, patient, and endlessly accommodating,” she said. “But those traits are not neutral. They reflect cultural ideas about femininity and care historically attached to women’s labor.”
Her concern is that repeated interaction with compliant, feminized systems could normalize unequal power relations, subtly reinforcing the idea that emotional responsiveness should always be available on demand. The risks are particularly acute in sectors such as banking, healthcare, and public services, where AI is increasingly tasked with managing complaints and calming tense situations. “In these spaces, AI does more than provide information,” Amartey explained. “It manages emotion and de-escalates tension. That mirrors work long feminized and undervalued.”
Beyond cultural implications, Amartey cautioned that feminized AI may also encourage users to trust systems more readily, prompting them to share personal data without fully grasping how it is collected or used. As a doctoral researcher at the University of Texas at Dallas, she called for deeper scrutiny of design choices in AI. “Ethical AI requires more than correcting data bias,” she said. “We must question the social values being built into these systems.”
Her remarks highlight a broader debate in technology ethics: whether the design of AI should simply optimize user comfort or whether it must also challenge ingrained social assumptions. As AI becomes more embedded in everyday interactions, the question of whose values are being coded into machines is becoming harder to ignore.