Gender Bias Evident in Artificial Intelligence Applications

A new study published by UNESCO has warned of gender bias in generative artificial intelligence tools, finding that models such as OpenAI's GPT-2 and GPT-3.5 and Meta's Llama 2 show "clear bias against women."

Conducted between August 2023 and March 2024, the study found that language models from both Meta and OpenAI tend to associate feminine names with words such as "home," "family," and "children," while masculine names are more often linked to words such as "business," "salary," and "career."

The researchers asked these AI tools to write stories about people of diverse backgrounds and genders. The results indicated that stories about "minority individuals or women were often more repetitive and relied on stereotypical views."

To combat these biases, UNESCO recommends that companies operating in this sector embrace greater diversity within their engineering teams, in particular by increasing the number of women.

In this context, UNESCO Director-General Audrey Azoulay stated, "With more and more people using them at work, in their studies, and at home, these artificial intelligence applications have the power to subtly shape the perceptions of millions of people, so even small gender biases in their content can significantly amplify inequalities in the real world."
