By Leonardo Montel - Berlin, May 14, 2024
Concerns about the risks and impacts of artificial intelligence (AI) have grown rapidly, almost as fast as the technology itself has developed and spread across various applications. A recurring assertion is that AI systems reinforce gender inequalities and perpetuate homophobic, queerphobic, and transphobic behaviors.
But how exactly does AI perpetuate these forms of discrimination and gender stereotypes? And, more importantly, how can we build technology—still enigmatic to many—that does not propagate prejudice against Black people, LGBTQIA+ individuals, women, and other marginalized groups?
AI algorithms can perpetuate gender biases by reproducing discriminatory patterns present in the data sets they are trained on. For example, recruitment systems that analyze resumes may favor male candidates over female ones because the training data reflects historical hiring patterns in male-dominated industries. Similarly, biased data reinforces stereotypes about LGBTQIA+ individuals, and those stereotypes continue to shape the AI systems built on it.
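To make the mechanism concrete, the sketch below is a minimal, hypothetical illustration (the data, variable names, and numbers are all invented for this example, not drawn from any real system): a toy resume screener is trained on synthetic historical hiring decisions in which men were hired at a higher rate, and the model then reproduces that gap in its own recommendations.

```python
# Minimal sketch with hypothetical data: a toy resume screener trained on
# historically biased hiring decisions reproduces the bias it learned.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: one "skill" score plus a gender flag (1 = male).
skill = rng.normal(size=n)
is_male = rng.integers(0, 2, size=n)

# Historical labels: past hiring depended on skill AND on gender (the bias).
hired = (skill + 0.8 * is_male + rng.normal(scale=0.5, size=n)) > 0.5

# Train on the biased history, with gender included as a feature.
X = np.column_stack([skill, is_male])
model = LogisticRegression().fit(X, hired)

# The trained model now recommends men at a noticeably higher rate than
# women with comparable skill scores.
pred = model.predict(X)
for flag, label in [(1, "men"), (0, "women")]:
    rate = pred[is_male == flag].mean()
    print(f"recommendation rate for {label}: {rate:.2f}")
```

Notably, simply deleting the gender column does not necessarily solve the problem: if other features in the data are correlated with gender (for instance, gaps in employment history), the model can pick up the same pattern through those proxies.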
Moreover, the problem is rooted in socio-structural inequalities. The 2023 ICT Households survey (Information and Communication Technologies in Brazilian Households) reveals that 29 million Brazilians went without internet access for at least three months that year. The same study also examines people who access the internet exclusively through mobile devices: the share who rely on mobile-only access is disproportionately high among women (64%), Black people (63%), mixed-race people (67%), and above all among those from the lower-income classes D and E (84%).
To address gender inequality in AI, it is crucial to adopt inclusive approaches during technology development. This also requires paying greater attention to socio-economic data linked to gender diversity. Such strategies include diversifying development teams, conducting audits to identify and correct biases in data, and implementing transparent policies to ensure that AI systems are ethical and fair.
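One concrete shape such an audit can take is comparing a system's outcomes across groups and flagging large gaps. The sketch below is only an illustration under assumptions of my own (the group labels, the function name, and the 80% threshold, a common rule of thumb known as the four-fifths rule, are not prescribed by the article):

```python
# Minimal audit sketch (illustrative groups and threshold): compare a
# model's selection rates across groups and flag possible disparate impact.
import numpy as np


def disparate_impact_report(predictions: np.ndarray,
                            groups: np.ndarray,
                            threshold: float = 0.8) -> None:
    """Print the selection rate per group and the ratio of the lowest rate
    to the highest; ratios below `threshold` are flagged for review."""
    rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
    for group, rate in rates.items():
        print(f"selection rate for {group}: {rate:.2f}")
    ratio = min(rates.values()) / max(rates.values())
    status = "FLAG for review" if ratio < threshold else "within threshold"
    print(f"disparate impact ratio: {ratio:.2f} ({status})")


# Hypothetical usage with the toy screener's output from the earlier sketch:
# disparate_impact_report(pred, np.where(is_male == 1, "men", "women"))
```

An audit of this kind is only a starting point; it surfaces disparities but does not by itself explain their causes or correct them.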
AI for All? A Dialogue
In May, the Heinrich Böll Stiftung in Berlin hosted the event "AI Für Alle (AI for All) - Gender Policy Perspectives on Artificial Intelligence." In the spirit of AI for Good, the organizers sought to highlight the positive contributions already being made by women and queer individuals in the tech world.
One of the event’s organizers, Katharina Klappheck, a specialist and advisor at the Gunda Werner Institute, stressed the importance of having women in strategic high-level positions within AI-producing companies. However, in an interview with the Heinrich Böll Foundation in Brazil, she pointed out: “When we talk about Black women or non-white women, their absence in these markets is staggering. Black women, for example, make up only about 1.2% of employees at Google. We believe we need an intersectional lens on this topic and should also view it from a queer perspective.”
While the situation is improving, progress is still slow. Ten years ago, in 2014, Black women made up only 0.4% of Google’s workforce. According to Google’s 2023 annual diversity report, Black individuals represent just 4.2% of the company’s tech developers in the United States. Although the company points to efforts such as scholarships for Black women, progress remains far from satisfactory. Hispanic and Latina women are also severely underrepresented in tech, comprising only 1.4% of the total workforce at major tech companies.
Furthermore, most major tech companies, including AI developers, lack data on queer or transgender employees, as their diversity reports still rely on binary gender definitions. However, based on the available Google data, where 74.1% of developers are men, one can infer the likely underrepresentation of LGBTQIA+ individuals in the industry. This underscores the critical need for discussing gender in the context of data.
AI for Good
For Katharina Klappheck, creating space for women and queer individuals in the field of AI is essential. She emphasized, “We don’t want to be seen as the poor queers who can’t have a safe space on the internet or as victims. Of course, this is true for many, but we also have so much queer creativity around the world that needs to be seen. That was our main goal with the event.”
Klappheck continued, “We want to create a space where people can feel the joy of being queer and realize that technology creation doesn’t have to be repressive or limited to binary gender norms.”
The AI for All event brought together activists, academics, experts, and various partners, featuring performances by the futuristic band Clash Clash Bang Bang. Their lyrics blend humor and sharp social commentary, with one song noting:
“I’m blaming it on the weather
I’m blaming it on the politics
I’m blaming it on my parents
I’m blaming it on you
But my knowledge is limited
Cause I'm AI, almost intelligent.
A-I: Almost Intelligent.”
Why Is Artificial Intelligence a Feminist Issue?
Joana Varon, director and founder of the organization Coding Rights, argues that AI is a feminist issue because “just as statisticians, engineers, and programmers have something to contribute to the subject, feminists also have a critical role to play from a social perspective.”
Varon points to a specific case analyzed by Coding Rights: a Microsoft project with the provincial government of Salta, Argentina, in 2018. The tech company claimed to have developed AI that could predict which girls were most likely to become pregnant as teenagers. The system was heavily criticized for stigmatizing Latin American populations, particularly girls from poorer communities.
“These were data about children and adolescents, and they promised something that is impossible to guarantee,” Varon explained. “This American company came here to test its software on vulnerable populations instead of testing it in their own country.”
Researchers at Coding Rights advocate for more careful curation of data sets, ensuring they respect fundamental rights and accurately represent different genders. They also stress the importance of developing technology in ways that are less extractive of natural resources and free from foreign biases.
At the same time, legislative bodies around the world need the courage to address the challenges AI poses to governance. Constitutions written for the last century must now grapple with a new global actor: personal data and the algorithms built on it. These form the foundation of all AI systems and will be central to future technological innovations. Tackling this issue requires strong political will, especially given the immense power of the big tech monopolies based in Silicon Valley.