AI, queer and gender identities, and the digital rights of marginalized folks

Colorful AI-generated art of two people with serious expressions looking toward the right

Artificial intelligence (AI) has become increasingly prevalent in our lives. From chatbots to facial recognition software, AI is being used to automate and streamline a wide range of processes. As AI continues to expand its reach, it is essential that we take a step back to evaluate how it is being developed and deployed. The voices and interests of marginalized communities must be heard and represented in the development of AI to ensure that it is used for the greater social good.

In the latest episode of Campus 10178, the business podcast of ESMT Berlin, I speak with Arjun Subramonian (they/them), a computer science PhD student at UCLA whose research focuses on inclusive machine learning and natural language processing. As a member of Queer in AI, Subramonian is also an advocate for the involvement of LGBTIQ+ persons and other marginalized communities in the development and deployment of AI to ensure that it serves their interests.

AI and intersectionality

AI is often developed by people who are not part of marginalized communities, notes Subramonian, which can lead to systems that do not serve those communities' interests. For members of the LGBTIQ+ community, the problems extend to the ways AI research itself is conducted and published. Outside of support and advocacy spaces like Queer in AI, queer and trans people may face discrimination in academic settings. Subramonian cites, for example, deadnaming and misgendering: trans scholars are often prevented from correcting their names on their own published research, and the work of righting these wrongs deepens feelings of isolation and trauma.

Moreover, large language models (LLMs) like ChatGPT are trained on vast amounts of data which, without deliberate human intervention, can perpetuate existing inequalities. Subramonian cites as an example the research of Joy Adowaa Buolamwini (she/her), a computer scientist and digital activist based at the MIT Media Lab and founder of the Algorithmic Justice League, and Timnit Gebru (she/her), a renowned researcher on AI ethics and founder of the Distributed Artificial Intelligence Research Institute (DAIR). Their study, Gender Shades, evaluated three market-dominant facial recognition systems and found that all performed better on male and lighter-skinned faces and worst on darker-skinned female faces: the darker the skin, the less accurate the systems' gender classification.
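The core methodology behind that finding is disaggregated evaluation: rather than reporting one aggregate accuracy number, a classifier is scored separately for each demographic subgroup so that disparities become visible. A minimal sketch of that idea, using entirely hypothetical audit records (the subgroup names and data below are illustrative, not taken from the Gender Shades dataset):

```python
# Sketch of a disaggregated accuracy audit in the spirit of Gender Shades:
# score a classifier per demographic subgroup instead of in aggregate.
# All records below are hypothetical, for illustration only.
from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of (subgroup, true_label, predicted_label) tuples.

    Returns a dict mapping each subgroup to its classification accuracy.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical audit records: (subgroup, ground truth, system prediction)
records = [
    ("lighter_male",   "male",   "male"),
    ("lighter_male",   "male",   "male"),
    ("lighter_female", "female", "female"),
    ("lighter_female", "female", "male"),
    ("darker_male",    "male",   "male"),
    ("darker_male",    "male",   "female"),
    ("darker_female",  "female", "male"),
    ("darker_female",  "female", "male"),
]

rates = subgroup_accuracy(records)
# The gap between best- and worst-served subgroups is exactly the
# disparity that a single aggregate accuracy score hides.
gap = max(rates.values()) - min(rates.values())
print(rates)
print(f"accuracy gap: {gap:.2f}")
```

In this toy data the overall accuracy is 50%, but the per-subgroup view reveals a spread from 100% to 0%, which is the kind of disparity the aggregate number conceals.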

Gender Shades

“One of the most shocking things to me,” says Subramonian regarding their path to AI and ethics advocacy, “was that one of the systems that Joy was inspecting was not able to recognize her [at all]. But when she put on a white face mask, it was able to recognize her face. That was one of many moments where I was just, like, this is awful. What am I working on? Who is this serving?” It is crucial for people who are part of marginalized communities to be involved in the development of AI to ensure that it also serves their interests.

One of the other challenges facing marginalized communities is that AI is often developed in a centralized manner, where data is collected from around the world and stored in a centralized location. This can lead to the specific interests of some communities being overlooked, which can be especially problematic for already marginalized and underrepresented communities.  “It doesn’t have to be in warehouses; it doesn’t have to be super large,” says Subramonian. “In fact, Indigenous in AI has talked a lot about indigenous epistemologies around AI, for example, having decentralized AI systems that are specific to different communities.”

Grassroots organizations and student activism

Achieving equity in the development of AI is not just a technical problem, but a social and political one as well. Grassroots organizations, such as Queer in AI, Black in AI, and LatinX in AI, are playing an essential role in the development of AI, says Subramonian. They are anti-hierarchical, guided by intersectionality, and committed to justice. These organizations are challenging the hegemonic forms of AI and advocating for a more bottom-up approach to its development. 

People outside the AI sector can support the development of more inclusive AI by getting involved in grassroots organizations and leveraging their resources. (Subramonian points, for example, to the inclusive conference guide published by Queer in AI.) These organizations are actively challenging established practices and welcome people with experience in activism. You do not need to be an AI expert to contribute; you just need a commitment to justice.

“We need more people who have experience with activism,” says Subramonian. “You don’t need to be an AI expert — that’s a huge barrier to people joining. They think they need to have a deep understanding of the mathematical foundations of these models. The truth is, nobody has a deep understanding of the mathematical foundations. They’re opaque in many ways. Some people have maybe more intuition about it and may have some mathematical understanding of certain aspects. But they’re still so large, and so many of the capabilities that they’re expressing are just not well understood. I would take that as a sign that you should just get involved, be your best activist self.”

There are many ways that business school students interested in promoting fairness and diversity in AI research, use, and access can contribute to this effort: 

  1. Educate yourself: Begin by deepening your understanding of the ethical implications and biases associated with AI. Familiarize yourself with the challenges faced by marginalized communities in AI development and deployment. Stay updated on the latest research, guidelines, and best practices in the field of AI ethics. 
  2. Collaborate with community groups: Reach out to grassroots organizations, such as Queer in AI, Black in AI, and LatinX in AI, to actively participate in discussions and conversations related to fairness, diversity, and ethics in AI. Offer your skills, resources, and support to their initiatives and projects. Actively listen to the voices and experiences of marginalized communities, and ensure their perspectives are included in AI development and decision-making processes. 
  3. Become an advocate: Within your academic and professional spheres, actively promote the importance of diversity and inclusion in AI. Encourage your peers and colleagues to consider the impact of AI on marginalized communities and advocate for inclusive practices. Raise awareness about the potential biases in AI algorithms and the need for fairness in AI applications. 
  4. Stay informed and engaged: AI is a rapidly evolving field, and it is crucial for end users to stay abreast of the latest advancements, debates, regulatory changes, and ethical considerations. Engage in continuous learning through courses, webinars, conferences, and reading materials that focus on AI ethics, fairness, and diversity. Support initiatives that aim to establish guidelines and regulations for ethical AI development and deployment. Advocate for policies that promote fairness, transparency, and accountability in AI systems.


“If we fail to make ethical and inclusive artificial intelligence, we risk losing gains made in civil rights and gender equity under the guise of machine neutrality,” says Buolamwini in her Gender Shades video. Subramonian concurs. AI can perpetuate existing inequalities, but it also has the potential to serve the greater social good, advancing efforts in diversity, equity, and inclusion. It is crucial for people from marginalized communities to be involved in the development and deployment of AI to ensure that it serves their interests. Grassroots efforts, such as Queer in AI, Black in AI, and LatinX in AI, can move the development of AI along the paths of justice and equality. But individuals outside the AI sector must do their part by becoming informed, engaged, and supportive of these initiatives. The future of AI can be more just if we democratize these systems and actively work to combat the inequalities perpetuated by current AI systems.