
Tips for applying an intersectional framework to AI development

By now, most of us in tech know that the inherent bias we possess as humans creates an inherent bias in AI applications, ones that have become so sophisticated they're able to shape the nature of our everyday lives and even influence our decision-making. The more prevalent and powerful AI systems become, the sooner the industry must address questions like: What can we do to move away from using AI/ML models that demonstrate unfair bias?

How can we apply an intersectional framework to build AI for all people, knowing that different individuals are affected by and interact with AI in different ways based on the converging identities they hold? Start with identifying the variety of voices that will interact with your model.

Intersectionality: What it means and why it matters

Before tackling the tough questions, it's important to take a step back and define intersectionality. Coined by Kimberlé Crenshaw, it's a framework that empowers us to consider how someone's distinct identities come together and shape the ways in which they experience and are perceived in the world.


This includes the resulting biases and privileges that are associated with each distinct identity. Many of us may hold more than one marginalized identity and, as a result, we're familiar with the compounding effect that occurs when these identities are layered on top of one another.

At The Trevor Project, the world's largest suicide prevention and crisis intervention organization for LGBTQ youth, our chief mission is to provide support to each and every LGBTQ young person who needs it, and we know that those who are transgender and nonbinary and/or Black, Indigenous, and people of color face unique stressors and challenges.

So, when our tech team set out to develop AI to serve and exist within this diverse community (namely, to better assess suicide risk and deliver a consistently high quality of care), we had to be conscious of avoiding outcomes that would reinforce existing barriers to mental health resources, such as a lack of cultural competency, or unfair biases, such as assuming someone's gender based on the contact information presented.

Though our organization serves a particularly diverse population, underlying biases can exist in any context and negatively impact any group of people. As a result, all tech teams can and should aspire to build fair, intersectional AI models, because intersectionality is the key to fostering inclusive communities and building tools that serve people from all backgrounds more effectively.


Doing so starts with identifying the variety of voices that will interact with your model, in addition to the groups for which these various identities overlap. Defining the opportunity you're solving is the first step, because once you understand who is impacted by the problem, you can identify a solution. Next, map the end-to-end experience journey to learn the points where these people interact with the model. From there, there are strategies every organization, from startup to enterprise, can apply to weave intersectionality into every phase of AI development, from training to evaluation to feedback.

Datasets and training

The quality of a model's output relies on the data on which it's trained. Datasets can contain inherent bias due to the nature of their collection, measurement and annotation, all of which are rooted in human decision-making. For example, a 2019 study found that a healthcare risk-prediction algorithm demonstrated racial bias because it relied on a faulty dataset for determining need. As a result, eligible Black patients received lower risk scores in comparison to white patients, ultimately making them less likely to be selected for high-risk care management.
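As a concrete illustration, a quick disparity check can surface that kind of gap before a model ships. The sketch below is a minimal example in Python with pandas; the column names ("race", "risk_score", "needs_care") and numbers are hypothetical and not drawn from the study itself.

import pandas as pd

# Hypothetical data: model risk scores alongside demographic labels and
# a ground-truth measure of need. All values are made up for illustration.
df = pd.DataFrame({
    "race": ["Black", "white", "Black", "white", "Black", "white"],
    "risk_score": [0.32, 0.61, 0.28, 0.55, 0.41, 0.67],
    "needs_care": [1, 1, 1, 0, 1, 1],
})

# Among people with the same underlying need, compare average predicted
# risk by group. A large gap here is a red flag worth investigating.
by_group = df[df["needs_care"] == 1].groupby("race")["risk_score"].mean()
print(by_group)
print("Score gap among patients with equal need:",
      round(by_group.max() - by_group.min(), 2))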

Building fair systems means training a model on datasets that reflect the people who will be interacting with it. It also means recognizing where there are gaps in your data for people who may be underserved.

However, there's a larger conversation to be had about the overall lack of data representing marginalized people; it's a systemic problem that must be addressed as such, because sparsity of data can obscure both whether systems are fair and whether the needs of underrepresented groups are being met.

To start analyzing this for your organization, consider the size and source of your data to identify what biases, skews or mistakes are built-in and how the data can be improved going forward.
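One lightweight way to begin is a representation audit: count how many training examples fall into each intersection of the identity attributes your model will encounter, rather than looking at each attribute in isolation. The Python sketch below assumes hypothetical column names ("gender_identity", "race_ethnicity") and an illustrative minimum sample size; adapt both to your own data.

import pandas as pd

# Hypothetical training data with self-reported identity fields.
# Column names and values are assumptions for illustration only.
df = pd.DataFrame({
    "gender_identity": ["woman", "nonbinary", "man", "woman", "nonbinary", "man"],
    "race_ethnicity": ["Black", "Black", "white", "white", "Latinx", "Indigenous"],
})

# Count examples at every intersection of the two identity attributes.
counts = pd.crosstab(df["gender_identity"], df["race_ethnicity"])
print(counts)

# Flag intersections that fall below an illustrative minimum sample size,
# so gaps for specific overlapping identities are visible, not averaged away.
MIN_EXAMPLES = 100
flat = counts.stack()
underrepresented = flat[flat < MIN_EXAMPLES]
print("\nUnderrepresented intersections:\n", underrepresented)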




The problem of bias in datasets can also be addressed by ampl
