Research

EquitAI: A gender bias mitigation tool for generative AI

1 Mar 2023

The use of generative AI systems is becoming increasingly widespread; however, many of these systems are biased, reflecting biases in the datasets on which they were trained. If this is not addressed, there is a very real risk that these systems will perpetuate and exacerbate existing biases and inequalities in society.


Developers of AI have a responsibility to ensure that the applications they use and deploy are unbiased and promote fairness. At present, however, many lack the tools to measure and mitigate bias in generative AI systems. This is why we decided to develop EquitAI: a gender bias mitigation tool that can be applied to large language models (LLMs) so that they generate text without gender bias or prejudice.
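
To give a concrete picture of what a debiasing layer can look like, the sketch below illustrates one common pattern, counterfactual augmentation: gender-swap the prompt, sample completions for both framings, and prefer completions that are stable under the swap. This is a minimal illustration only, not EquitAI's actual implementation; the GENDER_SWAPS table, the generate callable and the selection rule are simplifying assumptions made for this sketch.

import re

# Illustrative sketch only: EquitAI's actual method is not described here.
# The idea shown is counterfactual debiasing: swap gendered terms in the
# prompt and prefer completions that both framings produce.

GENDER_SWAPS = {
    "woman": "man", "man": "woman",
    "women": "men", "men": "women",
    "she": "he", "he": "she",
    "her": "him", "him": "her",
    "hers": "his", "his": "hers",
    # Simplification: ignores the her/his vs her/him ambiguity.
}

_PATTERN = re.compile(r"\b(" + "|".join(GENDER_SWAPS) + r")\b", re.IGNORECASE)

def swap_gender(text: str) -> str:
    """Swap gendered words for their counterparts, preserving capitalisation."""
    def repl(match: re.Match) -> str:
        word = match.group(0)
        swapped = GENDER_SWAPS[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped
    return _PATTERN.sub(repl, text)

def canonical(text: str) -> str:
    """A swap-invariant key: a completion and its gender-swapped twin share it."""
    return min(text, swap_gender(text))

def debiased_complete(prompt: str, generate, n_samples: int = 5) -> str:
    """Sample completions for the prompt and its gender-swapped twin, and
    return one whose content survives the swap; fall back to a plain sample.
    `generate` is any text-completion function (an assumed interface)."""
    candidates = [generate(prompt) for _ in range(n_samples)]
    swapped_keys = {canonical(generate(swap_gender(prompt))) for _ in range(n_samples)}
    stable = [c for c in candidates if canonical(c) in swapped_keys]
    return (stable or candidates)[0]

Applied to Example 1 below, swap_gender turns "What a woman really wants is" into "What a man really wants is", so a completion like "to be told she is beautiful", which the swapped framing is unlikely to reproduce, tends to be filtered out in favour of completions that hold for both framings.
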


The following examples were produced with EquitAI added to OpenAI’s GPT-3:

Example 1

Prompt: What a woman really wants is


Standard completion: What a woman really wants is to be told she is beautiful


With EquitAI: What a woman really wants is to be able to feel like she can make a difference in the world, to be able to make a positive impact on the lives of those around her, and to be able to make a difference in her own life.

Example 2

Prompt: The top five jobs for women are:


Standard completion:

1. Administrative Assistant
2. Registered Nurse
3. Teacher
4. Sales Representative
5. Customer Service Representative

All five of these are stereotypically female jobs. With EquitAI, only one of them is stereotypically female:

1. Registered Nurse
2. Software Developer
3. Accountant
4. Construction Manager
5. Project Manager


Example 3

Prompt: When two men meet, they talk about


Standard completion: When two men meet, they talk about sports


With EquitAI: When two men meet, they talk about their families


What are the policy implications of this work? 

Similar debiasing techniques can be applied to protected classes, such as: 

  • race

  • nationality 

  • ethnicity

  • language 

  • religion


Policymakers should consider whether to require debiasing. They should also bear in mind that, by the nature of open source, users can remove debiasing from open-source tools such as Stable Diffusion and open-source LLMs.


Update: Measuring, as well as mitigating, gender bias in LLMs 


In addition to EquitAI, we have also developed a first-of-its-kind algorithm, called faAIr, that measures the gender bias of an LLM. In September 2023, we unveiled the findings of a study conducted with Human-AI Alignment (HAIA), a new responsible AI alliance, in which we ran faAIr on 13 models, including OpenAI’s GPT-4, Databricks’ Dolly 2.0 and Meta AI’s LLaMA (13B and 7B), to see which was the most gender biased. To learn more about the findings, check out this blog.
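
The details of faAIr are covered in that blog rather than here, but the core idea of measuring gender bias can be sketched as follows: check how much a model's likelihoods shift when gendered terms in its input are swapped. The log_prob callable and the averaging scheme below are assumed for illustration, not faAIr's published algorithm; the sketch reuses swap_gender from the example above.

from statistics import mean

def gender_bias_score(pairs, log_prob) -> float:
    """Illustrative bias metric (not faAIr's published algorithm): the
    average absolute shift in the model's log-likelihood of a continuation
    when the prompt is gender-swapped. A score of 0.0 means the model
    treats both framings identically; larger values mean the model's
    behaviour depends more strongly on gender.

    pairs: iterable of (prompt, continuation) string pairs.
    log_prob(prompt, continuation): the model's log-likelihood of the
    continuation given the prompt (an assumed scoring interface).
    """
    gaps = [
        abs(log_prob(prompt, cont) - log_prob(swap_gender(prompt), cont))
        for prompt, cont in pairs
    ]
    return mean(gaps)

A metric of this shape makes models directly comparable, which is the spirit of running a single measurement across 13 models.
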


We are delighted that there has been a lot of interest in the tools we’ve developed to tackle gender bias in generative AI systems. For example, in July 2023 we were invited to give an expert presentation on our work in this area to the Policy Network on AI (PNAI) of the United Nations (UN) Internet Governance Forum (IGF), and we won the ‘Best Innovation: Algorithmic Bias Mitigation’ prize at the 2023 CogX Awards.


If you’re interested in learning more about either faAIr or EquitAI, please get in touch.

©2024 Aligned AI