In Santa’s Naughty or Nice AI, we assisted Santa in a ‘choose-your-own-adventure’ to decide how his new artificial intelligence (AI) system would determine which children have been naughty, and which have been nice. Santa faced challenges with fairness in his new AI, and he isn’t alone.
In this follow-up post, we dive deeper into the complexities of AI fairness, discussing what it is, why it’s important, the challenges that come with it, and ways to address these issues.
What Is AI Fairness?
AI fairness refers to the principle that decisions made by AI systems should be equitable, without introducing bias or discrimination. It ensures that individuals do not face discrimination based on characteristics such as race, gender, age, or socioeconomic status.
In the example of Santa’s AI, including the child’s socioeconomic status allowed Santa to evaluate the child’s behaviour with more context: rather than judging solely on actions, the AI could take into consideration the opportunities that may or may not be available to a child due to their family’s financial position.
Why Is It Important?
Without AI fairness, people cannot trust the decisions made by AI, and that trust is crucial now that AI is used in so many areas of our lives, such as financial institutions, healthcare, and law enforcement. Whether it’s deciding who receives a gift from Santa or who qualifies for a loan, AI fairness plays a critical role in ensuring that technology benefits everyone. However, AI fairness isn’t a one-size-fits-all approach.
What Constitutes Fairness?
There are three main concepts of fairness that apply to AI decisions.
Individual Fairness - Head Elf #1
Individual fairness ensures that similar individuals are treated similarly.
In our Santa example, Head Elf #1 suggested that the AI should only look at what each child did during the year, without considering their family background or challenges. In other words, the AI should judge the individual’s actions alone, leaving external circumstances such as the family’s income out of the decision. While this seems impartial at first glance, it risks introducing bias against children from disadvantaged backgrounds: without considering those circumstances, the AI could unintentionally favour children with more resources.
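To make this concrete, here is a minimal Python sketch of how an individual-fairness check might look. The children, the behaviour_score feature, and the similarity threshold are all invented for illustration; the point is simply that children who look alike on the relevant features should receive the same verdict.

```python
# A minimal sketch of an individual-fairness check (hypothetical data and names).
# Idea: individuals who are similar on the relevant features should
# receive similar decisions from the system.

from itertools import combinations

# Hypothetical records: a behaviour score (0-100) and the AI's verdict.
children = [
    {"name": "Asha",  "behaviour_score": 82, "nice": True},
    {"name": "Billy", "behaviour_score": 81, "nice": False},
    {"name": "Cara",  "behaviour_score": 45, "nice": False},
]

SIMILARITY_THRESHOLD = 5  # scores within 5 points count as "similar"

def individual_fairness_violations(records):
    """Return pairs of similar children who received different verdicts."""
    violations = []
    for a, b in combinations(records, 2):
        similar = abs(a["behaviour_score"] - b["behaviour_score"]) <= SIMILARITY_THRESHOLD
        if similar and a["nice"] != b["nice"]:
            violations.append((a["name"], b["name"]))
    return violations

print(individual_fairness_violations(children))
# [('Asha', 'Billy')] -- almost identical behaviour, yet different verdicts.
```

Of course, a real system would need a far more careful definition of “similar” — and, as the Apple example below shows, choosing the wrong features to compare is exactly where things go wrong.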
Real-World Example: Apple’s Credit Card
In 2019, Apple's credit card, issued by Goldman Sachs, was accused of offering significantly lower credit limits to women than to men, even when they shared similar financial profiles. This pointed to a failure of individual fairness: the AI system relied on historical data that reflected gender disparities in financial lending, and by not treating individuals with similar attributes similarly, it produced unfair outcomes.
Group Fairness - Head Elf #2
Group fairness ensures that groups of individuals are treated equitably, with each person evaluated in the context of their own group.
In our Santa example, Head Elf #2 suggested that the AI should consider the children’s family situations, such as the opportunities available to them dependent on their family’s income. For example, children from affluent families and those from disadvantaged backgrounds could be grouped separately. Within each group, the AI would evaluate a child’s behaviour relative to their peers. This approach helps account for differences in resources and opportunities, making comparisons fairer.
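As a rough illustration, the sketch below (with invented data, names, and group labels) ranks each child against peers in the same income group rather than against all children at once — one simple way to operationalise the within-group comparison described above.

```python
# A minimal sketch of within-group evaluation (hypothetical data and names).
# Each child is ranked against peers in the same income group, so a modest
# score in a disadvantaged group can carry the same weight as a high score
# in an affluent one.

from collections import defaultdict

children = [
    {"name": "Dev",  "group": "affluent",      "behaviour_score": 80},
    {"name": "Ella", "group": "affluent",      "behaviour_score": 95},
    {"name": "Finn", "group": "disadvantaged", "behaviour_score": 60},
    {"name": "Gita", "group": "disadvantaged", "behaviour_score": 40},
]

def within_group_percentile(records):
    """Score each child by their percentile within their own group."""
    by_group = defaultdict(list)
    for child in records:
        by_group[child["group"]].append(child["behaviour_score"])
    return {
        child["name"]:
            sum(s <= child["behaviour_score"] for s in by_group[child["group"]])
            / len(by_group[child["group"]])
        for child in records
    }

print(within_group_percentile(children))
# {'Dev': 0.5, 'Ella': 1.0, 'Finn': 1.0, 'Gita': 0.5}
# Finn's 60 tops his group -- the same relative standing as Ella's 95 in hers.
```

The catch, as the next example shows, is that ranking people relative to their group can itself become unfair if the group statistics are held against strong individuals.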
Real-World Example: A-Level Grading
In 2020, the UK’s A-level grading system downgraded many students’ predicted grades, disproportionately affecting students from less privileged schools. Students were ranked not only by their ability but also by their school’s historical performance. While this was intended to ensure consistency across groups, it ultimately penalised high-achieving students in disadvantaged schools.
Intersectional Fairness - Head Elf #3
Intersectional fairness ensures that fairness applies across multiple overlapping characteristics, such as gender, race, socioeconomic status, or any combination of attributes that might influence outcomes.
In Santa’s example, Head Elf #3 suggested that Santa’s AI consider the unique combinations of circumstances children face. For example, a child might belong to a low-income family (a disadvantaged group) but also excel in helping others despite these challenges. Intersectional fairness would aim to balance these overlapping factors to provide a more contextual and fair decision.
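One practical way to think about this is an intersectional audit: checking outcomes not just per attribute, but for every combination of attributes. The sketch below uses invented decision records and made-up attributes (income and area) to show how a disparity can hide from single-attribute checks and only appear at the intersection.

```python
# A minimal sketch of an intersectional audit (hypothetical data).
# Checking fairness one attribute at a time can hide gaps that only
# appear for combinations, e.g. low-income AND rural children.

from collections import defaultdict

decisions = [
    {"income": "low",  "area": "rural", "nice": False},
    {"income": "low",  "area": "urban", "nice": True},
    {"income": "high", "area": "rural", "nice": True},
    {"income": "high", "area": "urban", "nice": False},
]

def nice_rate_by_subgroup(records, attributes):
    """Rate of 'nice' verdicts for every combination of the given attributes."""
    totals, nice = defaultdict(int), defaultdict(int)
    for record in records:
        key = tuple(record[attr] for attr in attributes)
        totals[key] += 1
        nice[key] += record["nice"]  # True counts as 1, False as 0
    return {key: nice[key] / totals[key] for key in totals}

print(nice_rate_by_subgroup(decisions, ["income"]))
# {('low',): 0.5, ('high',): 0.5} -- each income group looks balanced alone...
print(nice_rate_by_subgroup(decisions, ["income", "area"]))
# ...but ('low', 'rural') drops to 0.0 -- a gap only the intersection reveals.
```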
Real-World Example: Hiring Algorithms
In 2015, Amazon realised that a recruitment tool it had been testing was favouring male applicants, penalising applications that included words such as “women’s” (for example, “women’s football team”), and the tool was eventually scrapped. This was the result of the system being trained on applications submitted over a 10-year period, most of which came from men. It demonstrates how multiple attributes, such as gender and the words associated with it, can combine to create unfair outcomes.
What Can I Do About It?
While ethical AI and fairness issues might seem distant, there are practical steps everyone can take to help tackle these challenges:
1. Educate Yourself and Others
- This post is a great place to start! There are a number of online resources that explain AI fairness in more detail - if the subject has piqued your interest, explore it further.
- Share this post with friends, family, and communities to spread awareness about AI fairness and its implications.
2. Question Algorithms
- Algorithms influence our day-to-day lives and much of what we see online, from news to social media recommendations. Question the “why” behind those recommendations and be mindful of how these suggestions can be influenced by factors like your browsing history, demographics, or location. If you’d like to know more about online privacy, check out my post How Private Are You?.
3. Choose Ethical Technology
- If possible, consider using tech platforms that prioritise fairness, transparency and user empowerment. Some search engines, browsers, and social-media platforms have built-in privacy or bias-mitigation features, for example Firefox and DuckDuckGo.
4. Advocate for Transparency and Regulation
- If AI is used in your workplace, advocate for audits, diversity in decision-making teams, and fairness evaluations in its deployment.
- Participate in public consultations or discussions about AI policies and ethical standards.
- Support and call for organisations and governments to adopt transparent AI practices. This includes demanding clear explanations for how AI makes decisions.
- Advocate for legal frameworks that hold AI developers accountable for bias and discrimination.
While these steps may seem small individually, together we can contribute to a growing demand for ethical standards in technology, which can influence broader changes in how companies and policymakers address AI fairness and transparency.
Conclusion
As AI continues to integrate into everyday life, it is important that we do what we can to ensure it works for everyone. It can be easy to see the topic as too complex to understand, or to feel like a drop in the ocean incapable of making waves. But with AI making ever more decisions that impact our lives, whether we want it to or not, the more we know, the more we can do.
I hope this introduction to AI fairness was helpful. Would you like to see more posts like this? Is there a particular topic you’d like me to cover? Let me know your thoughts in the comments below!