Ethics & Bias in Technology
neon signage depicting likes

We spend much of our time online. We follow influencers and celebrities, join communities that share our interests and hobbies, and form relationships. We find our ‘cliques’. But how many of these communities are genuine, and how many are just after our ‘clicks’?


The Age of Online Connection

Once upon a time, online communities were simple, text-based forums. While they weren’t anything fancy (although you had a much better chance of getting the username you wanted), they were the start of what we know the internet for today.

Now, platforms like Reddit host more than 100,000 communities, also known as subreddits, for just about every interest you can think of (including ones you’d never even thought of). Instagram and TikTok bring us influencers, creativity and cat videos. Discord and Slack create hubs for collaboration and real-time socialising. X is, well, X. It’s never been easier to find ‘our people’.

On average, we spend more than 2 hours per day in our online communities, so it’s important that they are genuine. But how can you tell whether you’re in a ‘clique’ or a ‘click’?


blue, red and green letters depicting social media platforms

 

Clique vs. Click: How to Spot the Difference

A ‘clique’ community encourages two-way interaction, where members feel seen and valued beyond their engagement or purchasing power. They provide spaces for collaboration, diversity of thought, and real emotional connections. When you spend time in this community, you may feel energised, comforted or heard.  

A ‘click’ community may have minimal interaction beyond sales pitches, and a constant push to monetise interactions. They may use emotional appeals to build one-sided relationships, or create a fear of missing out (FOMO). Their content may come with a catch: for example, offering ‘too-good-to-be-true’ promises in return for sharing or engaging with their posts. When you spend time in this community, you may feel drained, insecure or unheard.


Quiz: Is Your Community a Clique or a Click?

Take the quiz and share your results with #CliqueOrClick to see what others think about their online spaces!


Conclusion

Finding our ‘cliques’ can be incredibly positive for our mental health and sense of belonging. But, it’s all too easy to fall into communities that aren’t as positive. Knowing how to identify which communities are genuine and which ones aren’t enables us to make conscious decisions about how and where we spend our time. We have the power to demand authenticity and to seek out spaces where we feel appreciated.    

So, what do you think - are you in a clique or a click? Let me know your thoughts with #CliqueOrClick!

 

Santa using his AI to determine naughty or nice children with his three head elves

In Santa’s Naughty or Nice AI, we assisted Santa in a ‘choose-your-own-adventure’ to decide how his new artificial intelligence (AI) system would determine which children had been naughty and which had been nice. Santa faced challenges with fairness in his new AI, and he isn’t alone.

In this follow-up post, we dive deeper into the complexities of AI fairness, discussing what it is, why it’s important, the challenges that come with it, and ways to address these issues.

What Is AI Fairness?

AI fairness refers to the principle that AI systems should make decisions equitably, without introducing bias or discrimination. It ensures that individuals do not face discrimination based on characteristics such as race, gender, age, or socioeconomic status.

In the example of Santa’s AI, the decision to include the child’s socioeconomic status meant that Santa could evaluate the child’s behaviour with more context, taking into consideration the opportunities that may or may not be available to them due to their family’s financial position, rather than judging solely on their actions.

Why Is It Important?

Without AI fairness, people cannot trust the decisions made by AI, which is crucial now that it is used in many areas of our lives, such as finance, healthcare, and law enforcement. Whether it’s deciding who receives a gift from Santa, or who qualifies for a loan, AI fairness plays a critical role in ensuring that technology benefits everyone. However, AI fairness isn’t a one-size-fits-all approach.
 

What Constitutes Fairness?

There are three main concepts of fairness that apply to AI decisions. 

Individual Fairness - Head Elf #1

Individual fairness ensures that similar individuals are treated similarly.

In our Santa example, Head Elf #1 suggested that the AI should only look at what each child did during the year, without considering their family background or challenges. In other words, the AI should only look at the individual’s actions. In doing so, the child’s external circumstances (their family’s income) are not considered. While this seems impartial at first glance, it risks introducing bias against children from disadvantaged backgrounds. Without considering these circumstances, the AI could unintentionally favour children with more resources.
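To make this concrete, here is a minimal Python sketch of an individual-fairness check. The ‘niceness’ scoring rule and tolerance are entirely made up for illustration; the point is only that near-identical records should receive near-identical scores.

```python
# A toy individual-fairness check. The scoring rule below is hypothetical,
# and (like Head Elf #1's proposal) looks only at the child's own actions.

def niceness_score(child: dict) -> float:
    return child["good_deeds"] - 2 * child["bad_deeds"]

def treated_similarly(child_a: dict, child_b: dict, tolerance: float = 1.0) -> bool:
    """Individual fairness: similar individuals should receive similar scores."""
    return abs(niceness_score(child_a) - niceness_score(child_b)) <= tolerance

alice = {"good_deeds": 10, "bad_deeds": 2}
ben = {"good_deeds": 9, "bad_deeds": 2}
print(treated_similarly(alice, ben))  # True: near-identical records, similar scores
```

Notice the catch described above: ‘similar’ here is defined only over actions, so two children with identical deeds but very different opportunities are forced to the same score.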

Real-World Example: Apple’s Credit Card

In 2019 Apple's credit card, issued by Goldman Sachs, was accused of offering significantly lower credit limits to women than men, even when they shared similar financial profiles. This revealed a failure in individual fairness; the AI system relied on biased historical data that reflected gender disparities in financial lending. By not treating individuals with similar attributes similarly, the system produced unfair outcomes.

Group Fairness - Head Elf #2

Group fairness ensures that outcomes are equitable across groups, with individuals evaluated relative to others in comparable circumstances.

In our Santa example, Head Elf #2 suggested that the AI should consider the children’s family situations, such as the opportunities available to them dependent on their family’s income. For example, children from affluent families and those from disadvantaged backgrounds could be grouped separately. Within each group, the AI would evaluate a child’s behaviour relative to their peers. This approach helps account for differences in resources and opportunities, making comparisons fairer.
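One common way to test this in practice is a demographic-parity check: compare the rate of favourable outcomes across groups. Here is a rough sketch with invented data; a large gap between the groups’ ‘nice’ rates would suggest the system favours one group.

```python
from collections import defaultdict

# Invented decisions: did each child make the 'nice' list?
decisions = [
    {"group": "affluent", "nice": True},
    {"group": "affluent", "nice": True},
    {"group": "affluent", "nice": False},
    {"group": "disadvantaged", "nice": True},
    {"group": "disadvantaged", "nice": False},
    {"group": "disadvantaged", "nice": False},
]

totals, nice = defaultdict(int), defaultdict(int)
for d in decisions:
    totals[d["group"]] += 1
    nice[d["group"]] += d["nice"]

rates = {g: nice[g] / totals[g] for g in totals}
print(rates)  # roughly {'affluent': 0.67, 'disadvantaged': 0.33}
print("parity gap:", max(rates.values()) - min(rates.values()))
```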

Real-World Example: A-Level Grading

In 2020, the UK’s A-level grading system downgraded many students’ predicted grades, disproportionately affecting students from disadvantaged schools. Students were ranked not only by their ability, but also by their school’s historical performance. While this was intended to ensure consistency across groups, it ultimately penalised high-achieving students in disadvantaged schools.

Intersectional Fairness - Head Elf #3

Intersectional fairness ensures that fairness applies across multiple overlapping characteristics, such as gender, race, socioeconomic status, or any combination of attributes that might influence outcomes.

In Santa’s example, Head Elf #3 suggested that Santa’s AI consider the unique combinations of circumstances children face. For example, a child might belong to a low-income family (a disadvantaged group) but also excel in helping others despite these challenges. Intersectional fairness would aim to balance these overlapping factors to provide a more contextual and fair decision.
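The sketch from the group-fairness section extends naturally to intersections: instead of grouping by a single attribute, group by the combination of attributes. Again, the data and attribute names are invented for illustration.

```python
from collections import defaultdict

decisions = [
    {"income": "low", "helps_others": True, "nice": True},
    {"income": "low", "helps_others": True, "nice": False},
    {"income": "low", "helps_others": False, "nice": False},
    {"income": "high", "helps_others": False, "nice": True},
]

totals, nice = defaultdict(int), defaultdict(int)
for d in decisions:
    subgroup = (d["income"], d["helps_others"])  # the intersectional subgroup
    totals[subgroup] += 1
    nice[subgroup] += d["nice"]

for subgroup in totals:
    print(subgroup, nice[subgroup] / totals[subgroup])
# Disparities can hide in these combinations even when each single-attribute
# average looks fair on its own.
```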

Real-World Example: Hiring Algorithms

Amazon scrapped an experimental recruitment tool after realising, in 2015, that it favoured male applicants, penalising applications that included words such as “women’s” (for example, “women’s football team”). This was the result of the system being trained on applications submitted over a ten-year period, the large majority of which came from men. It demonstrates how multiple attributes, such as gender and the language associated with it, can combine to create unfair outcomes.

What Can I Do About It?

While ethical AI and fairness issues might seem distant, there are practical steps everyone can take to help tackle these challenges:

1. Educate Yourself and Others

  • This post is a great place to start! There are a number of online resources that explain AI fairness in more detail - if the subject has piqued your interest, explore it further.

  • Share this post with friends, family, and communities to spread awareness about AI fairness and its implications.

2. Question Algorithms

  • Algorithms influence our day-to-day lives and much of what we see online, from news to social media recommendations. Question the “why” behind those recommendations and be mindful of how these suggestions can be influenced by factors like your browsing history, demographics, or location. If you’d like to know more about online privacy, check out my post How Private Are You?.

3. Choose Ethical Technology

  • If possible, consider using tech platforms that prioritise fairness, transparency and user empowerment. Some search engines, browsers, and social-media platforms have built-in privacy or bias-mitigation features, for example Firefox and DuckDuckGo.

4. Advocate for Transparency and Regulation

  • If AI is used in your workplace, advocate for audits, diversity in decision-making teams, and fairness evaluations in its deployment.

  • Participate in public consultations or discussions about AI policies and ethical standards.

  • Support and call for organisations and governments to adopt transparent AI practices. This includes demanding clear explanations for how AI makes decisions.

  • Advocate for legal frameworks that hold AI developers accountable for bias and discrimination.


While these steps may seem small individually, together we can contribute to a growing demand for ethical standards in technology, which can influence broader changes in how companies and policymakers address AI fairness and transparency.

Conclusion

As AI continues to integrate into everyday life, it is important that we do what we can to ensure it works for everyone. It can be easy to see the topic as too complex to understand, or to feel like a drop in the ocean, incapable of making waves. But with AI making more decisions that impact our lives (whether we want it to or not), the more we know, the more we can do.

I hope this introduction to AI fairness was helpful. Would you like to see more posts like this? Is there a particular topic you’d like me to cover? Let me know your thoughts in the comments below!

 

a hand holding a newspaper called good news

 

 Welcome to the November 2024 roundup of positive AI stories! Each month, I gather examples of how AI is being used responsibly and thoughtfully to improve lives, foster sustainability, and inspire meaningful change. In an ever-evolving digital world, it’s essential to highlight the positive impacts AI can make when developed with ethics in mind. Let’s dive into this month’s top stories of AI at work for good.

1. Breast Cancer Detection

Over two months, an AI reviewed more than 12,000 mammograms from 15 NHS Trusts in the UK, identifying cancers that were undetectable to the human eye. Working alongside radiologists, this AI helps to reduce the number of missed cancers and lower false positives.

Read more here: BBC News

2. AI Granny Fighting Scammers

UK telecom provider O2 has created AI grandma ‘Daisy’ (or dAIsy) to help keep scammers on the phone for as long as possible, preventing them from targeting real people. Trained with anonymised scammer data, Daisy sounds indistinguishable from a human and has successfully kept scammers on calls for 40 minutes at a time.

Read more here: Virgin Media O2

3. Squirrel Agent

Genysys Engine have developed an AI they call ‘Squirrel Agent’ to identify red squirrels by their whiskers. Currently in trial across the UK with five wildlife charities, this AI could be used to automatically control access to squirrel feeders, helping to manage the red and grey squirrel populations.

Read more here: BBC News

4. Advancing Sustainable Supply Chains

In Egypt, AI is helping industries like textiles and agriculture align production with demand, track resources, and cut carbon footprints, driving more sustainable business practices.

Read more here: Supply Chain Brain

5. Personalised Infection Treatments

Researchers at the University of Liverpool’s Centres for Antimicrobial Optimisation Network (CAMO-Net) have developed an AI-driven approach to personalise infection treatment. This method helps clinicians target infections more accurately, reducing the risk of antibiotic resistance - a growing global health concern.

Read more here: University of Liverpool

Conclusion

November has shown us that AI is making significant strides in areas that matter. Each of the stories highlighted in this roundup demonstrates AI's potential to improve lives, make critical processes more efficient, and support sustainability. It reminds us that, while there can be concerning news stories about AI, it's not all doom and gloom, and AI certainly has its place in society when used responsibly. So, stay tuned for next month's roundup!

Want more good news stories? Find them all here!

Get Involved!

Found these stories inspiring? Share this post with others who may be interested in the positive side of AI, and let’s keep the conversation about ethical, impactful tech going. I’d love to hear your thoughts or any other positive AI stories you’d like to share in the comments below!

A whimsical AI generated image of an old lady and a secret agent squirrel working together
Daisy & Squirrel Agent | ChatGPT




Santa's hand typing a list of names onto a good list on their phone

‘Tis the Christmas season, and Santa has a new artificial intelligence (AI) system to help him with his Naughty or Nice List. But he faces a dilemma: the AI comes with some quirks. In this “choose your own adventure” holiday special, you will be helping Santa decide what he should prioritise: fairness, accuracy, or speed. Let’s see what choices you’ll make for Santa!


So, how did Santa do this Christmas? Did your decision feel easy to make? Was the outcome what you expected? While Santa’s example may be light-hearted, the dilemma he faced touches on real-world issues for AI systems, with significant implications for our day-to-day lives. If you want to know more, check out my post AI Fairness: A Naughty (Or Nice) Introduction.

What are your thoughts? Let me know in the comments below! And if you enjoyed this post why not share it and see how your friends fare in helping Santa.

 a hand holding a newspaper called good news

   

 Welcome to the October 2024 roundup of positive AI stories! It can seem that all we see in the news and on our social media feeds are negative stories about AI, so I’ve made it my mission to highlight the positives. Each month, I will be gathering examples of how AI is being used responsibly and thoughtfully to improve lives, foster sustainability, and inspire meaningful change. Here’s what October had to offer us.

1. Predicting Health Risks, Including Early Death

Researchers at Imperial College London have developed an AI model capable of predicting health risks, including early death, from electrocardiograms (ECGs). This AI, called AIRE, can detect subtle patterns in ECGs that might be missed by human cardiologists, identifying risks such as heart disease, heart failure, and even non-heart-related causes of death. The AI has demonstrated high accuracy, with trials set to begin in the NHS by mid-2025. The model could help doctors prioritise treatments and improve patient outcomes, offering significant benefits for healthcare efficiency.

Read more here: Imperial College Healthcare

2. Distribution of Hurricane Relief Payments

The nonprofit organisation GiveDirectly is using AI to streamline the distribution of hurricane relief payments. By leveraging machine learning, GiveDirectly can identify and assist disaster victims quickly, especially in areas with limited infrastructure. This AI-driven approach reduces overhead costs and allows funds to go directly to those most in need.

Read more here: CBS News

3. Mediation Tool for Culture War Rifts

Researchers are exploring how AI mediation tools could help bridge divides in cultural debates, for example in political discussions. These tools aim to promote more empathetic, constructive conversations by analysing language and suggesting ways to resolve conflicts. They could play a key role in reducing polarisation by offering neutral, balanced perspectives, especially in heated discussions. The technology is still in development but holds potential for enabling dialogue in contentious areas.

Read more here: The Guardian

4. Prevent Cancer Patients from Unnecessary Treatments

An AI model is helping to reduce unnecessary cancer treatments by improving diagnostic accuracy. By analysing medical data, the AI assists doctors in identifying when certain treatments may not be needed, ensuring patients avoid harmful or costly procedures. This model aims to enhance clinical decision-making, reduce healthcare costs, and ultimately improve patient outcomes.

Read more here: Forbes

5. Track Antarctica’s Penguins

AI is being used to analyse tourist photos to help track penguin populations in Antarctica. Researchers have developed a model that can identify penguins in images taken by visitors, allowing for more efficient monitoring of their numbers and movements. This method enhances conservation efforts by providing valuable data without disturbing the penguins’ natural habitat.

Read more here: New Scientist

Conclusion

October has shown us that AI is making significant strides in areas that matter. Each of the stories highlighted in this roundup demonstrates AI's potential to improve lives, make critical processes more efficient, and support sustainability. It reminds us that, while there can be concerning news stories about AI, it's not all doom and gloom, and AI certainly has its place in society when used responsibly. So, stay tuned for next month's roundup!

Want more good news stories? Find them all here!

Get Involved!

Found these stories inspiring? Share this post with others who may be interested in the positive side of AI, and let’s keep the conversation about ethical, impactful tech going. I’d love to hear your thoughts or any other positive AI stories you’d like to share in the comments below!

A penguin engaging with AI technology in a snowy Antarctic setting
Penguin engaging with AI | ChatGPT






"You are my creator, but I am your master;-obey!"  Mary Shelley's Frankenstein, 1818

In Mary Shelley's Frankenstein, Victor Frankenstein, a brilliant scientist, becomes obsessed with the idea of creating life. Driven by his desire, he fails to reflect on the social and moral implications of his work. Using his knowledge of biology and chemistry, he assembles and animates a creature. However, the instant his creation becomes sentient, Frankenstein recoils in horror, abandoning it. Frankenstein refuses to acknowledge the moral responsibility for his creation, allowing it to become a monster driven by isolation, loneliness and rage.

 

Frankenstein remains a timeless exploration of the dangers of unchecked ambition and the ethical responsibilities tied to creation. While the original story revolved around Victor Frankenstein's obsession with creating life, the cautionary tale has found new relevance in today's rapid advancements in artificial intelligence (AI). As AI evolves, with systems growing more autonomous and capable, we must grapple with the same moral dilemmas Mary Shelley posed over 200 years ago. In this modern reimagining of the Frankenstein narrative, we find ourselves as both creators and stewards of powerful technologies that could reshape society - or unravel it.

 

The Ambition Behind Artificial Intelligence

 

At its core, AI represents humanity's ambition to create machines capable of replicating, and potentially surpassing, human intelligence. AI is already being integrated into every aspect of our lives - autonomous cars, healthcare diagnostics, algorithm-driven financial markets, and even creative processes like art, music generation and writing. As AI systems become more advanced, we face a pivotal question: will these machines always operate within the boundaries we set, or are we approaching a point where they might "learn" to operate beyond them?

 

The Fear of Unintended Consequences

 

AI systems, particularly those driven by machine learning, can develop behaviours and make decisions that their creators don't explicitly design or anticipate. For instance, deep learning algorithms used in social media platforms can promote misinformation or exacerbate societal divisions because they prioritise engagement over truth. Similarly, facial recognition technologies have been found to exhibit racial bias, raising ethical concerns about their widespread use in law enforcement and surveillance. In both cases, the creators of these AI systems may not have intended harm, but the consequences of these technologies have been far-reaching.

 

The fear isn't just that AI will "turn against" humanity in a dramatic, science fiction-inspired rebellion, but that it could subtly reshape the world in ways we are unprepared to manage. Much like Frankenstein's creature, AI might begin to operate in ways that no longer align with human values, posing risks that range from loss of privacy to job displacement and biased decision-making.

 

The Rise of Autonomous Systems: Who Is in Control?

 

One of the greatest challenges AI presents is the shift towards autonomy. As machines become capable of making decisions without human intervention, questions of control and accountability come to the forefront. Autonomous vehicles are a prime example: while they promise to reduce accidents and revolutionise transportation, they also raise ethical dilemmas. How should an AI-driven car respond in a situation where a collision is unavoidable? Who bears responsibility for accidents - humans or machines? The question isn't just about whether AI will become "too powerful" but about the complexity of decision-making in high-stakes situations. AI, like Frankenstein's creature, may develop the capacity to think outside of its creator's control, leading to unforeseen consequences that challenge societal norms and ethics.

 

AI and Consciousness: Could Machines Think for Themselves?

 

One of the most provocative questions surrounding AI is whether machines might one day achieve consciousness or self-awareness. While current AI systems are sophisticated in pattern recognition and problem-solving, they lack subjective experience - the hallmark of consciousness. Yet as AI research continues to advance, the possibility of creating machines with true cognitive abilities is no longer purely speculative.

 

This raises profound philosophical and ethical questions. If machines could think and feel like humans, what rights and responsibilities would they have? Would a conscious AI be entitled to autonomy, or would it be the modern equivalent of Frankenstein's creature - an intelligent being trapped in a world that doesn't understand or accept it? These questions, once the domain of science fiction, are becoming central to discussions about AI's future.

 

The creation of conscious AI would mark a turning point in history; it would blur the lines between human and machine, raising existential questions about what it means to be alive. The possibility of conscious AI highlights the ongoing challenge of ensuring that our technological advancements remain aligned with ethical values and human welfare.

 

Navigating the Ethics of Creation

 

The lesson of Frankenstein is not that creation is inherently dangerous, but that creation without responsibility leads to tragedy. Victor Frankenstein's fatal flaw was not his ambition, but his failure to take responsibility for his creation once it came to life. In today's world of AI, we find ourselves in a similar position - poised on the edge of incredible  breakthroughs, but at risk of losing control if we fail to act with foresight and ethical consideration.

 

AI developers and researchers must prioritise transparency, accountability, and ethical frameworks as they build increasingly powerful systems. Like Frankenstein, the creators of AI are responsible for ensuring their creations serve the greater good rather than simply advancing technological capability for its own sake. This includes addressing issues of bias, ensuring data privacy, and considering the long-term societal impacts of AI-driven systems.

 

Moreover, we must involve a broader range of voices in discussions about AI's future. The ethical challenges posed by AI are too important to be left solely to technologists. Philosophers, policy makers, ethicists, and the public at large should all have a role in shaping the trajectory of AI development to ensure that it aligns with human values.

 

Conclusion: Learning from Frankenstein

 

In this age of AI, the cautionary tale of Frankenstein serves as a powerful reminder that the pursuit of technological progress must always be tempered by ethical responsibility. As we continue to push the boundaries of what AI can achieve, we must remain vigilant in our stewardship of these powerful tools, lest we create something that outgrows our control.

 

As we navigate the complexities of AI and its ethical implications, we want to hear from you! What are your thoughts on the responsibilities of creators in the age of AI? Could AI surpass human intelligence, and if so, what would that mean for society? What precautions do you think we should take to ensure AI serves humanity positively? Join the conversation in the comments below and let's explore these questions together! Your insights could shape the future of how we approach this incredible technology.

 


 

This article was crafted by both creator and creature - written by a human with the help of AI, echoing the very collaboration of man and machine that lies at the heart of this modern-day Frankenstein story.

 
A robotic hand reaching out in a welcoming manner

What is Robotic Process Automation (RPA)?

RPA is a software technology that enables the automation of highly repetitive rule-based processes by emulating human actions on computer systems through keyboard presses and mouse clicks. These automations can be referred to as ‘software robots’, ‘digital workers’ or simply as ‘bots’.
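To give a feel for what that emulation looks like, here is an illustrative sketch (not code from any particular RPA product): a bot is essentially a script that drives the keyboard and mouse. The coordinates, field names and record values below are hypothetical placeholders; the sketch uses the third-party pyautogui library.

```python
import pyautogui  # third-party GUI-automation library

def enter_record(record: dict) -> None:
    """Emulate the clicks and keystrokes a human would make on a form."""
    pyautogui.click(420, 310)             # click into the 'Name' field
    pyautogui.write(record["name"])       # type the value
    pyautogui.press("tab")                # tab to the next field
    pyautogui.write(record["reference"])
    pyautogui.click(640, 520)             # click the 'Submit' button

# The bot repeats the same rule-based steps for every record, without tiring.
for record in [{"name": "A. Smith", "reference": "REF-001"}]:
    enter_record(record)
```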
 
RPA has become increasingly popular in recent years, and while there is plenty of documentation about the risks of RPA, one risk that isn’t discussed as often is automation bias.

What is automation bias?

Automation bias is an overreliance on automated systems or technology. When humans favour automation over their own judgement or critical thinking skills it becomes easier to make mistakes, for example accepting a suggestion from a spell-checking programme even if the suggestion is clearly incorrect, or blindly following the GPS and driving your car off a 100-foot cliff.  

What are the risks of automation bias in RPA?

Automation Complacency

Automation complacency describes how high automation reliability can lead human users to disengage from monitoring the performance of the automations.
 
While it is best practice for RPA teams to provide reports to end-users detailing what the bot has completed and what it has not, the end-user is ultimately responsible for checking that the completed activities have been processed as expected. There can be an unfounded comfort in expecting a bot to break if something has changed, thereby preventing it from processing incorrectly, but this is simply not the case. It is possible for bots to ‘invisibly break’: a process or computer application can change in a way that causes the bot to behave differently rather than breaking. An example of this could be a home number field changing to a mobile number field. The bot can still see the field it is interested in, but the label gives the field a new context that is clear to a human but not to a bot.
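One defensive pattern is to have the bot check the human-visible label as well as the technical identifier it uses to find the field, turning an invisible break into a loud one. A minimal sketch, with a hypothetical form structure:

```python
def read_home_number(form: dict) -> str:
    field = form["fields"]["phone_1"]  # located by a stable technical id
    # Guard on the label a human would read: if 'Home number' has silently
    # become 'Mobile number', stop rather than process the wrong data.
    if field["label"] != "Home number":
        raise ValueError(f"phone_1 has unexpected label: {field['label']!r}")
    return field["value"]

# The application changed the field's meaning but not its id:
form = {"fields": {"phone_1": {"label": "Mobile number", "value": "07700 900123"}}}
try:
    read_home_number(form)
except ValueError as err:
    print(err)  # the change is caught loudly instead of processed silently
```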

Automation Misuse

Automation misuse is an overreliance on automations, or the implementation of an automation when it is inappropriate.
 
It can be easy to ‘set and forget’ a bot, particularly when end-users have no visibility of the automation. This can result in failures to monitor the outputs, the assumption that the bot is processing more than it is, or the belief that the automation will never break.

Out-of-the-loop performance problem

The out-of-the-loop performance problem refers to the loss of knowledge and skills as a result of automation. In RPA this is particularly visible where 100% of a process is automated or the human workforce is no longer performing the work. Robots do break, and without humans who can complete the process manually, the consequences can be critical.

How can automation bias be mitigated in RPA? 

  • Take a ‘human-centred approach’ that allows the bots to work alongside humans rather than replacing them.
  • Have open conversations, educate end-users on the capabilities of the bots, and set clear expectations.
  • Put ownership and accountability back onto the end-users by having them validate the work the bot has completed, and have them continue to do so throughout the bot’s lifespan.
  • Provide transparency of the bots’ performance through user-friendly reporting. This gives end-users visibility of what the bots are doing and how often they’re doing it.
  • Follow the 80/20 rule and aim to automate the easy 80%. Think carefully before automating 100% of a process: by keeping the more complex variations in the hands of humans, you ensure that humans retain the required skills and knowledge.
  • Where bots are making decisions, treat these as advice rather than a direct instruction; a human should always make the final decision. Transparency in how the bot reached a decision enables humans to determine whether it is correct, or whether there is additional context to consider (see the sketch below).
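Here is the sketch referred to above: a minimal, hypothetical example of the advice-not-instruction pattern, where the bot proposes a decision and explains it, and a human confirms or overrides. The scores and threshold are invented.

```python
def bot_recommendation(case: dict) -> tuple[str, str]:
    """The bot proposes a decision and, crucially, explains its reasoning."""
    decision = "approve" if case["score"] >= 0.8 else "refer"
    return decision, f"score {case['score']:.2f} against threshold 0.80"

def final_decision(case: dict) -> str:
    suggestion, reason = bot_recommendation(case)
    print(f"Bot suggests: {suggestion} ({reason})")
    answer = input("Accept the bot's suggestion? [y/n] ")  # human has the final say
    return suggestion if answer.strip().lower() == "y" else "manual review"

print(final_decision({"score": 0.83}))
```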
 
Finally, raise awareness of these potential risks in your RPA solutions by sharing this article. Anything we've missed? Let us know in the comments below!

 

A red-haired woman focussing on a computer screen

 

At least that’s what I used to tell myself.

When people would ask me how I got into programming I’d say ‘by accident really’, and I was doing myself a disservice. The truth is I’ve worked incredibly hard to be where I am, and while it wasn’t in my plan, it was not ‘by accident’.

I read stories from other developers talking about how they loved computers from childhood, how it was their dream to code or how technology had always fascinated them. That was never me. I was more interested in reading, creative writing, drama and music. I hated maths and IT at school. My dream job was to be a writer. I didn’t see anything in my personality that would suggest choosing a career in technology. For me, programming was all about numbers, binary, and staring at what looked like a foreign language. I loved words, but I loved them in fiction and in poetry, not in the cold, formulaic medium that code appeared to be. I think that is why when I became a developer I would say ‘I’m not a traditional developer’.

So, how did I find my way into programming?

When I finished my A Levels all I knew was that I wasn’t ready for University. I got my first full-time job working in an admin role and I quickly found myself getting bored. In my frustration I found my love for fixing things. When a job came up for an IT Service Desk Analyst the only aspect of it that appealed to me was that I’d get to fix things full-time. My IT knowledge wasn’t great but I was ready for a new challenge. I got the job and was excited to start in a new field.

I expected to find things a bit tricky when I joined the Service Desk; what I wasn’t expecting was the heavy banter environment. The team was predominantly young men and ‘having a laugh’ was the norm. I was completely out of my comfort zone. I found myself altering my behaviour, acting more like ‘one of the guys’ to try and fit in. After a while I started picking up more of the late shifts, as I’d spend the last couple of hours on my own and could be myself. It was exhausting, but I really loved fixing computers and helping people.

A year later a role came up for a Junior Software Developer. I’d been enjoying working in IT and I was ready for the next step in my career. I applied for the job despite having no coding experience and to my surprise I was successful. Great! Then that niggle in the back of my mind started; did they only offer me the job because I was a woman? Cue the start of my imposter syndrome.

I’m not going to lie; those first six months were hard. I wanted a challenge, but maybe I’d bitten off more than I could chew. Learning to code took time, and I was worried that my lack of maths skills would hold me back. I told myself that until someone fired me, I would keep going. That day never came, and my persistence paid off. Over time things started to click: I realised that I needed very little maths in my day-to-day work, and my problem-solving skills were getting stronger. I started to see that code wasn’t as cold as I originally thought; it was, in fact, another way of creating with words. Being a developer wasn’t boring; it was all of the things that I loved.

Women in Tech

Throughout my career I’ve been asked what it’s like to be a woman in tech, and I’ve always found it a difficult question to answer. I’ve realised it’s because I’m looking for those ‘defining’ moments where I’ve encountered adversity because of my gender and how I overcame it, but the truth is those events are rare (at least they were for me). The challenges I have faced have been subtle, and only became more noticeable when I moved into teams where I was the only woman developer. For example, it wasn’t unusual for some people to talk over me, or for my ideas to be ignored only for someone else to take credit for them later. If I made mistakes, some people wouldn’t let me hear the end of it. I was more likely than my male colleagues to be challenged on my proposed solutions.

These biases and microaggressions can be really easy to miss, and it can be difficult to work out whether criticism is fair or rooted in bias. These things can wear you down, but there are actions we can take to make tech (or any field) more inclusive:

For the women:

  • Don’t be afraid to vouch for yourself and make your voice heard
  • You have as much of a right to be there as anyone else. Don’t ever let anyone tell you otherwise or make you feel like you don’t belong
  • Invest in yourself and find your niche
  • Take chances on yourself and apply for jobs

 
For the men:

  • Amplify women’s voices and recognise their achievements
  • Call out inappropriate behaviour (and challenge your own)
  • Take time to understand unconscious biases and microaggressions that impact women
  • Mentor and sponsor women colleagues


Probably not as private as you think. 

Data is considered to be one of the most valuable resources in the world, and your data is no exception.

In some instances we’re happy to give our data freely; for example, loyalty cards offer discounts in return for information on our spending habits. However, what you may not know is who else collects your data, what they collect, and how to prevent it. Let’s take a look at some of them.

Who collects your data and what do they know?

Google (Alphabet Inc.)

It’s likely no surprise to see Google on this list; after all, Google dominates the internet, with 90% of all searches performed through its search engine. But what you may not know is that Google collects your data by default, and what it collects is pretty extensive:

What Does Google Know About You?

(credit: vpnAlert.com)

Google collects this data as part of their ‘Ad personalisation’. If you wish to opt out, check out this handy article from vpnAlert.

Your browser

Yes, even your browser collects data about you. Your browser knows what device you are using, your internet service provider (ISP) details, your location, your social media logins and your mouse movements. This may seem insignificant, but it takes very little data to infer additional information about you. Want to see it in action? Try ClickClickClick. I’d recommend having your sound on for the full experience, but be warned there is some adult language.

To find out more about what your browser knows about you, visit Webkay or CoverYourTracks.

Think you’re protected if you use incognito/private browsing modes? NothingPrivate will demonstrate otherwise. 

Why is this important?

I have nothing to hide

As DuckDuckGo put it, “privacy isn’t about hiding information; privacy is about protecting information”. It may be that you have nothing to hide, but privacy is a fundamental right. The more people who give up their data, the more normalised it becomes.

GDPR will protect me (EU)

The General Data Protection Regulation (GDPR) does provide more protection for your personal information online, but don’t let this allow you to become complacent. Companies can still process your data as long as they receive consent, or if they have a legitimate interest. Utilise your rights by rejecting unnecessary cookies and take advantage of your right to erasure.

My data is anonymised

Your data may be anonymised, but it takes very little information to link the data back to you. In the US, a ZIP code, gender, and date of birth are enough to uniquely identify most people. John Oliver explores this further in an episode of Last Week Tonight, available here.
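To see how little it takes, here is a toy sketch of a ‘linkage attack’: joining an anonymised dataset to a public one on exactly those three quasi-identifiers. Every record below is fabricated.

```python
anonymised_health = [  # no names, so apparently anonymous...
    {"zip": "02138", "gender": "F", "dob": "1990-07-21", "diagnosis": "asthma"},
]
public_register = [    # ...until joined against public records
    {"zip": "02138", "gender": "F", "dob": "1990-07-21", "name": "Jane Doe"},
]

for h in anonymised_health:
    for p in public_register:
        if (h["zip"], h["gender"], h["dob"]) == (p["zip"], p["gender"], p["dob"]):
            print(p["name"], "->", h["diagnosis"])  # Jane Doe -> asthma
```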

What can I do about it?

There are some really simple steps you can take to reduce the amount of data collected about you online.

Privacy-based search engines

Avoid Google and use search engines that are focused on privacy such as DuckDuckGo or SwissCows.

Privacy-based browsers

Consider browsers that are focused on privacy such as Brave, Mozilla Firefox or Tor.

Virtual Private Network (VPN)

Consider using a VPN to hide your IP address, preventing your ISP and other third parties from seeing which websites you visit and what data you send and receive online. Check out the latest recommendations from PC Mag.

Disable ad tracking

When possible, decline cookies on websites. 

If you use an Apple mobile device, iOS versions 14.5+ will allow you to disable cross-app tracking. 

Opt out of, or limit, the data that can be used in personalised advertising.

Share this post!

Finally, help raise awareness and share this post. If you have any additional tips, post them in the comments below! 


About

Here at Ethics & Bias in Technology, I am passionate about understanding the ethical challenges and biases (conscious and unconscious) that come with technology. With the growth of artificial intelligence, smart homes and social media, technology is no longer just for the technical-minded.

Proudly Unsponsored

Ethics & Bias in Technology is a proudly unsponsored and unaffiliated blog. All links/recommendations come from the EBIT team.
