How can we address gender bias and inequality in AI?
There is no escaping the buzz surrounding artificial intelligence, not only in the technology industry but in wider society too. AI is a valuable resource used by companies around the world to improve their decision-making, streamline their services and even make their energy use more efficient.
It is also a great tool that makes everyday life easier, from virtual assistants to conversational language models like ChatGPT, which can interact with us on a more personal and informative level. But while these advancements are impressive as a whole, they are not perfect, and we must work harder to address gender bias and inequality.
It might seem like AI tools are just machines with no gender, but dig a little deeper and gender bias and inequality appear at a foundational level. Here are some of the most important questions we must ask about gender bias and inequality in AI, and the solutions required to address them.
Recognising that AI reflects its creators
When it comes to machine learning, you get out what you put in. AI systems are trained on data that is collected and labelled by humans, and this data can reflect the biases of the people who collected it. For example, if a dataset of images of people consists mostly of white men, then AI systems trained on it may be better at recognising white men than other groups of people.
If the original code and inputs are gender biased, even unintentionally, then the AI is likely to inherit that slant. This can result in gender-based discrimination, such as biased hiring or loan decisions, and perpetuate existing societal inequalities.
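To see how this happens in practice, consider the toy sketch below. It is written purely for this illustration (the groups, centres and sample sizes are all invented): a model trained on data dominated by one group performs noticeably worse for the underrepresented group.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two features per person; each group's data is centred differently,
    # and the true label depends on the group's own centre.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Group A dominates the training data (900 samples vs 100).
Xa, ya = make_group(900, shift=0.0)
Xb, yb = make_group(100, shift=2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on equally sized fresh samples from each group: the model fits
# the majority group's pattern and misclassifies far more of group B.
Xa_t, ya_t = make_group(1000, shift=0.0)
Xb_t, yb_t = make_group(1000, shift=2.0)
print("Accuracy for group A:", accuracy_score(ya_t, model.predict(Xa_t)))
print("Accuracy for group B:", accuracy_score(yb_t, model.predict(Xb_t)))
```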
These algorithmic biases create unfair outcomes, which might not be a huge issue for your Netflix recommendations but certainly will be if, say, welfare or social protection decisions are made using AI. With our increasing reliance on machine learning systems, we must root out flawed data and technologies that reinforce gender bias before they start to affect people's lives more significantly.
Human biases have found their way into AI
Training AI on data that isn't representative, for example data that overwhelmingly reflects the experiences of men, results in a system biased toward men. This is rarely down to bad intentions on the part of machine learning developers, but that doesn't change the fact that imperfect datasets and an unequal industry are contributing factors.
New AI tools are being rolled out at a fast pace and our dependence on them is only going to increase. According to Justin Aldridge, Technical Director at a leading digital marketing company: “In March there were over 1,000 new AI tools launched.” He adds: “The rate of innovation and adoption has been unprecedented.”
This also means that the number of potentially biased AI systems available to people is increasing, but if the industry can identify where the problems lie, it can also start addressing them.
Using AI to overcome its own gender biases
The important thing about machine learning is that humans are at its core. If we can initially define its behaviour through algorithms and training data, then we can also redefine it.
Training the AI itself to look for gender bias and inequality is a great step toward making it more aware, but it runs into the same programming problem: the system is limited by its human trainers. That is why the most important way to reduce gender bias and inequality is to level the playing field throughout education and recruitment in STEM industries.
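In the meantime, developers can at least counteract an imbalance they have already measured. One common mitigation is reweighting, which makes underrepresented samples count for more during training. A minimal sketch, assuming an illustrative `gender` column and scikit-learn's sample weights (this is one generic technique, not a method from the article):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def balanced_sample_weights(groups):
    # Weight each sample inversely to its group's share of the data,
    # so a 10% minority counts as much in training as the 90% majority.
    groups = np.asarray(groups)
    values, counts = np.unique(groups, return_counts=True)
    share = dict(zip(values, counts / len(groups)))
    return np.array([1.0 / share[g] for g in groups])

# Illustrative data: 900 rows from men, 100 from women.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 3))
y = rng.integers(0, 2, size=1000)
gender = np.array(["man"] * 900 + ["woman"] * 100)

model = LogisticRegression().fit(X, y, sample_weight=balanced_sample_weights(gender))
```

Note that reweighting only corrects an imbalance someone has already noticed, which is exactly why diverse teams matter.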
Staying vigilant to mitigate sources of bias in AI
It is important for machine learning coders, trainers and designers to actively prevent unrepresentative data from working its way into their AI systems and tools. One simple step is to label data more carefully, ensuring that it is categorised in a way that is consistent with the values of the people who will use it.
Developers need to be careful and proactive in identifying and mitigating the potential sources of bias in their AI systems.
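A practical first step is simply auditing who is represented in the data before any model is trained. A minimal sketch, assuming a pandas DataFrame; the `gender` column name and 30% threshold are illustrative choices, not fixed rules:

```python
import pandas as pd

def audit_representation(df, column, min_share=0.3):
    # Report each group's share of the dataset and warn when any
    # group falls below the chosen threshold.
    shares = df[column].value_counts(normalize=True)
    print(shares.to_string())
    for group, share in shares.items():
        if share < min_share:
            print(f"WARNING: '{group}' makes up only {share:.0%} of the data")

# Illustrative dataset: 72% men, 28% women.
df = pd.DataFrame({"gender": ["man"] * 720 + ["woman"] * 280})
audit_representation(df, "gender")
```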
For example, there is evidence that autonomous cars find it harder to recognise pedestrians with darker skin; people in wheelchairs and on mobility scooters are similarly at greater risk. You could argue that, had there been more diversity amongst the teams creating driverless cars, this would have been flagged sooner, because a person of colour or a wheelchair user on the design team could have raised it.
Improving data collection and cleaning processes
The autonomous car problem is most likely an innocent oversight, but it is also a great example of how diversity improves data collection and cleaning. Improving diversity among the AI development teams that train the machines and write the algorithms is the most important step toward removing bias.
Encouraging more women and people from minority groups into STEM jobs is essential, as both are currently underrepresented. Olwyn DePutron, Director of Step IT Up America at UST, explains: “In today’s workplace, there’s still a limited number of minorities pursuing STEM careers.”
She adds: “One reason is that in the early education age, there are not a lot of programs available within the schools minorities attend.” This not only limits the available pool of talent to a more homogeneous group but also makes the cultures in those workplaces less diverse. It is another example of how gender bias in the tech industry loops back into its own ecosystem.
Deploying AI as a hiring tool
Using AI as a hiring tool is a great example of how gender bias can be perpetuated. For a job hire, the training data is CVs from applicants, but in the technology industry just 28% are women, which can immediately skew the algorithm because it is fed historical information as its baseline.
These new hiring systems will simply replicate the old patterns, whereby biased data leads the AI to make further biased decisions. This is a great example of why it's important to hire a diverse workforce: someone from an underrepresented group may spot this bias during a data clean-up exercise and help their team refine the AI's machine learning system to combat it, for example with a quick check like the one sketched below.
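One widely used check is the “four-fifths rule” for disparate impact: compare the rate at which a model selects candidates from each group, and flag the model if the ratio falls below 0.8. The sketch below uses invented predictions purely for illustration:

```python
import numpy as np

def selection_rate(preds, groups, group):
    # Share of candidates in `group` that the model selects (prediction == 1).
    mask = np.asarray(groups) == group
    return np.asarray(preds)[mask].mean()

# Hypothetical model output: 1 = "invite to interview".
preds = np.array([1, 1, 0, 1, 1, 1, 0, 0, 1, 0])
genders = np.array(["m"] * 5 + ["w"] * 5)

rate_m = selection_rate(preds, genders, "m")   # 0.80
rate_w = selection_rate(preds, genders, "w")   # 0.40
ratio = rate_w / rate_m                        # 0.50

print(f"Selection rates: men {rate_m:.0%}, women {rate_w:.0%}; ratio {ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact: ratio falls below the four-fifths threshold")
```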
Training with extra care can play its part
With 80% of executives believing AI can be applied to any business decision, there is a real need to iron out the imperfections in machine learning sooner rather than later. Such a high level of acceptance of AI in decision-making shows just how willing companies and people are to adopt it.
There is a need for greater representation of women and minority groups in the technology industry, but thoughtful training also has a role to play. Educating those already in tech and working hard to prevent gender bias and inequality will help reduce the unintentional biases ingrained in AI.