Global Comment

How lack of diversity in tech leads to coding and algorithm biases

Artificial intelligence (AI) is an essential part of our modern world, shaping everything from healthcare to transportation. As such, it’s become increasingly important to recognize the flaws behind the code. AI technology is plagued with biased algorithms that, if left unfixed, can negatively affect the lives of people all around the world.

Before algorithm biases can be eliminated, it’s important to understand the root cause of the issue: the lack of diversity within the tech teams that create them. The United States Equal Employment Opportunity Commission (EEOC) estimates that over 83% of tech executives and managers are white and just over 20% are women. In fact, white and male employees fill the majority of all tech positions.

Having a nearly homogeneous workforce isn't just unethical and detrimental to an equitable company culture; its effects can also seep into code. Without racial or gender diversity on tech teams, implicit biases can make their way into algorithms without the programmers even realizing they are there.

The idealized world of algorithmic tech

There’s irony in the fact that algorithms are often skewed toward the perspective of the white, male majority. Algorithmic tech is frequently touted as a solution to bias in several fields; for example, AI has been designed to help hiring professionals screen candidates without the influence of human biases. In an ideal world, AI technology would prevent racial and gender stereotypes, such as the attribution of hostility to Black women, from being applied to job candidates.

Similarly, AI has the potential to help healthcare professionals interpret medical data more accurately. This powerful technology can identify patterns — like trends within a patient’s health history — faster and more effectively than humans. This allows for better diagnoses and treatment plans for patients.

While algorithmic tech has lots of potential, it often perpetuates biases. When a homogeneous group of professionals provides the training data for an AI system, the system will likely absorb the group’s unconscious biases as it learns.
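For readers who want to see the mechanism concretely, here is a minimal sketch in Python using entirely made-up data. The model mentions no protected attribute at all; it simply memorizes past hiring decisions. Because those hypothetical decisions were biased against candidates from one group of schools, the learned rule reproduces that bias.

```python
# Toy illustration with hypothetical data: a model trained on biased
# historical decisions reproduces the bias, even though the code itself
# never references any protected attribute.
from collections import Counter

# Past screening decisions: (years_experience, school_tier) -> hired?
# Suppose past reviewers systematically rejected "tier_b" candidates
# despite equal experience.
history = [
    ((5, "tier_a"), True),
    ((5, "tier_a"), True),
    ((5, "tier_b"), False),
    ((5, "tier_b"), False),
    ((2, "tier_a"), False),
    ((2, "tier_b"), False),
]

# "Training": for each feature combination, memorize the majority decision.
model = {}
for features, decision in history:
    model.setdefault(features, Counter())[decision] += 1

def predict(features):
    # Unseen candidates default to rejection.
    votes = model.get(features)
    return votes.most_common(1)[0][0] if votes else False

# Two equally experienced candidates get different outcomes, because the
# training data encoded the reviewers' bias, not merit.
print(predict((5, "tier_a")))  # True
print(predict((5, "tier_b")))  # False
```

Real machine-learning systems are vastly more complex, but the core failure mode is the same: a model optimizes to match its training data, so any bias in that data becomes part of the model.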

Algorithms gone wrong

AI hasn’t been mainstream for long, yet there are already numerous examples of algorithms gone wrong. Tay, a Twitter-based chatbot released by Microsoft in 2016, offered an early look into how influential biased training data can be. As Twitter users began sending misogynistic and racist tweets to Tay, the bot learned to make inflammatory posts of its own in less than 24 hours.

In another example, when YouTube released a kid-safe algorithm that blocked adult content, it also blocked content from LGBTQIA+ creators. Had it gone unfixed, this issue could have deeply hurt these creators’ performance on the platform, unfairly punishing already marginalized YouTubers. Given that 92% of software developers are heterosexual, it’s clear how underserved populations can easily be hurt or overlooked by the majority of coders.

Recent AI issues are no surprise

More recently, Google’s AI has sparked significant controversy. The language model LaMDA has learned racist and sexist stereotypes, and this isn’t the first time Google’s technology has perpetuated harmful biases. These issues are likely continuing because the Google engineering team is failing to diversify.

Google’s diversity report shows that, as of 2022, 74.1% of its global tech employees are men and 44.4% are white. Google also sees its lowest retention rates among women, Black, and Latino employees. While the company has taken strides to improve its hiring and retention practices, the lack of diversity is still clear and may take years to remedy.

Reform is essential for effective AI

While algorithms may be advanced enough to work, they’re not yet as effective as they should be. Making AI useful, without the harmful biases it’s been prone to in recent years, depends on tech industry leaders diversifying their workplaces. This is especially true for development teams, which are heavily dominated by white men.

Diversifying may take time, since the lack of diversity often stems from deeply embedded company culture and discriminatory practices of the past. In the meantime, one step business leaders, particularly at large corporations, can take is implementing internal governance policies. Since AI collects and analyzes large amounts of data, such policies help organizations oversee how the technology is used and prevent data misuse. The future of enterprise AI may even see AI itself used to catch these very transgressions.

Entering a brighter future for algorithmic tech

As we enter the future of algorithmic tech and its inevitably broader application, it’s essential to treat the biases of this technology as a core issue. Because AI is trained and coded mostly by white, male engineers, it commonly reproduces the stereotypes held by the majority in the tech industry.

If left unaddressed, the lack of diversity among developers can lead to negative experiences for historically underserved populations, especially as AI reaches more parts of our everyday lives: our careers, our homes, and community services such as policing and medicine. With reform, however, tech leaders can minimize that harm while producing incredible benefits for our world.
