AI Bias: Algorithms and the Potential Harm They Can Do


Category

Technologies

Posted

Leo

Oct 5, 2023

While concerns about the collection of digital data have always been present, the influx of AI and machine learning into the technology space has exacerbated the problem. For years, the main worry was that big tech would learn all about you. Now, they want to predict you. 

Concerns over regulating data have produced laws all around the world to protect people from big tech companies. However, the restrictions these laws produce have been hotly debated. Some people think that they go too far, and some people think they don’t go far enough. However, as we see companies expanding their AI operations, a new frontier of privacy concerns, problematic models, and a wild west of corporate activity has arrived. 

Because AI systems and LLMs rely on huge amounts of data for training, companies currently show little regard for whose information they use. Furthermore, these AI algorithms are known to produce biased and incorrect information from that collected data.

It’s a bit like two people each creating a list of items needed for survival: one in New York, the other in the wilds of Alaska. Most items would differ, and if only the Alaska list were fed into the algorithm, many people in New York might end up walking around with bear repellent, fishing supplies, and matches.
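The survival-list analogy can be made concrete with a toy sketch. Assume a naive "recommender" that simply suggests the most common items in its training data; the data, item names, and `recommend` function below are all hypothetical, invented for illustration. Because every training example comes from one region (Alaska), the model's recommendations are skewed for everyone else:

```python
from collections import Counter

# Hypothetical training data: survival-kit lists, all from a single region
# (Alaska). A representative dataset would also include urban lists.
alaska_lists = [
    ["bear repellent", "matches", "fishing line", "water filter"],
    ["bear repellent", "matches", "tent", "water filter"],
    ["matches", "fishing line", "bear repellent", "first aid kit"],
]

def recommend(training_lists, top_n=3):
    """Recommend the items that appear most often across the training lists."""
    counts = Counter(item for kit in training_lists for item in kit)
    return [item for item, _ in counts.most_common(top_n)]

# The "model" recommends Alaska-specific gear to every user,
# regardless of where that user actually lives.
print(recommend(alaska_lists))
```

The model isn't malicious; it faithfully reflects its inputs. That is exactly why unrepresentative training data produces biased outputs at scale.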

When bias is discovered, many companies have had to go back and correct their AI algorithms after the fact.

These AI algorithms affect everyone’s life, and if we expand their unregulated usage, entire groups of people will be marginalized, even if there was no intent to do so. If the people choosing what to feed into the algorithms have particular biases, those biases can easily make it into the algorithms, much like a cook seasons food to their own taste.

These AI algorithms can also break down when focused on specific regions or tasks. To use Alaska again, an AI that handles hospital optimization may struggle with a rural Alaskan hospital when it has been fed data from an “average” hospital. Like many things, a hospital’s environment must be taken into account. In the Deep South, venomous snakes are common; in Alaska, they’re not. How would this change the supplies a hospital stocks? Would an algorithm have the wisdom to take all these factors into account?

While many companies are establishing AI policies and ethics boards, this is simply not enough. We can’t trust these companies to regulate themselves, especially when they have proven they can’t do it with data collection. The lack of transparency from these companies about their products is also potentially harmful. If we don’t know what went wrong with an AI algorithm, how will we make sure the same mistake doesn’t happen again?

Problems

It’s unlikely a suddenly sentient AI will usher in an apocalyptic rule by our new robotic overlords, but that doesn’t mean AI cannot do harm. From chatbots gone very bad, to self-driving systems crashing cars, to the more light-hearted case of an AI camera built to track a soccer ball consistently mistaking a referee’s bald head for the ball, AI is far from infallible — but one also cannot discount how truly useful it can be. AI can help companies succeed, assist individuals, and even save lives.

AI is here to stay, so businesses should take care not to fall behind on the AI and data revolution. If you need the right technology partner to bring your world-changing or business-transforming idea to life, we can help. Contact us to schedule a free assessment.