Remarkable new applications of artificial intelligence (AI) seem to make headlines with ever-increasing frequency these days. Whether enabling faster, more accurate diagnoses for cancer patients or powering conservation efforts for endangered species, the uses of AI are expanding and becoming more impactful.
Beyond demonstrating impressive technological progress, these examples matter because they build public acceptance of, and trust in, AI. These vital positive stories run counter to the negative headlines published by some media outlets. For context, it’s helpful to compare the reception AI receives in some quarters with the historic response to other now-familiar technologies, many of which faced similar scrutiny before becoming mainstream. Cellphones are a case in point.
Much of that negative discussion centers on the fact that AI and machine learning (ML) platforms inherit our human biases, often through the human decisions made about which data to use for training. Even that selection can reflect conscious or unconscious bias.
This is a problem. We don’t want criminal justice systems that display bias against certain races or socioeconomic groups simply because those groups have historically been overrepresented in the justice system. We don’t want systems that perpetuate wage inequality against women on the basis of historical estimates, nor should AI-based systems discriminate against people from less affluent neighborhoods by judging them to be at higher risk of certain diseases when setting health insurance premiums. An AI-powered future should deliver positive progress, not further entrench existing prejudice and inequality. Comedians talk about the need to avoid “punching down,” eschewing soft targets who may well already be marginalized in their daily lives. In the same way, AI should not make the already difficult lives of disadvantaged people any harder.
To eradicate bias in AI and ML, we need to think carefully about our intentions when designing products based on this technology. We must be careful not to build human biases — conscious or unconscious — into products through the datasets used to train them, and we need to learn from past failures rather than repeat them. Transparency is also vital. Decisions coming out of black boxes — systems that don’t show how they arrive at a particular decision — can be flawed and subject to bias. In areas that affect human lives, we need to be able to examine how data flows through the system and how decisions are reached.
This is a cultural challenge for AI experts and data scientists. In the past, scientists and product designers could fall back on their impressive problem-solving abilities without explaining precisely how a solution works. In AI and machine learning today, the system in question is often so complex that even another expert may not be able to determine why it settled on option A rather than option B, C or D — or any of a universe of other possible outcomes.
When these decisions involve a patient’s health diagnosis, or determine whether you can open a bank account or board a flight, the people and companies that create these systems must be able to explain the logic behind them. Only then can society judge, against current social and cultural mores, whether the system and its outcomes are acceptable. It’s this transparency that leads to accountability, which in turn leads to fairer decisions for everyone. We must also apply checks and balances to AI and ML systems, which, crucially, are mostly not rule-based like conventional decision-making software. We need to consider which people are ignored, underserved or left behind by these systems. Consider that 95% accuracy isn’t good enough if you’re in the 5%. When designing AI systems, ask yourself who will lose.
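The point about accuracy can be made concrete with a tiny sketch. The numbers and groups below are entirely invented for illustration: a model that looks excellent on a headline accuracy figure can still fail everyone in a small subgroup, which is exactly why results need to be examined per group, not just in aggregate.

```python
# Hypothetical (prediction, ground_truth) pairs for two groups of people.
majority = [(1, 1)] * 95   # the model gets the majority group right
minority = [(0, 1)] * 5    # and gets the minority group entirely wrong

def accuracy(pairs):
    """Fraction of predictions that match the ground truth."""
    return sum(pred == truth for pred, truth in pairs) / len(pairs)

print(f"overall:  {accuracy(majority + minority):.0%}")  # 95% — looks fine
print(f"majority: {accuracy(majority):.0%}")             # 100%
print(f"minority: {accuracy(minority):.0%}")             # 0% — the hidden failure
```

Reporting the single 95% figure hides the fact that the system is useless for one group. Disaggregating the evaluation is one of the simplest checks and balances available.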
We talk a lot about the power of AI. Used for good, that power has the potential to change our world for the better, possibly in ways we can’t yet imagine. But that same power can also produce harmful outcomes when decisions are shaped by bias. Outside of some fringe cases (a hypothetical dictator, say), one person’s bias usually has a limited impact on society as a whole; it is checked by the people around them and by the prevailing mood and ethics of the community in which they operate.
But the potential for an AI system to amplify and perpetuate bias is much greater. As just one example, look at how algorithms on social media platforms are thought to have impacted recent election outcomes. This is the definition of an unintended consequence, and these outcomes undoubtedly have a huge impact on the lives of citizens.
Humans are the root cause of — but also the solution to — problems around AI and bias. If we want to avoid the proliferation of biased solutions and outcomes based on conscious or unconscious bias, we need to focus on humanity when we’re building technology. Logic has provided us with a brilliant set of tools, but if we become slaves to data rather than being informed by data, we also lose our sense of humanity. We need to choose a brighter future, one in which we harness the power of AI and machine learning to deliver outcomes that make the world a better place for everyone. This might sound like wishful thinking, but it’s possible — as long as we don’t forget to put humanity at the heart of everything we’re building.