
Ethics in AI: Seven Guiding Principles

Mar 31, 2020

Introduction

At Capacity, we are committed to bringing AI into the workplace responsibly. To that end, we've put together seven guiding principles for how we put AI into practice:

Principle I: Look for bias everywhere

One of the first things we've learned in looking for bias is that it can be found just about anywhere: in the training data, in how labels are assigned, in model design, and in how results are presented to users.

Principle II: Assess the potential outcomes

While bias can affect a wide variety of AI systems, it’s important to triage bias:

Magnitude of Bias = Probability of Occurrence * Impact of Bias

For example, a bias could be low in severity but very likely to occur. Or an outcome could have a low probability of occurrence but a high impact.

We aim to weigh these two factors together when deciding where to spend time reducing bias in AI systems.
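In code, this triage is just a product and a sort. Below is a minimal sketch of the formula above in Python; the example issues and their scores are hypothetical, purely to illustrate the ranking:

```python
def bias_magnitude(probability: float, impact: float) -> float:
    """Magnitude of Bias = Probability of Occurrence * Impact of Bias."""
    return probability * impact

# Hypothetical issues, scored on probability (0..1) and impact (arbitrary scale).
issues = [
    ("skewed training labels", 0.80, 2.0),
    ("rare catastrophic misclassification", 0.05, 9.0),
    ("minor wording bias in responses", 0.60, 0.5),
]

# Triage: spend time on the highest-magnitude issues first.
for name, p, i in sorted(issues, key=lambda x: bias_magnitude(x[1], x[2]), reverse=True):
    print(f"{name}: magnitude = {bias_magnitude(p, i):.2f}")
```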

Principle III: Feed in diverse data sets

People often ask about stopping algorithmic bias. The algorithms aren't biased per se; the data sets used to train them are.

A classic case involves training an image classifier to recognize animals (a common training task thanks to the availability of ImageNet). A particular algorithm kept scoring well on identifying closely related animals, like tigers vs. lions. However, it kept failing to distinguish domesticated dogs from wolves. The researchers kept tweaking the algorithm and fine-tuning the hyperparameters to no avail.

Eventually, a human team went in and noticed a pattern: if an image of a domesticated dog was taken in the mountains of Colorado, the algorithm identified that dog as a wolf, not by the characteristics of the dog itself, but by the background snow! The research team quickly deduced that all of the wolf pictures in the training set had snow in the background. Once they fixed the training set by adding images of wolves in other climates, the algorithm produced the intended results.
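One way to catch this kind of problem before training is to audit how strongly labels correlate with incidental attributes of the images. The sketch below assumes each training example carries a simple metadata tag describing its background; the field names, data, and threshold are all hypothetical:

```python
from collections import Counter

# Hypothetical training set: each example tagged with a label and a
# background attribute. In practice these tags might come from human
# annotation or a separate scene classifier.
training_set = [
    {"label": "wolf", "background": "snow"},
    {"label": "wolf", "background": "snow"},
    {"label": "dog",  "background": "grass"},
    {"label": "dog",  "background": "snow"},  # e.g. a dog in the Colorado mountains
]

pairs = Counter((ex["label"], ex["background"]) for ex in training_set)
label_counts = Counter(ex["label"] for ex in training_set)

# Flag any label whose images overwhelmingly share one background: a
# near-perfect correlation is a spurious cue the model may learn instead.
for (label, background), n in pairs.items():
    share = n / label_counts[label]
    if share > 0.9:
        print(f"warning: {share:.0%} of '{label}' images have background '{background}'")
```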

Principle IV: Reduce bias with self-learning systems

Any system that makes decisions has the potential for bias. The best-designed systems recognize this and are built to improve over time.

From early on, we recognized that a robust feedback mechanism is core to ensuring that Capacity is delivering great results. That’s why every response we provide is paired with thumbs up/down feedback. This feedback is fed directly into our neural networks to improve responses over time.
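Capacity's internal pipeline isn't detailed here, but the general pattern looks something like the sketch below: each thumbs up/down becomes a labeled (question, answer) example that later training runs can consume. All names are illustrative, not Capacity's actual API:

```python
from dataclasses import dataclass

@dataclass
class FeedbackRecord:
    question: str    # what the user asked
    answer: str      # the response the system gave
    thumbs_up: bool  # the user's thumbs up/down verdict

feedback_log: list[FeedbackRecord] = []

def record_feedback(question: str, answer: str, thumbs_up: bool) -> None:
    """Capture one piece of end-user feedback."""
    feedback_log.append(FeedbackRecord(question, answer, thumbs_up))

def to_training_examples(log: list[FeedbackRecord]) -> list[tuple[str, str, int]]:
    # Each (question, answer) pair becomes a labeled example:
    # 1 = the answer was a good match, 0 = it was not.
    return [(r.question, r.answer, int(r.thumbs_up)) for r in log]
```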

Principle V: Diversify your trainers

Returning to the training data itself, we've found that the best training data emerges when it is sourced from a diverse set of people. Diversity can mean demographic dimensions like gender, race, and age, but it can also mean polling different parts of the organization. For example, an accountant may ask questions in a completely different way than a member of the social media marketing team.
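One simple way to keep any single group's phrasing from dominating the training set is stratified sampling: draw roughly the same number of example questions from each department. A minimal sketch, with hypothetical inputs:

```python
import random
from collections import defaultdict

def stratified_sample(questions, per_group=50, seed=0):
    """questions: list of (department, question_text) pairs.
    Returns up to per_group questions from each department."""
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for dept, text in questions:
        by_group[dept].append(text)
    sample = []
    for dept, texts in by_group.items():
        k = min(per_group, len(texts))
        sample.extend((dept, t) for t in rng.sample(texts, k))
    return sample

# Example: even if accounting submitted far more questions than marketing,
# the sampled training set represents both groups roughly equally.
questions = [("accounting", f"expense question {n}") for n in range(500)]
questions += [("marketing", f"campaign question {n}") for n in range(60)]
print(len(stratified_sample(questions)))  # 100: 50 from each department
```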

Principle VI: Collaborate with others to reduce bias

If one organization is experiencing AI-driven bias, it's likely that other organizations are experiencing the same thing. That's why we believe it is important to network with industry peers around fighting bias. It's also why we are part of Prepare.ai, a 501(c)(3) nonprofit that connects companies working on AI in the Midwest.

Principle VII: Commit to improving through transparency

Rather than giving our customers bare yes-or-no answers, we try to contextualize the origin of each answer. This transparency gives people confidence because they can see where an answer came from, rather than having to guess why the AI made its decision.
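As a sketch of what contextualizing an answer's origin can look like in a response payload (the field names below are illustrative, not Capacity's actual API), the answer travels with its source and the passage it was drawn from:

```python
from dataclasses import dataclass

@dataclass
class ContextualAnswer:
    answer: str            # the answer itself
    source_document: str   # e.g. "Employee Handbook, v3"
    source_snippet: str    # the passage the answer was drawn from
    confidence: float      # the model's own confidence, 0..1

def render(a: ContextualAnswer) -> str:
    # Show the user where the answer came from, not just the answer.
    return (f"{a.answer}\n"
            f"Source: {a.source_document}: \"{a.source_snippet}\" "
            f"(confidence {a.confidence:.0%})")
```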

Closing

At Capacity, we’re here to help teams do their best work. Reducing AI-driven bias is an important part of fulfilling our mission.