Biased artificial intelligence tends to grab headlines.
A quick Google search for “AI bias” returns dozens of stories where an algorithm made some sort of discriminatory comment or decision.
For those who still equate “AI” with “Skynet,” artificial intelligence - one of the most promising technological developments of the last century - must seem terrifying.
Today’s data scientists have a duty to test that their algorithms are not biased, ensuring their efforts do not unfairly impact certain demographic groups.
Properly applied, AI can help us unwind the biases that exist in our society today. We can make things like hiring, lending, and even the legal system fundamentally fairer for traditionally marginalized groups.
To this end, some data science teams are stripping their datasets of demographic data. The theory here is that removing demographic data from consideration will make an algorithm’s decisions unbiased across different groups of people.
For instance, if an algorithm “learns” that a certain demographic group is more likely to default on its loans, it will tend to disproportionately punish members of that group when they apply. By removing demographic markers from the training data, you can theoretically avoid this.
This line of thinking is sound, to a certain extent. But while hearts are in the right place, the approach can still bake in some of our existing societal biases: it successfully removes explicit demographic data, but does nothing about implicit demographic data.
Explicit demographic data describes specific, labelled demographic markers. For example, patient data in a hospital setting is typically labelled with gender, ethnicity, and age, since that data is crucial for determining the proper treatment. That data is explicit.
Implicit demographic data predicts certain demographics, even though you may not expect it to. Zip code, for instance, can correlate highly with ethnicity. If your algorithm’s training data includes zip code, chances are high the trained algorithm will make its decisions based partly on ethnicity, which in most cases is not what you want. In other words, certain training data that seems like it shouldn’t predict ethnicity, gender, or age does predict it.
If the goal is to train the algorithm to treat all groups without bias, removing explicit demographic data may be insufficient. Demographic information can still leak into the training data via other measures, like zip code, that are correlated with it.
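As a rough illustration of how you might check for this kind of leakage, you can see how well the remaining “neutral” features predict the demographic attribute you removed. The sketch below is a minimal example in Python; the dataset, the column names (zip_code, income, years_employed, ethnicity), and the model choice are all hypothetical.

```python
# Leakage check sketch: can the "neutral" features still predict the
# demographic attribute we removed from training? (Column names and the
# dataset are hypothetical; the ethnicity label is kept aside only for auditing.)
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("applicants.csv")   # hypothetical applicant dataset

X = pd.get_dummies(df[["zip_code", "income", "years_employed"]], columns=["zip_code"])
y = df["ethnicity"]

# If this score is well above chance, ethnicity is leaking in implicitly.
leakage_score = cross_val_score(
    RandomForestClassifier(n_estimators=100, random_state=0),
    X, y, cv=5, scoring="accuracy",
).mean()

print(f"Ethnicity predictable from 'neutral' features with accuracy {leakage_score:.2f}")
```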
Occasionally, you might get unlucky with algorithm optimization and see some bias as a result. For the most part, however, bias comes from the training data itself, which is often the product of biased human judgment.
Let’s say we want to build an AI that recommends who gets promoted at a certain company. To do so, we would look at all the previous performance metrics and promotion data from that company. Since we know men tend to get promoted more than women, we also strip away any explicit gender markers.
It is well documented that women aren’t just promoted at lower rates than men - they also tend to receive lower performance reviews for the same work. Perhaps in this particular instance performance measures were entirely subjective (and biased), or a hostile work environment made it difficult for women to achieve their full potential. The bottom line is, if we use performance reviews as inputs to predict promotion, sexism will be baked into the trained algorithm.
In this case, elements of the company’s performance metrics are an indicator of gender - just like zip code is an indicator of ethnicity. Unfortunately, we may walk away seeing that the model has great accuracy without even realizing some input data is causing bias.
Since it is very difficult to know how bias is going to present itself once the algorithm is trained, post-training algorithm auditing is critical for identifying the implicit data with the greatest potential to cause bias.
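One common form of audit is an adverse impact check: compare the rates at which the trained model selects (or, in our example, recommends for promotion) members of different groups. Below is a minimal sketch, assuming hypothetical prediction and gender arrays that were held out from training purely for auditing.

```python
# Post-training audit sketch: adverse impact ratio between two groups.
# (Illustrative only; the predictions and gender arrays are hypothetical,
# held out from training and used purely for auditing.)
import numpy as np

def adverse_impact_ratio(predictions, group, protected, reference):
    """Selection rate of the protected group divided by that of the reference group."""
    rate_protected = predictions[group == protected].mean()
    rate_reference = predictions[group == reference].mean()
    return rate_protected / rate_reference

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # 1 = recommended for promotion
gender = np.array(["F", "F", "M", "M", "F", "M", "F", "M"])

ratio = adverse_impact_ratio(predictions, gender, protected="F", reference="M")
print(f"Adverse impact ratio: {ratio:.2f}")   # a common rule of thumb flags ratios below 0.8
```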
If a specific group is adversely impacted, you can go back to the training data and attempt to build a separate algorithm designed to predict membership of that group. The data that ends up predicting group membership can then be “repaired” in some way or removed completely from the training data. Then you can retrain the primary algorithm and re-audit until the adverse impact is mitigated. Through this iterative process you can strike a balance between good performance (predicting what you want to predict, like likelihood to promote) and mitigated bias.
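In code, that loop might look something like the sketch below. It is only one possible version of the process: the inputs (a feature DataFrame X, promotion labels y coded as 0/1, and a held-out gender array), the model, the 0.8 threshold, and the “repair” step of simply dropping the feature most predictive of group membership are all assumptions for illustration.

```python
# Simplified sketch of the audit-repair-retrain loop described above.
# X (a pandas DataFrame of features), y (0/1 promotion labels), and group
# (a gender array held out for auditing) are assumed inputs; the "repair"
# step here just drops the feature most predictive of group membership,
# which is only one possible approach.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

def most_group_predictive_feature(features, group):
    """Fit a model to predict group membership and return the feature it relies on most."""
    proxy = RandomForestClassifier(n_estimators=100, random_state=0)
    proxy.fit(features, group)
    return features.columns[proxy.feature_importances_.argmax()]

X_repaired = X.copy()
for _ in range(5):                                   # cap the number of repair rounds
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    preds = cross_val_predict(model, X_repaired, y, cv=5)

    # Re-audit: predicted promotion rate for women divided by the rate for men.
    ratio = preds[group == "F"].mean() / preds[group == "M"].mean()
    if ratio >= 0.8:                                 # audit passes (four-fifths rule of thumb)
        break

    # Repair: remove the feature that best predicts group membership, then retrain.
    X_repaired = X_repaired.drop(columns=[most_group_predictive_feature(X_repaired, group)])
```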
In the promotion example, you would end up with a model that ignores the performance metrics that were sexist and relies only on objective ones. In other words, you remove any gender clues from the training data.
To implement this process, you need enough inputs that removing or morphing some of them does not ruin the algorithm’s predictive power entirely - which is exactly what would happen if you tried to use this method on a traditional job or aptitude assessment with 100 or so closed-ended multiple-choice questions covering very few competencies or traits.
Looking at data in this way can help bring human bias to light. Bias training for humans does exist, but such programs are highly variable in their effectiveness and can sometimes have the opposite effect. Human bias isn’t going away any time soon.
Using AI to identify and remove bias holds real promise for assisting with decisions that are usually made by biased humans. We have a moral duty to make the decisions that have historically marginalized certain groups fundamentally fairer.
Lindsey Zuloaga, PhD, is HireVue’s Director of Data Science. She holds a PhD in Applied Physics from Rice University and leads HireVue’s Data Science team, building the sophisticated machine learning algorithms that analyze video interviews and make hiring fairer. Find her on LinkedIn. This article originally appeared in HR.com's May 2018 Talent Acquisition Excellence magazine.