Beauty Bias: Understanding How Attractiveness is Perceived in Hiring

November 17th, 2020
Lindsey Zuloaga
Artificial Intelligence,
Diversity & Inclusion

At HireVue, we are invested in understanding bias in both humans and algorithms. We focus on many research questions in this area, some of which involve building algorithms purely for research purposes. Our aim is to understand bias of all kinds more fully in order to prevent it in our own algorithms. We are adamantly opposed to ever putting these research algorithms into commercial use; they are built for research purposes only.

Complexities of attractiveness bias in hiring

It may not come as a surprise that attractive people have certain advantages. Research has shown that more attractive people are perceived as more successful, happier, and more sociable than others. This bias has ramifications for all of us - students, defendants, political candidates, employees, and job candidates. The idea that "what is beautiful is good" has been formally studied for decades, and the bias tends to snowball. For example, one study showed that people who watch more films in which attractive characters are portrayed more favorably than unattractive ones (which is very common) are more likely to show favoritism towards attractive people in real life.

Additionally, studies show there are complexities to attractiveness bias in hiring: lower-skilled women are often judged more harshly on their looks than more highly-skilled women, and for women in male-dominated roles the bias is flipped - the "beauty is beastly" effect, where attractive women are seen as less competent. Age and attractiveness discrimination are also intertwined: women close to retirement age are not given the same opportunities as comparable men. Overall, when a role is seen as less desirable, attractive people are at a disadvantage, as others assume an attractive candidate is entitled to something better.

When bias in hiring is discussed, conversations rightfully focus on age, ethnicity, gender, sexual identity, or disability. Attractiveness is usually left out of this conversation, since unattractive people do not have legal protections as a demographic group. Ideally, hiring decisions should be based on a candidate's ability to perform the job well, regardless of their looks, but this is extremely difficult for humans to do. Our implicit biases are strong and affect our decisions, whether we know it (or like it) or not.

Proxies and Fair Algorithms

Knowing what we do about attractiveness bias, HireVue has trained a deep neural network to predict perceived attractiveness. This algorithm has only been used for research purposes, to understand how beauty influences hiring decisions made both by humans and by pre-hire assessment algorithms - it has not been and will not be deployed for any of our customers.

In our assessment algorithms, the inputs are data from recorded video interviews and game-based hiring assessments. What is assessed includes things like language use, working memory, and sometimes nonverbal behaviors such as tone of voice. We do not use information such as age, race, gender, sexual orientation, or attractiveness as inputs to predict job-related outcomes. However, just because we do not explicitly use these attributes does not mean that information correlated with them, known as "proxies," is absent from the data. Checking for and systematically removing information that serves as a proxy for a sensitive attribute is vital to building fairer algorithms.
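One simple way to check for proxies is to measure how strongly each input correlates with the sensitive attribute in question. The sketch below is illustrative only: the feature names (such as smile_intensity), the 0.3 threshold, and the synthetic data are all assumptions, not HireVue's actual pipeline.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

def flag_proxy_features(features, sensitive, threshold=0.3):
    """Flag input features whose correlation with a sensitive attribute
    exceeds the chosen threshold (i.e., possible proxies)."""
    flagged = {}
    for column in features.columns:
        r, _ = pearsonr(features[column], sensitive)
        if abs(r) >= threshold:
            flagged[column] = round(r, 3)
    return flagged

# Illustrative data: two input features and a perceived-attractiveness rating
# for the same 500 candidates (names and relationships are made up).
rng = np.random.default_rng(0)
sensitive = pd.Series(rng.normal(size=500), name="perceived_attractiveness")
features = pd.DataFrame({
    "working_memory_score": rng.normal(size=500),                          # unrelated to the attribute
    "smile_intensity": 0.6 * sensitive + rng.normal(scale=0.8, size=500),  # constructed to act as a proxy
})

print(flag_proxy_features(features, sensitive))  # expected: only smile_intensity is flagged
```

Features flagged this way can then be examined and, if they carry no unique job-related signal, removed from the model's inputs.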

Perceived vs. Actual Performance

Let’s take a look at some real data in a hiring scenario. For this particular role, we take a recorded interview as input and compare a human ranking (blue) with the score of an algorithm (orange) trained on past data to predict actual, objective performance on the job. In the graph below, candidates with similar attractiveness ratings were binned together and the average rating of each group is shown. Pearson’s correlation coefficient between the human rating and attractiveness is 0.32; for the algorithm it is negligible. This data shows that humans are biased towards attractiveness, yet when we only look at the things that are important to the job, attractiveness does not matter (there is a negligibly negative correlation). In this case, an algorithm can be used to assess candidates in a fairer way.

[Figure: average human rating (blue) and algorithm score (orange) by attractiveness bin]
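To make the method concrete, here is a rough sketch on synthetic data; the column names, effect sizes, and seed are assumptions, so it reproduces the approach (binning by attractiveness and computing Pearson correlations), not the actual figures above.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

# Synthetic candidate-level data; column names and effect sizes are assumptions.
rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({"attractiveness": rng.uniform(1, 10, n)})
df["human_rating"] = 0.3 * df["attractiveness"] + rng.normal(scale=1.0, size=n)
df["algorithm_score"] = rng.normal(size=n)  # simulated here as independent of attractiveness

# Correlation of each score with attractiveness.
r_human, _ = pearsonr(df["human_rating"], df["attractiveness"])
r_algo, _ = pearsonr(df["algorithm_score"], df["attractiveness"])
print(f"human rating vs attractiveness:    r = {r_human:.2f}")
print(f"algorithm score vs attractiveness: r = {r_algo:.2f}")

# Bin candidates with similar attractiveness ratings and average each score per bin,
# mirroring how the graph was constructed.
df["bin"] = pd.cut(df["attractiveness"], bins=10)
print(df.groupby("bin", observed=True)[["human_rating", "algorithm_score"]].mean())
```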

Do algorithms always amplify human bias?

In the previous case, we knew both the attractiveness score and the job performance of the people in the sample. What if, however, we don’t have an objective performance metric and an algorithm is trained on biased human decisions? Algorithms trained on biased data can propagate human bias, which is a huge problem for society. However, with a focus on possible biases, algorithmic auditing, and mitigation, we are able to combat these issues.
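One common auditing check (and only one of several an audit might include) is to compare selection rates across groups and flag any group whose rate falls below four-fifths of the highest group's rate, the so-called four-fifths rule used in adverse impact analysis. The sketch below is a minimal, hedged illustration with made-up groups and numbers, not a description of HireVue's audit process.

```python
import pandas as pd

def selection_rate_audit(decisions, group_col, hired_col, ratio_floor=0.8):
    """Compare each group's selection rate to the highest group's rate and
    flag ratios below the floor (the 'four-fifths rule')."""
    rates = decisions.groupby(group_col)[hired_col].mean()
    impact_ratio = rates / rates.max()
    return pd.DataFrame({
        "selection_rate": rates,
        "impact_ratio": impact_ratio,
        "flagged": impact_ratio < ratio_floor,
    })

# Illustrative usage with made-up groups: 20% vs 10% selection rates,
# so group B's impact ratio of 0.5 falls below the 0.8 floor and is flagged.
data = pd.DataFrame({
    "group": ["A"] * 200 + ["B"] * 200,
    "hired": [1] * 40 + [0] * 160 + [1] * 20 + [0] * 180,
})
print(selection_rate_audit(data, "group", "hired"))
```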

For a retail role at a hip clothing and housewares store, 10% of applicants were hired. When we looked at these human hiring decisions, we found a bias towards more attractive candidates. Again, candidates with similar attractiveness ratings were binned together, and the percentage of each group that was hired is displayed on the y-axis. The chance of being hired is twice as high for someone rated by humans as a 7/10 as for someone rated a 3/10. Pearson’s correlation coefficient between the human decisions and attractiveness is 0.87. In this case, there was no objective performance metric and an algorithm was trained on this biased human data. However, because the algorithm does not take attractiveness, or any behaviors that are highly correlated with attractiveness, into consideration, it shows no significant bias (r = 0.15 between attractiveness and the algorithm score). In short, the algorithm avoids bias that people might have because it focuses only on data that are related to job performance and independent of attractiveness.

[Figure: percentage hired by attractiveness bin, human decisions vs. algorithm]
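For readers who want to see how numbers like these could be computed, here is a sketch on synthetic data; the column names, hire probabilities, and seed are assumptions rather than the retail dataset described above. It groups candidates by attractiveness rating, computes the percentage hired in each group, and compares the correlation of attractiveness with the human decision versus with an algorithm score built only from job-related inputs.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

# Synthetic data for a role where roughly 10% of applicants are hired.
rng = np.random.default_rng(2)
n = 2000
attractiveness = rng.integers(1, 11, n)         # 1-10 rating
p_hire = 0.01 + 0.017 * attractiveness          # hire probability rises with rating
human_hired = (rng.random(n) < p_hire).astype(int)
algorithm_score = rng.normal(size=n)            # simulated here as independent of attractiveness

df = pd.DataFrame({
    "attractiveness": attractiveness,
    "human_hired": human_hired,
    "algorithm_score": algorithm_score,
})

# Percentage hired within each attractiveness rating (the y-axis of the graph above).
print(df.groupby("attractiveness")["human_hired"].mean().round(3))

# Correlation of attractiveness with the human decision vs. with the algorithm score.
r_human, _ = pearsonr(df["attractiveness"], df["human_hired"])
r_algo, _ = pearsonr(df["attractiveness"], df["algorithm_score"])
print(f"human decisions vs attractiveness:  r = {r_human:.2f}")
print(f"algorithm score vs attractiveness:  r = {r_algo:.2f}")
```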

The same principle applies to mitigating bias against a number of other groups, such as older people, racial minorities, and individuals in the LGBTQ community. By choosing our inputs carefully, we can choose to ignore sensitive information and create a pre-hire assessment algorithm that offers a fairer evaluation of job candidates in terms of what is actually important to the work they will be performing.

If you want to learn how humans and AI can mitigate hiring bias, watch this on-demand webinar led by IO Psychologist Nathan Mondragon.