Fair and inclusive hiring practices are a top priority for recruiting teams in 2018.
According to LinkedIn’s 2018 Global Recruiting Trends Report, 78% of respondents indicated that “Diversity” was “Very Important” or “Extremely Important.” No other trend came close.
There’s good reason for this. Evidence shows that diverse teams put a heavier focus on empirical data, are more innovative, create more revenue, and have higher performance. Of course, making hiring more fair is also the right thing to do.
With attention shifting to talent acquisition’s ability to attract and hire from diverse groups, it is important to understand how employee selection is evaluated from a legal perspective.
Adverse impact is the negative effect a biased selection procedure has on a protected class. It occurs when a selection process, such as a hiring or promotion decision, disproportionately screens out members of a protected group.
In the US, protected classes include race, color, national origin, sex, age (40 and over), religion, disability status, and veteran status.
In hiring, adverse impact can be measured across the entire hiring process (percent of applicants who are ultimately hired) or segmented by each step that screens out candidates (resume screen, pre-hire assessment, interview).
To understand why we measure adverse (or "disparate") impact in 2018, we need to look back at a Supreme Court case from 1971.
In the landmark case Griggs v. Duke Power, Willie Griggs and twelve other African-American employees of Duke Power sued their employer, alleging that the general intelligence test Duke used as a screening tool unfairly impacted African American applicants.
These were the passing rates on Duke’s general intelligence test: 58% of white applicants passed, compared with just 6% of African-American applicants.
The Supreme Court ruled that if pre-employment tests had a disparate impact on protected groups (such as women and ethnic minorities), the organization requiring the test must prove that the test is “reasonably related” to the duties performed on the job. In practice, this meant most pre-employment assessment providers either had to demonstrate the job-relatedness of their tests or redesign them to remove the source of adverse impact.
The Civil Rights Act of 1991 added an additional legal implication: businesses could be required to show that no viable, less discriminatory alternative selection procedure existed.
Generally speaking, it is better to err on the side of caution and remove the potential for adverse impact. The alternative (proving to a court the job-relatedness of a selection procedure and showing that there is no less impactful alternative) is not as attractive.
Guidelines for creating a defensible selection procedure are outlined by the EEOC in the Uniform Guidelines on Employee Selection Procedures.
The most common measure of adverse impact, and the one used in the Uniform Guidelines on Employee Selection Procedures, is the Four-Fifths Rule.
The Four-Fifths Rule states that if the selection rate for a given group is less than 80 percent (four-fifths) of the rate for the group with the highest selection rate, there is adverse impact on that group.
For example, in the case of Duke Power, the selection rate for African Americans was 6%, while the selection rate for whites was 58%. Dividing the lower rate by the higher, we get 6/58 ≈ 10.3%: far below the 80% threshold.
Here’s another example:
Let’s say an organization is looking to fill 25 open positions in its local call center. 500 men and 1000 women apply. Of those applicants, 10 men and 15 women are hired. In this situation the selection rate for men is 2%, while the selection rate for women is 1.5%. Dividing 1.5 by 2 we get 75%: below the cutoff. Despite the fact that more women were hired overall, they were adversely impacted.
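The arithmetic above is simple enough to sketch in a few lines of Python. The function name here is illustrative, not from any standard library:

```python
def impact_ratio(selected_a, applicants_a, selected_b, applicants_b):
    """Return the ratio of the lower selection rate to the higher one.

    Under the Four-Fifths Rule, a ratio below 0.80 suggests adverse impact.
    """
    rate_a = selected_a / applicants_a
    rate_b = selected_b / applicants_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Duke Power passing rates: 6% vs. 58% (expressed per 100 applicants)
print(round(impact_ratio(6, 100, 58, 100), 3))    # 0.103 -- far below 0.80

# Call-center example: 10 of 500 men hired vs. 15 of 1000 women hired
print(round(impact_ratio(10, 500, 15, 1000), 3))  # 0.75 -- below 0.80
```

Note that the ratio compares rates, not counts, which is exactly why the women in the call-center example are adversely impacted even though more of them were hired in absolute terms.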
Since the EEOC created the Uniform Guidelines on Employee Selection Procedures in 1978, the Four-Fifths rule has been the first analytic step for evaluating adverse impact.
More sophisticated methods of confirming adverse impact exist, such as the two-proportion z-test and Fisher’s Exact test, which measure whether the observed disparity is statistically significant. These statistical methods, along with the Four-Fifths Rule, are standard procedures in an adverse impact analysis, and organizations should apply them as part of a yearly evaluation of their hiring procedures.
Measuring the potential for adverse impact at each selection step, and addressing issues as necessary, will identify discriminatory practices and ultimately make the hiring process fairer for all applicants.
A single “adverse” screening method has downstream effects on an organization’s ability to build a diverse and inclusive workforce. If a step in the screening process makes biased recommendations, it is borderline irrelevant how fair the later steps of the process are: the candidate pool is already adversely impacted.
This makes the careful evaluation of new screening tools doubly important. If a technology or test actively recommends certain candidates over others, ensure the vendor provides full adverse impact reporting to avoid the potential for discrimination.
Carefully measuring adverse impact can also lead to some uncomfortable conversations. Humans all have unconscious biases that affect how we make decisions. In certain cases, these biases can lead to unintentionally unfair hiring decisions. It is important to address these before they impact your workforce at large. Interview best practices, training, and technology can all help human decision-makers overcome their unconscious biases.
Addressing root causes of adverse impact isn’t just about preempting legal action (though of course this is important). It’s about making hiring fundamentally fairer, and building a more creative, productive, and innovative workforce.