Independent audit affirms the scientific foundation of HireVue assessments

April 7th, 2021
Dr. Nathan Mondragon, Chief IO Psychologist
Artificial Intelligence, Science

At the beginning of the year, we announced that we were undertaking a series of third-party audits focused on different aspects of our technology and assessment solutions. We committed to sharing those results, and today, I’m excited to share details about the Industrial/Organizational (IO) Psychology behind our interview and game-based assessments.

IO Psychology is the foundation of everything we do at HireVue. There wouldn’t be an algorithm to write without a strong job analysis, a competency framework, and proper psychometrics and adverse impact testing. In addition to leading the creation of the first-ever online selection assessment in 1996, I have been researching and publishing in this field for over twenty-five years. One critical takeaway is that no matter how cool, innovative, or cutting-edge the technology, it must always rely on the solid scientific foundations of IO Psychology to guide our work.

In my recent piece about creating software driven by science, I concluded with something that bears repeating here: HireVue will continue pursuing research and audits regarding the appropriate use of AI and software in hiring, always following the theory and evidence where it leads. 

Audit overview

The goal

The primary goal of the audit was to understand, review, and receive outside IO science insights concerning the development and measurement of our assessment services. The standards against which everything in the audit was measured are the Uniform Guidelines on Employee Selection Procedures, the Society of IO Psychology’s Principles, and the Standards for Educational and Psychological Testing. The three areas audited were:

  • Job analysis procedures
  • OnDemand interview assessments
  • Game-based assessments

The expert 

The audit was conducted by distinguished professor and CEO of Landers Workforce Science LLC, Dr. Richard Landers. Dr. Landers earned his Ph.D. in IO Psychology in 2009 and has since worked as both an academic researcher and private consultant. At the University of Minnesota, he is Principal Investigator for TNTLAB (Testing New Technologies in Learning, Assessment and Behavior), where his research concerns the use of innovative technologies in the domains of psychometric assessment, employee selection, adult learning, and research methods. His research on innovative technologies includes game-based assessment, gamification, artificial intelligence, unproctored Internet-based testing, mobile devices, virtual reality, and online social media.

The scope 

Our science team met with Dr. Landers for 18 hours of meetings, and he reviewed nearly 1,000 pages of documentation; we then spent countless more hours answering follow-up questions and holding discussions to reach the final version of the audit.

General conclusions

A simple way to think about assessment development is as a three-step process:

  1. establishing job-relatedness, 
  2. documenting reliability and validity, and 
  3. conducting rigorous adverse-impact testing. 

Taking a holistic look at our processes, Dr. Landers concluded:

“In general, HireVue reaches or exceeds industry standards for the creation of high-stakes assessments, and this audit exposed no weaknesses that critically undermine HireVue’s approach.”

Establishing job-relatedness 

As with all pre-hire assessments, the link between the requirements of the target position and what the assessment measures must be clearly established. Conducting an analysis of the target positions, referred to as job analysis, is a foundational step in ensuring the right assessment is deployed. Dr. Landers reviewed the job analytic procedures conducted by our team of IO Psychologists and concluded that our process was “of very high quality and rigor in relation to established standards...resulting in the creation of trustworthy content-related validation evidence and subsequent hiring decisions.”

Reliability and validity

Reliability refers to the consistency of an assessment’s results across administrations and raters; validity refers to whether what an interview assesses is actually relevant to the tasks performed on the job - not something steeped in bias, like where a person went to school, what they wore, or how well they can banter about shared interests between questions. On this point, Dr. Landers writes, “HireVue has collected a significant amount of evidence to support its claims of reliable and valid testing using its game-based assessments,” and “HireVue’s video-based interviewing platform represents a scientifically well-reasoned approach to providing asynchronous virtual interviews...”
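To make these two terms concrete, here is a minimal sketch of the standard statistics behind them: Cronbach’s alpha for internal-consistency reliability, and a criterion-related validity coefficient (the correlation between assessment scores and later job performance). All of the data below are hypothetical, for illustration only; the report does not publish HireVue’s data or exact procedures.

```python
import numpy as np

# --- Reliability: Cronbach's alpha over hypothetical item scores ---
# Rows are respondents, columns are assessment items (made-up numbers).
items = np.array([
    [4, 5, 4, 3],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
])
k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)
total_variance = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# --- Validity: correlation between total scores and job performance ---
# Hypothetical supervisor ratings for the same five respondents.
totals = items.sum(axis=1)
performance = np.array([3.8, 3.1, 4.5, 2.7, 4.2])
validity = np.corrcoef(totals, performance)[0, 1]

print(f"Cronbach's alpha (reliability): {alpha:.2f}")
print(f"Criterion-related validity (r): {validity:.2f}")
```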

He makes it clear that there is strength in the novelty of our approach, because “it marries state-of-the-art machine learning research coming out of computer science with IO Psychology to produce something better than either could manage alone [emphasis added].” 

Adverse impact testing

Implementing assessments and hiring tools that are job-related and fair is a core objective for HireVue. Thus, a key part of the IO audit was a detailed review of the adverse impact measurement and mitigation procedures we enact. Dr. Landers’ review of the adverse impact process, analyses, and results in the technical manual “did not indicate any significant adverse impact concerns at the established pass/fail cut scores.”

Additionally, Dr. Landers recommends that adverse impact studies be conducted on each assessment with customers’ applicant data after implementation - something we have introduced when technically feasible. Finally, Dr. Landers concludes, “Given HireVue’s quite literal removal of adverse impact [mitigation], as expected its predictions evidence little adverse impact, with similar pass rates across race, gender, and age classes, which are standard concerns in IO practice.”
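The report does not spell out the statistical procedure behind these pass-rate comparisons, but the standard first screen in IO practice, set out in the Uniform Guidelines cited above, is the four-fifths (80%) rule: each group’s selection rate should be at least 80% of the most-selected group’s rate. A minimal sketch with hypothetical counts:

```python
# Four-fifths (80%) rule from the Uniform Guidelines: a group's selection
# rate should be at least 80% of the most-selected group's rate.
# All counts below are hypothetical, for illustration only.
applicants = {"group_a": 200, "group_b": 100, "group_c": 150}
passed = {"group_a": 120, "group_b": 42, "group_c": 84}

rates = {g: passed[g] / applicants[g] for g in applicants}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    flag = "OK" if impact_ratio >= 0.8 else "potential adverse impact"
    print(f"{group}: pass rate {rate:.0%}, impact ratio {impact_ratio:.2f} -> {flag}")
```

In this made-up example, group_b’s impact ratio of 0.70 falls below the 0.8 threshold and would be flagged for further review; groups a and c pass the screen.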

For access to the full report, please visit this page. 

Future research and product improvements

This audit is just one part of our broader effort to continually fine-tune assessments through regularly scheduled product updates for our customers and their candidates. Our science team is using the audit feedback to scope improvements across every aspect of our IO framework. One example is continually strengthening the construct and criterion validation evidence for our assessments: we have validation evidence that our games measure general cognitive ability, and we want to extend those results into specific ability domains (e.g., working memory).

Audits like this serve many purposes, but one of the most important is that they keep us from losing sight of the fact that what we’re doing touches the lives of millions of people who are looking for work - every choice we make is meant to democratize that process and give people the fair and efficient hiring experience they deserve.

Ongoing updates about this new research and product improvements can be found on our Science page. In the meantime, to learn more about our assessments, download The Next Generation of Assessments.