Industry leadership: New audit results and decision on visual analysis

January 12th, 2021
Lindsey Zuloaga
Artificial Intelligence,
Science

Today, HireVue is extending its leadership position in defining the transparent and appropriate use of AI and software in the hiring process. Creating a level playing field for anyone seeking employment, reducing bias, and providing organizations with a more diverse talent pool are at the heart of HireVue’s mission.

We are releasing the results of an algorithmic audit by O’Neil Risk Consulting & Algorithmic Auditing (ORCAA), and we are outlining our plans for additional audits to affirm the efficacy of our solutions. We are committed to working with third parties like ORCAA on ideas to improve AI models and to leverage software to remove hiring bias and improve diversity.

Separately, earlier this year, based on our research (published here) on the role of audio vs. visual features in evaluating job candidates, and given significant improvements in natural language processing, we made the decision to remove visual analysis from our new assessment models. We will continue to conduct research in this area, establishing and adhering to best practices.

ORCAA audit released as part of HireVue commitment to transparency

Developing hiring solutions that are driven by science means meeting or exceeding the expectations of our own ethical AI principles, as well as legal standards across the globe (e.g., GDPR). To do that, HireVue retained third-party auditors to evaluate our algorithms, our IO Psychology practices, and the delivery of our assessments. The results of each will be released in the coming months on our Science page.

Today, we are excited to share the results of our algorithmic audit with O’Neil Risk Consulting & Algorithmic Auditing (ORCAA). ORCAA’s CEO and founder, Cathy O'Neil, is a well-respected data scientist and author of the influential book Weapons of Math Destruction. Cathy was recently featured in the popular Netflix documentary “The Social Dilemma.” Cathy and her team audited a representative pre-built assessment used in hiring early-career candidates (including from college campuses), seeing it as a complex and challenging use case that would provide valuable lessons about fairness.

After evaluation, ORCAA concluded that, “The HireVue assessments work as advertised with regard to fairness and bias issues; ORCAA did not find any operational risks with respect to clients using them.” Together, HireVue and ORCAA identified focus areas where improvements can be made to increase transparency for candidates, and formulated research questions that go above and beyond to ensure fairness and equal access.

The audit with ORCAA represents countless hours of work on the part of internal teams, external stakeholders and subject matter experts. External participants included:

In the conclusion of the ORCAA report, the authors suggest that, “Fairness work is a continual improvement process companies navigate – not a checklist provided by regulators or lawmakers.” We couldn’t agree more, and we look forward to working with customers, job-seekers, technology partners, ethicists, legal advisors, and society at large in the months and years ahead to define the best use of AI in hiring. 

For access to the ORCAA report visit https://www.hirevue.com/resources/orcaa-report

Removing visual analysis from assessment models

The goal of HireVue’s assessment models is to predict, from an applicant’s interview, how they would perform in a specific role. These models are validated and tested continuously.

HireVue research, conducted early this year, concluded that for the significant majority of jobs and industries, visual analysis has far less correlation to job performance than other elements of our algorithmic assessment. The predictive power of language has increased greatly with recent advances in natural language processing, and consequently our algorithms do not gain significant additional predictive power when non-verbal data is added to language data. For that reason, we decided not to use any visual analysis in our pre-hire algorithms going forward. We recommend and hope that this decision becomes an industry standard.
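The kind of comparison described above, checking whether one feature set adds predictive power beyond another, can be sketched as an incremental-validity test: train a model with and without the extra features and compare held-out performance. This is a minimal illustrative sketch on synthetic data, not HireVue's actual pipeline; the feature names, data, and model choice here are assumptions for illustration only.

```python
# Illustrative sketch (assumed setup, not HireVue's method): does a second
# feature set ("visual") improve held-out AUC over a first ("language")?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic stand-ins: "language" features drive the outcome, while
# "visual" features are mostly redundant with them plus noise.
language = rng.normal(size=(n, 5))
visual = 0.1 * language[:, :2] + rng.normal(size=(n, 2))
y = (language @ np.array([1.0, 0.8, 0.5, 0.3, 0.2])
     + rng.normal(size=n) > 0).astype(int)

X_base = language                          # language only
X_full = np.hstack([language, visual])     # language + visual

Xb_tr, Xb_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(
    X_base, X_full, y, test_size=0.3, random_state=0)

auc_base = roc_auc_score(
    y_te, LogisticRegression(max_iter=1000).fit(Xb_tr, y_tr)
    .predict_proba(Xb_te)[:, 1])
auc_full = roc_auc_score(
    y_te, LogisticRegression(max_iter=1000).fit(Xf_tr, y_tr)
    .predict_proba(Xf_te)[:, 1])

print(f"AUC, language only:    {auc_base:.3f}")
print(f"AUC, language+visual:  {auc_full:.3f}")
print(f"incremental gain:      {auc_full - auc_base:+.3f}")
```

If the incremental gain is negligible, the extra feature set contributes little beyond what the base features already capture, which is the rationale the paragraph above describes.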

An ongoing commitment to fairness

Our news today reaffirms HireVue’s leadership role in defining the best, most appropriate use of AI in hiring, and our commitment to rigorous, ongoing measurement of the impact our software has on individual candidates, our customers, and society as a whole. 

We follow established best practices in the field of IO Psychology and data privacy to ensure our assessments meet the strict standards of fairness from agencies such as the Equal Employment Opportunity Commission and the American Psychological Association (to name a few).

In 2019 we became the first HR technology company to create an expert advisory board and publicly release a set of AI ethical principles. We undertook these steps because we know that transparency, concrete guideposts, and the input of experts are the best way to uphold our principles. You can find more details about our commitment to AI ethics and standards at www.hirevue.com/why-hirevue/ai-ethics

We were also vocal in our support of the Illinois Artificial Intelligence Video Interview Act (AIVIA), and are open to ongoing involvement in conversations around the regulation and use of AI in hiring. These conversations only advance our goals of building tools that better society while safeguarding privacy and maintaining strict standards of data protection.

Methods and approaches may change based on technological developments, but our commitment to leveling the playing field and eliminating bias for candidates and employers remains the same.