HireVue’s AI Explainability Statement: An HR industry first

March 31st, 2022
Dr. Lindsey Zuloaga, Chief Data Scientist
Artificial Intelligence

In my role as Chief Data Scientist at HireVue, I spend a lot of time in conversation with 3rd party researchers who have questions about AI in hiring. Inquiries come from a range of fields, from AI ethics to philosophy. As a leader in the HR industry and a champion of transparency around AI in hiring, we believe that sharing our experience and explaining in detail how our technology works creates a shared understanding and helps clear up false assumptions, both about AI and hiring broadly and about HireVue’s products specifically. In partnership with our Chief IO Psychologist, Nathan Mondragon, we have also used these opportunities to identify areas for future research, collaborations, and projects that HireVue can undertake to push the field of AI in hiring toward its most ethical and transparent iteration.

The HireVue Science Team is deeply committed to ensuring that our processes are significantly more objective than traditional methods, not just because it’s the right thing to do, but because of the scale at which we operate (we conducted over 7.5 million interviews in 2021).

Furthermore, the Science Team’s commitment to pioneering research and fairness has resulted in several industry firsts, including the creation of our Expert Advisory Board, publishing AI ethical principles, and a commitment to voluntarily undergoing a series of 3rd party audits (including one of the industry’s first algorithmic audits) and publicly publishing the results.

Today I’m excited to say that we get to add another milestone to our list: HireVue is the first HR technology company (and one of the first companies in the world) to release an AI Explainability Statement.

Quickly becoming the new gold standard in transparency, AI explainability statements document meaningful information about the logic involved in a given AI-based solution. HireVue’s statement was reviewed by the ICO (the UK’s Information Commissioner’s Office) and supported by Best Practice AI, together with Simmons & Simmons and Jacob Turner of Fountain Court Chambers. Grounded in the EU General Data Protection Regulation’s requirement to provide meaningful information about the logic involved in automated decision-making, explainability statements are written with a multitude of stakeholders in mind, including technology buyers and end users (in our case, candidates).

Given that the concept of the explainability statement is so new, I thought it made sense to share an FAQ about the process and its outcomes.

What is the final statement and how is it used?

The intention is for this document to be used by multiple audiences, from enterprise companies looking to transform their hiring practices and better understand the technology, to candidates applying for their first-ever job who want to know what our algorithms look for and how they work. The AI Explainability Statement is a living document that will be updated from time to time as regulations evolve and our software is updated.

Why did HireVue publish an AI Explainability Statement?

As with all of our pioneering steps in transparency, we embarked on this process voluntarily. Regulatory bodies are beginning to discuss the possibility of requiring AI Explainability Statements, and we support this path forward for all. Perhaps the best example of this momentum is the prominent role the concept plays in the EU’s draft AI Regulation.

The final decision to move forward with the project, whether or not such a requirement is ever codified into law, was confirmed at one of our frequent internal roundtable discussions. A group of prominent AI researchers and skeptics agreed that it was a meaningful step toward greater transparency, which affirmed for us that our instinct to undertake the project was correct.

How was the statement prepared?

The HireVue team answered an extensive set of structured questions from the Best Practice AI team, including questions about: the AI design principles we follow; the data types we use; our procedures for protecting data and preserving its integrity; the consulting procedures used to implement the solution; how candidates interact with our system and receive quality feedback; how we measure bias and fairness in our algorithms; and our commitment to continual improvement to further democratize hiring.

Over the course of many months, HireVue teams worked with these 3rd party reviewers to build a straightforward, simple explanation of how our AI solution works. The AI Explainability Statement we are sharing today was created as an overview of the topics most important to understanding the HireVue approach, one that captures the ongoing work HireVue puts into managing and reducing potential risks.

Who was involved in reviewing the statement?

The AI Explainability Statement was created over the course of a year with multiple reviews and revisions from several working groups, including:

The ICO: The UK’s independent authority set up to uphold information rights in the public interest, promoting openness by public bodies and data privacy for individuals.

Best Practice AI Ltd: Best Practice AI is a London-based AI management consultancy that advises corporates, start-ups, and investors on AI strategy, implementation, risk, and governance. The firm is a member of the World Economic Forum’s Centre for the Fourth Industrial Revolution and worked on the WEF’s Empowering AI Leadership toolkits for boards and the C-suite, as well as its AI governance frameworks. They are on the WEF’s Global AI Council and publish the world’s largest library of AI case studies and use cases at https://www.bestpractice.ai. With their partners, they are the world’s leading publishers of AI Explainability Statements.

Simmons & Simmons: Simmons & Simmons is an international law firm with a dedicated AI Group and extensive data protection compliance experience. The firm has around 280 partners and 1,300 staff working across 21 offices in 19 countries in Asia, Europe, and the Middle East. They work across Asset Management & Investment Funds, Financial Institutions, Healthcare & Life Sciences, and Telecoms, Media & Technology (TMT). For more information, visit https://www.simmons-simmons.com.

Jacob Turner of Fountain Court Chambers: Jacob Turner is a barrister at Fountain Court Chambers. He is the author of Robot Rules: Regulating Artificial Intelligence. He advises governments, regulators, and businesses on data protection and AI regulation.

As excited as we are about today’s announcement and the chance to share this work with our customers and the public, our list of firsts won’t stop here. We will continue to lead the industry toward greater transparency and toward regulation that protects candidates, companies, and innovation.