If HR Tech had an unofficial theme this year, it was “Artificial Intelligence.” From enterprises to startups, it seemed like every major player in the HR Tech space was pitching a brand-new AI-powered solution.
With so many vendors selling what appear to be very similar products, buyers of AI-powered technology need to ask questions that separate the great providers from the not-so-great.
1. What data does the solution need to function properly? How does it learn over time?
Most HR AI solutions rely on an area of AI called Machine Learning. Unlike traditional, static computer programs, these systems can improve continuously: as more data becomes available, the algorithm makes increasingly accurate decisions. This improvement relies on user input and feedback, which tailor the algorithm to your specific business needs.
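To make the learn-from-feedback loop concrete, here is a toy sketch of a model that updates its weights each time a user corrects one of its predictions. The features, labels, and learning rate are all made up for illustration; real HR systems use far richer models.

```python
# Toy online learner: a perceptron that updates its weights each time
# a user confirms or corrects one of its predictions.

def predict(weights, features):
    """Score a candidate's features; a positive score means 'advance'."""
    return sum(w * x for w, x in zip(weights, features))

def update(weights, features, label, lr=0.1):
    """Nudge weights toward the user's feedback (label is +1 or -1)."""
    if predict(weights, features) * label <= 0:  # model was wrong
        weights = [w + lr * label * x for w, x in zip(weights, features)]
    return weights

# Feedback loop: each (features, label) pair is one piece of user input.
weights = [0.0, 0.0]
feedback = [([1.0, 0.0], 1), ([0.0, 1.0], -1), ([1.0, 1.0], 1)]
for features, label in feedback:
    weights = update(weights, features, label)
```

The point of the sketch: without the feedback pairs, the model never moves off its initial weights, which is exactly why a vendor's data requirements matter up front.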
It is important that you know what sort of input the AI solution requires from your business before you buy – otherwise you may spend the entirety of your contract collecting data, rather than getting results.
2. Is there demonstrable ROI in implementing the AI solution?
Ask your provider for ROI metrics. Implementing an AI solution often requires fundamentally changing the way you do things. Any AI technology worth its salt will mandate some pretty significant process change and alter day-to-day workplace routines (usually this means current employees spend much less time on transactional work).
No matter how powerful or predictive the algorithm is, if it does not deliver demonstrable return on investment, it may get cut the next fiscal year. All the work that went into workplace change and data gathering will need to be rolled back, and implementing change will be more difficult the next time around.
3. Does the solution improve the candidate experience?
This is a common theme in AI-powered software: it performs manual, time-consuming tasks so employees can focus on more fulfilling, strategic, and “essentially human” work. But if you’ve eliminated manual tasks at the expense of the candidate, you are moving in the wrong direction.
Chatbots, for instance, are one of the most popular applications of AI in recruiting. A well-programmed chatbot can screen applicants’ qualifications, answer job-related questions, and schedule interviews. Properly implemented, they can help candidates avoid application “black holes” by delivering immediate feedback on the status of their application. Improperly implemented, they give candidates tepid, horoscopic responses heavy on jargon and lacking in substance – receiving no response at all would be a better experience.
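The difference between a substantive response and a "black hole" can be sketched with a minimal rule-based bot. Production chatbots use intent-classification models rather than keyword matching, and every message and answer below is hypothetical:

```python
# Minimal rule-based screening bot: matches candidate messages to canned
# intents and always returns an immediate, substantive response.

FAQ = {
    "salary": "The posted range for this role is in the job description.",
    "status": "Your application is in recruiter review; expect an update within 5 business days.",
    "schedule": "Here are the open interview slots: Mon 10:00, Tue 14:00.",
}

def reply(message):
    """Return the first matching canned answer, or a graceful handoff."""
    text = message.lower()
    for keyword, answer in FAQ.items():
        if keyword in text:
            return answer
    return "I've forwarded your question to a recruiter, who will follow up directly."
```

Note the fallback: when the bot doesn't know, it hands off to a human instead of emitting a vague non-answer.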
4. What does this AI do that a conventional program cannot?
Machine Learning is marked by its unique ability to learn from data and make decisions or predictions. Does the problem you’re trying to solve really need that level of sophistication?
If it does not, you will be overpaying for an underutilized solution.
5. How do you control for adverse impact and mitigate bias?
With increasing complexity, some artificially intelligent algorithms become “black boxes,” making it difficult to detect or control for bias. A “black box” algorithm has such opaque inner workings that it is impossible for humans to understand the rationale behind its decisions, letting potentially harmful biases go undiscovered. While this sort of approach might work for some applications, it poses a serious problem for talent acquisition.
To illustrate, at HireVue a key component of our Assessments solution is our customers’ job performance data. In some situations, that data is biased: for one reason or another, some demographics are underrepresented, or one demographic outperforms its otherwise equally qualified peers. Since the algorithm is trained to recognize patterns in that data, it will likely mimic that bias. For that reason, we have a process for identifying and removing any features that could play a role in predicting a candidate’s demographic group. This mitigates any adverse impact that may exist in the model.
Of course, it is then important for those organizations to understand why certain demographics in their workforce are underrepresented or underperforming on their chosen metrics.
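One simple way to flag such proxy features is to measure how strongly each feature correlates with a protected attribute and drop the strong predictors. This is only an illustrative sketch with made-up data and a made-up threshold, not a description of any vendor's actual process:

```python
# Sketch: drop features whose correlation with a protected attribute
# exceeds a threshold. All data and the threshold are hypothetical.
from statistics import mean

def correlation(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def drop_proxy_features(features, protected, threshold=0.5):
    """Keep only features weakly correlated with the protected attribute."""
    return {
        name: values
        for name, values in features.items()
        if abs(correlation(values, protected)) < threshold
    }

protected = [1, 1, 0, 0]  # hypothetical group-membership labels
features = {
    "typing_speed": [0.2, 0.4, 0.3, 0.5],     # weak relationship -> kept
    "zip_code_bucket": [1.0, 1.0, 0.0, 0.0],  # perfect proxy -> dropped
}
kept = drop_proxy_features(features, protected)
```

Real-world pipelines go well beyond pairwise correlation (features can encode group membership jointly), but the principle is the same: test the features, not just the outcome.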
If your AI solution plays any role in who gets hired and who doesn’t, you need to make sure the vendor can actively test and alter an algorithm to remove the potential for adverse impact.
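One widely used adverse-impact test is the EEOC “four-fifths rule”: a selection procedure is flagged if any group’s selection rate falls below 80% of the highest group’s rate. A short sketch, with hypothetical group names and counts:

```python
# EEOC "four-fifths rule": flag adverse impact if any group's selection
# rate falls below 80% of the highest group's selection rate.

def selection_rates(outcomes):
    """outcomes: {group: (selected, applicants)} -> {group: rate}"""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes):
    """Return True if every group's rate is at least 80% of the highest."""
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    return all(rate >= 0.8 * highest for rate in rates.values())

# Hypothetical hiring outcomes per demographic group.
passing = four_fifths_check({"group_a": (50, 100), "group_b": (45, 100)})
failing = four_fifths_check({"group_a": (50, 100), "group_b": (30, 100)})
```

A vendor that can run checks like this on an ongoing basis, and retrain or alter the model when a check fails, is in a much stronger position than one that cannot explain its algorithm's decisions at all.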
About the Author:
Lindsey Zuloaga, PhD, is HireVue’s Director of Data Science. She holds a PhD in Applied Physics from Rice University and leads HireVue’s Data Science team, building the sophisticated machine learning algorithms that analyze video interviews and make hiring fairer. Find her on LinkedIn.