Using AI in talent tech can both widen the pool of potential candidates for recruiters and make it easier for them to identify the candidates best suited for engagement. Such functionality can be extremely valuable to an organization, but it comes with risks. As recruiters keep raising their expectations for the ease of use of talent tools, the incentive for providers to push “safe” candidates at them using outdated or biased selection criteria keeps getting higher.
Filtering Out Instead of Opening Up: How AI Leaves Great Talent Off the Table
One way that AI powers talent technology is by automating the process of finding and selecting candidates on a much larger scale. In the case of suggesting a candidate for an open role, for instance, the algorithm may process hundreds of fields associated with every existing candidate in the database, and return a list of likely possibilities for the recruiter to consider.
This functionality matters because it has a huge impact on the recruiter’s workflow and makes them dramatically faster. With technology like this in place, they no longer have to build extensive filters in their Candidate Relationship Management (CRM) tool, or write string after string of boolean search terms. Instead, they might simply point to an existing profile in the database and ask for “similar candidates”, or they might select a previously built list of requirements and ask the system to find candidates that match it.
These tools can, however, skew toward providing recruiters with the safest “bets” to maximize the chance that the recruiter finds the tool useful and adopts it further. The wider issue occurs when the talent tool has pre-existing filters in place that automatically screen on criteria such as college-level degrees, gaps in employment, visa requirements, or the use of specific jargon in a resume.
If these requirements are suggested by default to the recruiter, or—even worse—hard-coded in the algorithm, they will automatically narrow the pool of potential candidates to exclude perfectly eligible talent again and again: people who already hold similar roles but were hired into them when certain degrees or certifications were not required, people who went on medical leave or were made redundant during the pandemic, highly qualified migrants who need work sponsorships, to mention only a few.
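To make the mechanism concrete, here is a minimal, purely illustrative sketch of how default filters hard-coded into a tool can silently shrink the pool. The field names and candidates are hypothetical, not drawn from any real product:

```python
# Illustrative only: hard-coded default filters silently excluding
# eligible candidates. All field names and profiles are hypothetical.

CANDIDATES = [
    {"name": "A", "has_degree": True,  "employment_gap": False, "needs_sponsorship": False},
    {"name": "B", "has_degree": False, "employment_gap": False, "needs_sponsorship": False},  # hired before degrees were required
    {"name": "C", "has_degree": True,  "employment_gap": True,  "needs_sponsorship": False},  # medical leave or pandemic layoff
    {"name": "D", "has_degree": True,  "employment_gap": False, "needs_sponsorship": True},   # qualified migrant needing sponsorship
]

def suggest(candidates, default_filters=True):
    """Return suggested candidates, optionally applying built-in filters."""
    if not default_filters:
        return candidates
    return [
        c for c in candidates
        if c["has_degree"] and not c["employment_gap"] and not c["needs_sponsorship"]
    ]

# With the defaults on, three of the four eligible people never reach the recruiter.
print([c["name"] for c in suggest(CANDIDATES)])                        # only "A"
print([c["name"] for c in suggest(CANDIDATES, default_filters=False)]) # full pool
```

The point of the sketch is that the exclusion happens before the recruiter sees anything: nothing in the output signals that B, C, and D were ever candidates at all.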
How to Make Sure AI Is Helping, Not Hurting, Your Talent Search
One recurring theme of the post-pandemic talent market is the difficulty of finding candidates for all the new roles opening up. Using processes and technologies that further narrow the list of potential hires, instead of widening it, is the opposite of what talent acquisition teams should be doing right now.
The irony of it is that one of the most powerful ways to use AI in talent is to uncover these untapped talent pools. Well-built models can learn to infer whether someone is likely to be able to do a job without relying on shortcuts such as degrees or specific keywords. The impact on a company’s approach to hiring is huge: they can go one level deeper in their search for talent, and look directly for skills and capabilities instead of job titles or past experiences.
The right AI will process hundreds of thousands of candidate profiles simply to learn how to identify a single skill in future candidates’ profiles. That level of insight cannot be replaced by a handful of pre-programmed proxies, such as whether someone uses a specific phrase to describe their skill set, or whether they have a degree for it. Unfortunately, some AI tools do exactly that, only faster and at a larger scale. Instead of getting rid of the processes that narrow down a company’s choices, they supercharge them—hiding them from view behind the UX.
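The difference between a pre-programmed proxy and a learned skill signal can be shown with a toy contrast. This is not a real model: the résumé snippets are invented, and the “learned” generalization across phrasings is faked with a hand-written set of indicative phrases, standing in for what a trained model would pick up from hundreds of thousands of profiles:

```python
# Toy contrast (illustrative, not a real model): a single keyword proxy
# vs. a skill signal that generalizes across phrasings.

RESUMES = {
    "Eve": "Built predictive models to forecast customer churn",
    "Raj": "Machine learning engineer, 5 years of experience",
    "Lin": "Trained gradient-boosted classifiers on clickstream data",
}

def keyword_proxy(text):
    # The shortcut: one exact phrase stands in for the whole skill.
    return "machine learning" in text.lower()

# A trained model would learn many ways people describe the same skill;
# here that generalization is hand-written for the sake of the sketch.
LEARNED_SIGNALS = {"machine learning", "predictive models", "classifiers"}

def skill_inference(text):
    return any(signal in text.lower() for signal in LEARNED_SIGNALS)

print([n for n, t in RESUMES.items() if keyword_proxy(t)])    # misses Eve and Lin
print([n for n, t in RESUMES.items() if skill_inference(t)])  # finds all three
```

The keyword proxy surfaces only the candidate who used the expected phrase; the broader signal recognizes the same capability described three different ways.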
The risk to companies here is obvious. Not only do they keep competing with everyone else for the exact same individuals, they also overlook perfectly eligible talent that could bring much-needed diversity to their workforce. In the worst-case scenarios, they may be perpetuating biases that actively harm and exclude entire candidate populations from their process.
These risks will only increase as regulators raise the bar on what is expected from providers and buyers of AI technology for HR and talent. Not only do talent teams now need to understand how a tool makes a hiring recommendation, they need to be able to interrogate it and track down the data points it used if an audit becomes necessary. That is what “explainable AI” means.
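One hypothetical design for that auditability is a recommendation object that carries its own evidence trail, recording each data point that contributed to the score. The structure and scoring below are invented for illustration, not a description of any actual tool:

```python
# Hypothetical sketch: a recommendation that records which data points
# it used, so a talent team can interrogate it during an audit.

from dataclasses import dataclass, field

@dataclass
class Recommendation:
    candidate_id: str
    score: float
    evidence: list = field(default_factory=list)  # data points behind the score

def recommend(profile):
    rec = Recommendation(candidate_id=profile["id"], score=0.0)
    # Each scoring step logs what it looked at alongside its contribution.
    for skill in profile.get("skills", []):
        rec.score += 0.2
        rec.evidence.append(f"skill match: {skill}")
    return rec

rec = recommend({"id": "c-42", "skills": ["sql", "forecasting"]})
print(rec.score)     # the recommendation itself
print(rec.evidence)  # the audit trail for regulators or internal review
```

A black-box tool returns only the score; an explainable one can also answer the follow-up question “why this candidate?” with the evidence list.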
If there is only one thing to be remembered from the research highlighted in the Harvard Business School report “Hidden Workers: Untapped Talent”, it’s that technology is only half of the equation when it comes to any talent transformation strategy.
Unexplainable, black-box AI tools usually perpetuate a large part of the behaviors that a talent transformation plan aims to change. When it comes to AI, recruiters have to be able to understand the logic behind their tools, otherwise they are not really given an opportunity to fundamentally change their approach to talent acquisition. And without that change, they will inevitably miss their chance at leading the way in overcoming the challenges of this new and rapidly changing world of work.