
AI for Recruiting: the Good, the Bad, and the Unknown

This may or may not have made it on your newsfeed in 2018: the UK is making strides in regulating Artificial Intelligence.

The House of Lords’ recently formed AI committee shared a short “AI Code” to try to influence the development of the AI industry, rather than “passively accept its consequences”.

It touches on things like the concentration of data with larger companies, the need for more competition in the industry, and the necessity to educate the public about it. Mostly, it focuses on the ethics of AI, an ambitious undertaking if there ever was one.

Two principles of that code are particularly interesting for recruiters. The first one gets back to the timely subject of GDPR and data protection: “Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities”.

Data privacy was already in the public eye because of the GDPR coming into effect, of course, but it is now an ultra-sensitive subject, partly due to all the great press that Facebook and Cambridge Analytica have been getting over the last couple of years, and the global conversation they have sparked. We’ve touched on the ethics of using candidate data before, so we won’t talk more about this principle.

We’d much rather discuss this one instead: “Artificial intelligence should operate on principles of intelligibility and fairness”. It’s a lot trickier because the issues surrounding artificial intelligence and fairness are not always obvious to buyers of AI-powered software.

Not-so-impartial AI

We think of machines as inherently unbiased and fact-oriented, and in theory, they are. The problem is, machines rely on humans to provide the “facts” they base their decisions on. How does that happen?

First, a quick refresher on what AI is:

[Image: Vishal Maini’s definition of artificial intelligence]

The definition above, by Vishal Maini from DeepMind, is one of the most complete out there, and his Machine Learning for Humans is a good place to get a semi-technical intro to the subject.

An important dimension of an artificial intelligence is that it is capable of learning. “Machine learning” enables an AI to look at a data set, such as a series of decisions made by a human based on specific pieces of information, and learn how to make similar decisions based on future pieces of information.

Take your email spam filter, for example. It decides which emails to send to spam based on things like the email’s provenance and metadata, but also based on what you have marked as spam. If you accidentally mark a message from your boss as spam a couple of times, there is a chance your inbox will start filtering out their future emails as well. I know, because I’ve done it. My boss probably still thinks it was intentional.
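To make that concrete, here is a minimal sketch of such a filter, written in Python with scikit-learn and a handful of invented emails. A real spam filter is far more sophisticated, but the learning step looks essentially like this.

```python
# A toy sketch of a "learning" spam filter: it learns word patterns from
# emails the user has already marked, then applies them to new messages.
# (Invented example data; not a production filter.)
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

past_emails = [
    "WIN a free cruise, claim your prize now",
    "Quarterly report attached, please review before Friday",
    "Cheap meds, limited time offer, click here",
    "Team lunch moved to 1pm tomorrow",
]
marked_as_spam = [1, 0, 1, 0]  # 1 = the user sent it to spam, 0 = kept it

spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(past_emails, marked_as_spam)

# Future emails are judged by the patterns in those past decisions,
# including any accidental ones, such as the flagged email from your boss.
print(spam_filter.predict(["Claim your free prize today"]))      # likely [1]
print(spam_filter.predict(["Budget review meeting on Friday"]))  # likely [0]
```

The key point is that the model never sees a rule; it only sees your past decisions, and it faithfully reproduces them.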

And that's the danger in machine learning: if the past decisions that the AI learns from happen to have a bias embedded in them, it will learn that bias as well. For example, if you subconsciously mark emails from foreign-sounding senders as spam more often, your AI-powered spam filter could eventually let through only the safe, familiar, western-sounding names.
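The same mechanism is easy to demonstrate in a screening-like setting. The sketch below uses invented toy data (it is not any vendor's actual model): the historical labels were skewed against foreign-sounding names, so the trained model penalises them too.

```python
# Hedged illustration: a model trained on biased past decisions reproduces
# the bias. All data here is invented for the example.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

past_candidates = [
    {"years_experience": 5, "name_sounds_foreign": 0},
    {"years_experience": 5, "name_sounds_foreign": 1},
    {"years_experience": 3, "name_sounds_foreign": 0},
    {"years_experience": 3, "name_sounds_foreign": 1},
]
# Equally qualified pairs, but the human screeners advanced the
# foreign-sounding names less often. That bias is now in the labels.
advanced_to_interview = [1, 0, 1, 0]

model = make_pipeline(DictVectorizer(), LogisticRegression())
model.fit(past_candidates, advanced_to_interview)

# Two new candidates, identical except for the name feature: the model
# has learned to score them differently, just as its training data did.
print(model.predict_proba([{"years_experience": 4, "name_sounds_foreign": 0}]))
print(model.predict_proba([{"years_experience": 4, "name_sounds_foreign": 1}]))
```

Nothing in the code is malicious; the bias comes entirely from the labels it was given.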

We’re already seeing examples of that type of bias in decisions where the stakes are far higher, like discriminating against the poor in custodial decisions, or predicting higher probability of criminal behavior based on race, to only mention a couple. And yet, many private companies and public institutions keep blindly relying on AI without understanding these risks.

AI for recruiting: what is it?

In terms of decisions with potentially catastrophic consequences for people’s lives, recruiting is pretty high on the list.

We use AI-powered technology in many aspects of recruiting work:

  • Ideal does initial screening and scoring based on resumes and augmented data
  • X.ai schedules meetings using a bot
  • Filtered.ai generates interview tests for engineers

A bunch of other companies, including Beamery, use AI or machine learning in some form or another to handle parts of the recruiting process.

And they should. There is a lot of room for humans to become more efficient by handing the repetitive, process-heavy work to machines. That time can then be used on tasks where human judgement or contact is necessary, like meeting with candidates face-to-face or assessing their relational skills.

What can we do about it?

It’s not only on companies to build ethical AI. It’s also on users and buyers of software to understand the risks in relying on machines to make decisions that deeply impact human lives.

What does that mean for companies who hire using AI-powered software? They have to be extremely careful about the data they feed to the software, and understand how the “learning” in “Machine Learning” happens.

As firms tend to be very secretive about the technology behind their products, buyers have to use their position to demand clarity from their vendors, rather than simply settling for “oh, it works.”


This is an interesting time in the history of Artificial Intelligence. Both governments (like the UK) and businesses (like DeepMind or Benevolent AI) are taking an interest in the ethical implications of the technology. There is enough public interest around it that people are starting to educate themselves on the subject.

It also looks like we’re on the verge of a sudden and seemingly unexpected explosion in the advancement of AI, so now is the time to pay attention, otherwise it will be over before you’ve even had a chance to take your first look at it.
