
Safety, Transparency & Fairness: How HR Teams Can Use AI Responsibly

The UK government recently launched an AI white paper to guide the use of artificial intelligence in the UK, in a bid to drive responsible innovation and maintain public trust in this revolutionary technology.

The white paper doesn’t recommend new legislation, or the introduction of a single “AI regulator”, but proposes that existing regulators in certain sectors should guide and police the development and deployment of AI (with “tailored, context-specific approaches”) based on a non-statutory framework.

This framework consists of five principles: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.

While there are interesting questions about how an independent UK approach to AI will operate in an interconnected world, the principles represent a solid starting point for any business looking to deploy AI to speed up processes, find higher-quality answers to questions, and remove some of the human bias from decision-making.

Using AI in Talent Management

For employers and HR teams more specifically, AI can provide huge benefits across the talent management process and every stage of the talent lifecycle. But, as we have seen with the recent rise of ChatGPT, some of the most promising use cases of AI come with a range of potential pitfalls, and have sparked debate about their wider societal impact.

The key is to use innovative new technologies in ways that don’t bring more risk into the business. It’s always worth considering how best to protect and empower the people most affected by the technology – that is, the candidates and employees we are making decisions about. The five principles proposed by the UK government provide a useful start.

Safety, security and robustness

The UK government says that “applications of AI should function in a secure, safe and robust way, where risks are carefully managed.”

For employers, this would mean that any AI system used in HR processes – such as those used to generate recommendations about whom to hire or promote – should be designed and implemented in such a way that it does not create risks for candidates or employees. It should be protected against cybersecurity threats and should not compromise anyone’s privacy.

For HR teams working with technology vendors, there are important questions to ask relating to the collection and use of data, the models involved, and the experiences served to users. Are they conducting regular audits, monitoring the performance of the AI system, and providing training to employees on the ethical use of AI?

It’s worth noting that certain applications of AI are easier to audit than others. Using ChatGPT, for example, to match jobs to candidates and rank their suitability could yield valuable insights for talent acquisition (TA) teams. But the risk is not simply that these recommendations can be biased or inaccurate; it is that they cannot be audited for bias, and that there is no way to ensure consistency in how recommendations are made.
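As a minimal sketch of what that consistency problem looks like in practice, the check below re-runs the same ranking request several times and measures how often two runs agree on the relative order of a pair of candidates. The `rank_candidates` callable is a hypothetical stand-in for whatever model call a team makes; nothing here reflects a specific vendor API.

```python
# A minimal sketch of a consistency check for LLM-based candidate ranking.
# rank_candidates is a hypothetical callable assumed to return a list of
# candidate IDs ordered from most to least suitable for the job.
from itertools import combinations

def pairwise_agreement(ranking_a: list[str], ranking_b: list[str]) -> float:
    """Fraction of candidate pairs ordered the same way in both rankings."""
    pos_a = {c: i for i, c in enumerate(ranking_a)}
    pos_b = {c: i for i, c in enumerate(ranking_b)}
    pairs = list(combinations(pos_a, 2))
    agree = sum((pos_a[x] < pos_a[y]) == (pos_b[x] < pos_b[y]) for x, y in pairs)
    return agree / len(pairs)

def consistency_audit(rank_candidates, job, candidates, runs: int = 5) -> float:
    """Re-run the same ranking request and report mean pairwise agreement.

    A result well below 1.0 means identical inputs produce different
    recommendations, which makes a formal bias audit hard to perform.
    """
    rankings = [rank_candidates(job, candidates) for _ in range(runs)]
    scores = [pairwise_agreement(a, b) for a, b in combinations(rankings, 2)]
    return sum(scores) / len(scores)
```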

Transparency and explainability

The white paper also asks that “organisations developing and deploying AI should be able to communicate when and how it is used and explain a system’s decision-making process in an appropriate level of detail that matches the risks posed by the use of AI.”

Companies must be transparent with job candidates about the use of AI in the recruitment process and the types of data that will be collected and used, or risk losing public trust in the technology.

This also forms part of upcoming New York City legislation that regulates employers’ and employment agencies’ use of “automated employment decision tools” in making employment decisions. Employers there have to comply with various requirements, including:

1. A yearly bias audit, with a summary of the results made publicly available
2. Providing candidates with notice of automated processing
3. Allowing candidates or employees to request an alternative process or accommodation
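Bias audits of this kind typically centre on selection rates: how often candidates from each demographic group are selected, and how each group’s rate compares with that of the most-selected group. Below is a minimal sketch of that core calculation; the record format and group labels are illustrative assumptions, not a compliance tool.

```python
# A minimal sketch of the core calculation behind a selection-rate bias
# audit: each group's selection rate is compared to the rate of the
# most-selected group. A ratio below 0.8 for any group is the traditional
# "four-fifths rule" red flag that warrants investigation.
# The (group, selected) record format is an illustrative assumption.
from collections import defaultdict

def impact_ratios(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    totals: dict[str, int] = defaultdict(int)
    selected: dict[str, int] = defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += int(was_selected)
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values()) or 1.0  # avoid dividing by zero if nobody was selected
    return {g: rate / best for g, rate in rates.items()}

# Toy example: group_a is selected half the time, group_b two-thirds of
# the time, giving group_a an impact ratio of 0.75 (below four-fifths).
print(impact_ratios([
    ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False),
]))
```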

HR teams should also be able to understand why certain recommendations were made, and point to how much each factor contributed to a recommendation. Explainable AI makes this possible: recommendations can be explained to users, and indeed to the candidates affected by the resulting decisions.

For recruiters, if you use AI to help you find suitable candidates for a Project Manager position, the tool should make it clear why particular candidates were recommended. It should be obvious why they are relevant (skills, proficiency, experience), rather than candidates simply being given an overall score. It can’t be a black box.

Understanding why suggestions are proposed by the AI makes it far more helpful as a tool for HR teams – and fairer.
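As a minimal sketch of what that looks like in data terms, the example below attaches a per-factor breakdown to each recommendation instead of a single opaque number. The factor names and equal weighting are illustrative assumptions, not a description of any particular vendor’s model.

```python
# A minimal sketch of an explainable recommendation: each candidate gets
# a per-factor breakdown a recruiter (or the candidate) can inspect,
# rather than one opaque score. Factor names and the equal weighting
# are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Explanation:
    candidate_id: str
    factors: dict[str, float]  # factor name -> contribution in [0, 1]

    @property
    def score(self) -> float:
        # Simple unweighted mean, purely for illustration.
        return sum(self.factors.values()) / len(self.factors)

    def summary(self) -> str:
        ranked = sorted(self.factors.items(), key=lambda kv: -kv[1])
        details = ", ".join(f"{name}={value:.2f}" for name, value in ranked)
        return f"{self.candidate_id}: {self.score:.2f} ({details})"

rec = Explanation("cand_042", {
    "skills_match": 0.92,         # overlap with the role's required skills
    "proficiency": 0.75,          # assessed level in those skills
    "relevant_experience": 0.60,  # time spent in comparable roles
})
# Prints: cand_042: 0.76 (skills_match=0.92, proficiency=0.75, relevant_experience=0.60)
print(rec.summary())
```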

Fairness

According to the white paper, “AI should be used in a way which complies with the UK’s existing laws, for example the Equality Act 2010 or UK GDPR, and must not discriminate against individuals or create unfair commercial outcomes.”

Of course, talent teams want to find ways to be more efficient and speed up time to source and time to hire. But that can’t come at the expense of fairness and equity.

While some people are concerned that machines are learning from data that contains bias, and could thereby exacerbate inequalities in society, AI can actually help reduce bias. It can boost diversity in hiring and candidate selection by removing some of the “old method” of progression – the tap on the shoulder. In a world where some people have an advantage based on who they know rather than what they know, AI can offer a more objective form of assessment, and can remove much of the unfairness inherent in talent-related decisions.

Of course, HR teams who are using AI, or working with AI-powered third-party tools, must be able to reassure themselves (and current and future employees) that there is no bias programmed into the systems they use. As mentioned, they need to be able to point to the rationale behind recommendations that an AI has made, such that there is full transparency in why one person was suggested for a role over another. And they need to comply with all relevant regulations governing the use of AI in the context of the workplace, as well as data protection laws.

AI can also promote fairness in recruitment by identifying and recommending individuals who may have been overlooked or not considered through traditional recruitment methods. By analyzing skills data, the AI algorithm can identify relevant skills for specific job roles and use that information to recommend individuals who possess those skills, even if they have not applied for the job themselves.

This process can help to eliminate bias in recruitment by providing a more objective assessment of a candidate's potential and abilities. Additionally, by identifying individuals who may not have felt confident enough to apply, the AI algorithm can help to increase diversity in the applicant pool, promoting a more inclusive hiring process.
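A minimal sketch of this sourcing pattern, assuming simple set overlap between a candidate’s skills and a role’s required skills (real systems use richer skill taxonomies and inference), might look like this:

```python
# A minimal sketch of skills-based sourcing: score everyone in a talent
# pool against a role's required skills, not just the people who applied.
# Plain set overlap is a deliberate simplification.
def skills_match(candidate_skills: set[str], required_skills: set[str]) -> float:
    """Fraction of the role's required skills the candidate holds."""
    if not required_skills:
        return 0.0
    return len(candidate_skills & required_skills) / len(required_skills)

def surface_overlooked(
    pool: dict[str, set[str]],   # candidate ID -> known skills
    required: set[str],          # skills the role calls for
    applicants: set[str],        # IDs of people who already applied
    threshold: float = 0.6,
) -> list[tuple[str, float]]:
    """Return strong matches from the pool who never applied, best first."""
    scored = (
        (cid, skills_match(skills, required))
        for cid, skills in pool.items()
        if cid not in applicants
    )
    return sorted(
        ((cid, score) for cid, score in scored if score >= threshold),
        key=lambda pair: pair[1],
        reverse=True,
    )
```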

Accountability and governance

The UK government suggests that “measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes.”

Again, HR teams need to ensure that they are asking the right questions of their technology partners and suppliers, including whether any AI tools have been audited by a third party, what their ethical principles are, and whether they have an explainability document.

We highly recommend a ‘human-in-the-loop’ approach to deploying AI in HR, whereby machines are never the ones making decisions about people. A real person should always be reviewing the suggestions made, with an understanding of the factors that contributed to those suggestions, and making the ultimate decision about hiring, reviewing, promoting or redeploying talent.
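In system terms, this means the model’s output is only ever a suggestion, and no decision exists until a named reviewer records one. The sketch below illustrates that gate; the field names are illustrative assumptions.

```python
# A minimal sketch of a human-in-the-loop gate: the model only ever
# produces suggestions, and nothing becomes a decision until a named
# human reviewer records one. Field names are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Suggestion:
    candidate_id: str
    action: str                # e.g. "shortlist", "promote", "redeploy"
    factors: dict[str, float]  # the factors behind the suggestion

@dataclass
class Decision:
    suggestion: Suggestion
    reviewer: str              # the accountable human, always recorded
    approved: bool
    decided_at: datetime

def record_decision(suggestion: Suggestion, reviewer: str, approved: bool) -> Decision:
    """The only way a suggestion becomes a decision: via a human reviewer."""
    if not reviewer:
        raise ValueError("A human reviewer is required; the model never decides alone.")
    return Decision(suggestion, reviewer, approved, datetime.now(timezone.utc))
```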

Contestability and redress

According to the white paper, “people need to have clear routes to dispute harmful outcomes or decisions generated by AI.”

Again, we would suggest that – in the context of employment-related decisions – AI is used as a tool: to serve up recommendations, and to clean and enrich data. With humans in the loop, it should be clear who is ultimately responsible and accountable for any decisions made (and how candidates were scored and ranked, for example).

It’s wise to ensure that candidates and employees are well informed about where and how AI is used, that the relevant policies are clear and easy to locate, and that your AI audit (or your provider’s) is clearly signposted.

A huge opportunity

Used correctly, AI can help HR teams to identify and reduce bias in decision making, create more personalized candidate and employee experiences, and massively improve efficiency and effectiveness in the team – providing better insights, faster.

The white paper is seeking responses by 21 June 2023, and the UK government then proposes to publish its response by September 2023. Those involved in the development and use of AI in the UK should monitor these developments to see whether the approach set out in the white paper is maintained after the consultation closes, and plan their next steps.

Powerful, explainable AI-powered technologies from Beamery

At Beamery, we propose using explainable AI in conjunction with skills data, to ensure you are creating a fair and equitable process for all candidates and employees. Working with data about people’s skills and potential skills – rather than focusing on experience or protected characteristics – means recommendations are fairer and more objective.

We have recently announced TalentGPT, a proprietary AI technology we’ve developed over the past 3 years. It is built on top of the Beamery Talent Graph – which tracks over 17 billion data points about candidates, companies, skills and jobs – and augments our bias-audited AI models with generative AI from pre-trained LLMs. Crucially, deploying multi-purpose LLMs within these bias-audited AI controls allows Beamery to mitigate the risks typically associated with the use of ChatGPT and other models.

Read our report on the State of AI in Talent Management to learn more about powering talent acquisition, talent mobility and talent retention with cutting-edge AI.

The information provided on this website does not, and is not intended to, constitute legal advice; instead, all information, content, and materials available here are for general informational purposes only.

About the Author

Sultan Murad Saidov is Co-Founder and President of Beamery. Sultan is a frequent speaker on AI, the Future of Work, and Talent Transformation, was listed in the Forbes 30 under 30 list, and is the host of the Talent Blueprint podcast.
