Why Does Explainable AI Matter In FS&I?

Highly regulated industries like Financial Services and Insurance (FS&I) are all too aware of the importance of transparency and compliance, and when it comes to any new technology or platform, it’s vital that every aspect and result is justifiable and explainable. The implementation of Artificial Intelligence (AI) is no different, whether it’s used to crunch financial data, or sift and segment thousands of resumes.

While we might not need to know exactly why a website recommended a book to us, or how Spotify is curating our playlists, there are applications of AI, especially in regulated sectors like FS&I, where the stakes are far higher and we need to understand how the technology works. When it comes to things that have a big impact on our lives, such as money, justice, medical treatment and employment, the technology making recommendations cannot live inside a closed box.

At Beamery, we talk about ‘Explainable AI’ as a key component of our solution, as our Talent Lifecycle Management platform uses AI to make recommendations about the ideal candidates for roles and opportunities inside our clients’ businesses. It is incredibly important that those recommendations can be explained to users, and indeed to the candidates affected by the resulting decisions. But what does ‘Explainable AI’ mean?

What is Explainable AI?

Often referred to as XAI, Explainable Artificial Intelligence is an approach to AI that makes it possible for users (the humans involved) to understand and trust the results delivered by Machine Learning (ML) algorithms. “Explainable AI” is used to describe an AI model, its expected impact, and its potential biases. Crucially, it is about HOW a model arrived at a particular result.

Explainable AI is about model accuracy, transparency, fairness, and being able to justify outcomes in AI-powered decision making. It is essential for any organization that wants to build trust and confidence when putting AI models into production.
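To make the idea concrete, here is a minimal, illustrative sketch in Python of the core principle: alongside a score, an explainable system surfaces how much each input contributed to that score. The feature names and weights are invented for illustration and do not represent Beamery’s model.

```python
# Illustrative only: a toy scoring function that returns not just a score,
# but each feature's contribution to it. Names and weights are hypothetical.

def explain_score(features, weights):
    """Return the overall score plus each feature's contribution."""
    contributions = {
        name: value * weights[name] for name, value in features.items()
    }
    score = sum(contributions.values())
    return score, contributions

candidate = {"skill_match": 0.8, "seniority_match": 0.6, "industry_match": 0.4}
weights = {"skill_match": 0.5, "seniority_match": 0.3, "industry_match": 0.2}

score, contributions = explain_score(candidate, weights)
print(f"Score: {score:.2f}")
for name, contribution in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: {contribution:+.2f}")
```

The point of the sketch is simply that the output is accompanied by a breakdown a human can inspect, rather than a bare number.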

Explainable AI in the context of employment

With NYC laws aimed at reducing bias in automated employment decision tools (AEDT) coming into effect on January 1, 2023, discussions have been reignited about how AI is applied in the hiring process. Regulators are reasonably concerned that machines are learning based on already biased data, and could be making recommendations about who to hire that exacerbate inequalities in society… without anyone even noticing.

HR teams who are using AI or automation, or who are in the process of selecting technology vendors, must be able to reassure themselves, and current and future employees, that there is no bias programmed into the systems they use. They need to be able to point to the rationale behind decisions that an AI has made, so that there is full transparency about why one person was recommended for a role over another. At the same time, they need to comply with all relevant regulations governing the use of AI in the workplace, as well as data protection laws.

What are the benefits of Explainable AI?

XAI has the potential to help users of AI systems perform better: they can understand and trust how recommendations are made, see the rationale behind them, and verify that good decisions are being made.

It could also be seen as promoting a social ‘right to explanation’, i.e., the principle that people deserve an explanation for a decision made by an algorithm, even where there is no legal right or regulatory requirement.

As well as aiding users, XAI helps the system itself: a model is more easily improved when we can see what it has done and why. When we can join the dots between inputs and outputs, we can make the model better.

How we approach Explainable AI at Beamery

Transparency with explainability

We help users understand why we predict that someone will be a good match for a role, using a few key explanation layers that clarify the AI recommendation and that users can share with candidates. Users can see the weight (or influence) of each component in the decision (the mix of skills, seniority, proficiency, and industry), and they can also see which skills impacted the recommendation the most. An illustrative sketch of such an explanation layer follows.
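As an illustration only, an explanation layer attached to a recommendation might be represented roughly as below; the field names and values are hypothetical and do not reflect Beamery’s actual data model or API.

```python
# Hypothetical sketch of an explanation layer attached to a match
# recommendation. Field names and values are illustrative only.
from dataclasses import dataclass

@dataclass
class MatchExplanation:
    component_weights: dict   # influence of each component on the decision
    top_skills: list          # skills that impacted the recommendation most

@dataclass
class Recommendation:
    candidate_id: str
    match_score: float
    explanation: MatchExplanation

rec = Recommendation(
    candidate_id="candidate-123",
    match_score=0.87,
    explanation=MatchExplanation(
        component_weights={"skills": 0.5, "seniority": 0.2,
                           "proficiency": 0.2, "industry": 0.1},
        top_skills=["Python", "Risk Modelling", "Stakeholder Management"],
    ),
)

print(f"Match {rec.match_score:.0%} for {rec.candidate_id}")
for component, weight in rec.explanation.component_weights.items():
    print(f"  {component}: {weight:.0%} of the decision")
print("  Most influential skills:", ", ".join(rec.explanation.top_skills))
```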

No Personally Identifiable Information (PII) is used for training, validation or recommendation

When training our models, we use anonymized third-party datasets, which means our models are not explicitly aware of race, ethnicity, gender, nationality, income, sexual orientation, or political or religious beliefs. Additionally, when we validate our models against test datasets, only non-personally-identifiable data is used. When we match candidates in the CRM, our models never see the PII of any candidate. All they see are skills, roles and experience.
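The separation described here can be pictured, very roughly, as in the following sketch; the field names and the allow-list are hypothetical, and are only meant to illustrate the principle that the matching model sees skills, roles and experience and nothing else.

```python
# Hypothetical sketch: only non-PII attributes ever reach the matching model.
# Field names are illustrative, not Beamery's internal schema.

PII_FIELDS = {"name", "email", "phone", "address", "date_of_birth"}

def to_model_features(candidate_record: dict) -> dict:
    """Keep only the non-PII attributes the matching model is allowed to see."""
    allowed = {"skills", "roles", "years_experience"}
    return {k: v for k, v in candidate_record.items() if k in allowed}

record = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "skills": ["credit risk", "python"],
    "roles": ["Risk Analyst"],
    "years_experience": 6,
}

features = to_model_features(record)
assert not PII_FIELDS & features.keys()   # no PII reaches the model
print(features)
```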

That said, most state-of-the-art methods for auditing bias rely on validation sets, which may contain some voluntarily provided demographic information. Our external auditors may make use of candidate demographic features obtained outside of our internal processes; these features are deleted once the audit is complete and are never retained. Such data points are used only to assess model performance for potential bias.
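For illustration, a bias audit on such a validation set might include a check along these lines, comparing recommendation rates across demographic groups; the data and function below are invented and do not represent our auditors’ actual methodology.

```python
# Hypothetical sketch of one common bias-audit check: compare recommendation
# rates across demographic groups and report each group's impact ratio.
from collections import defaultdict

def impact_ratios(records):
    """records: iterable of (group, was_recommended) pairs."""
    counts = defaultdict(lambda: [0, 0])   # group -> [recommended, total]
    for group, recommended in records:
        counts[group][0] += int(recommended)
        counts[group][1] += 1
    rates = {g: rec / total for g, (rec, total) in counts.items()}
    best = max(rates.values())
    # Ratio of each group's recommendation rate to the highest group's rate.
    return {g: rate / best for g, rate in rates.items()}

validation = [("group_a", True), ("group_a", False), ("group_a", True),
              ("group_b", True), ("group_b", False), ("group_b", False)]
print(impact_ratios(validation))   # e.g. {'group_a': 1.0, 'group_b': 0.5}
```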

Human governance

Our models are not meant to replace humans; instead, they give relevant information to human decision makers to help them make better choices. And that’s what makes Beamery’s AI-driven Talent Lifecycle Management platform ideal for the FS&I sector: not only is every AI-based recommendation transparent and explainable, but the platform is also designed to work hand-in-hand with skilled HR teams, ensuring the very best of both worlds.

To learn more about the importance of explainable AI and how it underpins Beamery’s Talent Lifecycle Management Platform, read our Explainability Statement.