Beamery Completes AI Audit for Bias

London, UK - November 1st, 2022: Beamery, the leader in Talent Lifecycle Management, has today confirmed that it has completed a third-party audit for bias in its Artificial Intelligence capabilities, which involved rigorous testing of Beamery’s models. Beamery has partnered on this ongoing auditing process with Parity, a company whose mission is to end algorithmic inequality.

As a responsible AI vendor, Beamery has taken the proactive step of commissioning an audit from an independent third-party auditor and making a summary available to customers, to support them as they work to demonstrate compliance with new regulations governing the use of AI in recruiting. For example, New York City’s (NYC) Automated Employment Decision Tools (AEDT) law comes into effect on January 1st, 2023, and draft details of its compliance requirements were recently released.
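The audit methodology itself is not described in this announcement. For illustration only, NYC’s draft AEDT rules center on impact ratios: the selection rate for each demographic group divided by the rate of the most-selected group. A minimal sketch of that calculation (in Python, with hypothetical function and group names) might look like the following; it is not a representation of Beamery’s or Parity’s audit process.

    # Illustrative impact-ratio calculation of the kind NYC's draft AEDT rules describe.
    # Hypothetical sketch only; not Beamery's or Parity's audit methodology.
    from collections import defaultdict

    def impact_ratios(outcomes):
        """outcomes: iterable of (group, selected) pairs, where selected is a bool."""
        counts = defaultdict(lambda: [0, 0])  # group -> [selected_count, total_count]
        for group, selected in outcomes:
            counts[group][1] += 1
            if selected:
                counts[group][0] += 1
        selection_rates = {g: sel / total for g, (sel, total) in counts.items()}
        highest_rate = max(selection_rates.values())
        return {g: rate / highest_rate for g, rate in selection_rates.items()}

    # Hypothetical example: group_b is selected at twice the rate of group_a.
    sample = [("group_a", True), ("group_a", False), ("group_b", True), ("group_b", True)]
    print(impact_ratios(sample))  # {'group_a': 0.5, 'group_b': 1.0}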

Beamery’s AI audit was undertaken with existing and proposed global regulatory frameworks in mind and focused on bias issues, while working in compliance with global data privacy regulations. The company remains committed to its AI auditing journey: it will evaluate further auditing as legislative guidelines are finalized and will continue working closely with its legal advisors to facilitate Beamery’s compliance.

In parallel to the AI audit, Beamery has also released its AI Explainability Statement, written to openly share its AI processes and principles with customers, candidates, and employees. Beamery anticipates that its AI Explainability Statement will assist customers in fulfilling their diligence, notice and transparency requirements under applicable law.

“Beamery was founded with a mission to create equal access to work – a goal that more organizations are now progressing, with the adoption of new AI technologies that reduce unconscious bias in the hiring process,” said Sultan Saidov, Co-Founder and President at Beamery.

“A critical way in which AI can reduce bias is by providing transparency into skills. For example, recruiters can create jobs and focus their hiring on identifying the most important skills, rather than taking the more bias-prone traditional approach – such as years of experience, or where somebody went to school. For AI to live up to its potential in providing social benefit, there has to be governance of how it is created and used. There must be transparency and auditability of the AI models and their impacts. There is currently a lack of clarity on what this needs to look like, which is why we believe we have a duty to help set the standard in the HR industry by creating the benchmark for AI that is explainable, transparent, ethical and compliant with upcoming regulatory standards. To this end, we hope that the AI explainability statement and third-party audit information help guide other organizations on how to approach the important ethical, legal, technological and human considerations of using AI in HR.”

Transparency with Explainable AI

Artificial Intelligence and machine learning are powerful business tools and, for candidates, AI can lead to greater personalization: surfacing higher-quality, more relevant career path, job, and gig matches based on both existing skills and potential. The AI in Beamery provides recommendations that help users discover suitable matches, rather than programmatically steering people into a single role.

To ensure customers understand how Beamery identifies someone as a good match for a role, key explanation layers have been built into the product. For example, Beamery’s AI can articulate the mix and weight of skills, seniority, proficiency, and industry relevance behind a recommendation, so users can explain which factors influenced the recommendation and which did not. Beamery is also open about how data may be used in AI activities, further reinforcing its commitment to transparency in AI.
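The announcement does not say how these explanation layers are implemented. As a purely illustrative sketch, a match score assembled from weighted factors can be decomposed into per-factor contributions that are surfaced alongside a recommendation; the factor names and weights below are hypothetical, not Beamery’s actual model.

    # Hypothetical sketch: decomposing a weighted match score into per-factor
    # contributions so each factor's influence can be shown to the user.
    # Factor names and weights are illustrative only.
    FACTOR_WEIGHTS = {"skills": 0.5, "seniority": 0.2, "proficiency": 0.2, "industry": 0.1}

    def explain_match(factor_scores):
        """factor_scores: factor -> score in [0, 1]. Returns (total, contributions)."""
        contributions = {
            factor: weight * factor_scores.get(factor, 0.0)
            for factor, weight in FACTOR_WEIGHTS.items()
        }
        return sum(contributions.values()), contributions

    total, parts = explain_match({"skills": 0.9, "seniority": 0.6, "proficiency": 0.7, "industry": 0.4})
    print(f"match score: {total:.2f}")            # 0.75
    for factor, contribution in sorted(parts.items(), key=lambda kv: -kv[1]):
        print(f"  {factor}: {contribution:.2f}")  # skills contributes most, industry least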

“There is a significant challenge for businesses and HR teams using AI today in that they must reassure all stakeholders that these tools are privacy-conscious and that they don’t discriminate against disadvantaged communities,” said Liz O’Sullivan, CEO of Parity AI. “To do this, businesses must be able to demonstrate that their systems comply with all relevant regulations, including local, federal, and international human rights, civil rights, and data protection laws. We are delighted to work with the Beamery team as an example of a company that genuinely cares about minimizing unintentional algorithmic bias, in order to serve their communities well. We look forward to further supporting the company as new regulations arise.”