New Regulations Around AI In HR: Why Explainability Is Key
Artificial Intelligence (AI) now plays a significant role in many aspects of our everyday lives. Used in 56% of organizations in at least one function, AI tools are being adopted by an increasing number of People teams for a range of purposes. AI can support acquisition, retention, engagement and DE&I goals by identifying the right person for each role (skills matching), showing employees their ideal career path, removing unconscious bias from the hiring process, and uncovering higher-quality candidates who may have been overlooked.
While AI can sensibly speed up and even improve specific HR processes, there are some important things to consider when implementing AI tools:
- Bias in the tool: AI requires large amounts of data to inform its decision-making. Do you know what data it is being trained on? People choose what historical data is used when building algorithms for recruitment chatbots, screening resumes, or matching candidates to roles, for example. This inevitably exposes the model to the same human biases that have affected past hiring decisions. Is the data used for training the AI models, and the outcomes of those models, auditable? Have you minimized the chance of the model learning biases from historical data?
- Lack of transparency: There are many laws to protect certain individuals from discrimination in the employment space. In the event of a dispute, would you be able to explain how your AI-powered software made a decision or recommendation? If an underrepresented group was categorically denied a role or interview, do you know why?
- Ethics in AI: Can a machine make ethical decisions? If your HR department is tasked with offering fair recruitment processes, will a machine be able to do the necessary screening (beyond looking for relevant experience) to ensure the least biased approach has been taken?
While usage of AI is on the rise, knowledge of best practices is slow to spread. Laws and regulations have lagged behind technological progress, but there is now a growing trend towards examining and regulating the use of AI in hiring, promotions, and other employment decisions.
New AI regulations in NYC
Last November, the New York City Council passed a bill that regulates employers’ and employment agencies’ use of “automated employment decision tools” in making employment decisions. The new law takes effect on January 1, 2023.
Employers have to comply with some new requirements, including:
1. a yearly bias audit, with a summary of the results made publicly available;
2. providing candidates with notice of automated processing; and
3. allowing candidates or employees to request an alternative process or accommodation.
What does this mean for AI in HR?
AI is not being banned, it’s simply being regulated, and practices are becoming more transparent. Governments (starting with New York) are now stipulating what protections need to be in place in order to best utilize AI in HR/Talent. These requirements will vary by state and country. In New York, for example, AI must be audited for bias before it is used, and candidates must be notified that an automated employment decision tool will be used in assessing them, as well as of the job qualifications and characteristics that the tool will use in the assessment.
While NYC is leading the country in regulation, new AI laws are soon to follow. At Beamery, we are developing products to enable our customers’ compliance on a global scale.
The importance of explainability
The challenge for businesses and HR teams that are using AI/automation, or are in the process of selecting technology vendors, is to reassure current and future employees (and the world at large) that there is no subconscious bias programmed into the systems they use. They need to avoid opaque AI, and be able to point to the ‘thinking’ behind decisions that an AI has made. And, of course, they need to comply with all relevant regulations governing the use of AI in the workplace, as well as data protection laws.
- Bias: To solve this challenge, ensure that datasets and models are vetted for potential biases and cleaned before using them to develop new AI tools.
- Transparency: Ensure all users are aware of when and how AI is being used. Gather preferences/consent from candidates before using their data in the process. A third-party audit will help with compliance, as well as with that all-important explainability. Prepare a public-facing explainability statement.
- Ethics: Human intervention will be necessary during important stages of the hiring process, particularly around hiring decisions. Ensure you handle candidates’ data responsibly.
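The bias audits described above often rest on simple selection-rate comparisons. As a minimal sketch (not a complete or legally sufficient audit), the code below applies the common “four-fifths rule” to hypothetical historical hiring outcomes; the group names, data, and the 0.8 threshold are all illustrative assumptions:

```python
# Minimal sketch of an adverse-impact check on historical hiring data.
# Groups, outcomes, and the 0.8 ("four-fifths rule") threshold are
# illustrative assumptions, not a legally sufficient bias audit.

def selection_rates(outcomes):
    """Compute each group's selection rate: hires / applicants."""
    return {group: sum(results) / len(results)
            for group, results in outcomes.items()}

def impact_ratios(rates):
    """Compare each group's selection rate to the highest-rate group."""
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Hypothetical outcomes: 1 = advanced to interview, 0 = rejected
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 6/8 selected
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],   # 2/8 selected
}

rates = selection_rates(outcomes)
ratios = impact_ratios(rates)
for group, ratio in ratios.items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rates[group]:.2f}, ratio={ratio:.2f} -> {flag}")
```

A ratio below 0.8 does not prove bias on its own, but it flags where a deeper, human-led review of the data and model is warranted.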
‘Explainable AI’ ensures that users always understand how candidate recommendations have been made: they can see the components that were weighed in the decision-making process, and which elements impacted the recommendations the most. AI can run powerful initial appraisals, but the technology is most effective with human-in-the-loop guidance – in short, it’s the human, not the AI, that makes the final decision. AI makes recruitment faster and more efficient, but it does not replace humans.
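To make the idea of seeing “which elements impacted the recommendations the most” concrete, here is a toy sketch of an explainable match score. The feature names and weights are invented for illustration and do not represent any real product’s model:

```python
# Toy sketch of an "explainable" candidate-match score: each factor's
# contribution is recorded so a recruiter can see what drove the result.
# Feature names and weights are invented for illustration only.

WEIGHTS = {"skills_match": 0.6, "experience_years": 0.3, "location_fit": 0.1}

def explain_score(candidate):
    """Return the overall score plus each feature's contribution."""
    contributions = {feature: WEIGHTS[feature] * candidate[feature]
                     for feature in WEIGHTS}
    total = sum(contributions.values())
    # Sort so the most influential factor is listed first.
    ranked = sorted(contributions.items(), key=lambda kv: -kv[1])
    return total, ranked

candidate = {"skills_match": 0.9, "experience_years": 0.5, "location_fit": 1.0}
score, reasons = explain_score(candidate)
print(f"score={score:.2f}")
for feature, contribution in reasons:
    print(f"  {feature}: +{contribution:.2f}")
```

Because every contribution is surfaced rather than hidden inside an opaque model, a recruiter can inspect, question, and override the recommendation.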
At Beamery, we are pleased to see increased government action on AI. But we also believe that we have a duty to act now: to ensure our AI is explainable, transparent and ethical, to make every possible effort to eliminate bias in the system, and to make our AI systems responsible by design.
While the NYC law is currently unclear on how employers are to fulfill the independent auditing requirements, as a responsible service provider we are proactively partnering with industry-leading lawyers and an AI Governance platform to ensure Beamery and our clients receive the highest quality advice.
AI can enable positive social benefits, including equal access to meaningful work, skills, and careers for people of all backgrounds (the Beamery mission). Our AI systems identify underlying skills to create equal access to meaningful work and decouple career progression from other traditionally limiting factors. Our AI supercharges everyday tasks in the talent lifecycle rather than replacing them, and it will never make decisions without a human in the loop.