Artificial Intelligence (AI) has become part of our everyday language and life. But as common as AI is in business (with 56% of companies already using it in at least one way), it is still a divisive topic, particularly when it comes to HR and hiring.
Most of the people who have negative feelings towards AI have concerns about the ethics of the technology. Is it really fair? Can it be trusted to make hiring recommendations and decisions?
While many organizations, business leaders and talent acquisition teams have faith in AI and its capabilities in the workplace, some stakeholders still need convincing. As governments around the world put more and more AI regulations in place, the question remains: can AI be used as a truly ethical hiring tool?
How do people feel about AI in HR and recruiting?
Out of the 7,504 people surveyed, 30% said they believe that AI removes human unconscious bias in the workplace.
25% said that AI is more efficient at identifying individuals for promotion than humans alone, and 29% felt that it can help more efficiently define progression goals in the workplace.
When it comes to recruiting, 38% of people said that they believe AI can make the overall recruitment process quicker, and 35% said it helps companies identify suitable candidates more efficiently than they would be able to otherwise.
Even though these numbers represent positive feelings about AI in the workplace, some people are still not confident in the capabilities and ethics of the technology. For example, only 18% of the people we asked said AI helps companies achieve DE&I goals, and 17% said they couldn't see how it makes the workplace fairer.
Regardless of whether these negative feelings stem from misunderstanding or a lack of education on the technology, they are important considerations for HR teams and business leaders to keep in mind when using AI in HR and hiring practices.
The truth about ethics and AI
From an ethics perspective, 51% of the thousands of people we surveyed believe that AI algorithms "adhere to ethical standards" at least some of the time. And in our previous Talent Index survey report, 62% of respondents felt positively in some way about AI being used in the workplace.
But is AI really ethical? And how can you tell?
Unethical AI models do exist…
AI is becoming more and more common in business, and the reality is that the law has lagged behind the advancement of the technology. Governments are now implementing new regulations to make sure there are safeguards in place for ethical AI practices, and, as is often the case, the tech industry has reacted to these regulations faster than other industries.
The truth is that questionable and flat-out unethical AI models and use cases do exist, and there are certainly issues to watch out for if you plan to use AI in your hiring process. The first problem is that AI models can actually introduce bias into the workplace. AI and machine learning models learn from data generated by humans, and humans probably carry at least some level of unconscious bias. It's important to know exactly where your AI pulls its data from to create hiring recommendations, so you can ensure the decisions are ethical and that bias has been reduced.
Another potential issue is non-compliance. When AI models don't meet regulatory requirements, or fail to behave as instructed, there can be serious consequences both for the companies using the technology and for those affected by its use. The US and EU are expected to align their regulations around the technology, and as AI rules become more specific and more stringent (to protect candidates and employees), it's important to keep up with the changes and ensure the AI you're using is compliant on a global level.
Another threat to ethics in AI is the lack of transparency. It’s crucial that, as a company, you’re able to explain exactly what you are using AI for and when. It should be very clear to both the users and the organization how the software came to the decision to (for example) recommend one candidate over another, and where that recommendation came from.
New regulations will go a long way toward requiring organizations to prove that their use of AI in HR and recruiting is ethical, but AI vendors must do their part to build ethical models and tools as well.
How can you make sure your AI is explainable and ethical?
AI can be an excellent tool for hiring ethically, when you have the right solution with the right features and capabilities. But there are things to consider when adding AI into the mix as a strategic part of your HR and recruiting processes.
The first thing to consider is the vendor you choose. Ensure that the AI software you pick can help eliminate bias and discrimination in hiring, and makes hiring recommendations and decisions based on skills, qualifications, and candidate aspirations.
You must also look for explainability in AI. If it's difficult to explain how the AI works, where the data comes from, and how recommendations and decisions are made, that's a bad sign. A great indicator of an ethical AI solution is an independent audit: when AI-driven software has been properly audited by an external organization with no stake in the outcome, that's a good sign it's both compliant and ethical.
Additionally, for an AI product to be considered truly 'ethical' for recruiting and hiring, it needs to be transparent. It's so important that candidates and employees are informed of how the AI is being used and what qualities they are being assessed on. The software should also let candidates set their communication preferences (and give consent) during the hiring process.
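As a concrete illustration of that last point, a consent check can be made a hard gate in the software rather than a policy note. This is a minimal sketch with hypothetical names (not any real vendor's API), showing one way to record a candidate's preferences and refuse contact without explicit consent:

```python
# Hypothetical sketch: store a candidate's communication preferences and
# require explicit consent before any channel is used to contact them.
from dataclasses import dataclass, field


@dataclass
class CandidatePreferences:
    candidate_id: str
    consented: bool = False                      # explicit opt-in to be contacted
    channels: set = field(default_factory=set)   # channels opted into, e.g. {"email"}


def may_contact(prefs: CandidatePreferences, channel: str) -> bool:
    """Allow contact only if the candidate consented AND opted into this channel."""
    return prefs.consented and channel in prefs.channels


prefs = CandidatePreferences("c-123", consented=True, channels={"email"})
print(may_contact(prefs, "email"))  # True
print(may_contact(prefs, "sms"))    # False: candidate never opted into SMS
```

Making the check a function that every outbound message passes through means consent is enforced in one place, which also makes it easier to demonstrate compliance in an audit.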
How utilizing ethical AI can take your HR team to the next level
Identify and reduce bias
Even with the very best intentions to create an inclusive culture, unconscious bias may exist in your organization's hiring process. When you implement ethical AI software, the technology can help identify patterns in hiring that may indicate bias and/or discriminatory practices in the workplace.
For example, if there’s one group of people who are being given job offers (or promotions) significantly less often than other groups, the AI-driven software can notify the HR team of the issue, and it can be addressed and corrected.
Having full knowledge and transparency of these patterns is crucial to creating a truly diverse, inclusive and fair workplace.
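To make the pattern-spotting idea concrete, here is an illustrative sketch (not Beamery's actual method, and with made-up data) of one common check: comparing selection rates across groups and flagging any group whose rate falls below four-fifths (80%) of the highest group's rate, the threshold used in the EEOC's "four-fifths rule" for adverse impact:

```python
# Illustrative adverse-impact check using the four-fifths rule.
# outcomes maps group name -> (offers_made, total_applicants); data is hypothetical.

def selection_rates(outcomes):
    """Compute each group's selection rate (offers / applicants)."""
    return {group: offers / total for group, (offers, total) in outcomes.items()}


def flag_adverse_impact(outcomes, threshold=0.8):
    """Return groups whose selection rate is below threshold * the best group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return sorted(g for g, r in rates.items() if r < threshold * best)


data = {
    "Group A": (30, 100),  # rate 0.30
    "Group B": (12, 80),   # rate 0.15
    "Group C": (28, 90),   # rate ~0.31
}
print(flag_adverse_impact(data))  # ['Group B']
```

A flagged group is not proof of discrimination, but it is exactly the kind of pattern an HR team should be alerted to and investigate.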
Exceed DE&I targets
59% of organizational leaders say there is more emphasis on strategic DE&I goals this year, and these goals have become a significant part of key business conversations. Candidates (particularly younger ones) expect DE&I to be a major part of your business goals; it's not good enough to just check the box on diversity.
When you invest in ethical AI-driven software, it can help you make strategic hiring decisions, match talent with the right skills to open roles, and build a pipeline of candidates to fill future vacancies. These matches should be based on characteristics such as skills, experience and career aspirations.
An ethical AI solution can help you make smart hiring decisions based on the right candidate characteristics, while still meeting and exceeding your DE&I targets.
Create personalized candidate and employee experiences
Personalization is key — AI has the ability to make inferences and recommend specific roles to interested candidates based on their skills, career goals and previous roles they’ve held.
High-quality AI-driven software can create these personalized career paths for existing employees as well, whether that means lateral moves into other departments where they might be a great fit, or vertical moves through promotions.
For prospective candidates, AI can go beyond job recommendations. Based on the person’s interests and the preferences they’ve set, the technology can help recruiters choose content that will resonate with them the most, and they can share that information with candidates to help build relationships and encourage them to apply.
The most important thing to remember when creating these personalized experiences is to remain compliant with AI regulations. When candidates choose their communication or content preferences, adhere to those preferences and be transparent about the information you collect, use, and store.
Future-proof against evolving AI regulations
Governments are taking action to regulate AI, and it's critical to comply and keep up with new regulations. The EU's GDPR is in place to protect consumers' data privacy, but it has a direct impact on AI as well. More recently, the New York City Council passed regulations that require AI hiring tools to be audited for bias before they can be used, and that require companies to notify candidates that an automated employment decision tool will be used and which qualifications and characteristics they will be assessed on.
It's all about transparency, accountability, and explainability. There's no doubt that more of these regulations will be announced over time, and they will vary state by state, so it's important that your AI software is compliant on a global level and can keep up with evolving rules and regulations.
Beamery’s Explainable AI
At Beamery, we’re committed to building explainable, AI-driven products that are compliant with regulations on a global scale. We believe in putting DE&I at the heart of business, and in equal access to work for all. Everyone deserves to be given a seat at the table, and utilizing explainable and ethical AI allows us and other organizations to do just that. We actively use transparent, explainable AI to fill the open roles on our own growing team, and we have many customers who are doing the same. AI is a game-changer for HR and hiring, but the technology must be built on ethics and compliance, to build a diverse workforce of the future.