New York City’s Automated Employment Decision Tool (AEDT) law is one of the first laws in the United States aimed at reducing bias in AI-based recruitment and employment decisions. Under this law, it is illegal for any employer or employment agency to use AI or algorithm-based technologies to evaluate NYC candidates for employment unless the tool has first undergone an independent bias audit. Thus, the compliance obligations around these AI tools fall on New York City employers, not on the software engineers who create them. The law has already been passed; however, due to the high volume of public comments, it will only be enforced after a second public hearing.
Moreover, the Artificial Intelligence Act (AI Act) is a regulation proposed by the European Commission on 21 April 2021 to introduce a common legal framework for AI. It regulates providers of artificial intelligence systems as well as entities using them in a professional capacity.
This further begs the question: Is AI biased?
The answer is pretty simple: yes. There are growing concerns around AI bias, where a system makes decisions that are systematically unfair to certain individuals or groups of people. AI can absolutely perpetuate real bias, because human beings choose the data the algorithms learn from and decide how the results of those algorithms are applied. Without extensive testing and diverse teams, it is very easy for such unconscious biases to enter machine learning models.
For example, a study conducted by the US Department of Commerce found that AI facial recognition often misidentifies people of color, which could lead to wrongful arrests if the technology is used by law enforcement.
Thus, companies need to earn candidates' trust by using their AI models carefully and testing their programs to identify any form of potential bias.
As more companies use AI in their recruitment and HR processes, it has become exceedingly important for employers to stay aware of new AI regulations around the world.
Forrester estimates that almost 100% of organisations will be using AI by 2025, and that the AI software market will reach around $37 billion by the same year.
Steps for Staying Compliant with AI for Hiring:
1. Audit your AI
It is very important to have your AI audited for bias by a third party, as one of the biggest concerns AI regulations seek to address is the potential for bias in recruitment. AI takes cues from and learns from human behavior, and if those human traits carry even an unconscious level of bias, it can seep into the AI technology as well.
In fact, such auditing will become integral in places like New York City, where the AEDT law requires employers who use AI in recruitment to have their technology independently audited. Auditing adds a layer of accountability for recruiters using AI, and it can help companies avoid non-compliance issues.
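To make the idea concrete, here is a minimal, hypothetical sketch of the kind of metric a bias audit typically reports: the selection rate for each demographic group, and each group's impact ratio against the highest-rate group (the NYC rules describe a similar calculation; the data and function below are invented for illustration and are not a compliant audit):

```python
from collections import defaultdict

def impact_ratios(outcomes):
    """Compute each group's selection rate divided by the highest
    group's selection rate, from (group, selected) records."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in outcomes:
        counts[group][1] += 1
        if selected:
            counts[group][0] += 1
    rates = {g: sel / total for g, (sel, total) in counts.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical screening outcomes for two groups:
data = [("A", True), ("A", True), ("A", False), ("A", True),
        ("B", True), ("B", False), ("B", False), ("B", False)]
print(impact_ratios(data))  # group B was selected at 1/3 the rate of group A
```

A low impact ratio for a group (many audit frameworks flag ratios below 0.8, the classic "four-fifths rule") is the kind of signal an independent auditor would investigate further.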
2. Consider Having an Alternative Process of Selection
When candidates are notified that an AI-driven tool is used in their hiring process, they have the right to request an alternative process, in accordance with the new regulations in New York City. Therefore, as a recruiter, it is imperative to have alternative processes ready, such as manual assessment of candidates' skills. AI is still relatively new, and recruiters must be prepared to handle such requests in order to remain compliant.
3. Inform the Candidates that You’re Using AI
Transparency is a fundamental part of AI compliance. Let candidates know that an AI-driven tool is being used as part of their evaluation. If the software is compliant, the process should be made explicitly clear to the candidate: how the AI arrived at a particular decision must be explainable and easy to understand.
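As an illustration of what "explainable" can mean in practice, here is a hypothetical sketch (the scoring model, feature names, and weights are invented for this example) that breaks a simple linear screening score into per-feature contributions, so a recruiter could show a candidate which factors drove the result:

```python
def explain_score(features, weights):
    """Break a linear screening score into per-feature contributions,
    ranked by absolute impact on the total."""
    contributions = {name: features[name] * weights[name] for name in weights}
    total = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return total, ranked

# Hypothetical candidate; every weight and value here is illustrative only.
weights = {"years_experience": 0.5, "skills_match": 2.0, "assessment_score": 1.0}
candidate = {"years_experience": 4, "skills_match": 0.8, "assessment_score": 0.9}
total, ranked = explain_score(candidate, weights)
print(total)
print(ranked)  # the candidate's experience contributed most to the score
```

Real hiring models are rarely this simple, but the principle is the same: a compliant tool should be able to surface which inputs influenced a decision and by how much.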
Employers must therefore strive for compliance on a global scale, not only to avoid a bad reputation, but also to be morally fair and ethical when using artificial intelligence for recruitment. This will facilitate the creation of a more diverse and inclusive workplace.
Is BarRaiser AI Law Compliant?
BarRaiser is a platform that makes interviews unbiased, consistent and structured. We provide tools such as an AI Interview Assistant that supplies interviewers with questions, notes and alerts, and delivers a world-class experience to candidates. The platform also includes a candidate scorecard, a feedback tool, smart scheduling and interview training. BarRaiser AI is compliant: we have ensured its capacity to process large amounts of recruitment data quickly, and it benefits companies by meeting compliance obligations, making decisions in an unbiased way and taking appropriate action.
We have ensured that our AI operates within a specific set of privacy and security requirements and guidelines. It is imperative to merge these compliance policies into our everyday procedures and systems to ensure accountability on BarRaiser's behalf. When used with proper checks and balances, AI can help eliminate bias. Read our view on how models like GPT can help remove bias.
Read More: Some Common Myths about AI Hiring Software.
Navigating AI Compliance in 2023
It is very important to be aware of exactly what constitutes the use of AI as an automated hiring decision tool, especially as legislation and laws around this topic continue to evolve. Consult your legal counsel for help, and make sure to understand and review your processes regarding AI. Recruiters must also consistently update their hiring processes and background screening checks in order to comply with current and future AI employment tool legislation.