The emergence of artificial intelligence (AI) has transformed many aspects of society and the economy, bringing new opportunities while raising ethical and regulatory concerns. In just a few years, AI has permeated society as a whole, most visibly through generative AI tools such as ChatGPT, Gemini, and Apple Intelligence. The global artificial intelligence market is expected to be worth more than $1.8 trillion by 2030, up from roughly $95 billion in 2021. However, dedicated regulation is still needed to govern AI's development and resolve lingering legal ambiguities.
To encourage companies to develop these innovative systems while reducing public distrust of AI, the European Union created its own regulatory framework, the AI Act, which the Council of the EU adopted on May 21, 2024. In this article, I will explain everything you need to know about the EU AI Act.
The Objectives of the EU AI Act
The European Artificial Intelligence Act (AI Act) was formally approved by the Council of the EU on May 21, 2024, after the European Parliament adopted it by a large majority in March 2024. The original proposal was published on April 21, 2021, and was substantially revised to account for the rapid progress of generative AI, most notably ChatGPT, launched in late 2022.
This new law, which harmonizes the rules applicable to AI across member states, reflects the European Union's ambition to establish a solid legal and regulatory framework for developing and deploying trustworthy, ethical, transparent, and safe AI systems. It is the first comprehensive law in the world to regulate artificial intelligence specifically. The AI Act aims to encourage innovation while ensuring that AI systems strictly respect the fundamental rights of European citizens: the EU wants to set the global benchmark for ethical, well-regulated AI.
Which AI Systems and Models are Affected by the EU AI Act?
The most significant innovation introduced by the EU AI Act is its risk-based approach to artificial intelligence. The regulation classifies AI systems into four risk categories:
Unacceptable Risk:
This category covers AI systems that pose an unacceptable risk or are considered a threat to people, such as those that manipulate particularly vulnerable people or groups (e.g., voice-activated toys that could encourage dangerous behavior in children) and social scoring (classifying people according to their socio-economic situation, behavior, or personal characteristics). Real-time remote biometric identification in publicly accessible spaces, such as live facial recognition, is also prohibited in the European Union, subject to narrow law-enforcement exceptions.
High-Risk AI Systems:
Any AI system or model that poses a significant risk to people's health, safety, or fundamental rights falls into this category. This includes systems that build profiles by automatically processing personal data to evaluate aspects of a person's life, such as work performance, economic situation, preferences, or location. It also covers AI systems involved in operating critical infrastructure where public safety is at stake (e.g., transport, water, gas, electricity); essential private and public services (e.g., assessing emergency calls, triaging emergency patients, evaluating ability to pay); education and training (e.g., admission to institutions, evaluating academic performance); employment (e.g., AI used to recruit and screen candidates); and law enforcement and the administration of justice (e.g., assessing the risk of criminality or recidivism, profiling). Narrow exceptions apply, for instance when searching for missing persons or defending against an imminent terrorist threat.
Low-Risk AI Systems:
Low-risk AI systems, such as chatbots, recommendation systems, and deepfakes, are subject to transparency obligations: users must be informed that they are interacting with an AI system or viewing AI-generated content or text.
Minimal-Risk AI Systems:
No specific obligations are imposed on minimal-risk AI systems, such as spam filters and AI in video games, although providers are encouraged to adopt voluntary codes of conduct.
Companies deploying artificial intelligence therefore have their obligations determined by the risk level of their AI systems. Specific rules also apply to general-purpose AI models, including generative AI such as GPT-4, the model behind ChatGPT. These models are not automatically considered high risk, but their providers must meet transparency requirements, such as producing technical documentation and publishing summaries of the content used to train their algorithms. Moreover, these companies must comply with existing regulations on copyright and related rights: while a text-and-data-mining exception exists for AI training, rights holders can opt out, and providers are obliged to recognize and respect such reservations before mining or automatically analyzing their data.
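To make the risk-based logic concrete, here is a minimal, purely illustrative Python sketch of how a compliance team might summarize the four tiers described above. The tier names and one-line obligation summaries are my own informal shorthand, not terminology or tooling from the Act itself.

```python
# Purely illustrative: a shorthand mapping of the AI Act's four risk
# tiers to the headline consequences described above. These keys and
# summaries are informal labels, not legal terms from the Act.
RISK_TIERS = {
    "unacceptable": "Prohibited: the system may not be placed on the EU market.",
    "high": "Permitted under strict requirements (documentation, oversight, conformity assessment).",
    "low": "Permitted with transparency duties (users must know they are dealing with AI).",
    "minimal": "No specific obligations; voluntary codes of conduct encouraged.",
}

def obligations_for(tier: str) -> str:
    """Return the headline obligation attached to a risk tier."""
    return RISK_TIERS[tier]

print(obligations_for("high"))
```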
What are the Penalties for Non-Compliance with the AI Act?
The AI Act provides for tiered sanctions. Violations of the European AI Regulation can draw fines of up to 35 million euros or 7% of global annual turnover (whichever is higher) for prohibited uses, 15 million euros or 3% of turnover for violating other obligations, and 7.5 million euros or 1% of turnover for supplying inaccurate information. For SMEs (Small and Medium-sized Enterprises) and start-ups, each fine is instead capped at the lower of the two amounts. Beyond these financial sanctions, affected companies may be forced to withdraw their AI systems from the market, resulting in additional costs.
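To make the "whichever is higher" rule concrete, here is a minimal Python sketch of the fine ceilings as described above. The tier names and the max_fine helper are illustrative shorthand of mine, not part of any official tooling.

```python
# Illustrative sketch only: the fine ceilings described above.
# Tier names and this helper are hypothetical, not official tooling.

FINE_TIERS = {
    # (fixed cap in EUR, share of worldwide annual turnover)
    "prohibited_practice": (35_000_000, 0.07),
    "other_obligation": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(tier: str, annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Return the maximum possible fine for a violation tier.

    For most companies the ceiling is whichever of the two caps is
    HIGHER; for SMEs and start-ups it is whichever is LOWER.
    """
    fixed_cap, turnover_share = FINE_TIERS[tier]
    turnover_cap = annual_turnover_eur * turnover_share
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# Example: a company with EUR 2 billion in global turnover deploying a
# prohibited system faces up to max(35M, 140M) = EUR 140 million.
print(max_fine("prohibited_practice", 2_000_000_000))        # 140000000.0
print(max_fine("prohibited_practice", 2_000_000_000, True))  # 35000000.0
```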
Conclusion
The new regulation, the EU AI Act, aims to ensure that AI systems are safe, ethical, transparent, and respectful of people's fundamental rights. The regulation classifies AI systems into four risk categories: unacceptable risk, high risk, low risk, and minimal risk. Companies developing or deploying AI systems must comply with the obligations attached to their risk category. Non-compliance can be costly, with penalties ranging from heavy fines to the forced withdrawal of AI systems from the market.
Are you looking for the best video interview platform? You're in the right place: try out BarRaiser, a perfect solution for all your hiring needs.