It’s official: The European Union’s risk-based regulation for applications of artificial intelligence came into force on Thursday, August 1, 2024.
This starts the clock on a series of staggered compliance deadlines that the law sets for different types of AI developers and applications. Most provisions will be fully applicable by mid-2026. But the first deadline, which enforces bans on a small number of prohibited uses of AI in specific contexts, such as law enforcement use of remote biometrics in public places, will apply in just six months’ time.
Under the bloc’s approach, most applications of AI are considered low/no-risk, so they will not be in scope of the regulation at all.
A subset of potential uses of AI are classified as high risk, such as biometrics and facial recognition, AI-based medical software, or AI used in domains like education and employment. Their developers will need to ensure compliance with risk and quality management obligations, including undertaking a pre-market conformity assessment — with the possibility of being subject to regulatory audit. High-risk systems used by public sector authorities or their suppliers will also have to be registered in an EU database.
A third “limited risk” tier applies to AI technologies such as chatbots or tools that could be used to produce deepfakes. These will have to meet some transparency requirements to ensure users are not deceived.
Penalties are also tiered, with fines of up to 7% of global annual turnover for violations of banned AI applications; up to 3% for breaches of other obligations; and up to 1.5% for supplying incorrect information to regulators.
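To make the tiered penalty ceilings concrete, here is a minimal sketch of how the percentages above translate into maximum fine amounts. The turnover figure and the tier names are hypothetical illustrations, not from the law's text; note also that this covers only the percentage-based caps described here.

```python
# Hypothetical illustration: AI Act fine ceilings as a share of global
# annual turnover. Percentages are the tiers described above; the
# turnover figure and tier labels are invented for this example.
TIERS = {
    "banned_ai_use": 0.07,           # up to 7% of global annual turnover
    "other_obligations": 0.03,       # up to 3%
    "incorrect_information": 0.015,  # up to 1.5%
}

def max_fine(turnover_eur: float, violation: str) -> float:
    """Return the maximum percentage-based fine for a violation tier."""
    return turnover_eur * TIERS[violation]

# A company with a hypothetical 2 billion euro global annual turnover:
turnover = 2_000_000_000
print(max_fine(turnover, "banned_ai_use"))         # 140000000.0 (140M euros)
print(max_fine(turnover, "incorrect_information")) # 30000000.0 (30M euros)
```

In other words, for a large company the gap between the top and bottom tiers is substantial: the same turnover yields a ceiling more than four times higher for a banned-use violation than for supplying incorrect information.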
Another important strand of the law applies to developers of so-called general purpose AIs (GPAIs). Again, the EU has taken a risk-based approach, with most GPAI developers facing light transparency requirements — though they will need to provide a summary of training data and commit to having policies to ensure they respect copyright rules, among other requirements.
Just a subset of the most powerful models will be expected to undertake risk assessment and mitigation measures, too. Currently these GPAIs with the potential to pose a systemic risk are defined as models trained using a total computing power of more than 10^25 FLOPs.
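The 10^25 FLOPs threshold can be sanity-checked with the common back-of-the-envelope estimate from the scaling-law literature that training compute is roughly 6 × parameters × training tokens. This approximation, and the model sizes below, are illustrative assumptions, not part of the Act:

```python
# Back-of-the-envelope check against the AI Act's systemic-risk threshold.
# Uses the common approximation: training FLOPs ~ 6 * parameters * tokens.
# The model sizes and token counts below are hypothetical.
THRESHOLD = 1e25  # FLOPs, per the Act's current GPAI systemic-risk definition

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * params * tokens

def crosses_threshold(params: float, tokens: float) -> bool:
    """True if estimated training compute exceeds the 10^25 FLOP threshold."""
    return training_flops(params, tokens) > THRESHOLD

# A hypothetical 70-billion-parameter model trained on 2 trillion tokens:
print(training_flops(70e9, 2e12))    # 8.4e+23 -- well below the threshold
print(crosses_threshold(70e9, 2e12)) # False
```

Under this rough estimate, only models trained at a scale well beyond that example would fall into the systemic-risk tier, which is consistent with the EU's framing that the extra obligations target just a handful of frontier systems.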
While enforcement of the AI Act’s general rules is devolved to member state-level bodies, rules for GPAIs are enforced at the EU level.
What exactly GPAI developers will need to do to comply with the AI Act is still being discussed, as Codes of Practice are yet to be drawn up. Earlier this week, the AI Office, a strategic oversight and ecosystem-building body, kicked off a consultation and call for participation in this rule-making process, saying it expects to finalize the Codes in April 2025.
In its own primer for the AI Act late last month, OpenAI, the maker of the GPT large language model that underpins ChatGPT, wrote that it anticipated working “closely with the EU AI Office and other relevant authorities as the new law is implemented in the coming months.” That includes putting together technical documentation and other guidance for downstream providers and deployers of its GPAI models.
“If your organization is trying to determine how to comply with the AI Act, you should first attempt to classify any AI systems in scope. Identify what GPAI and other AI systems you use, determine how they are classified, and consider what obligations flow from your use cases,” OpenAI added, offering some compliance guidance of its own to AI developers. “You should also determine whether you are a provider or deployer with respect to any AI systems in scope. These issues can be complex so you should consult with legal counsel if you have questions.”
Exact requirements for high-risk AI systems under the Act are also a work in progress, as European standards bodies are involved in developing these stipulations.
The Commission has given the standards bodies until April 2025 to do this work, after which it will evaluate what they’ve come up with. The standards will then need to be endorsed by the EU before they come into force for developers.
This report was updated with some additional detail regarding penalties and obligations. We also clarified that registration in the EU database applies to high-risk systems that are deployed in the public sector.