The EU AI Act Is Officially Effective: What’s Next and What Now
NOTE: This blog was originally published on 3/24/2024 and has been updated as of 8/1/2024 now that the EU AI Act has been published in the Official Journal of the European Union and “enter[s] into force” 20 days thereafter, or on August 1, 2024.
In a move that now positions the 27-nation bloc as a global leader in regulating artificial intelligence (AI), the European Union’s (EU) AI Act (EU AI Act) has been published in the Official Journal of the EU, which serves as formal notification of the new law. When it comes into effect on August 1, 2024, this pioneering legislation will set an official precedent for other jurisdictions grappling with the challenges of AI regulation.
First proposed five years ago, the EU AI Act garnered overwhelming support from the European Parliament. With 523 votes in favor, 46 against, and 49 abstentions, the legislation—and the significant governance milestone it represents—reflects a broad consensus among policymakers on the need to establish robust rules to regulate the use of AI technologies.
As a top cybersecurity assessment firm, we have been at the forefront of the latest discussions and developments regarding AI—we know how closely organizations are following the progress of what has become the foremost debate in tech, and we want to help disseminate this latest development.
In this article, we’ll provide a brief overview of what’s in the EU AI Act and its implications for the rest of the world, as well as details on what you can do right now to get started validating the trustworthiness of your AI systems.
What is the EU AI Act?
In regulating AI applications, the newly passed EU AI Act takes a risk-based approach—the legislation categorizes AI systems based on their level of risk, ranging from low to high to unacceptable:
- Low-risk applications, such as content recommendation systems, are only subject to voluntary requirements and codes of conduct.
- High-risk applications, or those that have the capability to negatively affect human safety or fundamental rights, including those used in medical devices and critical infrastructure, face stricter scrutiny and compliance requirements that include adequate risk assessments, logging and monitoring mandates, and human oversight.
- Unacceptable risk applications, or those considered a threat to people—like social scoring systems—will be banned outright.
Importantly, the AI Act also addresses the emergence of generative AI models, which have rapidly evolved to produce lifelike responses, images, and more—though these systems will not be classified as high-risk, developers will have to comply with the Act’s transparency requirements that include the provision of detailed summaries of training data as well as EU copyright law.
The EU AI Act’s Enforcement Timeline
Though now passed, the expected effective dates for different requirements within the Act are expected to roll out in stages:
- July 2024: Formal adoption of the law.
- August 2024: EU AI Act effective date (entered into force).
- February 2025 (6 months post-effective date): The following elements will apply:
- Chapter I; and
- Chapter II (prohibitions on unacceptable risk AI) will apply.
- August 2025 (one-year post-effective date): The following elements will apply:
- Chapter III Section 4 (notifying authorities);
- Chapter V (general purpose AI models);
- Chapter VII (governance);
- Chapter XII (confidentiality and penalties); and
- Article 78 (confidentiality).
- Still not applicable: Article 101 (fines for general purpose AI (GPAI) providers).
- August 2026 (2 years post-effective date): The remainder of the AI Act will apply, except Article 6(1).
- August 2027 (3 years post-effective date): Article 6(1) (classification rules for high-risk AI systems) and the corresponding obligations in the Regulation will apply.
As these provisions roll out in stages, it’ll be important to remain vigilant as to what will be required and by when. Enforcement mechanisms, including the establishment of AI watchdog agencies in EU member states, will play a critical role in overseeing compliance and addressing potential violations, which could see companies hit with fines ranging from 7.5 million to 35 million euros ($8.2m to $38.2m), depending on the type of infringement and your firm’s size.
The Global Implications of the EU AI Act
Though this regulation will only cover AI systems “placed on the market, put into service or used in the EU,” the passage of the EU AI Act is still expected to have far-reaching influence beyond the borders of the European Union—as the first major set of regulatory guidelines for AI, the legislation is likely to shape further global discussions on AI governance.
Moreover, governments around the world, including the United States and China, are closely monitoring the EU's regulatory approach and may follow suit with their own initiatives, so—even if your organization is not directly subject to the EU AI Act’s provisions, and especially if it is—it may be a good idea to get started in proving the trustworthiness of your AI systems.
What You Can Do Right Now to Better Secure Your AI Systems
While international regulatory uncertainty does remain to an extent, there are still proactive measures organizations can take to both better secure their systems and prepare themselves for AI governance:
- ISO 42001 (and the related certification): As the world’s first—and right now, the only—artificial intelligence management system standard, getting certified against ISO 42001 would mean taking a risk-based approach in implementing the standard’s holistic safeguards to address the security, safety, privacy, fairness, transparency, and data quality of AI systems throughout their life cycle.
- Guidelines for Secure AI System Development: Released by the UK National Security Centre (NCSC) and the US Cybersecurity and Infrastructure Security Agency (CISA), these guidelines—when implemented—can help developers reduce system risks before security issues arise.
- NIST AI Risk Management Framework (RMF): By following this set of guidelines and best practices, you can better manage the risks associated with AI systems—for further validation of the robustness and security of your AI systems, you can also undergo a comprehensive assessment against these guidelines.
- HITRUST + AI Certification: Now that HITRUST has added optional AI risk management requirements, including them within your certification would confirm your AI safeguards regarding sensitive data as well as your mitigation of related cybersecurity threats.
- Privacy Impact Assessment (PIA): A PIA specific to your AI system(s) would shed light on the privacy implications of the data that is collected, used, and/or shared within that solution, as well as what ramifications exist at the state, national, and international levels should you fall victim to a breach.
- Penetration Test of AI System(s): Using simulated attack vectors that can range from training data poisoning to prompt injection, a penetration test will identify any security weaknesses and unaccounted-for risks within your AI applications.
Though all these potential avenues feature different parameters and require different levels of effort, each can help better secure your AI and position your organization well for future regulations and current ones, like the EU AI Act.
Looking Forward to a World of Secure AI
By setting clear rules and standards with the passage of its AI Act, the European Union aims to harness the potential of AI as a tool for societal progress while minimizing risks, safeguarding fundamental rights and values, and ensuring accountability.
As the global community continues to grapple with the ethical and societal implications of AI, the EU's leadership in this area is poised to shape the future of international AI governance—a future that is quickly evolving.
Organizations need to be ready as this landscape continues to shift, and our dedicated AI services team is prepared to help—contact us today with any questions you may have about your AI security options and how to get started in proving the trustworthiness of your applications.
About DANNY MANIMBO
Danny Manimbo is a Principal with Schellman based in Denver, Colorado. As a member of Schellman’s West Coast / Mountain region management team, Danny is primarily responsible for leading Schellman's AI and ISO practices as well as the development and oversight of Schellman's attestation services. Danny has been with Schellman for 10 years and has over 13 years of experience in providing data security audit and compliance services.