An Update on the Global Regulatory Landscape for Artificial Intelligence (March 2024)

Trying to keep up with the rapidly emerging and evolving governance of AI? Struggling to figure out how to address customer misgivings about your AI systems?

As a leading cybersecurity assessment firm, we keep close track of the latest developments in both AI regulations and frameworks. In this article, we'll provide the latest updates (as of March 2024) on artificial intelligence governance and the standards that can help.

The New AI Requirements for U.S. Federal Agencies

In a significant move toward better safeguarding citizens and promoting accountability, the White House, through Vice President Harris, has unveiled three new requirements to govern the use of artificial intelligence (AI) by federal agencies.

Scheduled to take effect on December 1, 2024, these mandates cover a broad range of applications and aim to prevent discrimination in AI use. Given that deadline, federal organizations will need to begin making adjustments to comply with these three new requirements:

  1. Federal government agencies must rigorously verify that their AI tools do not compromise the rights or safety of the American people—that includes ensuring that AI systems, such as those employed in healthcare settings like VA hospitals, do not produce racially biased outcomes in diagnoses.

  2. Federal agencies will be obligated to publish online an annual, comprehensive list of their AI systems along with an assessment of associated risks and the strategies in place to manage those risks (a sketch of one possible inventory format follows this list). As transparency is key to accountability, this measure aims to enhance public awareness and understanding of how AI is utilized within the government.

  3. Each federal agency must designate a chief AI officer tasked with overseeing all AI technologies employed by the agency. A critical aspect of responsible AI use is internal oversight, and this rule will ensure consistent and effective supervision of AI deployment across each agency.
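
To make the second requirement a bit more concrete, below is a minimal sketch of what a single entry in such a machine-readable AI inventory might look like. The guidance does not prescribe a schema; the field names, Python representation, and example values here are purely our own illustration.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AIUseCaseEntry:
    """One illustrative entry in an agency's annual AI inventory.

    The schema is hypothetical: the requirement is simply that agencies
    publish their AI use cases, associated risks, and risk-management
    strategies, not that they use these particular fields.
    """
    system_name: str
    agency: str
    purpose: str
    rights_or_safety_impacting: bool
    identified_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)

entry = AIUseCaseEntry(
    system_name="Radiology triage assistant",
    agency="Department of Veterans Affairs",
    purpose="Prioritize imaging studies for radiologist review",
    rights_or_safety_impacting=True,
    identified_risks=["Potential racially biased diagnostic outcomes"],
    mitigations=["Annual bias audit", "Human review of all flagged cases"],
)

# Publishing the annual list could be as simple as serializing
# each entry to JSON for posting online.
print(json.dumps(asdict(entry), indent=2))
```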

Developed in consultation with experts from various sectors, these new requirements come at a time when the American federal government has been rapidly deploying AI tools in a variety of use cases—AI is now in use for everything from monitoring volcanic activity to tracking wildfires to training immigration officers at DHS.

As AI use in public services will no doubt continue to expand, these guardrails will help make those services more effective while reducing the risk that such systems introduce new threats to national security.

Biden's Executive Order on AI

Of course, this is just the latest step taken by the Biden administration to promote the responsible use of AI.

Back in October 2023, the president responded to the growing importance of AI in national security, economic prosperity, and technological innovation by issuing a major executive order. The Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence directed federal agencies to:

  • Prioritize AI in their research and development agendas;
  • Establish guidelines for ethical AI use; and
  • Promote collaboration with private sector, academic, and international partners.

Though this EO and the aforementioned new rules for federal agencies that dropped in March 2024 reaffirm America's commitment to both AI innovation and respect for human rights and democratic values, there are limits to what the U.S. government can accomplish by executive action, and Congress has yet to pass new legislation that could set basic ground rules for the AI industry.

And while congressional action regarding AI isn’t expected in 2024, the new guardrails set forth by the Biden administration should help prevent government abuse of the technology while remaining in alignment with the broader, global goal of responsible governance.

The EU AI Act: A Landmark in AI Regulation

In contrast, the European Union (EU) did manage to pass a first-of-its-kind AI law in March 2024, giving the 27-nation bloc a leg up on the United States in regulating this critical and disruptive technology.

Now that final approval has been given—with 523 votes in favor, 46 against, and 49 abstentions—the EU's comprehensive AI Act has set an ambitious global standard for AI governance that aims to balance innovation with fundamental rights protection. The regulation takes a risk-based approach that will see AI systems categorized according to their risk level, from minimal and limited up to high and unacceptable.

While those classified as "unacceptable" will be banned outright, the higher the risk of your system, the stricter the requirements it will need to meet—AI systems in use within sectors such as employment, law enforcement, and critical infrastructure are expected to fall into the high-risk category and therefore face more demanding obligations. Open source AI, however, will generally be excluded from the regulation's scope.
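
For a rough sense of how that risk-based logic works, the sketch below maps a few example use cases to the Act's broad tiers. The tier assignments are simplified illustrations; the Act's actual classification rules are far more detailed and depend heavily on the context of use.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's broad risk tiers, highest to lowest."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict requirements (e.g., conformity assessment, oversight)"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unregulated"

# Simplified, illustrative mapping -- not the Act's legal test.
EXAMPLE_CLASSIFICATIONS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening tool used in hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_CLASSIFICATIONS.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```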

The Act is expected to come into full effect in May 2027, with a gradual rollout of requirements taking place ahead of that, and its ripple effects will be felt by organizations well beyond the borders of the EU as the AI regulatory landscape continues to evolve.

What Can You Do to Secure Your AI Now?

As these new requirements continue to drop, and as other nations—like Singapore and Canada—take still different, sector-specific approaches to governing AI, one way organizations can cope with all the changes is to adopt an established framework.

Not only will compliance with trusted standards position you well to adapt and meet the changing domestic/international laws regarding AI, but you’ll also be able to assure your customers that—in the meantime—you are taking steps to responsibly manage your systems amidst regulatory “chaos.”

NIST AI Risk Management Framework (AI RMF)

One such option is NIST's AI Risk Management Framework (AI RMF). Designed to help organizations across all sectors manage risks associated with the design, development, deployment, and use of AI systems, the AI RMF is a flexible guide that organizations can adapt to their specific needs and contexts.

In its aim to enhance overall public confidence in AI technologies, the framework encourages a continuous risk assessment and management process to promote the development of AI systems that are secure, transparent, and aligned with ethical principles and societal values.
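
To illustrate what that continuous process might look like in code, here is a minimal sketch of a risk register loop organized around the AI RMF's four core functions (Govern, Map, Measure, Manage). The class names, scoring scale, and threshold are hypothetical conventions of our own, not part of the framework itself.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    description: str
    likelihood: int  # 1 (rare) to 5 (near certain) -- assumed scale
    impact: int      # 1 (negligible) to 5 (severe) -- assumed scale

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

class RMFCycle:
    """Illustrative loop over the AI RMF core functions."""

    def __init__(self, threshold: int = 9):
        self.register: list[AIRisk] = []
        self.threshold = threshold  # assumed risk-acceptance cutoff

    def map_risk(self, risk: AIRisk) -> None:
        # MAP: identify and contextualize risks
        self.register.append(risk)

    def measure(self) -> list[AIRisk]:
        # MEASURE: assess and rank identified risks
        return [r for r in self.register if r.score >= self.threshold]

    def manage(self) -> None:
        # MANAGE: prioritize and respond to the highest-scoring risks
        for risk in self.measure():
            print(f"Mitigate (score {risk.score}): {risk.description}")

# GOVERN sets the policies and cadence under which this cycle reruns.
cycle = RMFCycle()
cycle.map_risk(AIRisk("Training data underrepresents rural patients", 4, 4))
cycle.map_risk(AIRisk("Chat logs retained longer than policy allows", 2, 3))
cycle.manage()
```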

Organizations that adopt the NIST AI RMF also have the option to have their implementations assessed by an independent third party, which can further affirm their efforts to maintain accountability, reliability, explainability, and fairness within their AI systems.

ISO 42001 Certification

Those more familiar with ISO standards—or those wanting to take a more stringent, comprehensive approach to AI management—may instead want to explore ISO 42001 certification.

ISO 42001 integrates with other ISO certifications you may already hold, and it takes the same structured, risk-based approach as those other ISO standards—here, you apply the requirements to the establishment, implementation, maintenance, and continual improvement of an artificial intelligence management system (AIMS).

Given ISO's sterling global reputation, the appeal of ISO 42001 certification is obvious, and many organizations have already begun the work to build an AIMS so that they can more holistically manage those systems while also enhancing their quality, security, reliability, and ethical use. If you're interested in pursuing this route as well, consider starting with a gap assessment, as such an investment bodes well for your eventual certification.
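
As a loose illustration of what a gap assessment tracks, the sketch below scores readiness against the major clause areas of ISO 42001's management-system structure (Clauses 4 through 10). The maturity scale and readiness logic are hypothetical conventions for illustration, not requirements of the standard.

```python
# Clause names follow ISO 42001's harmonized management-system
# structure; the 0-3 maturity scale below is our own convention.
AIMS_CLAUSES = [
    "Context of the organization",  # Clause 4
    "Leadership",                   # Clause 5
    "Planning",                     # Clause 6
    "Support",                      # Clause 7
    "Operation",                    # Clause 8
    "Performance evaluation",       # Clause 9
    "Improvement",                  # Clause 10
]

def readiness_report(maturity: dict[str, int]) -> None:
    """Print each clause area's score, flagging gaps below 2 of 3."""
    for clause in AIMS_CLAUSES:
        score = maturity.get(clause, 0)
        status = "ready" if score >= 2 else "GAP"
        print(f"{clause:30s} {score}/3  {status}")

readiness_report({
    "Context of the organization": 3,
    "Leadership": 2,
    "Planning": 1,  # e.g., AI risk assessment process not yet defined
    "Support": 2,
    "Operation": 1,
    "Performance evaluation": 0,
    "Improvement": 0,
})
```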

Moving Forward with Responsible AI Management

As artificial intelligence continues its immersion into society, the expanded use of AI—and the related expanded risks around keeping systems ethical, safe, and equitable—has also meant the emergence of different approaches to the technology's governance.

So far, America's Biden administration has taken several executive steps as it waits for Congress to catch up, while the EU has set the standard for sweeping regulation through its AI Act. As things will certainly continue to change on all fronts, regulators and industry players must seize opportunities for international collaboration to promote a harmonized, well-governed AI ecosystem and avoid further global disparity in future standards for the technology.

In the meantime, if you're interested in learning more about what you can do right now to instill and improve trust in your AI systems, contact us today so that our expert team can help you determine which solution is right for you.

About Danny Manimbo

Danny Manimbo is a Principal with Schellman based in Denver, Colorado. As a member of Schellman’s West Coast / Mountain region management team, Danny is primarily responsible for leading Schellman's AI and ISO practices as well as the development and oversight of Schellman's attestation services. Danny has been with Schellman for 10 years and has over 13 years of experience in providing data security audit and compliance services.