How to Incorporate AI Controls into Your SOC 2 Examination

Anyone immersed in digital technology knows that artificial intelligence (AI) is all the rage right now, and for good reason: the use cases for this technology are growing all the time. But as AI becomes further enmeshed in daily life and business, security concerns have grown in parallel, as have questions about the implications for organizations and their ongoing compliance efforts. Top of mind for many has been how AI factors into SOC 2 examinations.

As assessors who have been performing SOC examinations for over two decades, we know the standard well and have helped countless clients incorporate new technologies into their examinations, and AI is coming up more and more. Amid the release of ISO 42001 and the ongoing hyper-focus on AI governance, two of the more common questions we've been getting are:

  • How can my SOC 2 include AI-specific considerations?
  • How does ISO 42001 relate to SOC 2?

Since we’re also the first ISO 42001 ANAB-accredited Certification Body, we’re equipped to answer both, and we’ll do that in this blog. Read on to understand more about the relevance of SOC 2 criteria to AI implementations, how ISO 42001 certification is a more comprehensive option for demonstrating trustworthy AI use, and what organizations currently pursuing SOC 2 can do to demonstrate their AI controls in lieu of getting certified.

Do the SOC 2 Criteria Address AI?


While the SOC 2 trust services criteria were not designed to cover AI-specific risks, controls, and considerations in their entirety, there is certainly overlap, given that the SOC 2 categories cover the following areas relevant to AI:

  • Security;
  • Privacy;
  • Availability;
  • Confidentiality; and
  • Processing Integrity.

Security & Privacy

Given that security, like privacy, is always a top consideration with any technology, there are certainly AI considerations in these areas, e.g., securing access to large language models (LLMs), safeguarding the software development lifecycle (SDLC) of AI systems, and protecting the personally identifiable information (PII) processed by AI systems.
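
To make these ideas more concrete, below is a minimal, hypothetical Python sketch of what a pair of such controls could look like in practice: an access check before an LLM can be called, and redaction of PII before a prompt is processed. The token allow-list, regex patterns, and function names are illustrative assumptions, not SOC 2 requirements or any vendor's API.

```python
import re

# Hypothetical sketch only: a token allow-list stands in for real
# authentication, and naive regexes stand in for real PII detection.
AUTHORIZED_TOKENS = {"svc-analytics-01"}  # assumed allow-list of service identities

PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # US SSN-like strings
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),  # email addresses
]

def redact_pii(prompt: str) -> str:
    """Replace PII-looking substrings before the prompt reaches the model."""
    for pattern in PII_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

def submit_prompt(token: str, prompt: str) -> str:
    """Enforce access control, then pass only a sanitized prompt downstream."""
    if token not in AUTHORIZED_TOKENS:
        raise PermissionError("caller is not authorized to use the LLM")
    sanitized = redact_pii(prompt)
    # The actual LLM call would happen here; it's stubbed out for this sketch.
    return sanitized

print(submit_prompt("svc-analytics-01", "Reach me at jane.doe@example.com"))
# -> "Reach me at [REDACTED]"
```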

Availability & Confidentiality

There’s also no doubt that the availability of AI systems is relevant: because they’re often used to support critical tasks, monitoring them becomes essential to maintaining their accessibility.

Confidentiality is a similarly key concern, as AI applications that process sensitive information necessitate proper data retention and disposal practices.
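
As a simple illustration of the retention point, here's a hedged Python sketch in which records processed by an AI application are disposed of once they exceed a defined retention window. The 30-day window and the record structure are assumptions made for the example.

```python
from datetime import datetime, timedelta, timezone

RETENTION_WINDOW = timedelta(days=30)  # assumed policy; set per your retention schedule

def purge_expired(records: list[dict]) -> list[dict]:
    """Keep only records still within the retention window; in a real
    control, the rest would be securely deleted and each disposal
    logged as evidence for the audit trail."""
    now = datetime.now(timezone.utc)
    kept, disposed = [], []
    for record in records:
        if now - record["processed_at"] <= RETENTION_WINDOW:
            kept.append(record)
        else:
            disposed.append(record)
    for record in disposed:
        print(f"disposed record {record['id']} (processed {record['processed_at']:%Y-%m-%d})")
    return kept
```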

Processing Integrity

Because it’s generally only relevant to organizations involved in transaction processing (think payroll and tax return providers, claims processors, etc.), processing integrity is typically the least commonly included category in SOC 2 examinations.

That said, it’s very pertinent to AI systems, given that organizations would want to verify that the processing of transactions by those applications is complete, valid, accurate, timely, and authorized, and that invalid outputs or unexpected results (e.g., from hallucinations, data poisoning, or improper algorithmic configurations) are monitored and remediated when discovered.
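
By way of illustration, a processing integrity check around an AI application could look something like the hedged Python sketch below: each model output is validated against basic business rules, and anything invalid is routed to a remediation queue rather than passed through silently. The record fields and rules are assumptions made for the example.

```python
REQUIRED_FIELDS = {"claim_id", "amount", "status"}  # assumed output schema

def validate_output(result: dict) -> list[str]:
    """Return the integrity issues found in a single model output."""
    issues = []
    missing = REQUIRED_FIELDS - result.keys()
    if missing:
        issues.append(f"incomplete: missing {sorted(missing)}")
    amount = result.get("amount")
    if amount is not None and not isinstance(amount, (int, float)):
        issues.append("invalid: amount is not numeric")  # e.g., a hallucinated value
    elif isinstance(amount, (int, float)) and amount < 0:
        issues.append("unexpected: negative amount")
    return issues

def triage(results: list[dict]) -> tuple[list[dict], list[tuple[dict, list[str]]]]:
    """Split model outputs into accepted records and a remediation queue."""
    accepted, remediation_queue = [], []
    for result in results:
        issues = validate_output(result)
        if issues:
            remediation_queue.append((result, issues))  # flag for review and remediation
        else:
            accepted.append(result)
    return accepted, remediation_queue
```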

SOC 2 vs. ISO 42001 for AI


All that said, as relevant as the individual criteria categories are to AI systems, there are still areas unique to AI that are not necessarily covered by, or intended to be covered by, the existing SOC 2 categories mentioned above. Those areas include:

  • Fairness;
  • Bias;
  • Responsible and ethical use; and
  • Safety, among others.

Given these gaps, SOC 2 is not intended to be a comprehensive AI risk management framework. Neither are other popular security- and privacy-focused standards like ISO 27001 and ISO 27701. Rather, ISO 42001 was created to fill this void and cover critical AI risks.

ISO 42001 is a holistic, international management system standard solely focused on providing organizations with a way to demonstrate the responsible and trustworthy use of AI. With a comprehensive governance framework that integrates and mandates security, transparency, and ethical principles throughout the AI lifecycle, it is currently the optimal avenue for safeguarding AI systems.

But as anyone who has pursued an ISO certification knows, building the requisite management system is resource-intensive, and ISO 42001’s AI management system (AIMS) is no exception. That’s likely why some organizations have been asking whether SOC 2 is at all suitable for AI.

2 Solutions for Organizations Pursuing SOC 2 with AI Controls


The good news is that organizations that currently lack the resources to implement an AIMS and pursue ISO 42001 certification, but that do undergo annual SOC 2 examinations, can still take steps to demonstrate the implementation of their AI controls.

1. Re-Evaluate Your Current In-Scope Trust Services Categories and Underlying Controls

If your SOC 2 only covers Security at the moment, the first thing you can do is consider expanding to include other categories that may be relevant to the operation of your AI system.

You should also ensure that your current SOC 2 criteria include AI-specific controls, such as:

  • Access controls in the CC6 criteria;
  • Monitoring the AI system for intended use and remediating areas of unintended use or abuse under the CC4 criteria and the processing integrity category (see the sketch after this list);
  • SDLC controls in criterion CC8.1; and
  • Supply chain controls for (AI) vendors / suppliers in criterion CC9.2, etc.
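
For the monitoring item above, a minimal, hypothetical sketch might screen incoming prompts against a deny-list of out-of-scope uses and log any hits for review and remediation. The deny-list terms and logger configuration are assumptions, not prescribed controls.

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("ai-usage-monitor")

DENYLIST = ("credit decision", "medical diagnosis")  # assumed out-of-scope uses

def within_intended_use(user: str, prompt: str) -> bool:
    """Screen a prompt against the deny-list, logging hits for remediation."""
    hits = [term for term in DENYLIST if term in prompt.lower()]
    if hits:
        log.warning("possible unintended use by %s: matched %s", user, hits)
        return False
    return True
```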

2. Consider a SOC 2+ Report

Your other option is to expand to a SOC 2+ report. While a fairly unfamiliar concept to many organizations, this type of examination offers a lot of flexibility to stack various control frameworks into the testing of the design (Type 1) or operating (Type 2) effectiveness of controls within Section 4 of the report.

Historically, a variety of other security and privacy frameworks have been incorporated this way, and now the possible frameworks for SOC 2+ reports also include ISO 42001.

Yes, organizations looking to cover AI-specific risks not included in their current SOC 2 trust services categories could add testing of ISO 42001’s Annex A (38 controls in total) to Section 4 of their reports to demonstrate the AI controls they have in place, such as those pertaining to:

  • AI system impact assessments;
  • Data for AI systems; and
  • Responsible use of AI systems, etc.

Safeguarding Your AI Systems for the Future


Everything indicates that artificial intelligence is here to stay, and that outlook has organizations providing and using AI looking for ways to prove the trustworthiness of their applications. While ISO 42001 has emerged as the flagship AI standard (for now), given its specificity to the technology and its focus on all of AI’s nuances, not all organizations have the time and money it takes to build the required management system to secure their AI.

That’s why many are looking to adapt their annual SOC 2 examinations instead, and we’ve just given you two ideas for how to do that. If you’d like to learn more about your organization’s options for addressing your AI implementations, and how our team can streamline the additional testing and overarching compliance process, please contact us today.


About Danny Manimbo

Danny Manimbo is a Principal with Schellman based in Denver, Colorado. As a member of Schellman’s West Coast / Mountain region management team, Danny is primarily responsible for leading Schellman's AI and ISO practices as well as the development and oversight of Schellman's attestation services. Danny has been with Schellman for 10 years and has over 13 years of experience in providing data security audit and compliance services.