How to Assess and Treat AI Risks and Impacts with ISO/IEC 42001:2023
ISO/IEC 42001:2023 is rapidly becoming the global standard for Artificial Intelligence (AI) governance. While it is a close cousin of ISO/IEC 27001:2022, ISO 42001 takes a more holistic approach to risk management for AI systems rather than focusing primarily on cyber and information security.
StackAware chose to implement ISO 42001 and subsequently performed the AI risk assessment, impact assessment, and risk treatment required to comply with the framework. When its complete AI Management System (AIMS) was ready, the company engaged Schellman, one of the first firms accredited for ISO 42001, to certify it.
In this blog post, written in collaboration with StackAware, we’ll detail how they built their AIMS, including how they satisfied the critical AI risk requirements outlined in clauses 6.1.2-6.1.4, before offering our assessor perspective on their efforts so that you can leverage this insight when establishing your own AIMS.
What are the ISO/IEC 42001:2023 Risk Requirements?
As its purpose is to assist organizations in ensuring that their AI systems are developed, deployed, and managed responsibly and securely, ISO 42001’s clauses 6.1.2-6.1.4 require organizations to perform three key activities:
- AI risk assessment
- AI impact assessment
- AI risk treatment
1. AI Risk Assessment
To achieve certification, an AIMS must document and follow a process that measures AI-related risks and their potential consequences to the organization, individuals, and society at large. This process must include an assessment of each risk's likelihood and impact, as well as a comparison against the organization's risk criteria and AI objectives.
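To make the likelihood-and-impact comparison concrete, here is a minimal Python sketch of one common approach: a qualitative 5x5 risk matrix scored against an acceptance threshold. The scales, threshold value, and scores shown are illustrative assumptions for this post, not StackAware's actual criteria, which must come from your own AIMS.

```python
from dataclasses import dataclass

# Assumed acceptance threshold for illustration; your risk criteria define the real one.
RISK_ACCEPTANCE_THRESHOLD = 10

@dataclass
class AIRisk:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        # Simple qualitative scoring: likelihood x impact on a 5x5 matrix.
        return self.likelihood * self.impact

    def requires_treatment(self) -> bool:
        # Risks above the acceptance threshold must be avoided, transferred,
        # or mitigated rather than simply accepted.
        return self.score > RISK_ACCEPTANCE_THRESHOLD

# Hypothetical scores for the risk categories discussed below.
risks = [
    AIRisk("Prompt injection", likelihood=4, impact=3),
    AIRisk("Unintended training", likelihood=3, impact=4),
    AIRisk("Unanticipated data retention", likelihood=2, impact=4),
]

for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    status = "TREAT" if risk.requires_treatment() else "within criteria"
    print(f"{risk.name}: score {risk.score} -> {status}")
```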
When StackAware performed its ISO 42001 risk assessment, the results revealed the organization’s vulnerability to AI-related cybersecurity risks from:
- Prompt injection
- Unintended training
- Unanticipated data retention
It also highlighted the potential for broader implications of StackAware’s AI use, including:
- Political bias
- Model collapse
- Third-party copyright infringement
2. AI Impact Assessment
ISO 42001 also requires a separate but related AI impact assessment that is focused on external entities—like groups of individuals and broader societies—rather than your organization and your AI-related objectives and use cases.
While the standard is less prescriptive in terms of how this impact assessment must be conducted, it can—and should—be used to inform your AI risk assessment.
Some issues StackAware identified as part of their AI impact analysis included:
- Legal, governmental, and public policy: StackAware is an OpenAI customer and uses the company’s Whisper application programming interface (API) for speech-to-text transcription (a minimal usage sketch appears after this list). Some of the potential public policy impacts of that technology include:
- The increased public access to written information, especially related to public proceedings such as trials.
- The risk of malicious actors spreading misinformation if they successfully evade OpenAI’s safety layers. Specifically, scammers or even state-sponsored groups could potentially cause confusion during key moments by mimicking the voices of government officials.
- Environmental sustainability: StackAware is also an avid user of OpenAI’s GPT-4 (generative pre-trained transformer) model. While information regarding GPT-4 is scarce, independent researchers estimate that training its predecessor, GPT-3, required the evaporation of 700,000 liters of clean freshwater, suggesting a major sustainability impact of continuous AI use.
- Economic disruption: Although AI in general has the potential to massively increase economic output over the coming decades, at the same time, it could also eliminate entire categories of occupations. This is something StackAware weighed against their extremely limited manpower and need to rapidly iterate.
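For context on the Whisper use case above, the following is a minimal sketch of calling OpenAI's hosted speech-to-text endpoint with the official `openai` Python SDK. The audio file name is a placeholder, and you should confirm current model names and data retention behavior against OpenAI's documentation before relying on this pattern.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# "meeting_audio.mp3" is a placeholder path for illustration.
with open("meeting_audio.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",  # OpenAI's hosted Whisper model
        file=audio_file,
    )

print(transcript.text)
```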
3. AI Risk Treatment
Once the organizational risks and broader impacts are clear, ISO 42001’s requisite next step is to treat them appropriately.
To do so, StackAware used the four traditional risk management approaches: accept, avoid, transfer, and mitigate (a lightweight risk register sketch follows the examples below). Some examples include:
- Accepted Risk: While OpenAI’s breach in early 2023 showed that cross-tenant data leakage is certainly a risk, StackAware decided that they didn’t have enough leverage to negotiate a single-tenant architecture with OpenAI. Because of the benefits of using the company’s products, StackAware accepted the possibility of it happening again.
- Avoided Risk: Because the company doesn’t train the underlying GPT-4 model on AI-generated material (or at all), StackAware avoided the risk of triggering a model collapse. With that said, there is some risk that OpenAI could do so of its own accord, with the same result, but StackAware accepted this marginal risk.
- Transferred Risk: Due to the uncertainty regarding the applicability of copyright law as it relates to generative AI, StackAware leveraged OpenAI’s indemnification provisions to transfer some litigation risk to them.
- Mitigated Risk: StackAware’s AI policy requires that employees and contractors opt out of training third-party AI models to the maximum extent possible. StackAware also leverages ChatGPT Team, which disables all training on user inputs. These measures reduce the risk of sensitive information being regurgitated to other OpenAI customers through unintended training.
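As a hypothetical illustration of how these four decisions might be captured in a lightweight risk register, consider the sketch below. The record structure and entries paraphrase the examples above for illustration only; they are not StackAware's actual records.

```python
from dataclasses import dataclass
from enum import Enum

class Treatment(Enum):
    ACCEPT = "accept"      # knowingly retain the risk
    AVOID = "avoid"        # eliminate the activity that creates the risk
    TRANSFER = "transfer"  # shift the risk to a third party
    MITIGATE = "mitigate"  # apply controls to reduce likelihood or impact

@dataclass
class RiskTreatmentRecord:
    risk: str
    treatment: Treatment
    rationale: str

# Illustrative entries paraphrasing the four examples above.
register = [
    RiskTreatmentRecord(
        "Cross-tenant data leakage at OpenAI",
        Treatment.ACCEPT,
        "No leverage for single-tenant architecture; benefits outweigh the risk.",
    ),
    RiskTreatmentRecord(
        "Model collapse from training on AI-generated material",
        Treatment.AVOID,
        "The underlying model is not trained at all.",
    ),
    RiskTreatmentRecord(
        "Copyright litigation from generative AI output",
        Treatment.TRANSFER,
        "Covered in part by OpenAI's indemnification provisions.",
    ),
    RiskTreatmentRecord(
        "Sensitive data regurgitated via unintended training",
        Treatment.MITIGATE,
        "Opt out of third-party training; use ChatGPT Team, which disables it.",
    ),
]

for record in register:
    print(f"[{record.treatment.value.upper()}] {record.risk}: {record.rationale}")
```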
To avoid, transfer, and mitigate these risks, StackAware applied all ISO 42001 Annex A controls. The company also further bolstered its AI-related cybersecurity posture by maintaining a vulnerability disclosure policy (VDP) that solicits confidential notifications from ethical hackers about potential flaws in StackAware networks or AI systems.
ISO 42001 Risk Management from the Assessor’s Perspective
AI is not inherently ‘good’ or ‘evil’, ‘fair’ or ‘biased’, ‘ethical’ or ‘unethical’, although it can be (or can seem) so. As with most things, there are advantages and drawbacks to this advanced technology.
While AI can facilitate positive progress—like the automation of difficult or dangerous jobs, faster and more accurate analysis of large data sets, advances in healthcare, and more—there are also concerns about AI’s potentially negative effects, including harm due to unwanted bias, environmental damage, and unwanted reductions in workforce.
To reassure consumers that they can trust your systems in light of these concerns, it’s imperative that organizations providing, developing, or using AI in the delivery of their services have robust AI risk management processes in place that foster transparency and trustworthiness.
StackAware’s experience in getting ISO 42001 certified demonstrates that the framework is a strong choice for this purpose, as it lays out the requirements for performing risk assessments, risk treatment (mitigation), and system impact assessments on AI systems. To get started, it may suit your organization to first have a gap assessment performed.
We’ve also identified some AI benefits for those that are already ISO 9001 certified or interested in achieving that certification—check our article on how ISO 9001 can assist in ISO 42001 certification for more details.
That being said, there are also other, complementary standards you can reference for additional guidance when performing these AI risk management efforts, such as:
- ISO/IEC 23894 (provides guidance on managing risks specific to AI)
- ISO/IEC 38507 (provides guidance on the governance implications of the use of AI)
- ISO/IEC DIS 42005 (still in draft form at the time of writing, but provides guidance on performing AI system impact assessments)
Moving Forward with Your ISO 42001 Certification
As AI use proliferates across the globe, ISO/IEC 42001:2023 continues its emergence as a key governance framework that can help organizations effectively and appropriately measure and treat the related impacts and risks while reaping the benefits AI offers.
Understanding clauses 6.1.2-6.1.4 of the standard will be critical to your certification, and hopefully, this insight into StackAware’s experience satisfying those requirements and achieving certification will aid in the build-out of your own AIMS.
If you’re interested in learning more about StackAware and how they help AI-powered companies manage cybersecurity, compliance, and privacy risk, book a call today. Or if you’d like to understand more about ISO 42001 certification and how Schellman may be the right fit for you too, contact us today.
About the Authors
Walter Haydock is the Founder and CEO of StackAware, which helps AI-powered companies manage cybersecurity, compliance, and privacy risk.
Danny Manimbo is a Principal with Schellman based in Denver, Colorado. As a member of Schellman’s West Coast / Mountain region management team, Danny is primarily responsible for leading Schellman's AI and ISO practices as well as the development and oversight of Schellman's attestation services. Danny has been with Schellman for 10 years and has over 13 years of experience in providing data security audit and compliance services.
About Schellman
Schellman is a leading provider of attestation and compliance services. We are the only company in the world that is a CPA firm, a globally licensed PCI Qualified Security Assessor, an ISO Certification Body, HITRUST CSF Assessor, a FedRAMP 3PAO, and most recently, an APEC Accountability Agent. Renowned for expertise tempered by practical experience, Schellman's professionals provide superior client service balanced by steadfast independence. Our approach builds successful, long-term relationships and allows our clients to achieve multiple compliance objectives through a single third-party assessor.