AI Compliance Path: Mapping HITRUST AI Security Assessment to ISO 42001
Artificial Intelligence | HITRUST | ISO 42001
Published: Apr 15, 2025
As AI continues to transform industries worldwide and organizations innovate with AI in their regular practice, those organizations face growing pressure to demonstrate that their AI systems are secure, trustworthy, and responsible. With regulatory scrutiny and public concern over widespread AI use on the rise, aligning with established frameworks and standards has become essential for maintaining credibility and mitigating risk.
Two notable frameworks have emerged to help organizations navigate the complex world of AI compliance and governance: the HITRUST AI Security and Assurance Program and ISO 42001, the first international standard focused on AI management systems. In this article, we’ll explain what the HITRUST AI Security Assessment involves, including the scoring breakdown and how it differs from the HITRUST AI RM Assessment, as well as how the HITRUST AI Security Assessment maps to ISO 42001 for organizations looking to advance toward that certification.
What is the HITRUST AI Security Assessment?
In late November 2024, HITRUST announced the addition of the HITRUST AI Security Assessment to its offerings. This assessment adds up to 44 AI-specific additional requirements to a HITRUST r2, e1, or i1 assessment. With the selection of the “Security for AI Systems” factor and an additional report credit in MyCSF, AI providers can be on their way to certifying their AI systems.
It is important to note that the up to 44 AI-specific requirements are added on top of each assessment's base requirements: 44 for the e1, 182 for the i1, and a count that varies for the r2 based on an organization's tailoring questions. The exact number of additional AI requirements depends on the organization's AI model and how its data is used.
To gain a better idea of the number to expect for your specific organization, these are the three components HITRUST considers when tailoring your assessment:
- Type of AI model(s) used:
  - Rule-based AI model (+27 requirements)
  - Non-generative ML model (+36 requirements)
  - Generative AI model (+41 requirements)
- Was covered/confidential information used to train, tune, or enhance the model? (+3 requirements)
- Is the model confidential to your organization (not open source)? (+2 requirements)
As such, if your organization uses a generative AI model trained with covered and/or confidential information, you could expect to see all 44 potential requirements. However, if your organization uses an open-source, rule-based AI model that was not trained with covered or confidential data, you would see only 27 additional requirements, which is the minimum possible based on these tailoring selections.
Now, you may be wondering: what if I am using a generative AI model, with confidential information used to tune it, and the model is confidential to the organization? Wouldn't that add 46 requirements according to the math above? Not quite. Two of the requirements potentially added by the last two questions are duplicates, covering encryption and data minimization. Therefore, if the answer to both of those questions is "yes," the two questions together add only three unique requirements, for a total of 44. The sketch below makes this tailoring arithmetic concrete.
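The following is a minimal, illustrative sketch only; the function name and structure are our own, and actual tailoring is driven by the questionnaire in MyCSF:

```python
# Illustrative sketch of the HITRUST AI Security Assessment tailoring counts
# described above. Actual tailoring is performed via the questions in MyCSF.

MODEL_BASE = {
    "rule-based": 27,      # Rule-based AI model
    "non-generative": 36,  # Non-generative ML model
    "generative": 41,      # Generative AI model
}

def added_ai_requirements(model_type: str, covered_data: bool, confidential_model: bool) -> int:
    """Estimate how many AI-specific requirements the tailoring questions add."""
    total = MODEL_BASE[model_type]
    if covered_data:
        total += 3  # covered/confidential data used to train, tune, or enhance
    if confidential_model:
        total += 2  # model is confidential to the organization (not open source)
    if covered_data and confidential_model:
        total -= 2  # the two duplicate requirements (encryption, data minimization)
    return total

print(added_ai_requirements("generative", True, True))    # 44 -- the maximum
print(added_ai_requirements("rule-based", False, False))  # 27 -- the minimum
```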
Scoring of the HITRUST AI Security Assessment
Now for likely the most pressing question: what does an organization need to pass the HITRUST AI Security Assessment and gain certification? For an ai1 assessment (the assessment used when an organization adds the factor onto an e1 or i1 assessment), the average score of all AI security requirements in the organization's tailored requirement set must be a minimum of 83. For an ai2 assessment (when the factor is selected on an r2 assessment), that average score must be a minimum of 62.
Let’s dive into a high-level overview of how HITRUST scoring works. An ai1 assessment is scored in the same manner as an i1 or e1 assessment, just isolated to the AI requirements, so it only tests the implemented control maturity for each requirement. Each requirement receives a score of 0-100%, in 25% increments only, and the average of those scores must be a minimum of 83% to pass.
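As a quick illustration of the ai1 math (the requirement scores below are hypothetical):

```python
# ai1 scoring: implemented maturity only, each requirement scored in 25% increments.
implemented_scores = [100, 75, 100, 50, 100, 75]  # hypothetical requirement scores

average = sum(implemented_scores) / len(implemented_scores)
print(f"ai1 average: {average:.1f} -> {'pass' if average >= 83 else 'fail'}")
# ai1 average: 83.3 -> pass
```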
Scoring for the ai2 assessment follows the same process as scoring an r2 assessment, once again isolated to the AI requirements. Where the ai1 tests only the implemented level, the ai2 tests the full control maturity model, which includes policy, procedure, implemented, measured, and managed.
Each maturity level is weighted as follows:
- Policy – 15%
- Procedure – 20%
- Implemented – 40%
- Measured – 10%
- Managed – 15%
Each maturity level can receive a score of 0, 25, 50, 75, or 100%. The maturity level scores are then combined, using the weights above, into an overall requirement score, and the average of these overall scores across all the AI requirements must be a minimum of 62 to pass. The sketch below shows the arithmetic.
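Here is a minimal sketch of that weighted calculation, using two hypothetical requirements (the scores are invented for illustration):

```python
# ai2 scoring: each requirement's maturity level scores are weighted, then the
# overall requirement scores are averaged across all AI requirements.
WEIGHTS = {"policy": 0.15, "procedure": 0.20, "implemented": 0.40,
           "measured": 0.10, "managed": 0.15}

def requirement_score(levels: dict[str, int]) -> float:
    """Combine 0/25/50/75/100 maturity level scores into one requirement score."""
    return sum(WEIGHTS[level] * score for level, score in levels.items())

# Hypothetical maturity scores for two AI requirements
req1 = requirement_score({"policy": 100, "procedure": 75, "implemented": 75,
                          "measured": 25, "managed": 25})  # 66.25
req2 = requirement_score({"policy": 75, "procedure": 75, "implemented": 75,
                          "measured": 0, "managed": 25})   # 60.0

average = (req1 + req2) / 2
print(f"ai2 average: {average:.2f} -> {'pass' if average >= 62 else 'fail'}")
# ai2 average: 63.12 -> pass
```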
Key scoring considerations to keep in mind:
- To gain certification for the HITRUST AI Security Assessment, the score is an average of all the AI requirements across all domains (up to 44 requirements). This differs from the regular e1, i1, and r2 HITRUST assessments, where a passing score is needed in each domain to receive certification.
- Certification for the HITRUST AI Security Assessment can only be granted if the organization also passes the associated e1, i1, or r2 assessment it combined the AI assessment with.
For a more detailed description of the scoring for a HITRUST r2 assessment, please refer to this article, which breaks down the scoring rubric.
HITRUST AI Security Assessment vs. HITRUST AI RM Assessment
The HITRUST AI Security Assessment arrived a few months after the HITRUST AI Risk Management (AI RM) Assessment. So, how exactly do the two differ?
| | HITRUST AI Security Assessment | HITRUST AI RM Assessment |
|---|---|---|
| Number of Requirements | Up to 44 additional requirements added onto an e1, i1, or r2 | 51 |
| Certification? | Yes | No |
| Standalone? | No | Yes |
Potential Paths for AI Certification
There are many paths that an organization may take to achieve certification for their AI systems.
For organizations who are unsure of the span of their AI system security, it may be wise to begin with a standalone AI-specific assessment such as the HITRUST AI Risk Management (AI RM) Assessment. This assessment can be a good starting point to assess an organization’s risk within their AI processes, policies, and systems while testing compliance against several frameworks.
For organizations already conducting a HITRUST assessment, it may be beneficial to pursue the HITRUST AI Security Assessment by adding the applicable factor. Or, once again, if the organization is unsure where it stands, it can begin with the AI RM assessment. For organizations that are not conducting a HITRUST assessment and are outside of the healthcare space, it may be beneficial to pursue an ISO 42001 certification.
All in all, it comes down to the specific needs of the organization, but regardless, organizations can rest assured that Schellman can assist with whichever route is taken given our comprehensive suite of AI services.
How to Advance to ISO 42001
As mentioned above, depending on an organization’s environment and tailoring selections within MyCSF, different sets of requirements can apply. However, looking at all 44 potential requirements that could be added during a HITRUST AI Security Assessment, 12 map to ISO 42001 requirements. This can give an organization confidence that it is on the right track if it is interested in progressing from the HITRUST AI Security Assessment to an ISO 42001 certification.
Mapping: HITRUST AI Security Assessment to ISO 42001
Reference the mapping below for a detailed view of how applicable HITRUST AI Security Assessment requirements correspond to ISO 42001 requirements. It should be acknowledged that while a HITRUST AI Security Assessment requirement may map to an ISO/IEC 42001:2023 requirement and/or Annex A control, the aspects covered in the HITRUST assessment may not fully cover every portion of the ISO 42001 requirement or control. As HITRUST mentions in their documentation, these requirements are “a complement to, not a replacement for, ISO/IEC 42001:2023”.
HITRUST requirement: The organization performs security assessments (e.g., AI red teaming, penetration testing) of the AI system which include consideration of AI-specific security threats (e.g., poisoning, model inversion) (1) prior to deployment of new models, (2) prior to deployment of new or significantly modified supporting infrastructure (e.g., migration to a new cloud-based AI platform), and (3) regularly (at least annually) thereafter. The organization (4) takes appropriate risk treatment measures (including implementing any additional countermeasures) deemed necessary based on the results.

ISO/IEC 42001:2023 mapping:
- 8. Operation > Operational planning and control > Paragraph 3
- Clause 6.1.3 AI risk treatment: Taking the risk assessment results into account, the organization shall define an AI risk treatment process to: a) select appropriate AI risk treatment options; b) determine all controls that are necessary to implement the AI risk treatment options chosen and compare the controls with those in Annex A to verify that no necessary controls have been omitted.
- A.6.2.4 AI system verification and validation: The organization shall define and document verification and validation measures for the AI system and specify criteria for their use.
- A.6.2.5 AI system deployment: The organization shall document a deployment plan and ensure that appropriate requirements are met prior to deployment.
- A.6.2.6 AI system operation and monitoring: The organization shall define and document the necessary elements for the ongoing operation of the AI system. At the minimum, this should include system and performance monitoring, repairs, updates, and support.

HITRUST requirement: The organization formally defines the roles and responsibilities for the (1) governance, (2) security, and (3) risk management of the organization's deployed AI systems within the organization (e.g., by extending a pre-existing RACI chart or creating a new one specific to AI). The organization formally (4) assigns human accountability for the actions performed by, outputs produced by, and decisions made by the organization's deployed AI systems.

ISO/IEC 42001:2023 mapping:
- 5. Leadership > 5.3. Roles, responsibilities, and authorities
- Annex A > A.3. Internal organization > A.3.2. AI roles and responsibilities
- A.4.6 Human resources: As part of resource identification, the organization should document information about the human resources and their competences utilized for the development, deployment, operation, change management, maintenance, transfer and decommissioning, as well as verification and integration of the AI system.
- A.10.2 Allocating responsibilities: The organization shall ensure that responsibilities within their AI system life cycle are allocated between the organization, its partners, suppliers, customers and third parties.

HITRUST requirement: As appropriate to the organization's AI deployment context, the stated scope and contents of the organization's written policies (in areas including but not limited to (1) security administration, (2) data governance, (3) software development, (4) risk management, (5) incident management, (6) business continuity, and (7) disaster recovery) explicitly include the organization's AI systems and their AI specificities.

ISO/IEC 42001:2023 mapping:
- 5. Leadership > 5.2. AI policy
- Annex A > A.2. Policies related to AI > A.2.2. AI policy
- Annex A > A.2. Policies related to AI > A.2.3. Alignment with other organizational policies
- 4.3 Determining the scope of the AI management system: The organization shall determine the boundaries and applicability of the AI management system to establish its scope.

HITRUST requirement: The organization provides training no less than annually on AI security topics (e.g., vulnerabilities, threats, organizational policy requirements) for all teams involved in AI software and model creation and deployment, including (as applicable) (1) development, (2) data science, and (3) cybersecurity personnel.

ISO/IEC 42001:2023 mapping:
- 7. Support > 7.2. Competence
- 7. Support > 7.3. Awareness

HITRUST requirement: Changes to AI models (including upgrading to new model versions and moving to completely different models) are consistently (1) documented, (2) tested, and (3) approved in accordance with the organization's software change control policy prior to deployment. When upgrading to a newer version of an externally developed model, the organization (4) obtains and reviews the release notes describing the model's update.

ISO/IEC 42001:2023 mapping:
- 8. Operation > Operational planning and control > Paragraph 5
- Annex A > A.6. AI system life cycle > A.6.2.2. AI system requirements and specification
- Annex A > A.6. AI system life cycle > A.6.2.3. Documentation of AI system design and development
- Annex A > A.6. AI system life cycle > A.6.2.4. AI system verification and validation
- Annex A > A.6. AI system life cycle > A.6.2.5. AI system deployment
- A.6.2.7 AI system technical documentation: The organization shall determine what AI system technical documentation is needed for each relevant category of interested parties, such as users, partners, supervisory authorities, and provide the technical documentation to them in the appropriate form.
- A.10.3 Suppliers: The organization shall establish a process to ensure that its usage of services, products or materials provided by suppliers aligns with the organization's approach to the responsible development and use of AI systems.

HITRUST requirement: Changes to language model tools such as agents and plugins are consistently (1) documented, (2) tested, and (3) approved in accordance with the organization's software change control policy prior to deployment.

ISO/IEC 42001:2023 mapping:
- 8. Operation > Operational planning and control > Paragraph 5
- Annex A > A.6. AI system life cycle > A.6.2.2. AI system requirements and specification
- Annex A > A.6. AI system life cycle > A.6.2.3. Documentation of AI system design and development
- Annex A > A.6. AI system life cycle > A.6.2.4. AI system verification and validation
- Annex A > A.6. AI system life cycle > A.6.2.5. AI system deployment
- A.6.2.7 AI system technical documentation: The organization shall determine what AI system technical documentation is needed for each relevant category of interested parties, such as users, partners, supervisory authorities, and provide the technical documentation to them in the appropriate form.

HITRUST requirement: Documentation of the overall AI system discusses the creation, operation, and lifecycle management of any (1) models, (2) datasets (including data used for training, tuning, and prompt enhancement via RAG), (3) configurations (e.g., metaprompts), and (4) language model tools such as agents and plugins maintained by the organization, as applicable. Documentation of the overall AI system also describes the (5) tooling resources (e.g., AI platforms, model engineering environments, pipeline configurations), (6) system and computing resources, and (7) human resources needed for the development and operation of the AI system.

ISO/IEC 42001:2023 mapping:
- Annex A > A.4. Resources for AI systems (all items)
- A.6.2.7 AI system technical documentation: The organization shall determine what AI system technical documentation is needed for each relevant category of interested parties, such as users, partners, supervisory authorities, and provide the technical documentation to them in the appropriate form.

HITRUST requirement: The organization (1) performs an assessment to identify and evaluate its compliance with applicable legal and regulatory requirements addressing the development and deployment of AI systems, including potential liability for harmful, infringing, or damaging outputs or behaviors. This assessment is performed (2) prior to deployment of the AI system and (3) regularly (at least annually) thereafter.

ISO/IEC 42001:2023 mapping:
- 4. Context of the organization > 4.1. Understanding the organization and its context > Note 2, bullet a, sub-bullet 1
- 6.1.4 AI system impact assessment

HITRUST requirement: Agreements between the organization and external commercial providers of AI system components and services clearly communicate the organization's AI security requirements, including agreements with providers of AI (1) models, (2) datasets, (3) software packages, (4) platforms and computing infrastructure, (5) language model tools such as agents and plugins, and (6) contracted AI system-related services (e.g., outsourced AI system development), as applicable.

ISO/IEC 42001:2023 mapping:
- Annex A > A.10. Third-party and customer relationships > A.10.2. Allocating responsibilities
- Annex A > A.10. Third-party and customer relationships > A.10.3. Suppliers

HITRUST requirement: The AI system logs all inputs (prompts, queries, inference requests) to and outputs (inferences, responses, conclusions) from the AI model, including (1) the exact input (e.g., the prompt, the API call), (2) the date and time of the input, (3) the user account making the request, (4) where the request originated, (5) the exact output provided, and (6) the version of the model used. AI system logs are (7) managed (i.e., retained, protected, and sanitized) in accordance with the organization's policy requirements.

ISO/IEC 42001:2023 mapping:
- Annex A > A.6. AI system life cycle > A.6.2.8. AI system recording of event logs

HITRUST requirement: The organization maintains a documented inventory of data used to (1) train, test, and validate AI models; (2) fine-tune AI models; and (3) enhance AI prompts via RAG, as applicable. At minimum, this inventory contains the data (4) provenance and (5) sensitivity level (e.g., protected, confidential, public). This inventory is (6) periodically (at least semiannually) reviewed and updated.

ISO/IEC 42001:2023 mapping:
- Annex A > A.7. Data for AI systems > A.7.3. Acquisition of data
- Annex A > A.7. Data for AI systems > A.7.5. Data provenance
- A.4.3 Data resources: As part of resource identification, the organization shall document information about the data resources utilized for the AI system.
- A.7.2 Data for development and enhancement of AI system: The organization shall define, document and implement data management processes related to the development of AI systems.

HITRUST requirement: The organization's established security incident detection and response processes address the detection of and recovery from AI-specific threats (e.g., poisoning, evasion) through (1) updates to the organization's security incident plans / playbooks; (2) consideration of AI-specific threats in security incident tabletop exercises; (3) recording the specifics of AI-specific security incidents that have occurred; and incorporating (4) logs and (5) alerts from deployed AI systems into the organization's monitoring and security incident detection tools.

ISO/IEC 42001:2023 mapping:
- Annex A > A.8. Information for interested parties of AI systems > A.8.4. Communication of incidents
- A.3.3 Reporting of concerns: The organization shall define and put in place a process to report concerns about the organization's role with respect to an AI system throughout its life cycle.
Moving Forward in Your AI Compliance Journey
The need for structured, trustworthy, and aligned AI governance has never been greater. By using HITRUST to establish a strong baseline of responsible AI use, and aligning with ISO 42001 for long-term governance, organizations can create a comprehensive and forward-thinking approach to AI compliance. This layered AI strategy not only demonstrates regulatory compliance but also helps build public and stakeholder trust and confidence in the organization's use of AI systems.
If you’re ready to begin your HITRUST + AI Certification or ISO 42001 Certification journey today, or have additional questions about the process or requirements, Schellman can help. Contact us today to learn more about our services and we’ll get back to you shortly.
About Jerrad Bartczak
Jerrad Bartczak is a Senior Associate – AI within the AI Practice at Schellman, based in New York. He specializes in AI assessments including ISO 42001 and HITRUST + AI, while staying current on worldwide AI compliance and governance developments. He also possesses in-depth compliance knowledge cultivated through years of experience conducting HITRUST, SOC 1, SOC 2, DEA EPCS and HIPAA audits. Jerrad maintains CISSP, CISA, CCSFP, CCSK and Security+ certifications.