WEF AI Governance Alliance Briefing Papers: An Overview
In January 2024, the AI Governance Alliance—an arm of the World Economic Forum (WEF)—released a series of three papers covering several important artificial intelligence (AI) topics:
- Paper #1 – Presidio AI Framework: Towards Safe Generative AI Models
- Paper #2 – Unlocking Value from Generative AI: Guidance for Responsible Transformation
- Paper #3 – Generative AI Governance: Shaping a Collective Global Future
This release speaks to one of the hottest topics at this year's WEF annual meeting in Davos: the intersection of AI and cybersecurity, and how to build widespread trust in an incredibly promising technology that is becoming increasingly enmeshed in the business landscape.
As cybersecurity experts who keep up with the latest developments in AI, we'll use this blog post to highlight the key points from these papers so that you too can better understand where the technology is headed.
What is the AI Governance Alliance?
But first, let's establish the weight of these papers (and why you should care about their findings). After all, what exactly is the AI Governance Alliance?
Created as a new initiative by the WEF in June 2023, this coalition was brought together “to champion responsible global design and release of transparent and inclusive AI systems.” Comprising more than 250 members from 200 organizations, the Alliance brings together diverse leaders from cornerstone sectors, including industry, government, academia, and civil society. To serve its overarching purpose of addressing the many facets of AI, the Alliance has structured its focus around three core concepts:
- Safe Systems and Technologies
- Responsible Applications and Transformation
- Resilient Governance and Regulation
What are the AI Governance Alliance Briefing Papers?
As one of its first moves, the Alliance released this briefing paper series: a collection of insights on responsible AI development, implementation, and governance intended to guide decision-makers toward a transparent future where AI enhances societal progress.
Paper #1 – Presidio AI Framework: Towards Safe Generative AI Models
When developing any technology, best practice is to build in security by design rather than treat it as an afterthought, and in the first paper of its series, the Alliance argues that AI should be no different.
To minimize risk, the Presidio AI Framework recommends shared responsibility among stakeholders so that all available knowledge is leveraged to protect against threats to AI systems throughout the development lifecycle, which the framework breaks down into six phases:
| Phase | What Happens? |
|---|---|
| Data Management Phase | Consideration of data access and data source(s) |
| Foundation Model Building Phase | Design, data acquisition, model training, etc. of an AI system |
| Foundation Model Release Phase | Determination of access to the model (e.g., fully closed, API, fully open) |
| Model Adaptation Phase | Adaptation of a model to perform generative AI tasks |
| Model Integration Phase | Integration of the model into an application |
| Model Usage Phase | Engagement with the model by users via natural language prompts |
To mitigate risk and avoid scrambling in response to an incident, the framework urges the implementation of “guardrails” throughout these phases, which also help ensure the safety and trustworthiness of your AI systems.
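To make the idea concrete, here is a minimal, hypothetical sketch of what phase-gated guardrails could look like in code. The phase names mirror the framework's lifecycle, but every function, check, and state key below is our own illustration, not anything prescribed by the papers.

```python
# Hypothetical sketch of phase-gated guardrails; phase names mirror the
# Presidio AI Framework lifecycle, but the checks are illustrative only.
from enum import Enum, auto
from typing import Callable

class Phase(Enum):
    DATA_MANAGEMENT = auto()
    FOUNDATION_MODEL_BUILDING = auto()
    FOUNDATION_MODEL_RELEASE = auto()
    MODEL_ADAPTATION = auto()
    MODEL_INTEGRATION = auto()
    MODEL_USAGE = auto()

# A guardrail inspects the project state and returns True when it passes.
Guardrail = Callable[[dict], bool]

# Illustrative checks only -- not taken from the WEF papers.
GUARDRAILS: dict[Phase, list[Guardrail]] = {
    Phase.DATA_MANAGEMENT: [
        lambda s: s.get("data_sources_documented", False),
    ],
    Phase.FOUNDATION_MODEL_BUILDING: [
        lambda s: s.get("training_data_filtered", False),
        lambda s: s.get("safety_evaluations_run", False),
    ],
    Phase.FOUNDATION_MODEL_RELEASE: [
        # e.g., fully closed, API-gated, or fully open
        lambda s: s.get("access_model_decided", False),
    ],
}

def advance(phase: Phase, state: dict) -> None:
    """Block progression to the next phase until every guardrail passes."""
    failed = [g for g in GUARDRAILS.get(phase, []) if not g(state)]
    if failed:
        raise RuntimeError(f"{len(failed)} guardrail(s) failed in {phase.name}")
    print(f"{phase.name}: all guardrails passed")

# Example: the data management phase passes once sources are documented.
advance(Phase.DATA_MANAGEMENT, {"data_sources_documented": True})
```

The design choice worth noting is that progression is blocked, not merely logged: a failed guardrail stops the lifecycle until the underlying issue is addressed, which is the proactive posture the framework describes.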
Despite recommending these procedural and technical safety measures at every step of the way, the paper only provides specific guardrail strategies for three phases:
- Foundation Model Building Phase
- Foundation Model Release Phase
- Model Adaptation Phase
With this proactive guardrail approach, the framework aims to set the stage for more ethical AI development that keeps downstream stakeholders in mind during all phases, both by securing systems and by creating detailed documentation that makes issues easier to troubleshoot when they arise.
Paper #2 – Unlocking Value from Generative AI: Guidance for Responsible Transformation
If you’ve been keeping up with the news, you likely know that use cases for AI are multiplying rapidly, whether in the medical, art, or technology industries; the possibilities for artificial intelligence seem almost endless.
In their second paper, the Alliance groups these AI use cases into three categories:
- Enhancing enterprise productivity;
- Creating new products or services; and
- Redefining industries and societies.
However, even if AI has a possible use case, the paper argues that before moving forward with actual implementation, the value of a potential AI system should be evaluated across three elements:
- Business impact
- Operational readiness
- Investment strategy (e.g., do you have the financial resources available to integrate AI into your organization?)
Once you’ve answered these questions, you can better decide whether implementing AI is suitable. If it is, you must move forward responsibly, as careless implementation of AI systems can lead to job displacement, the spread of misinformation through AI hallucinations, and sustainability issues due to the large amount of energy needed to train AI models.
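As a thought experiment, this evaluation can be framed as a simple go/no-go checklist. In the sketch below, the three element names come from the briefing paper, but the individual checks are hypothetical placeholders you would replace with your organization's own questions.

```python
# Hypothetical go/no-go checklist over the paper's three value elements.
# Element names come from the briefing paper; the checks are placeholders.
VALUE_ELEMENTS: dict[str, list[str]] = {
    "business_impact": [
        "Expected benefit is quantified",
        "Benefit outweighs projected cost",
    ],
    "operational_readiness": [
        "Staff are prepared for the new workflow",
        "Data and infrastructure are in place",
    ],
    "investment_strategy": [
        "Funding covers integration and ongoing operation",
    ],
}

def ready_to_implement(answers: dict[str, list[bool]]) -> bool:
    """Proceed only if every question under every element is answered yes."""
    for element, checks in VALUE_ELEMENTS.items():
        results = answers.get(element, [])
        if len(results) != len(checks) or not all(results):
            print(f"Not ready: {element}")
            return False
    return True

# Example: all questions answered affirmatively, so implementation proceeds.
print(ready_to_implement({
    "business_impact": [True, True],
    "operational_readiness": [True, True],
    "investment_strategy": [True],
}))  # -> True
```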
To decrease these and other risks as you harness generative AI’s benefits, focus particularly on strategies addressing the following during implementation:
- Accountability (multi-stakeholder governance, distributed ownership)
- Trust (transparency, education)
- Challenges to scale (organization-wide implementation of AI)
- Human impact (considerations of an evolving workforce, impact on employees)
Paper #3 – Generative AI Governance: Shaping a Collective Global Future
While the first two papers speak to AI development and implementation on an organizational scale, the last focuses on how to address AI governance, describing four possible approaches:
- Risk-based – classifying and prioritizing risks of harm caused by AI systems
- Rules-based – detailed rules, standards, or requirements for AI systems
- Principles-based – principles or guidelines for AI systems
- Outcomes-based – measured outcomes that do not enforce specific processes
However, the Alliance also stresses that this should be done globally rather than by individual nations. As you can imagine, if every country implemented its own AI frameworks and laws, the result would quickly be incompatibility and confusion. What if one nation took a rules-based approach to AI while another took an outcomes-based approach?
To avoid such clashes and develop an international governance framework that works for all, collaboration among diverse perspectives, such as those of stakeholders in academia, industry, government, and other groups, will be key. Even more important will be a willingness to compromise among all parties involved, including those in the Global South, who this paper stresses should have input into AI development and governance decisions.
Though the Global South has not historically been associated with technological advancement due to gaps in infrastructure, data, and technology talent, the Alliance argues that excluding these nations would create a greater power imbalance and widen the digital divide. For AI to reach its full potential, all parties must have access to its power. Including these countries would not only bring even more diverse perspectives to help shape the global framework, but their involvement could also improve citizens’ lives through the implementation of AI in areas like healthcare and education.
Next Steps for Your AI Systems
Though the future of AI remains in flux as both the technology and the standards governing its use continue to rapidly evolve, the WEF’s AI Governance Alliance—and the insight within its series of three briefing papers—can still serve as a helpful guiding light for the development, implementation, and governance of artificial intelligence.
That being said, these papers aren’t the only available resources for those looking to proactively build trust in their AI systems—check out our articles on these other documents and published frameworks that can also help shape a more responsible approach to AI:
- Should You Get an ISO 42001 Gap Assessment?
- An Explanation of the Guidelines for Secure AI System Development
- NIST's AI Risk Management Framework Explained
Should you have any further questions regarding the current landscape of AI regulations and how your organization can get started in validating the security of your systems, contact us today to speak with our dedicated AI team.
About Jerrad Bartczak
Jerrad Bartczak is a Senior Associate with Schellman based in New York. In his work ensuring that clients maintain an effective system of controls within their organization, he has gained experience conducting HITRUST, SOC 1, SOC 2, and HIPAA audits, and he maintains CISA, CCSFP, and CCSK certifications.