

The Schellman Blog

Stay up to date with the latest compliance news from the Schellman blog.

Penetration Testing | Artificial Intelligence | ISO 42001

By: Josh Tomkiel
November 3rd, 2025

Artificial intelligence isn't just changing how businesses operate; it's also changing how cybercriminals attack. As organizations rush to adopt AI systems, they face new security risks that traditional defenses can't handle.

Cybersecurity Assessments | Artificial Intelligence

By: Sully Perella
October 15th, 2025

People interact with artificial intelligence (AI) in many ways, but most commonly through written prompts, which are also the most familiar avenue for basic prompt-hacking techniques. The real concern for organizations, however, lies beyond these simple exploits, in sophisticated attacks targeting enterprise AI systems. In this article, we'll explain how an attacker can weaponize AI assistants to extract proprietary data, manipulate decision-making, and even infiltrate corporate networks.

Artificial Intelligence

By: Sully Perella
October 6th, 2025

If you thought developing and implementing your AI system was a challenge, just wait until you attempt to ensure your AI system complies with conflicting international laws simultaneously.

Artificial Intelligence | ISO 42001

By: Danny Manimbo
September 29th, 2025

As artificial intelligence becomes more deeply embedded in critical business decisions, strategies, and processes, it faces growing scrutiny from regulators, customers, and the public. While AI offers unprecedented opportunities for innovation and operational improvement, it also introduces new risks.

Artificial Intelligence | ISO 42001

By: Schellman
September 25th, 2025

Colorado is leading the charge in U.S. AI policy with its Consumer Protections for Artificial Intelligence law (SB24-205). Commonly referred to as the Colorado AI Act (CO AI Act), it is the first comprehensive state law regulating high-risk AI systems. Signed in May 2024, it sets a precedent for balancing innovation with consumer protection through requirements on transparency, accountability, and fairness.

Artificial Intelligence

By: Sully Perella
September 16th, 2025

The S&P study on generative AI asserts that “The percentage of companies abandoning the majority of their AI initiatives before they reach production has surged from 17% to 42% year over year, with organizations on average reporting that 46% of projects are scrapped between proof of concept and broad adoption.”

Artificial Intelligence | ISO 42001

By: Mike Somody
September 8th, 2025

Organizations are under increasing pressure to secure and govern their AI systems responsibly. Fortunately, industry frameworks are stepping in to help, including the Cloud Security Alliance (CSA) Artificial Intelligence Controls Matrix (AICM), which maps to the ISO 42001 standard for AI management systems. Together, these frameworks provide a powerful roadmap for aligning AI governance with established security and compliance practices.

Artificial Intelligence | ISO 42001

By: Danny Manimbo
August 18th, 2025

As the need for innovative artificial intelligence grows, regulatory bodies are working quickly to create frameworks that balance acceleration with safety, accountability, and trust. Notably, the European Union’s AI Act is poised to reshape how organizations approach AI governance, especially when it comes to general-purpose AI (GPAI) models.
