ISO Certifications | Artificial Intelligence
By:
Danny Manimbo
December 9th, 2024
Since the release of ISO 42001 in late December 2023, it’s been a year of discovery and education regarding this new flagship artificial intelligence (AI) standard in terms of determining its applicability, use case(s), and benefits to organizations. For those who have since determined ISO 42001 is the right framework for them, the next step has been to prepare for certification, and that involves more than a few steps.
Cybersecurity Assessments | Artificial Intelligence
By:
Avani Desai
November 13th, 2024
Even as AI systems become more advanced and enmeshed in daily operations, concerns regarding whether large language models (LLMs) are generating accurate and true information remain paramount throughout the business landscape. Unfortunately, the potential for AI to generate false or misleading information—often referred to as AI “hallucinations”—is very real, and though the possibility poses some significant cybersecurity challenges, there are ways organizations deploying this technology can mitigate the risks.
ISO Certifications | SOC Examinations | Artificial Intelligence
By:
Danny Manimbo
November 4th, 2024
For anyone immersed in digital technology, you know that artificial intelligence (AI) is all the rage right now, and for good reason, the use cases for this technology are growing all the time. But as AI continues to enmesh with daily life as well as business, security concerns have grown in parallel, as have questions regarding the implications on organizations and their ongoing compliance efforts. At the top of mind for many has been how AI factors into SOC 2 examinations.
Cybersecurity Assessments | Artificial Intelligence
By:
Sully Perella
October 31st, 2024
Artificial intelligence (AI)—you’ve heard of it, you’re likely using it, and you know it’s already used everywhere and its reach will only likely increase. These days, the term "AI" is thrown around frequently, but because this technology is actually made up of many different subsets that generally all get thrown under the umbrella of AI, it can sometimes lead to confusion.
ISO Certifications | Artificial Intelligence | ISO 42001
By:
Danny Manimbo
October 24th, 2024
Since its publication in December 2023, business leaders are still wrapping their heads around the ISO 42001 standard. The framework is designed to help any organization who provides, develops, or uses artificial intelligence (AI) products and services to do so in a trustworthy and responsible manner, guided by the requirements and safeguards that the standard defines—including clearly defining your AI role.
ISO Certifications | Artificial Intelligence | ISO 42001
By:
Megan Sajewski
October 21st, 2024
When seeking ISO 42001:2023 certification, you need to ensure that your artificial intelligence management system (AIMS) aligns with the standard’s framework clauses (4-10), each of which focuses on a specific facet—context, leadership, planning, support, operation, performance evaluation, and improvement.
Penetration Testing | Artificial Intelligence
By:
Cory Rey
October 17th, 2024
With proven real-life use cases, it’s a no-brainer that companies are looking for ways to integrate large language models (LLMs) into their existing offerings to generate content. A combination that’s often referred to as Generative AI, LLMs enable chat interfaces to have a human-like, complex conversation with customers and respond dynamically, saving you time and money. However, with all these new, exciting bits of technology come related security risks—some that can arise even at the moment of initial implementation.
Penetration Testing | Artificial Intelligence
By:
Josh Tomkiel
October 11th, 2024
Need for Secure LLM Deployments As businesses increasingly integrate AI-powered Large Language Models (LLMs) into their operations via GenAI (Generative AI) solutions, ensuring the security of these systems is on the top of everyone’s mind. "AI Red Teaming" (which is closer to Penetration Testing than a Red Team Assessment) is a methodology to identify vulnerabilities within GenAI deployments proactively. By leveraging industry-recognized frameworks, we can help your organization verify that your LLM infrastructure and execution is done securely.