Schellman becomes The First ISO 42001 ANAB Accredited Certification Body!

Contact Us
Services
Services
Crypto and Digital Trust
Crypto and Digital Trust
Schellman Training
Schellman Training
ESG & Sustainability
ESG & Sustainability
AI Services
AI Services
About Us
About Us
Leadership Team
Leadership Team
Corporate Social Responsibility
Corporate Social Responsibility
Careers
Careers
Strategic Partnerships
Strategic Partnerships

Blog

The Schellman Blog

Stay up to date with the latest compliance news from the Schellman blog.

Blog Feature

Penetration Testing

By: Tyler Petersen
November 15th, 2024

Out of all the types of penetration testing we perform at Schellman, physical security is frequently overlooked due to the fact many compliance frameworks simply don’t mandate this type of testing. Of course protecting your physical infrastructure can be challenging. Many organizations struggle to identify and address vulnerabilities, leaving them vulnerable to theft, vandalism, and other threats. The good news is, you're already taking the right steps! By reading this, you're demonstrating a commitment to physical security.

Blog Feature

Penetration Testing

By: Austin Bentley
November 8th, 2024

Maybe it’s time for your yearly pen test. Or, maybe you’re building up your very own internal pen test team. Navigating this journey can be challenging, but we’re committed to making it easy for you. Fortunately, we bring a wealth of insight from our “other side of the table” perspective. This multipart series will prepare you for concerns on both sides of the table, so you can be certain you’re ready for your next engagement.

Blog Feature

Penetration Testing

By: Ryan Warren
November 1st, 2024

While many companies are moving to the cloud, it's still common to find Active Directory (AD) deployed locally in Windows environments. During internal network pen tests, I was pretty comfortable with lateral movement and privilege escalation (via missing patches or LLMNR/NBT-NS/IPv6, open network shares, etc.) but felt lacking in how I could leverage attacks against AD to provide more impact during the assessment. In my journey to get better at attacking AD, I was able to enroll in different free and paid courses. This blog post will provide you with an overview of the four I found to be most beneficial personally.

Blog Feature

Penetration Testing

By: Dan Groner
October 22nd, 2024

With so much business now being done online and digitally, much—if not most—of organizational security concerns focus on beefing up technical controls. But, in fact, the human element of cybersecurity is often where the most impactful failures occur.

Blog Feature

Penetration Testing | Red Team Assessments

By: Jonathan Garella
October 18th, 2024

Thinking Inside the Box Traditional red teaming approaches often focus on external threats—simulating how an outside attacker might breach a company’s defenses. This method is undeniably valuable, offering insights into how well an organization can withstand external cyberattacks. However, this "outside-in" perspective can sometimes overlook another aspect of security: the risks that arise from within the organization itself. While traditional red teaming is crucial for understanding external threats, thinking inside the box—examining internal processes, workflows, and implicit trusts—can reveal vulnerabilities that are just as dangerous, if not more so to an organization.

Blog Feature

Penetration Testing

By: Cory Rey
October 17th, 2024

With proven real-life use cases, it’s a no-brainer that companies are looking for ways to integrate large language models (LLMs) into their existing offerings to generate content. A combination that’s often referred to as Generative AI, LLMs enable chat interfaces to have a human-like, complex conversation with customers and respond dynamically, saving you time and money. However, with all these new, exciting bits of technology come related security risks—some that can arise even at the moment of initial implementation.

Blog Feature

Penetration Testing | Artificial Intelligence

By: Josh Tomkiel
October 11th, 2024

Need for Secure LLM Deployments As businesses increasingly integrate AI-powered Large Language Models (LLMs) into their operations via GenAI (Generative AI) solutions, ensuring the security of these systems is on the top of everyone’s mind. "AI Red Teaming" (which is closer to Penetration Testing than a Red Team Assessment) is a methodology to identify vulnerabilities within GenAI deployments proactively. By leveraging industry-recognized frameworks, we can help your organization verify that your LLM infrastructure and execution is done securely.

Blog Feature

Penetration Testing

By: Austin Bentley
October 4th, 2024

You’ve got a system that needs to be tested, but you’re not really certain about which environment the testing should occur in. Or, maybe you’re feeling uneasy about testing within production. Many have been in your exact same shoes in the past -- below, we’ll help assist you in making this important decision.

{