FedRAMP | Penetration Testing | Red Team Assessments
By:
Clint Mueller
December 16th, 2024
Since the beginning of 2024, FedRAMP Revision 5 has mandated that organizations not only perform traditional penetration tests, but also undergo comprehensive red team engagements. This new requirement reflects a broader emphasis on assessing not just technical vulnerabilities, but also the effectiveness of an organization's overall security posture, including its response to sophisticated, realistic threats. Over the past year, we've conducted many red team exercises, each tailored to a different organizational environment and threat landscape. These engagements have varied significantly in scope and complexity, offering us a wealth of insights into both our successes and the challenges we've faced.
By:
Gabriel Rivera
December 4th, 2024
Among the Sektor7 Institute's several offerings related to evasion, privilege escalation, malware development, and persistence, cybersecurity professionals of various disciplines, from red team operators to incident responders, can all find something of value in the RED TEAM Operator: Windows Evasion Course.
By:
Tyler Petersen
November 15th, 2024
Of all the types of penetration testing we perform at Schellman, physical security is the most frequently overlooked, largely because many compliance frameworks simply don't mandate this type of testing. Of course, protecting your physical infrastructure can be challenging, and many organizations struggle to identify and address weaknesses, leaving them exposed to theft, vandalism, and other threats. The good news is, you're already taking the right steps: by reading this, you're demonstrating a commitment to physical security.
By:
Austin Bentley
November 8th, 2024
Maybe it's time for your yearly pen test. Or maybe you're building your very own internal pen test team. Navigating this journey can be challenging, but we're committed to making it easier for you. Fortunately, we bring a wealth of insight from our "other side of the table" perspective. This multipart series will prepare you for concerns on both sides of the table, so you can be certain you're ready for your next engagement.
By:
Ryan Warren
November 1st, 2024
While many companies are moving to the cloud, it's still common to find Active Directory (AD) deployed locally in Windows environments. During internal network pen tests, I was comfortable with lateral movement and privilege escalation (via missing patches, LLMNR/NBT-NS/IPv6 poisoning, open network shares, etc.), but felt I was lacking when it came to leveraging attacks against AD itself for greater impact during an assessment. In my journey to get better at attacking AD, I enrolled in a number of free and paid courses. This blog post provides an overview of the four I personally found most beneficial.
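As a quick aside on one of the mechanisms mentioned above: LLMNR poisoning works because Windows leaves multicast name resolution enabled unless Group Policy turns it off. The following is a minimal sketch, assuming a Windows host and Python's standard winreg module, that checks the well-known EnableMulticast policy value; the helper name and the reporting around it are illustrative only, not part of any course or tool referenced here.

# Minimal sketch (assumes a Windows host): check whether LLMNR is
# disabled via the standard DNSClient Group Policy registry value.
# If the policy is absent, Windows leaves LLMNR enabled by default,
# which is what makes poisoning attacks on internal networks viable.
import winreg

def llmnr_disabled_by_policy() -> bool:
    """Return True only if EnableMulticast is explicitly set to 0."""
    try:
        with winreg.OpenKey(
            winreg.HKEY_LOCAL_MACHINE,
            r"SOFTWARE\Policies\Microsoft\Windows NT\DNSClient",
        ) as key:
            value, _ = winreg.QueryValueEx(key, "EnableMulticast")
            return value == 0
    except OSError:
        # Key or value missing: no policy is set, so LLMNR stays enabled.
        return False

if __name__ == "__main__":
    print("LLMNR disabled by policy:", llmnr_disabled_by_policy())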
By:
Dan Groner
October 22nd, 2024
With so much business now being done online and digitally, many, if not most, organizational security efforts focus on beefing up technical controls. But in fact, the human element of cybersecurity is often where the most impactful failures occur.
Penetration Testing | Red Team Assessments
By:
Jonathan Garella
October 18th, 2024
Thinking Inside the Box
Traditional red teaming approaches often focus on external threats, simulating how an outside attacker might breach a company's defenses. This method is undeniably valuable, offering insights into how well an organization can withstand external cyberattacks. However, this "outside-in" perspective can overlook another aspect of security: the risks that arise from within the organization itself. While traditional red teaming is crucial for understanding external threats, thinking inside the box, examining internal processes, workflows, and implicit trusts, can reveal vulnerabilities that are just as dangerous to an organization, if not more so.
By:
Cory Rey
October 17th, 2024
With proven real-life use cases, it's a no-brainer that companies are looking for ways to integrate large language models (LLMs) into their existing offerings to generate content. In a combination often referred to as Generative AI, LLMs enable chat interfaces to hold human-like, complex conversations with customers and respond dynamically, saving you time and money. However, with all this new, exciting technology come related security risks, some of which can arise even at the moment of initial implementation.