
Cloud Database Security and FedRAMP

Cloud Computing | FedRAMP | Federal Assessments

Many cloud service providers (CSPs) are not fully addressing the FedRAMP database scanning requirements and have questions about how database security fits into the program.  This article details the issues associated with not meeting the database scanning requirement, the most common reasons why this occurs, what can be done to improve, and what to consider for database security beyond scanning.

In early 2011, the Federal government published a cloud computing strategy, which has become known as the Cloud First policy due to its focus on evaluating cloud offerings before making new capital investments.  The goal of this effort is to reduce inefficiencies in the government’s use of Information Technology (IT).  This document was an initial catalyst for government adoption of cloud services, which, although slow, has been increasing.  Cloud service providers that historically focused on private sector clients began tailoring services for government agencies.  The document also mentions the involvement of the Federal Risk and Authorization Management Program (FedRAMP) to provide a standard, centralized approach to assessing and authorizing cloud computing services and products.  While FedRAMP actually began in 2010, the Cloud First policy, along with subsequent memorandums, raised awareness of the program.  As FedRAMP has matured over the past five years, more CSPs and government agencies are participating.  While changes have occurred and continue to occur within FedRAMP, the first step in the FedRAMP process has remained a Security Assessment by a Third Party Assessment Organization (3PAO) against a baseline set of requirements from the National Institute of Standards and Technology (NIST) 800-53 publication, which covers Security and Privacy Controls for Federal Information Systems and Organizations.

During the planning phase of a FedRAMP assessment there are many security topics a CSP and 3PAO should explore.  One of the questions a 3PAO will ask of the CSP is how vulnerability scanning is being conducted.  Before beginning the assessment phase, a 3PAO may ask the CSP for the most recent scanning results or scanning results from the previous two months to understand remediation efforts.  Most CSPs are able to generate some evidence and/or artifacts from a recent infrastructure or web application scan, albeit sometimes unauthenticated, but many do not have any evidence related to database scanning.  This is problematic for the CSP from a compliance and a security perspective.  The compliance impact is immediately identifiable as the CSP is missing a core FedRAMP requirement.  However, the security implication of not assessing the repositories where the data is stored is also a concern.   

The Main Difficulties with Database Scanning

The published FedRAMP guidance provides specific details on the types of scans to be conducted (infrastructure, web application, and database), the frequency (monthly), and the style (authenticated); however, there is no guidance on acceptable tools, policies, or approaches.  Much of the interpretation falls to the CSP and the 3PAO, and interpretations of the database scanning requirements vary between them.  Unfortunately, many CSPs fall short of meeting the requirements, either because the scans are not conducted at all or because the controls in place to meet the database scanning requirement are not adequate.  Based on historical projects, questions received from CSPs, and discussions with Joint Authorization Board (JAB) members and individual agencies, we have identified five primary reasons CSPs struggle with database scanning.

1. Comprehending the Need for Database Scanning
As simple as it seems, there is frequently a disconnect between what an agency (and the FedRAMP requirements) intends to see from database scanning and what a CSP performs. CSPs frequently consider scanning databases an easy task because the database server is part of the existing infrastructure and should be covered by the infrastructure scans.  The disconnect is in what the database scan is supposed to detect.  The typical authenticated infrastructure scan will detect vulnerabilities in databases related to missing patches or releases.  However, the credentials used in an infrastructure scan likely will not be able to assess security at the database level; examples include authentication settings, authorization and privilege management, and logging and monitoring settings.  An infrastructure scan of a MongoDB instance may come back clean, particularly since there are only 11 Common Vulnerabilities and Exposures (CVE) Identifier Numbers in the NIST National Vulnerability Database (NVD).  However, the infrastructure scan will likely not detect that a MongoDB instance does not require authentication, a configuration identified as prevalent among Internet-facing NoSQL databases in July 2015 through various Shodan searches.
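
To illustrate the kind of database-level check an infrastructure scan misses, the following is a minimal sketch that tests whether a MongoDB instance permits unauthenticated enumeration of its databases.  The host and port are placeholder values, and this is not any particular scanner's implementation.

```python
# Minimal sketch: check whether a MongoDB instance allows unauthenticated
# enumeration of databases -- the kind of database-level finding an
# infrastructure scan typically misses. Host and port are placeholders.
from pymongo import MongoClient
from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

def check_mongodb_auth(host: str = "10.0.0.5", port: int = 27017) -> None:
    client = MongoClient(host, port, serverSelectionTimeoutMS=5000)
    try:
        # list_database_names() requires authentication when access control
        # is enabled; success without credentials indicates an open instance.
        names = client.list_database_names()
        print(f"FINDING: unauthenticated access allowed, databases: {names}")
    except OperationFailure:
        print("OK: server rejected the unauthenticated request (auth enabled)")
    except ServerSelectionTimeoutError:
        print("Host unreachable -- verify the network path before drawing conclusions")

if __name__ == "__main__":
    check_mongodb_auth()
```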

A common complaint from CSPs is that this type of information relates to baseline configurations and should be categorized as policy compliance rather than vulnerability scanning.  This is understandable, since most vulnerability scanning vendors categorize these types of checks as compliance checks rather than vulnerabilities.  However, information about database accounts, group membership, whether authentication is required to access the database, and other items often identified in policy checks is exactly what an agency wants to learn from a database scan.  In 2004, NIST started the Software Assurance Metrics and Tool Evaluation (SAMATE) project to establish a methodology for evaluating software assurance tools.  While the site hasn’t been updated much in recent years, the project is live, and the site provides a list of twenty or so typical checks expected in database scanners.
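
As a concrete example, the sketch below runs a few of the account-oriented checks agencies expect, directly against a MySQL instance.  The connection details are placeholders, and the queries assume MySQL 5.7 or later (where the password column is named authentication_string); adjust for the version actually deployed.

```python
# A minimal sketch of account/privilege checks run directly against MySQL.
# Connection details are placeholders; column names assume MySQL 5.7+.
import pymysql

CHECKS = {
    "accounts reachable from any host":
        "SELECT user, host FROM mysql.user WHERE host = '%'",
    "accounts with an empty password":
        "SELECT user, host FROM mysql.user WHERE authentication_string = ''",
    "anonymous accounts":
        "SELECT user, host FROM mysql.user WHERE user = ''",
}

def run_checks() -> None:
    conn = pymysql.connect(host="db.internal", user="scan_svc",
                           password="***", database="mysql")
    try:
        with conn.cursor() as cur:
            for name, query in CHECKS.items():
                cur.execute(query)
                rows = cur.fetchall()
                status = "FINDING" if rows else "OK"
                print(f"{status}: {name} -> {rows}")
    finally:
        conn.close()

if __name__ == "__main__":
    run_checks()
```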

2. Understanding What Is a Database
Another concept that is not as easy as it seems is defining what constitutes a database, since a logical argument is that if it isn’t a database, it doesn’t need a database scan.  The question came to the forefront when organizations started storing data in containers that would not typically be labeled as databases.  Examples include Amazon Web Services (AWS) Simple Storage Service (S3), Microsoft Azure’s Blob Storage, and folders at other cloud storage providers, such as Box.  A CSP’s application can read from and write to these systems just like a traditional database.  However, these systems don’t fit the image of what has historically been considered a database.  As such, the question arises: what is a database?  The semantics of the term can vary and may bring about discussions of data models, implementations, and so on.  When asked, most information technology (IT) professionals will mention something related to a backend repository holding data.  Webster’s Dictionary has a simple, non-technical definition: “a collection of data that is organized especially to be used by a computer.”  What needs to be scanned therefore varies, and once a system is considered in scope, the question of how to scan it quickly arises.
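
When object storage is treated as part of the data layer, the checks can be extended to it as well.  The following is a minimal sketch that looks for publicly accessible grants on an S3 bucket; the bucket name is hypothetical and the calls assume AWS credentials are already configured.

```python
# Minimal sketch: extend "database" checks to object storage such as S3,
# which an application may use like a backend data store.
import boto3
from botocore.exceptions import ClientError

def check_bucket_exposure(bucket: str = "example-app-data") -> None:
    s3 = boto3.client("s3")
    # Grants to AllUsers / AuthenticatedUsers are the object-storage
    # equivalent of a database that does not require authentication.
    acl = s3.get_bucket_acl(Bucket=bucket)
    public = [g for g in acl["Grants"]
              if g.get("Grantee", {}).get("URI", "")
                   .endswith(("AllUsers", "AuthenticatedUsers"))]
    print(f"{'FINDING' if public else 'OK'}: {bucket} public grants: {public}")
    try:
        pab = s3.get_public_access_block(Bucket=bucket)
        print("Public access block:", pab["PublicAccessBlockConfiguration"])
    except ClientError:
        print(f"FINDING: {bucket} has no public access block configured")

if __name__ == "__main__":
    check_bucket_exposure()
```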

3. Tools Do Not Support Database Platforms In Use
The tool sets for infrastructure scans are well known, as are web application scanners. Conversely, many CSPs do not know what tool to use for database scanning, and one frequent comment from CSPs is that no available tool can scan the platform in use.  The original database scanning tools only needed to support a few types of databases, typically relational databases such as Microsoft SQL Server (MSSQL), Oracle, or MySQL.  However, in the past five years the rise of distributed storage (e.g., Hadoop) and the use of NoSQL databases, such as Cassandra, Dynamo, MongoDB, and others, has greatly increased.  Support for newer database types is growing, but it is slow.  With the explosion of different database options, many of which have existed for only a short time, it is very difficult for vendors to support all types.  Occasionally, open source scripts may exist, but after the initial development, support for open source solutions can quickly wane.  For example, a script may be available for MongoDB 2.6, but not MongoDB 3.2.
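
For teams relying on such scripts, a small amount of version awareness helps avoid running checks against a release they were never written for.  The sketch below is one hedged approach; the host, port, and supported-version list are placeholders.

```python
# Minimal sketch: detect the MongoDB server version and refuse to run checks
# that were written for a different release. Values are placeholders.
from pymongo import MongoClient

SUPPORTED_MAJOR_MINOR = {"2.6", "3.0"}  # versions the in-house checks target

def version_gate(host: str = "10.0.0.5", port: int = 27017) -> None:
    client = MongoClient(host, port, serverSelectionTimeoutMS=5000)
    version = client.server_info()["version"]          # e.g. "3.2.22"
    major_minor = ".".join(version.split(".")[:2])
    if major_minor in SUPPORTED_MAJOR_MINOR:
        print(f"Server {version}: running checks written for {major_minor}")
    else:
        print(f"Server {version}: no checks available -- results would be unreliable")

if __name__ == "__main__":
    version_gate()
```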

Another complaint heard from CSPs is that while a few commercial off-the-shelf tools specifically target databases, these tools typically support only relational databases and are expensive.  Licensing is often based on the number of database instances, which can easily reach the hundreds in a cloud environment.

4. Choosing the Right Policy
For databases that are supported by tools, there may be more than one policy available for use by the scanner. The Center for Internet Security (CIS) publishes benchmarks, which are often used by CSPs, especially those using MySQL. However, many CSPs do not know the difference between the audit levels. A Level 1 audit is not as stringent as a Level 2, but settings required in a Level 2 audit may impact performance. Additionally, when the benchmarks are used without any alteration, there are usually some false positives, as the benchmarks do not take into consideration different implementations or environments. For example, a CSP may have very robust logging and monitoring in place, but fail a CIS benchmark check because the error log is in a different location from what is commonly expected.
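
One way to handle the error-log example without simply accepting the failed check is to tailor it to the environment.  The sketch below verifies that error logging is enabled and only flags a non-standard path as informational; the connection details and expected path are placeholders, not part of any CIS benchmark.

```python
# Minimal sketch: a CIS-style error-log check tailored to the environment.
# Connection details and the expected path are placeholders.
import pymysql

EXPECTED_PATH = "/var/log/mysql/error.log"  # path an unmodified benchmark assumes

def check_error_log() -> None:
    conn = pymysql.connect(host="db.internal", user="scan_svc", password="***")
    try:
        with conn.cursor() as cur:
            cur.execute("SHOW VARIABLES LIKE 'log_error'")
            row = cur.fetchone()            # e.g. ('log_error', '/srv/logs/mysqld.err')
            path = row[1] if row else ""
            if not path:
                print("FINDING: error logging is disabled")
            elif path != EXPECTED_PATH:
                print(f"INFO: error log at {path}, not the benchmark default -- "
                      "verify it is captured by log collection; not a failure by itself")
            else:
                print("OK: error log at the expected location")
    finally:
        conn.close()

if __name__ == "__main__":
    check_error_log()
```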

5. Ensuring All Databases Are Scanned
Even CSPs with a mature database scanning program may not be scanning all databases that are within the authorization boundary. This differs from the question of “What is a database?” The backend repository for a software-as-a-service (SaaS) provider’s flagship offering may be scanned with credentials on a monthly basis, but what about the databases in the ancillary and support systems? A compromise of one of these systems could jeopardize the security of the cloud environment. The aforementioned difficulties relating to tool selection, support, and policy selection arise when the database used by the other systems differs from the database behind the flagship offering.
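
A simple safeguard is to reconcile the system inventory against the monthly scan targets.  The sketch below assumes both are available as hypothetical one-host-per-line text exports.

```python
# Minimal sketch: flag databases in the inventory that are missing from the
# monthly scan targets. Both input files are hypothetical exports.
def unscanned_databases(inventory_file: str = "db_inventory.txt",
                        scan_targets_file: str = "scan_targets.txt") -> set:
    with open(inventory_file) as f:
        inventory = {line.strip() for line in f if line.strip()}
    with open(scan_targets_file) as f:
        targets = {line.strip() for line in f if line.strip()}
    return inventory - targets

if __name__ == "__main__":
    for host in sorted(unscanned_databases()):
        print(f"FINDING: {host} is in the inventory but not in the scan targets")
```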

Alternative Considerations

The aforementioned challenges can be difficult to address, but typically they are not insurmountable.  One strength CSPs often have in place is a knowledgeable development staff; if a tool or script doesn’t exist, there is often a willingness to write one.  Compensating controls may also exist that provide a benefit similar to scanning, especially for databases without available tooling.  One example is a combination of a strong inventory management process and database monitoring tools.  A CSP may be able to walk through the database security configuration in a primary build, show how Puppet, Chef, or other inventory, configuration, or orchestration tools manage the image, and then use database monitoring tools to ensure unintended changes are not made.  Database monitoring tools do not scan the database; rather, they sit in front of the database and analyze the traffic being transmitted.  If a modification to a user or logging table can be detected by the monitoring tools, scanning for that exact setting may not be necessary.  While FedRAMP calls out database scanning, a better description of the requirement may be a monthly comprehensive database review (which could be supported or conducted by scanning).
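
One lightweight way to back up that compensating-control argument is to capture the security-relevant settings from the hardened build as a baseline and periodically diff the running instance against it.  The sketch below does this for a handful of MySQL variables; the connection details, baseline file, and variable list are placeholders chosen for illustration.

```python
# Minimal sketch: compare a running database's security-relevant settings
# against a baseline captured from the hardened build. Values are placeholders.
import json
import pymysql

WATCHED = ["require_secure_transport", "local_infile", "log_error", "general_log"]

def current_settings() -> dict:
    conn = pymysql.connect(host="db.internal", user="scan_svc", password="***")
    try:
        with conn.cursor() as cur:
            cur.execute("SHOW GLOBAL VARIABLES")
            settings = dict(cur.fetchall())
        return {k: settings.get(k) for k in WATCHED}
    finally:
        conn.close()

def diff_against_baseline(baseline_file: str = "db_baseline.json") -> None:
    with open(baseline_file) as f:
        baseline = json.load(f)
    for key, value in current_settings().items():
        if baseline.get(key) != value:
            print(f"FINDING: {key} changed from {baseline.get(key)!r} to {value!r}")

if __name__ == "__main__":
    diff_against_baseline()
```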

Holistically Planning for Database Security

While the results of the database scan are one component of ensuring data is maintained securely, there are a number of other controls that encompass the database environment.  Having the following thoroughly documented in the System Security Plan (SSP), procedures, and other system documentation will help ensure the database environment is well understood and can save time during a FedRAMP assessment.  The parentheses identify some of the FedRAMP controls where the database implementations can be addressed in the SSP.

  1. Database Version – What database platform is in use and what version? Document the current version and the roadmap for the future.  (Inventory)
  2. Database Administrators – Who should have access to all databases? This should be a short list of users.  All other accounts should have a documented justification.  (AC-2, AC-5)
  3. System Accounts – What applications have system or machine accounts on the database? These applications should have a documented justification.  (AC-2)
  4. Authentication – How do users access the databases (e.g. two-factor, local authentication, etc.)? Ensure system accounts follow the same process of authentication.  (IA-2)
  5. Authorization – What privileges and/or roles are granted to database users? Ensure the implementation of role based access control matches the design.  (AC-2, AC-5)
  6. Periodic Review – Are user accounts reviewed regularly for employees or applications that no longer require access? A minimal review sketch follows this list.  (AC-2)
  7. Encryption – What is used to secure data in transport and at rest? The data should be encrypted using a module that is FIPS 140-2 validated.  (SC-8, SC-13, SC-28)
  8. Virtual Local Area Networks (VLANs) – What identifier is used to categorize databases? Note how databases are segmented from the other parts of the environment. (SC-7)
  9. Access Control Lists (ACLs) – What rules are in place to protect the database environment? If there are specific ACLs supporting and securing the databases, have those extracted or know what strings to search for easy review. (AC-3, SC-7)
  10. Logging – Are local settings used, or does logging occur via other means? Confirm that the logging enabled would support an investigation should a breach occur. (AU-2)
  11. Configuration Baseline and Settings – How are the configuration baseline and settings reviewed and how often? There should be a documented process for the build and the periodic review. (CM-2, CM-6)
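
Supporting item 6 above, the following sketch pulls the current database accounts and flags any that are not on an approved list maintained alongside the SSP.  The connection details and the approved-accounts file are placeholders.

```python
# Minimal sketch: periodic review of database accounts against an approved
# list (AC-2). Connection details and the approved-accounts file are placeholders.
import pymysql

def review_accounts(approved_file: str = "approved_db_accounts.txt") -> None:
    with open(approved_file) as f:
        approved = {line.strip() for line in f if line.strip()}
    conn = pymysql.connect(host="db.internal", user="scan_svc", password="***")
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT user, host FROM mysql.user")
            for user, host in cur.fetchall():
                account = f"{user}@{host}"
                if account not in approved:
                    print(f"FINDING: {account} is not on the approved list")
    finally:
        conn.close()

if __name__ == "__main__":
    review_accounts()
```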

Beyond FedRAMP

While much of the content presented is directly related to FedRAMP, the challenges and concepts likely exist in many organizations, regardless of compliance requirements.  Databases are often thought of as relatively static pieces of the environment, but there can be many moving pieces to secure them.  With a general understanding of database security, some forethought on the potential compliance issues, and a willingness to consider the spirit of the requirement in addition to what has been written, an organization can improve the security around one of its most important assets: its customers’ data.


About Matt Wilgus

Matt Wilgus is a Principal at Schellman, where he heads the delivery of Schellman’s penetration testing services related to FedRAMP and PCI assessments, as well as other regulatory and compliance programs. Matt has over 20 years’ experience in information security, with a focus on identifying, exploiting and remediating vulnerabilities. In addition, he has vast experience enhancing client security programs while effectively meeting compliance requirements. Matt has a strong background in network and application penetration testing, although over the past 10 years most of his focus has been on the application side, with extensive experience testing some of the most well-known IaaS, PaaS and SaaS providers.