The difference between a successful CMMC assessment and a failed one often comes down to one thing: whether the evidence an organization provides matches what the assessor is actually looking for.
DIBCAC (Defense Industrial Base Cybersecurity Assessment Center) publishes objective evidence lists that describe, domain by domain, the specific artifacts, configurations, and documentation that assessors use to make Met or Not Met determinations. These lists are the closest thing to a definitive answer to the question "what do I need to show?"
This article breaks down the key evidence requirements by domain, drawn from the DIBCAC Objective Evidence Lists (July 2025), with practical notes on what constitutes strong evidence versus what gets flagged as insufficient.
How Assessors Use Evidence
Before the domain-by-domain breakdown, understand the three-method model:
- Examine: Review of documentation, configurations, reports, plans, and records. The largest category of evidence gathering.
- Interview: Conversations with personnel who implement, manage, or are subject to the controls. Assessors speak with system administrators, security staff, end users, and management.
- Test: Direct technical verification that controls are operational. This includes running queries, pulling logs, testing authentication flows, reviewing network configurations in live systems, and confirming that technical controls behave as documented.
A Met determination requires all three methods to be consistent and corroborating. A policy document that describes a control that is not technically implemented produces a Not Met finding at the test phase regardless of how thorough the documentation is.
Access Control (AC) — 22 Requirements at Level 2
Access control generates more assessment findings than any other domain. The volume of requirements and the complexity of modern access control environments create numerous opportunities for gaps.
What assessors examine:
- Access control policy: current, approved, with a recent review date and authorizing signature
- User account inventory: complete list of all accounts in scope, with account type, role, owner, and access scope documented
- Access review records: evidence that someone actually reviewed the list, made decisions, and those decisions were acted upon (not just a spreadsheet with no follow-up actions)
- Privileged access management documentation: how privileged accounts are managed differently from standard accounts
- Remote access authorization records: documented approval for each remote access method
What assessors interview about:
- How new user access is requested, approved, and provisioned
- What happens to access when an employee leaves or changes roles
- How privileged access is managed and who holds privileged accounts
- How external system connections are authorized
What assessors test:
- Live demonstration of access control policy enforcement in the identity platform
- Verification that at least one user account shows MFA actually enforced, not just enabled
- Pull a sample of user accounts and verify role assignment matches documented roles
- Test remote access: attempt to authenticate via VPN or remote desktop and verify MFA challenge appears
Strong vs. weak evidence:
A strong access review artifact is a dated spreadsheet showing all accounts, reviewer initials, review decisions, and a follow-up ticket or email showing accounts that were changed or removed. A weak artifact is an undated list of users with no evidence of review actions taken.
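Teams that script this check catch weak artifacts before the assessor does. Below is a minimal sketch in Python, assuming a hypothetical access_review.csv export with account, reviewer, review_date, decision, and followup_ticket columns; adjust the field names to whatever your identity platform actually produces.

```python
"""Sketch: flag gaps in an access-review export before an assessor does.

Assumes a hypothetical CSV (access_review.csv) with 'account',
'review_date', 'decision', and 'followup_ticket' columns.
"""
import csv

REQUIRED_DECISIONS = {"retain", "modify", "remove"}

def find_review_gaps(path: str) -> list[str]:
    gaps = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            account = row.get("account", "<unknown>")
            decision = row.get("decision", "").lower()
            if not row.get("review_date"):
                gaps.append(f"{account}: no review date recorded")
            if decision not in REQUIRED_DECISIONS:
                gaps.append(f"{account}: no review decision recorded")
            # A modify/remove decision with no ticket is review without follow-through
            if decision in {"modify", "remove"} and not row.get("followup_ticket"):
                gaps.append(f"{account}: decision recorded but no follow-up action")
    return gaps

if __name__ == "__main__":
    for gap in find_review_gaps("access_review.csv"):
        print(gap)
```

Any account the script flags is exactly the kind of entry that turns a review record into the weak, undated-list pattern described above.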
Audit and Accountability (AU) — 9 Requirements
Audit logging failures are among the most commonly cited gaps in Level 2 assessments. Organizations often have logging configured but cannot demonstrate that logs are reviewed, retained appropriately, or that log integrity is protected.
What assessors examine:
- Log management policy specifying what is logged, retention periods, and review cadence
- SIEM or log management platform configuration showing all in-scope systems are feeding logs
- Audit log retention records confirming the retention period meets requirements
- Log review records: evidence that logs are regularly reviewed and that anomalies generate action
What assessors interview about:
- Who reviews the logs, how often, and what they look for
- How alerts are generated and what happens when something suspicious is detected
- How log integrity is protected (who has the ability to modify or delete logs)
What assessors test:
- Pull current logs from the SIEM and verify key event types are being captured: logon/logoff, privilege use, account management, policy changes, file access on CUI stores (a sketch of this check follows the list)
- Verify that logs from all in-scope systems are present in the log management platform
- Test log protection: confirm that standard users cannot modify or delete audit logs
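A lightweight way to self-check the first of these tests: the Python sketch below scans a hypothetical siem_export.csv for representative Windows Security event IDs in each category. The IDs are illustrative, not a complete coverage list, and export formats vary by SIEM platform.

```python
"""Sketch: verify key Windows Security event types appear in a log export.

Assumes a hypothetical CSV export (siem_export.csv) with an 'EventID' column.
"""
import csv

# Representative Windows Security event IDs per category (illustrative, not exhaustive)
EXPECTED = {
    "logon/logoff": {"4624", "4634"},
    "privilege use": {"4672"},
    "account management": {"4720", "4722", "4725", "4726"},
    "policy change": {"4719"},
    "file/object access": {"4663"},
}

def check_coverage(path: str) -> None:
    with open(path, newline="") as f:
        seen = {row["EventID"] for row in csv.DictReader(f)}
    for category, ids in EXPECTED.items():
        status = "OK" if ids & seen else "MISSING"
        print(f"{category}: {status}")

if __name__ == "__main__":
    check_coverage("siem_export.csv")
```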
Strong vs. weak evidence: A strong log review record is a weekly SIEM review report with an analyst's notes, or a ticket system showing alerts generated from log analysis and resolved. A weak artifact is a screenshot of a SIEM dashboard with no evidence anyone looked at it.
Configuration Management (CM) — 9 Requirements
Configuration management evidence demonstrates that systems are built to a known, secure baseline and that changes are controlled.
What assessors examine:
- Configuration baseline documents for each OS type, server type, and network device type in scope
- Group policy objects or configuration management platform exports showing baseline enforcement
- Change management policy and procedure
- Change request records showing requests, approvals, implementation, and verification
- Software inventory showing only authorized software is installed on in-scope systems
What assessors interview about:
- How configuration baselines are established and maintained
- How changes are requested, reviewed, approved, and implemented
- How unauthorized software is detected and addressed
What assessors test:
- Review the current configuration of a sample workstation against the documented baseline
- Pull a list of installed software from a sample system and compare against the authorized software list (a comparison sketch follows the list)
- Review recent change tickets to verify the documented process was followed
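The software comparison lends itself to a simple scripted diff. The sketch below assumes two hypothetical flat files, authorized_software.txt and installed_software.txt, with one package name per line; in practice the installed list would come from your endpoint management tooling.

```python
"""Sketch: compare installed software on a sampled system to the authorized list."""

def load_names(path: str) -> set[str]:
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

authorized = load_names("authorized_software.txt")
installed = load_names("installed_software.txt")

# Anything installed but not authorized is a potential CM finding
for name in sorted(installed - authorized):
    print(f"UNAUTHORIZED: {name}")

# Authorized-but-absent entries may indicate a stale inventory
for name in sorted(authorized - installed):
    print(f"not installed (check inventory currency): {name}")
```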
Strong vs. weak evidence: A strong configuration baseline is a specific, documented hardening standard (CIS benchmark or equivalent) with evidence that the standard is enforced via group policy or a configuration management tool. A weak baseline document is a generic statement that systems are "hardened to industry standards" without specifics.
Identification and Authentication (IA) — 11 Requirements
MFA is the centerpiece of IA assessment, and it is a 5-point requirement. Assessors specifically test for the difference between MFA being available and MFA being enforced.
What assessors examine:
- Identity platform configuration screenshots showing MFA enforcement policy (conditional access or equivalent)
- Authentication policy document
- Password policy configuration showing minimum length, complexity, and lockout settings
- MFA enrollment records showing coverage rates across the user population
- Service account inventory with owners, purpose, and credential management approach documented
What assessors interview about:
- How MFA is managed and what happens if a user loses their MFA device
- Whether any accounts are exempt from MFA and why
- How service account credentials are managed and rotated
- How authentication failures are detected and responded to
What assessors test:
- Attempt to authenticate to a CUI-scope application with just a username and password — the authentication should fail or require an MFA challenge
- Test privileged account authentication specifically: privileged access should require a separate MFA factor
- Review conditional access policies to confirm there are no exemptions that would allow password-only authentication
Strong vs. weak evidence: Strong MFA evidence is a screenshot of a conditional access policy with "Grant access — Require multi-factor authentication" configured, combined with a test showing the MFA challenge appears during login. Weak evidence is a screenshot showing MFA is "enabled" in the user settings without demonstration of enforcement.
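Enrollment coverage figures, noted in the examine list above, are easy to produce from an identity platform export. Below is a minimal sketch, assuming a hypothetical mfa_report.csv with userPrincipalName and mfaRegistered columns; real export schemas differ by platform, and enrollment numbers supplement, not replace, the enforcement test.

```python
"""Sketch: compute MFA enrollment coverage from an identity platform export.

Assumes a hypothetical CSV (mfa_report.csv) with 'userPrincipalName' and
'mfaRegistered' (true/false) columns. Enrollment alone does not prove
enforcement; pair this with a live authentication test.
"""
import csv

with open("mfa_report.csv", newline="") as f:
    rows = list(csv.DictReader(f))

registered = [r for r in rows if r.get("mfaRegistered", "").lower() == "true"]
coverage = 100 * len(registered) / len(rows) if rows else 0.0
print(f"MFA enrollment: {len(registered)}/{len(rows)} users ({coverage:.1f}%)")

# Every non-enrolled account needs either remediation or a documented exemption
for r in rows:
    if r.get("mfaRegistered", "").lower() != "true":
        print(f"not enrolled: {r.get('userPrincipalName', '<unknown>')}")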
Incident Response (IR) — 3 Requirements
Three requirements, but assessors probe deeply into whether incident response is a real program or a document on a shelf.
What assessors examine:
- Current incident response plan with version date, approval signature, and scope statement
- Incident response team roster with roles and contact information
- Tabletop exercise or drill records: scenario, date, participants, findings, and after-action items
- Incident log or tracking system (even if no incidents have occurred, the system must exist)
- Evidence of the DoD incident reporting procedure (via DIBNet, per DFARS 252.204-7012): how the organization would report an incident within required timeframes
What assessors interview about:
- What constitutes a reportable security incident for this organization
- A walkthrough of what happens from detection through containment to reporting
- When the IR plan was last tested and what changed as a result
What assessors test:
- Ask a system administrator to locate the incident response plan — if they cannot find it, that is a finding
- Ask a help desk or end user how they would report a suspected security incident — if they do not know, that is a finding
Strong vs. weak evidence: A strong IR exercise record includes a realistic scenario, a list of participants with roles, a timeline of the exercise, and an after-action document identifying improvements made to the IR plan. A weak artifact is a meeting invite for a "security meeting" with no agenda or documented outcomes.
Media Protection (MP) — 9 Requirements at Level 2
Media protection encompasses both physical media and digital storage, including cloud storage and backup systems.
What assessors examine:
- Media protection policy covering CUI media marking, handling, transportation, and disposal
- Media sanitization procedure referencing NIST 800-88 or equivalent
- Sanitization records for disposed media (certificate of destruction or sanitization log)
- CUI marking procedures (how CUI documents are labeled, both electronic and physical)
- Records of media containing CUI: who holds it, where it is stored, how it is controlled
What assessors interview about:
- How employees know what media contains CUI and how it should be handled
- What happens when a laptop that may contain CUI is lost or stolen
- How old equipment is disposed of
What assessors test:
- Review a sample of digital documents to verify CUI marking is applied
- Pull a disposal record and verify the media listed was properly sanitized before disposal
Risk Assessment (RA) — 3 Requirements
Risk assessment evidence demonstrates that the organization systematically identifies, evaluates, and manages security risk rather than merely reacting to incidents after they occur.
What assessors examine:
- Risk assessment policy and procedure
- Completed risk assessment documentation: methodology used, assets assessed, threats and vulnerabilities identified, risk ratings, and risk treatment decisions
- Risk register or risk tracking system showing open risks with owners and treatment status
- Evidence that risk assessment results informed security decisions (e.g., a risk assessment finding that led to a specific remediation action)
- Vulnerability scanning results with risk-based prioritization of remediation
What assessors interview about:
- How often risk assessments are conducted
- Who owns the risk assessment process
- How risk assessment results translate into security investment and remediation decisions
Strong vs. weak evidence: A strong risk assessment artifact is a dated, completed risk assessment that identifies specific threats to CUI assets, rates each risk, and documents the organization's treatment decision (accept, mitigate, transfer, avoid) with assigned owners. A weak artifact is a generic risk template that has never been completed with organization-specific data.
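For teams building a register from scratch, the sketch below shows one plausible minimal structure. Every field name is illustrative, but the substance mirrors what the strong artifact above contains: a specific threat, a rating, a treatment decision, and a named owner.

```python
"""Sketch: a minimal risk register structure with the fields assessors look for.

Field names and the sample entry are illustrative, not prescriptive.
"""
from dataclasses import dataclass
from enum import Enum

class Treatment(Enum):
    ACCEPT = "accept"
    MITIGATE = "mitigate"
    TRANSFER = "transfer"
    AVOID = "avoid"

@dataclass
class RiskEntry:
    risk_id: str
    description: str      # a specific threat to a CUI asset, not a generic category
    rating: str           # e.g., high / moderate / low
    treatment: Treatment
    owner: str            # an accountable individual, not a team alias
    status: str           # open / in progress / closed

register = [
    RiskEntry("R-001", "Unpatched VPN appliance exposes CUI enclave",
              "high", Treatment.MITIGATE, "J. Rivera", "in progress"),
]

# Entries without an owner or with no treatment progress are the weak-artifact pattern
for r in register:
    if not r.owner or r.status == "open":
        print(f"{r.risk_id}: needs an owner or treatment progress")
```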
System and Communications Protection (SC) — 16 Requirements
Network architecture, encryption, and boundary protection are the core of SC evidence. FIPS-validated cryptography (SC.L2-3.13.11) is a 5-point requirement that assessors specifically test.
What assessors examine:
- Network diagram showing CUI environment boundary, internet-facing services, internal segments, and external connections
- Firewall rule set documentation with business justification for each rule
- Encryption policy specifying requirements for data in transit and at rest
- FIPS validation documentation for cryptographic modules (NIST CMVP certificate or product documentation)
- VPN configuration showing encryption standards and authentication requirements
What assessors interview about:
- How the network architecture supports CUI protection
- How encryption is managed and verified
- How network changes are reviewed for security impact
What assessors test:
- Review the current firewall rule set and compare to the documented rule set
- Test a TLS connection to a CUI-handling application and verify the negotiated cipher suite uses FIPS-approved algorithms from a validated module (see the sketch below)
- Verify that the encryption on backup systems covering CUI data uses FIPS-validated cryptography
Strong vs. weak evidence: Strong FIPS encryption evidence includes the NIST CMVP validation certificate for the cryptographic module in use, plus a screenshot showing that specific module is in use for CUI data. Weak evidence is a statement that "we use AES-256 encryption" without evidence of FIPS module validation.
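For the cipher-suite test above, a short script can capture the negotiated protocol and suite as a dated artifact. The sketch below uses Python's standard ssl module against a hypothetical hostname; it records what the connection actually negotiated and does not by itself prove FIPS validation, so pair it with the CMVP certificate for the module in use.

```python
"""Sketch: record the negotiated TLS protocol and cipher suite for a
CUI-handling endpoint. This is supporting evidence only; the CMVP
certificate for the cryptographic module is still required.
"""
import socket
import ssl

def report_tls(host: str, port: int = 443) -> None:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # cipher() returns (cipher name, protocol, secret bits)
            name, _protocol, bits = tls.cipher()
            print(f"{host}:{port} negotiated {tls.version()} / {name} ({bits}-bit)")

report_tls("cui-app.example.com")  # hypothetical hostname
```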
System and Information Integrity (SI) — 7 Requirements
SI evidence covers malware protection, patch management, and security alerting. Antivirus and patch management gaps are high-frequency assessment findings.
What assessors examine:
- Endpoint protection platform management console screenshot showing all in-scope systems are enrolled and definitions are current
- Vulnerability scan results showing patch status across in-scope systems
- Patch management policy with defined remediation timelines by severity
- Security alert configuration showing what alerts are generated and to whom
- Evidence of security alert review: ticket records, email notifications acted upon, or SOC activity records
What assessors interview about:
- How quickly critical patches are deployed after release
- What happens when a system is found to have missing patches
- How security alerts are managed and escalated
What assessors test:
- Pull current endpoint protection status from the management console to verify all in-scope systems show current definitions
- Review the most recent vulnerability scan for any critical or high findings that have exceeded the documented remediation timeline
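The timeline check in the last item is straightforward to script. Below is a minimal sketch, assuming a hypothetical scan_results.csv export with plugin, severity, and first_seen columns; the SLA day counts are placeholders for whatever your patch management policy actually specifies.

```python
"""Sketch: flag vulnerability scan findings past documented remediation timelines.

Assumes a hypothetical CSV (scan_results.csv) with 'plugin', 'severity',
and 'first_seen' (YYYY-MM-DD) columns.
"""
import csv
from datetime import date, datetime

SLA_DAYS = {"critical": 7, "high": 30, "medium": 90}  # placeholder values

with open("scan_results.csv", newline="") as f:
    for row in csv.DictReader(f):
        severity = row["severity"].lower()
        if severity not in SLA_DAYS:
            continue
        first_seen = datetime.strptime(row["first_seen"], "%Y-%m-%d").date()
        age = (date.today() - first_seen).days
        if age > SLA_DAYS[severity]:
            print(f"OVERDUE ({age}d > {SLA_DAYS[severity]}d): "
                  f"{severity} - {row['plugin']}")
```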
Final Preparation: The Evidence Package Review
Before a C3PAO assessment begins, conduct an internal evidence review using these criteria:
- Every requirement has at least one examine artifact
- All examine artifacts are dated within 90 days for technical configurations (policy documents can be older if recently reviewed)
- Evidence is organized by domain and requirement with clear naming conventions
- Interview subjects have been briefed on what controls exist and how to describe them in their own words
- SHA-256 hashes are prepared for all artifacts intended for eMASS submission
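The hashing step is scriptable in a few lines. Below is a minimal sketch using Python's hashlib, assuming a hypothetical evidence/ directory organized by domain and requirement as described above.

```python
"""Sketch: generate a SHA-256 manifest for an evidence directory ahead of
eMASS submission. The directory layout is hypothetical.
"""
import hashlib
from pathlib import Path

def hash_file(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        # Read in chunks so large artifacts do not load entirely into memory
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

evidence_root = Path("evidence")  # e.g., evidence/AC/AC.L2-3.1.1/...
with open("manifest_sha256.txt", "w") as out:
    for artifact in sorted(evidence_root.rglob("*")):
        if artifact.is_file():
            out.write(f"{hash_file(artifact)}  {artifact}\n")
```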
Key Takeaways
- DIBCAC assessors use examine, interview, and test: all three must be consistent for a Met determination
- Access control and audit logging generate the most findings
- MFA must be enforced, not just enabled: assessors test this specifically
- FIPS-validated cryptography (SC.L2-3.13.11) requires documentation of the specific validated module
- Evidence must be current, specific, and traceable to your actual environment
- Organize evidence by domain and requirement before the assessment begins
Learn More
For the full CMMC framework overview, see CMMC 101: The Complete Guide to CMMC Compliance for Defense Contractors.
Preparing for a C3PAO assessment and want an evidence package review? NR Labs reviews evidence packages against DIBCAC objective evidence standards before assessment day. Contact us to schedule a pre-assessment review.
Frequently Asked Questions
What is the difference between MFA being "enabled" versus "enforced" in a CMMC assessment?
Assessors verify that MFA is enforced, not merely available. "Enabled" means the option exists but users can bypass it. "Enforced" means authentication fails without a valid second factor. The distinction is critical: a conditional access policy that allows fallback to password-only authentication will result in a Not Met finding for IA.L2-3.5.3, even if MFA is technically configured in the environment.
How should organizations name and organize evidence artifacts for CMMC assessment?
Use a consistent naming convention that includes the requirement ID, system or scope identifier, and ISO 8601 timestamp (e.g., AC.L2-3.1.1_AzureAD_UserAccounts_2026-03-01.csv). Organize evidence by domain and requirement, mirroring the CMMC assessment structure. SHA-256 hashes should accompany all artifacts submitted to eMASS. Retaining prior months' artifacts demonstrates consistent implementation over time.
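A naming convention is only useful if it is applied consistently, which is easy to check mechanically. The regex below encodes one plausible reading of the convention described here; treat it as a starting point and adapt it to your own rules.

```python
"""Sketch: validate artifact filenames against the convention
RequirementID_Scope_Description_YYYY-MM-DD.ext (one plausible reading)."""
import re

PATTERN = re.compile(
    r"^[A-Z]{2}\.L[12]-3\.\d{1,2}\.\d{1,2}"   # requirement ID, e.g. AC.L2-3.1.1
    r"_[A-Za-z0-9-]+"                          # system / scope identifier
    r"_[A-Za-z0-9-]+"                          # artifact description
    r"_\d{4}-\d{2}-\d{2}"                      # ISO 8601 date
    r"\.[a-z0-9]+$"                            # file extension
)

for name in ["AC.L2-3.1.1_AzureAD_UserAccounts_2026-03-01.csv",
             "audit logs final v2.xlsx"]:
    print(f"{name}: {'OK' if PATTERN.match(name) else 'NON-CONFORMING'}")
```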
Which CMMC domains generate the most assessment findings?
Access Control (AC) and Audit and Accountability (AU) consistently generate the most findings. AC findings commonly involve incomplete user account inventories, MFA enforcement gaps, and poorly documented conditional access policies. AU findings typically involve incomplete log source enrollment, insufficient retention periods, and lack of periodic log review documentation. Organizations should prioritize these domains in assessment preparation.