AI and CMMC: What Defense Contractors Need to Know Now

Defense contractors are adopting AI tools faster than their compliance programs can keep up. Code generation tools, AI-assisted document drafting, large language models integrated into workflows, and AI-powered IT management platforms are moving through the defense supply chain at a pace that CMMC program frameworks were not designed to handle.

This creates a real and immediate problem. AI tools can inadvertently expand the CMMC Assessment Boundary, introduce new attack vectors into CUI environments, and create evidence and documentation practices that complicate assessments. At the same time, AI presents genuine opportunities for CMMC compliance programs, including evidence collection automation, SSP generation, and continuous monitoring.

This article covers both sides of the AI and CMMC relationship: the compliance risks that AI adoption creates and the compliance opportunities that AI-enabled programs can leverage.

The Risk Side: Where AI Creates CMMC Problems

AI Tools Can Process CUI Without Authorization

The most immediate CMMC risk from AI adoption is inadvertent CUI exposure. When an employee pastes a CUI document excerpt into a commercial large language model (ChatGPT, Copilot, Gemini), they may be transmitting CUI to a cloud environment that is not FedRAMP authorized and has not been assessed under your CMMC boundary.

This is not a hypothetical. It is happening in defense supply chain organizations right now. Engineers paste technical specifications into AI tools to generate explanations. Program managers use AI to summarize contract documents. Security professionals use AI to draft SSP content from raw notes that contain system descriptions.

For CMMC purposes:
- The AI tool's backend infrastructure is not in your CMMC Assessment Boundary
- The AI provider is almost certainly not a FedRAMP Moderate authorized cloud service
- Transmitting CUI to that environment is a potential breach of CUI handling requirements and a CMMC scope violation

The solution is not necessarily banning AI tools. It is establishing clear policies that define which AI tools are authorized for use with which categories of information, and ensuring employees understand that commercial AI tools cannot process CUI without an appropriate authorization.

AI Tools May Be In-Scope Assets You Have Not Identified

If your organization deploys an on-premises or private AI model that processes CUI, that AI system is an in-scope asset. Its training data, inference environment, API endpoints, and access controls must all be addressed in your SSP and assessed under CMMC.

Similarly, if an AI-powered IT management tool (an AI-assisted SIEM, an AI-based EDR platform, or an AI-powered patch management system) manages or monitors in-scope assets, that tool itself is an in-scope security asset with CMMC assessment implications.

Many organizations are expanding their AI-powered tool footprint without updating their CMMC Assessment Boundary documentation. The result is a growing gap between the SSP's description of the environment and the actual environment, which assessors will identify.

AI-Generated Documents Need Verification

AI tools are widely used to draft policies, procedures, and SSP content. The risk is that AI-generated content looks authoritative but may be inaccurate, generic, or describe controls that do not match the actual technical environment.

An AI-drafted SSP section may accurately describe a generic CMMC Level 2 access control implementation while failing to reflect your specific configuration. That creates the same problem as a generic template: when an assessor compares the documentation to the technical environment, the mismatch generates findings.

AI-assisted drafting is useful for structure and completeness checks. Every implementation description still needs to be verified against the actual environment before it goes into a compliance artifact.

AI Expands the Adversary Toolkit

The threat landscape driving CMMC requirements is not static. Nation-state adversaries targeting the defense supply chain are adopting AI-enabled attack capabilities:

  • AI-powered phishing: Large language models generate highly personalized, contextually accurate phishing messages at scale. The social engineering quality of AI-generated phishing significantly exceeds historical bulk phishing campaigns.
  • AI-assisted vulnerability exploitation: AI tools accelerate the identification of exploitable vulnerabilities and the generation of custom exploitation code. The time between vulnerability disclosure and active exploitation is compressing.
  • AI-powered lateral movement: Adversaries are using AI to analyze captured network data and identify the fastest paths to high-value assets, including CUI repositories.

The CMMC controls designed to address these threats remain valid (access control, multi-factor authentication, patch management, audit logging). But the threat environment has become more sophisticated, and the controls need to be implemented and maintained with that context in mind.

NIST has responded with the Cyber AI Profile (preliminary draft, 2025), which maps the NIST Cybersecurity Framework 2.0 to AI-specific cybersecurity considerations. The profile addresses both securing AI systems and using AI to defend against AI-enabled threats. Defense contractors maintaining mature CMMC programs should monitor this guidance as it develops.

The Opportunity Side: How AI Enhances CMMC Compliance

Evidence Collection Automation

The most mature AI application in CMMC compliance today is automated evidence collection. AI-powered tools can:

  • Query identity platforms, configuration management systems, and security tools via API
  • Process raw output into consistently formatted evidence artifacts
  • Flag anomalies (accounts with unexpected permissions, systems not enrolled in endpoint protection, patches that exceed remediation timelines)
  • Generate collection summaries and hash manifests for eMASS submission
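A minimal sketch of that pipeline, using mocked identity-platform records in place of a real API call (the control ID, field names, and artifact names here are illustrative, not a specific tool's schema):

```python
import hashlib
import json
from datetime import datetime, timezone

def build_evidence_artifact(control_id, raw_records):
    """Format raw tool output into a consistently structured evidence artifact."""
    return {
        "control_id": control_id,
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "record_count": len(raw_records),
        "records": raw_records,
    }

def hash_manifest(artifacts):
    """Generate a SHA-256 manifest over a set of evidence artifacts."""
    manifest = {}
    for name, artifact in artifacts.items():
        payload = json.dumps(artifact, sort_keys=True).encode("utf-8")
        manifest[name] = hashlib.sha256(payload).hexdigest()
    return manifest

# Mocked output of an identity-platform query for an MFA enrollment check
mfa_records = [
    {"user": "jdoe", "mfa_enrolled": True},
    {"user": "asmith", "mfa_enrolled": False},  # anomaly: flag for review
]
artifact = build_evidence_artifact("AC.L2-3.1.1", mfa_records)
anomalies = [r["user"] for r in mfa_records if not r["mfa_enrolled"]]
manifest = hash_manifest({"ac-3.1.1-mfa.json": artifact})
print(anomalies)       # users missing MFA enrollment
print(list(manifest))  # artifact names covered by the hash manifest
```

In a real pipeline the mocked records would come from an API query, and the manifest would accompany the artifacts into the submission package so their integrity can be verified later.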

This is not AI in the headline-grabbing sense. It is AI as applied automation: structured querying, pattern recognition in log data, and report generation. These capabilities are available in existing tools (Microsoft Sentinel, Splunk, CrowdStrike) and in custom scripts that leverage model APIs for log analysis.

For defense contractors with technical teams, building AI-assisted evidence collection pipelines reduces the labor cost of compliance maintenance and improves evidence consistency. For contractors without internal technical capacity, CMMC RPOs with engineering capabilities are building managed evidence collection programs that leverage these tools.

SSP Drafting and Completeness Checking

AI tools are effective for SSP drafting support when used correctly. Specifically:

  • Completeness checking: An AI assistant can review a draft SSP and identify missing implementation descriptions, inconsistencies between sections, or controls that are documented without evidence references
  • Initial drafting from technical documentation: An AI tool can process network diagrams, system inventories, and technical runbooks to generate initial SSP implementation descriptions, which a human then verifies against the actual environment
  • Cross-referencing: AI can map existing policy documents to the applicable CMMC requirements they satisfy, identifying policy gaps and redundancies
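The completeness-checking idea above can be sketched without any model at all; the AI layer adds natural-language review on top of structural checks like this one (the SSP structure and control IDs here are hypothetical):

```python
# Hypothetical draft-SSP structure: control ID -> section fields
draft_ssp = {
    "AC.L2-3.1.1": {
        "implementation": "Access limited via conditional access policies.",
        "evidence_refs": ["ac-3.1.1-mfa.json"],
    },
    "AU.L2-3.3.1": {
        "implementation": "",   # missing description
        "evidence_refs": [],    # documented without evidence references
    },
}

def completeness_check(ssp):
    """Flag controls lacking an implementation description or evidence
    references -- findings for human follow-up, never auto-filled."""
    findings = []
    for control_id, section in ssp.items():
        if not section.get("implementation", "").strip():
            findings.append((control_id, "missing implementation description"))
        if not section.get("evidence_refs"):
            findings.append((control_id, "no evidence reference"))
    return findings

for control_id, issue in completeness_check(draft_ssp):
    print(f"{control_id}: {issue}")
```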

The key discipline: AI output requires human verification. A completeness check that an AI produces is useful, but a human with actual knowledge of the environment must confirm that the implementation descriptions accurately reflect technical reality.

Continuous Monitoring and Anomaly Detection

AI-powered continuous monitoring is increasingly relevant to CMMC compliance, particularly for the audit and accountability domain. SIEM platforms with AI-based anomaly detection can:

  • Identify unusual access patterns that suggest credential compromise
  • Detect configuration drift from established baselines
  • Surface anomalous network behavior that may indicate lateral movement
  • Generate continuous compliance scoring against configured policies
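Commercial SIEM platforms wrap learned baselines around checks like the configuration-drift comparison in the second bullet. A rule-based sketch of the underlying drift check, with an illustrative baseline (the setting names are assumptions, not any platform's schema):

```python
# Hypothetical assessed baseline vs. currently observed configuration
baseline = {"ssh_root_login": "no", "audit_logging": "enabled", "tls_min": "1.2"}
observed = {"ssh_root_login": "yes", "audit_logging": "enabled", "tls_min": "1.2"}

def detect_drift(baseline, observed):
    """Return settings that have drifted from the assessed baseline."""
    return {
        key: {"baseline": baseline[key], "observed": observed.get(key)}
        for key in baseline
        if observed.get(key) != baseline[key]
    }

drift = detect_drift(baseline, observed)
print(drift)  # {'ssh_root_login': {'baseline': 'no', 'observed': 'yes'}}
```

Surfacing this drift between assessments is exactly the kind of compliance degradation the paragraph below describes.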

For organizations maintaining Level 2 certifications over a three-year cycle, AI-powered monitoring provides ongoing assurance that the controls assessed at certification remain in place. It surfaces compliance degradation before it becomes an assessment finding.

AI-Powered Risk Assessment

AI tools can process vulnerability scan data, threat intelligence feeds, and configuration data to generate risk-prioritized remediation recommendations. Rather than manually triaging hundreds of vulnerability findings, an AI-assisted risk prioritization engine can weight vulnerabilities by:

  • Exploitability in the current threat landscape
  • Presence of CUI on the affected system
  • Whether the vulnerability affects a CMMC 5-point requirement
  • Known adversary TTPs targeting the defense supply chain

This prioritization directly addresses one of the most common challenges in CMMC programs: knowing which of many gaps to fix first given limited resources.

How to Govern AI Tools in a CMMC Environment

Organizations that want to leverage AI tools without creating compliance risk need a governance framework. Here is the minimum viable approach:

Step 1: Inventory AI Tools in Use

Identify every AI tool currently deployed in your environment, including commercial AI assistants used by employees on work devices. Categorize them by:
- Whether they are deployed on-premises, in a private cloud, or via a commercial cloud API
- Whether they can access, process, or store CUI
- Whether they provide security functions for in-scope systems (AI-powered EDR, AI-powered SIEM)
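The three categorization questions above map naturally onto a small inventory record per tool. A sketch with hypothetical tool names, showing how the categories drive the next two steps:

```python
from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    deployment: str          # "on-prem", "private-cloud", or "commercial-api"
    can_touch_cui: bool      # can it access, process, or store CUI?
    security_function: bool  # does it manage/monitor in-scope systems?

# Hypothetical inventory entries for illustration
inventory = [
    AITool("ExampleChat", "commercial-api", can_touch_cui=True, security_function=False),
    AITool("ExampleEDR", "private-cloud", can_touch_cui=False, security_function=True),
]

# Commercial tools that can touch CUI need policy/technical controls (Step 2);
# tools providing security functions need SSP documentation (Step 3)
needs_cui_controls = [t.name for t in inventory
                      if t.can_touch_cui and t.deployment == "commercial-api"]
needs_ssp_entry = [t.name for t in inventory if t.security_function]
print(needs_cui_controls, needs_ssp_entry)
```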

Step 2: Apply the CUI Processing Question

For each AI tool that accepts user input, ask two questions: can a user input CUI into this tool? If yes, is the tool's backend FedRAMP Moderate authorized?

  • If the tool is not FedRAMP Moderate authorized and can process CUI: prohibit CUI input via policy and technical controls (data loss prevention rules that flag CUI category markers)
  • If the tool provides security functions for in-scope systems: include it in your CMMC Assessment Boundary and SSP
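One technical control from the first bullet, sketched minimally: a data loss prevention rule that flags CUI banner and portion markings (the pattern "CUI" or "CUI//CATEGORY", e.g. "CUI//SP-CTI"). This regex is a single simplified DLP signal for illustration, not a complete CUI detector:

```python
import re

# Matches "CUI" alone or with a category qualifier such as "CUI//SP-CTI"
CUI_MARKER = re.compile(r"\bCUI(//[A-Z][A-Z-]+)?\b")

def flag_cui_markers(text):
    """Return True if the text carries a CUI banner or portion marking."""
    return bool(CUI_MARKER.search(text))

print(flag_cui_markers("CUI//SP-CTI: engine tolerances attached"))  # True
print(flag_cui_markers("Weekly status update, nothing sensitive"))  # False
```

A real deployment would pair marker detection with content-based classification, since not all CUI in circulation is correctly marked.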

Step 3: Update the SSP

Document every AI tool that is an in-scope asset. For AI-powered security tools (AI-enhanced EDR, AI SIEM), document the tool, the security function it performs, and how it is managed and controlled. For AI tools that have been determined not to process CUI, document the justification and the controls that prevent CUI from entering the tool.

Step 4: AI Acceptable Use Policy

Add AI tool governance to your acceptable use policy. The policy should define:
- Which AI tools are authorized for use on work systems
- Which categories of information cannot be input into any AI tool (CUI, FCI, proprietary data)
- The approval process for adding new AI tools to the authorized list
- How violations are reported and addressed

Step 5: Train Employees

Include AI tool governance in annual security awareness training. Employees need to understand which AI tools they can use, which information categories cannot be processed by AI tools, and why. Abstract policy without context does not change behavior.

What NIST Says: The Cyber AI Profile

In 2025, NIST released a preliminary draft of the "Cybersecurity Framework Profile for Artificial Intelligence" (Cyber AI Profile). The profile addresses two focus areas:

  1. Securing AI systems: How to apply CSF 2.0 outcomes to AI system components, including data pipelines, model development environments, inference infrastructure, and APIs
  2. AI-enabled cyber defense: How AI capabilities can be applied to improve threat detection, vulnerability management, and incident response

The Cyber AI Profile maps directly to CSF 2.0 functions (Govern, Identify, Protect, Detect, Respond, Recover), which in turn map substantially to CMMC Level 2 controls. Organizations implementing CMMC Level 2 that also need to secure AI systems will find that the Cyber AI Profile provides the framework extension they need.

The profile is preliminary and will be refined based on public comments. However, the direction is clear: NIST is building AI security into its mainstream cybersecurity framework, and that framework is the foundation of CMMC. Defense contractors deploying AI systems should monitor the Cyber AI Profile's development and begin mapping their AI system governance to its outcomes.

Practical Guidance: Right Now

If you have commercial AI tools in use in your organization today:

  1. Determine whether CUI could be reaching them. Review how employees use AI tools in their day-to-day work. Ask specifically about AI assistants used for drafting technical documents, summarizing contract data, or answering questions about program specifications.
  2. If CUI is reaching commercial AI tools, address it now. Implement data loss prevention controls, add an AI acceptable use policy, and provide immediate guidance to employees on which information cannot be input into commercial AI tools.
  3. Update your SSP. If AI-powered security tools are in scope, document them. If commercial AI tools have been confirmed not to process CUI, document the basis for that determination.
  4. Consider the AI-positive opportunity. Evaluate whether your CMMC evidence collection and monitoring program could benefit from automation tools that use AI capabilities. The compliance burden of maintaining a Level 2 program over a three-year cycle can be meaningfully reduced with the right automation investment.

Key Takeaways

  • Commercial AI tools can inadvertently receive CUI if employees use them to process work documents; commercial AI backends are not FedRAMP Moderate authorized
  • AI-powered security tools managing in-scope systems are in-scope CMMC assets that must be in the SSP
  • AI-generated SSP content must be verified against the actual technical environment
  • Nation-state adversaries are using AI to enhance attack capabilities against the DIB; CMMC controls remain the defensive baseline
  • AI offers genuine compliance opportunities: evidence collection automation, SSP drafting support, continuous monitoring, risk prioritization
  • NIST's Cyber AI Profile maps AI security to CSF 2.0 and is directly relevant to CMMC environments with AI deployments

Using AI tools in your operations and want to understand the CMMC implications? NR Labs helps defense contractors govern AI tool adoption, update CMMC Assessment Boundaries to account for AI systems, and leverage AI automation for compliance program efficiency. Contact us to discuss your AI and CMMC situation.

Frequently Asked Questions

Can employees use ChatGPT or Copilot to process CUI?

Generally no, unless the AI tool is deployed within the organization's CMMC Assessment Boundary with appropriate security controls. Commercial AI tools like ChatGPT, Google Gemini, and Microsoft Copilot process data on external infrastructure that is not controlled by the contractor. Inputting CUI into these tools constitutes unauthorized disclosure of controlled information outside the authorized boundary, regardless of the tool's password protection or terms of service.

If an AI-powered security tool manages in-scope systems, does it need to be in the SSP?

Yes. Any AI-powered tool that processes, stores, or transmits CUI, or that provides security functions for systems within the CMMC Assessment Boundary, must be documented in the System Security Plan. This includes AI-powered SIEM tools, endpoint detection platforms, and automated monitoring systems. The tool's data flows, access controls, and security configuration must be described and assessed against applicable CMMC requirements.

How can AI help automate CMMC compliance without creating new risks?

AI can support CMMC compliance through automated evidence collection scripting, continuous configuration monitoring, anomaly detection in security logs, and policy document drafting assistance. The key is ensuring AI tools used for these purposes operate within the CMMC boundary with appropriate controls, do not transmit CUI to external services, and that AI-generated outputs (like draft policies or evidence summaries) are reviewed by qualified humans before submission.