Patch Management for CMMC: What "Timely" Actually Looks Like in Practice
A Monthly Patch Tuesday Habit Is Not a Patch Management Program
NIST SP 800-171 requires identifying, reporting, and correcting system flaws in a timely manner. But "timely" is organization-defined — and "we patch monthly" does not define a timeline, a severity model, an exception process, or a verification method. Here is what a defensible patch management program actually looks like for CMMC Level 2.
Why "We Patch Monthly" Is Not Enough
Most defense contractors patch their systems. The problem is not that patching doesn't happen. The problem is that it happens informally — on an IT technician's schedule, without documented severity thresholds, without a defined response for emergency vulnerabilities, and without evidence that patches were verified after installation.
An assessor evaluating SI.L2-3.14.1 (flaw remediation) and RA.L2-3.11.3 (remediate vulnerabilities in accordance with risk assessments) is not asking whether you patch. They are asking whether you have a governed, repeatable, severity-differentiated process that produces evidence at every step.
The controls form a chain. Vulnerability scanning identifies the flaws. Risk assessment prioritizes them. Patching remediates them. Verification confirms the fix. Documentation ties the chain together. A monthly cadence is one element of the chain — and not the most important one.
Severity, Exploitability, and Environment-Specific Risk
Not every vulnerability warrants the same response timeline. NIST SP 800-171 requires remediation "in accordance with risk assessments" — which means the contractor must evaluate each flaw's risk in the context of their specific environment, not simply apply the vendor's severity rating blindly.
In practice, this means defining a severity-based remediation framework that considers three factors:
CVSS Base Severity
The Common Vulnerability Scoring System provides a standardized severity rating — Critical (9.0–10.0), High (7.0–8.9), Medium (4.0–6.9), Low (0.1–3.9). This is the starting point. Most organizations map their SLA tiers directly to CVSS ranges. But CVSS alone does not account for your specific environment — a Critical vulnerability in a service you don't run is not actually critical to you.
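The bands quoted above reduce to a simple threshold lookup. A minimal Python sketch, purely illustrative:

```python
def cvss_band(score: float) -> str:
    """CVSS v3.x base-severity bands quoted above."""
    if score >= 9.0:
        return "Critical"
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    if score > 0.0:
        return "Low"
    return "None"  # CVSS defines a score of 0.0 as "None"
```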
Known Exploitability
Is the vulnerability being actively exploited in the wild? CISA's Known Exploited Vulnerabilities (KEV) catalog is the authoritative source. A Medium-severity vulnerability that appears on the KEV catalog — meaning adversaries are actively using it — warrants a faster response than a Critical vulnerability with no known exploit. Your patch policy should include a provision for accelerated remediation when a CVE appears on the KEV catalog, regardless of its CVSS score.
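In automation terms, the accelerated-remediation rule is a set-membership test against the catalog. A hedged sketch in Python, assuming the KEV feed's published JSON shape (a top-level `vulnerabilities` list whose entries carry a `cveID` field); verify the current feed URL on cisa.gov before depending on it:

```python
import json
import urllib.request

# CISA's published JSON feed for the KEV catalog (confirm the current
# URL on cisa.gov before using it in automation).
KEV_FEED_URL = (
    "https://www.cisa.gov/sites/default/files/feeds/"
    "known_exploited_vulnerabilities.json"
)

def load_kev_ids(feed: dict) -> set[str]:
    """Extract the set of CVE IDs from a parsed KEV feed document."""
    return {entry["cveID"] for entry in feed.get("vulnerabilities", [])}

def needs_accelerated_remediation(cve_id: str, kev_ids: set[str]) -> bool:
    """Policy rule from the text: any CVE on the KEV catalog gets the
    accelerated SLA, regardless of its CVSS score."""
    return cve_id in kev_ids

# Runtime usage (network call), for example:
#   with urllib.request.urlopen(KEV_FEED_URL) as resp:
#       kev_ids = load_kev_ids(json.load(resp))
```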
Environment-Specific Exposure
Does the vulnerable system face the internet? Does it process CUI? Is it in the enclave or in the corporate network? A Critical vulnerability on an internet-facing VPN gateway that authenticates remote CUI users is a different risk than the same vulnerability on an internal print server that never touches CUI. Your risk assessment should adjust the effective priority based on the asset's role, exposure, and data classification.
| Effective Priority | Criteria | Remediation SLA |
|---|---|---|
| P1 — Emergency | CVSS Critical and actively exploited (KEV) and affects CUI-handling or internet-facing systems | 72 hours or less |
| P2 — Critical | CVSS Critical or High, or any severity on KEV catalog | 14 days |
| P3 — Standard | CVSS Medium, no active exploitation, not directly internet-facing | 30 days |
| P4 — Routine | CVSS Medium or Low, internal systems, no known exploitation | 60 days or next maintenance window |
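The table above can be sketched as a top-down priority function (illustrative Python; tiers are evaluated in order, so a High-severity finding matches P2 before it can reach P3, and anything not matching an earlier tier falls to P4):

```python
from datetime import timedelta

def effective_priority(cvss: float, on_kev: bool,
                       internet_facing: bool, handles_cui: bool) -> str:
    """Map a finding to the P1-P4 tiers from the policy table.
    Criteria are checked top-down, mirroring the table rows."""
    critical = cvss >= 9.0
    high = cvss >= 7.0
    # P1: Critical AND actively exploited AND CUI-handling or internet-facing
    if critical and on_kev and (handles_cui or internet_facing):
        return "P1"
    # P2: Critical or High, or any severity on the KEV catalog
    if high or on_kev:
        return "P2"
    # P3: Medium, no active exploitation, not directly internet-facing
    if cvss >= 4.0 and not internet_facing:
        return "P3"
    # P4: everything else (Medium/Low, internal, no known exploitation)
    return "P4"

# Remediation SLAs from the table (P4 may also be "next maintenance window")
SLA = {
    "P1": timedelta(hours=72),
    "P2": timedelta(days=14),
    "P3": timedelta(days=30),
    "P4": timedelta(days=60),
}
```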
Emergency Patching Versus Standard Maintenance Windows
Most organizations operate on a standard patch cycle — Patch Tuesday for Microsoft products, a monthly or biweekly maintenance window for everything else. This cadence handles the bulk of routine patching. But it does not handle the scenario that matters most: a critical zero-day with active exploitation disclosed on a Wednesday morning.
Your patch management policy must define two distinct pathways:
Scheduled Maintenance Window
Patches are tested in a staging environment (if available), approved by the change advisory board or IT lead, deployed during the scheduled window, and verified post-deployment. This pathway handles P3 and P4 priorities. It is planned, predictable, and minimally disruptive. Most patches flow through this channel.
Out-of-Cycle Deployment
A critical vulnerability with active exploitation triggers the emergency pathway. Testing is compressed or bypassed. Approval is expedited — often a single decision-maker rather than a full review. Deployment happens immediately, outside the normal maintenance window. The risk of the patch breaking something is accepted because the risk of not patching is higher. This pathway handles P1 and sometimes P2 priorities.
The emergency pathway must be documented in the patch policy — not invented during a crisis. The policy should specify: what triggers the emergency pathway (e.g., KEV listing, vendor advisory rated Critical, CISA alert), who authorizes the out-of-cycle deployment, what testing is required (or explicitly waived), and how the deployment is documented after the fact. An assessor will ask: "What happens when a zero-day drops? Walk me through your process." If the answer is "we figure it out," the process is undocumented and the control is at risk.
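The documented triggers and the 72-hour clock can be expressed as a small sketch (illustrative Python; the 14-day fallback here is a simplification of the full SLA table, which also defines 30- and 60-day tiers):

```python
from datetime import datetime, timedelta

def is_emergency(on_kev: bool, vendor_severity: str, cisa_alert: bool) -> bool:
    """Triggers named in the text for the out-of-cycle pathway:
    a KEV listing, a vendor advisory rated Critical, or a CISA alert."""
    return on_kev or vendor_severity.lower() == "critical" or cisa_alert

def remediation_deadline(disclosed: datetime, emergency: bool) -> datetime:
    """72-hour clock for the emergency (P1) pathway, otherwise the
    14-day P2 window (simplified from the full table)."""
    window = timedelta(hours=72) if emergency else timedelta(days=14)
    return disclosed + window
```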
Third-Party Applications and Firmware
Windows updates get all the attention. Third-party software and firmware get almost none — and they are where some of the most exploitable vulnerabilities live.
NIST SP 800-171 does not limit flaw remediation to operating system patches. SI.L2-3.14.1 applies to system flaws — which includes the operating system, installed applications, browser plugins, runtime environments, drivers, and firmware. If it runs on an in-scope system and it has a known vulnerability, it is covered by the control.
Third-Party Desktop Applications
Adobe Acrobat, Google Chrome, Mozilla Firefox, Java, 7-Zip, Zoom, Webex — any software installed on in-scope endpoints. Windows Update does not patch these. You need a third-party patch management tool — Intune with Winget, Patch My PC, PDQ Deploy, Ninite Pro, or similar — or a documented manual process with verification. Unpatched third-party software is one of the most common authenticated scan findings and one of the easiest to remediate.
Server Applications
SQL Server, IIS, Apache, line-of-business applications, ERP systems, engineering software. These often have their own patch cycles independent of the operating system. Some require downtime to update. Some have vendor-imposed restrictions on when patches can be applied. Your patch management program must track these independently — they are not covered by WSUS or Intune OS update policies.
Network Device Firmware
Firewalls, switches, wireless access points, VPN concentrators, and other network appliances run firmware — not a traditional operating system. Firmware updates are released by the manufacturer, often on an irregular schedule. They require manual download, staging, and deployment — usually through the device's management console. Firmware vulnerabilities are real and exploitable. Your patch program must include a process for tracking vendor advisories, testing firmware updates, and deploying them within the defined SLA.
BIOS and UEFI Updates
Endpoint BIOS and UEFI firmware is another category that Windows Update does not cover. Dell, HP, and Lenovo publish BIOS updates through their own management tools (Dell Command Update, HP Support Assistant, Lenovo System Update). These updates address hardware-level vulnerabilities — including speculative execution flaws and secure boot bypasses — that operating system patches cannot fix. Include BIOS update checks in your quarterly or semiannual maintenance cycle.
How to Handle Systems That Cannot Be Patched Quickly
Not every system can accept a patch on the SLA timeline. Some cannot be patched at all — at least not without operational impact that exceeds the vulnerability risk. These exceptions are normal. What is not normal is ignoring them.
Common scenarios where patching is delayed or infeasible:
- Vendor-certified configurations — An engineering application or ERP system is certified by the vendor to run on a specific OS version and patch level. Applying a newer patch may break certification, void support, or cause application failures. The vendor has not yet certified the patch.
- Operational equipment — A CNC machine, a test bench controller, or an industrial system runs embedded software that cannot be updated without the manufacturer's involvement. Downtime for the update requires scheduling weeks in advance and may affect production commitments.
- Legacy systems pending decommission — An older server running Windows Server 2012 R2 is scheduled for decommission in 90 days. It is still in scope. Patches for the OS are no longer available. The system cannot be upgraded without replacing the application it runs.
- Patch stability concerns — A recently released patch has known issues reported by other organizations. Deploying it immediately risks system instability. The IT team is waiting for a revised patch or for stability confirmation before deploying.
In each case, the correct response is the same: document the exception, define a compensating control, set a review date, and track it.
Exception Tracking and POA&M Interaction
Patch exceptions — systems that cannot meet the defined SLA — must be tracked in a register that the assessor can review. For some organizations, this register is a standalone document. For others, it is integrated into the vulnerability remediation tracker. Either approach works, as long as the information is complete and current.
Each exception entry should include:
- The affected system or systems
- The vulnerability or missing patch (CVE identifier or vendor reference)
- The reason the remediation SLA cannot be met
- The compensating control in place while the flaw remains unremediated
- The review date and target closure date
- The owner responsible for tracking the exception to closure
POA&M interaction: If a patch exception results in a CMMC assessment objective being scored Not Met, the finding may be placed on a Plan of Action and Milestones — provided the control is not one of the practices excluded from POA&M eligibility. The POA&M must include the same information as the exception register plus the specific CMMC practice affected and the milestone plan for closure. Under CMMC rules, POA&M items must be closed within 180 days of the Conditional CMMC Status Date. A patch exception that has been open for longer than that — or that has no realistic path to closure — cannot be placed on a POA&M. It is a hard failure.
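One way to structure a register entry and the 180-day eligibility check is a sketch like this (Python; field names are illustrative, not mandated by CMMC or NIST SP 800-171):

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class PatchException:
    """One row of the exception register described above."""
    system: str
    cve_id: str
    reason: str                # why the SLA cannot be met
    compensating_control: str  # risk reduction while unpatched
    opened: date
    target_closure: date

def poam_eligible(exc: PatchException, conditional_status_date: date) -> bool:
    """POA&M items must close within 180 days of the Conditional CMMC
    Status Date; an exception with no closure path inside that window
    cannot be carried on a POA&M."""
    return exc.target_closure <= conditional_status_date + timedelta(days=180)
```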
Evidence Model for Repeatable Patch Governance
The assessment evidence package for patch management must demonstrate the entire lifecycle: policy, execution, verification, and exception handling. Here is what a complete package looks like — without overbuilding:
- A patch management policy defining severity tiers, remediation SLAs, and the emergency pathway
- Vulnerability scan reports identifying the flaws that needed patching
- Deployment records showing what was patched, on which systems, and when
- Post-deployment verification scans confirming the fix took effect
- An exception register (or POA&M entries) documenting compensating controls and target closure dates
The Bottom Line
Patch management for CMMC is not about patching speed. It is about governed, severity-differentiated, evidence-producing patch operations that align what your policy says with what your systems show. The assessor evaluates the chain: policy defines the SLAs, vulnerability scans identify what needs patching, deployment records show what was patched, verification scans confirm the fix, and exceptions are documented with compensating controls and target dates.
Every link in that chain must produce evidence. A missing link — undefined SLAs, untracked third-party software, unverified deployments, or undocumented exceptions — is a finding. The patch itself is one step. The program around it is the control.
"Timely" does not mean fast. It means governed. It means you defined the timeline, you met the timeline, and you can prove it. And when you could not meet the timeline, you documented why, implemented a compensating control, and tracked the exception to closure. That is what a patch management program looks like for CMMC.