
Patch Management for CMMC: What "Timely" Actually Looks Like in Practice

A Monthly Patch Tuesday Habit Is Not a Patch Management Program

NIST SP 800-171 requires identifying, reporting, and correcting system flaws in a timely manner. But "timely" is organization-defined — and "we patch monthly" does not define a timeline, a severity model, an exception process, or a verification method. Here is what a defensible patch management program actually looks like for CMMC Level 2.

Why "We Patch Monthly" Is Not Enough

Most defense contractors patch their systems. The problem is not that patching doesn't happen. The problem is that it happens informally — on an IT technician's schedule, without documented severity thresholds, without a defined response for emergency vulnerabilities, and without evidence that patches were verified after installation.

An assessor evaluating SI.L2-3.14.1 (flaw remediation) and RA.L2-3.11.3 (remediate vulnerabilities in accordance with risk assessments) is not asking whether you patch. They are asking whether you have a governed, repeatable, severity-differentiated process that produces evidence at every step.

"We patch monthly" answers the wrong question. It tells the assessor when you patch. It does not tell them how you decide which patches are urgent, what happens when a critical zero-day is disclosed between maintenance windows, how you handle systems that cannot accept a patch immediately, or how you verify that the patch was successfully applied. Those are the questions the assessment objectives require you to answer.

The controls form a chain. Vulnerability scanning identifies the flaws. Risk assessment prioritizes them. Patching remediates them. Verification confirms the fix. Documentation ties the chain together. A monthly cadence is one element of the chain — and not the most important one.

Severity, Exploitability, and Environment-Specific Risk

Not every vulnerability warrants the same response timeline. NIST SP 800-171 requires remediation "in accordance with risk assessments" — which means the contractor must evaluate each flaw's risk in the context of their specific environment, not simply apply the vendor's severity rating blindly.

In practice, this means defining a severity-based remediation framework that considers three factors:

01 CVSS Base Severity

The Common Vulnerability Scoring System provides a standardized severity rating — Critical (9.0–10.0), High (7.0–8.9), Medium (4.0–6.9), Low (0.1–3.9). This is the starting point. Most organizations map their SLA tiers directly to CVSS ranges. But CVSS alone does not account for your specific environment — a Critical vulnerability in a service you don't run is not actually critical to you.

02 Known Exploitability

Is the vulnerability being actively exploited in the wild? CISA's Known Exploited Vulnerabilities (KEV) catalog is the authoritative source. A Medium-severity vulnerability that appears on the KEV catalog — meaning adversaries are actively using it — warrants a faster response than a Critical vulnerability with no known exploit. Your patch policy should include a provision for accelerated remediation when a CVE appears on the KEV catalog, regardless of its CVSS score.

03 Environment-Specific Exposure

Does the vulnerable system face the internet? Does it process CUI? Is it in the enclave or in the corporate network? A Critical vulnerability on an internet-facing VPN gateway that authenticates remote CUI users is a different risk than the same vulnerability on an internal print server that never touches CUI. Your risk assessment should adjust the effective priority based on the asset's role, exposure, and data classification.

A defensible patch SLA table incorporates all three factors. The baseline is CVSS severity. The accelerator is KEV listing or active exploitation intelligence. The modifier is the asset's role in the CUI environment. Together, they produce a response timeline that is risk-informed — not calendar-driven.
| Effective Priority | Criteria | Remediation SLA |
| --- | --- | --- |
| P1 — Emergency | CVSS Critical and actively exploited (KEV) and affects CUI-handling or internet-facing systems | 72 hours or less |
| P2 — Critical | CVSS Critical or High, or any severity on KEV catalog | 14 days |
| P3 — Standard | CVSS High or Medium, no active exploitation, not directly internet-facing | 30 days |
| P4 — Routine | CVSS Medium or Low, internal systems, no known exploitation | 60 days or next maintenance window |

Define SLAs you can consistently meet. The assessor will compare your stated timelines to your actual patch history. If your policy says P1 within 72 hours but your records show 10-day remediation for a KEV-listed CVE, the gap is a finding — not because the patch was slow, but because the execution did not match the policy.
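The three-factor logic can be sketched in a few lines. This is an illustrative mapping only; the tier names and thresholds mirror the SLA table above, and the `is_kev`, `internet_facing`, and `handles_cui` flags are hypothetical inputs your asset inventory and scanner would supply:

```python
def effective_priority(cvss: float, is_kev: bool,
                       internet_facing: bool, handles_cui: bool) -> str:
    """Map CVSS severity, exploitation status, and asset exposure to the
    P1-P4 tiers from the SLA table above (illustrative sketch only)."""
    exposed = internet_facing or handles_cui
    if cvss >= 9.0 and is_kev and exposed:
        return "P1"   # emergency: 72 hours or less
    if cvss >= 7.0 or is_kev:
        return "P2"   # critical: 14 days (any severity on KEV lands here)
    if cvss >= 4.0:
        return "P3"   # standard: 30 days
    return "P4"       # routine: 60 days or next maintenance window
```

Note how a Medium-severity CVE on the KEV catalog lands in P2 ahead of its CVSS score alone, which is exactly the accelerator the table describes.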

Emergency Patching Versus Standard Maintenance Windows

Most organizations operate on a standard patch cycle — Patch Tuesday for Microsoft products, a monthly or biweekly maintenance window for everything else. This cadence handles the bulk of routine patching. But it does not handle the scenario that matters most: a critical zero-day with active exploitation disclosed on a Wednesday morning.

Your patch management policy must define two distinct pathways:

Standard Pathway: Scheduled Maintenance Window

Patches are tested in a staging environment (if available), approved by the change advisory board or IT lead, deployed during the scheduled window, and verified post-deployment. This pathway handles P3 and P4 priorities. It is planned, predictable, and minimally disruptive. Most patches flow through this channel.

Emergency Pathway: Out-of-Cycle Deployment

A critical vulnerability with active exploitation triggers the emergency pathway. Testing is compressed or bypassed. Approval is expedited — often a single decision-maker rather than a full review. Deployment happens immediately, outside the normal maintenance window. The risk of the patch breaking something is accepted because the risk of not patching is higher. This pathway handles P1 and sometimes P2 priorities.

The emergency pathway must be documented in the patch policy — not invented during a crisis. The policy should specify: what triggers the emergency pathway (e.g., KEV listing, vendor advisory rated Critical, CISA alert), who authorizes the out-of-cycle deployment, what testing is required (or explicitly waived), and how the deployment is documented after the fact. An assessor will ask: "What happens when a zero-day drops? Walk me through your process." If the answer is "we figure it out," the process is undocumented and the control is at risk.

The emergency pathway does not mean "panic patching." It means a pre-defined, documented fast track with clear triggers, clear authority, and clear evidence requirements — even if the timeline is compressed from 30 days to 72 hours.
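The trigger test itself is simple enough to codify. A minimal sketch, using the trigger examples named in the policy paragraph above; the function and dictionary field names are assumptions for illustration, not a real API:

```python
def emergency_pathway_triggered(cve: dict) -> bool:
    """Return True when any documented emergency trigger fires:
    KEV listing, a Critical-rated vendor advisory, or a CISA alert.
    Field names here are hypothetical placeholders."""
    return (cve.get("on_kev_catalog", False)
            or cve.get("vendor_advisory_rating") == "Critical"
            or cve.get("cisa_alert", False))
```

The point is that the trigger condition is written down in advance, so "what happens when a zero-day drops" has a deterministic answer rather than an improvised one.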

Third-Party Applications and Firmware

Windows updates get all the attention. Third-party software and firmware get almost none — and they are where some of the most exploitable vulnerabilities live.

NIST SP 800-171 does not limit flaw remediation to operating system patches. SI.L2-3.14.1 applies to system flaws — which includes the operating system, installed applications, browser plugins, runtime environments, drivers, and firmware. If it runs on an in-scope system and it has a known vulnerability, it is covered by the control.

01 Third-Party Desktop Applications

Adobe Acrobat, Google Chrome, Mozilla Firefox, Java, 7-Zip, Zoom, Webex — any software installed on in-scope endpoints. Windows Update does not patch these. You need a third-party patch management tool — Intune with Winget, Patch My PC, PDQ Deploy, Ninite Pro, or similar — or a documented manual process with verification. Unpatched third-party software is one of the most common authenticated scan findings and one of the easiest to remediate.

02 Server Applications

SQL Server, IIS, Apache, line-of-business applications, ERP systems, engineering software. These often have their own patch cycles independent of the operating system. Some require downtime to update. Some have vendor-imposed restrictions on when patches can be applied. Your patch management program must track these independently — they are not covered by WSUS or Intune OS update policies.

03 Network Device Firmware

Firewalls, switches, wireless access points, VPN concentrators, and other network appliances run firmware — not a traditional operating system. Firmware updates are released by the manufacturer, often on an irregular schedule. They require manual download, staging, and deployment — usually through the device's management console. Firmware vulnerabilities are real and exploitable. Your patch program must include a process for tracking vendor advisories, testing firmware updates, and deploying them within the defined SLA.

04 BIOS and UEFI Updates

Endpoint BIOS and UEFI firmware is another category that Windows Update does not cover. Dell, HP, and Lenovo publish BIOS updates through their own management tools (Dell Command Update, HP Support Assistant, Lenovo System Update). These updates address hardware-level vulnerabilities — including speculative execution flaws and secure boot bypasses — that operating system patches cannot fix. Include BIOS update checks in your quarterly or semiannual maintenance cycle.

The assessor's cross-reference: They will compare your vulnerability scan results to your patch deployment records. If the scan shows an unpatched version of Adobe Acrobat or an outdated FortiGate firmware, and your patch records only cover Windows updates, the gap is visible and documented — by the scan you ran yourself.
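You can run the same cross-reference internally before the assessor does. A sketch, assuming you can export the CVE IDs your scanner reports and the CVE IDs your deployment records cover:

```python
def coverage_gap(scan_findings: set[str], patched_cves: set[str]) -> set[str]:
    """CVEs your own scan reports but your patch records do not cover.
    This is the gap an assessor finds by the same comparison."""
    return scan_findings - patched_cves
```

If the result is non-empty and the missing entries are all Chrome, Acrobat, or firmware CVEs, the gap is almost certainly a Windows-only patch program, which is the most common version of this finding.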

How to Handle Systems That Cannot Be Patched Quickly

Not every system can accept a patch on the SLA timeline. Some cannot be patched at all — at least not without operational impact that exceeds the vulnerability risk. These exceptions are normal. What is not normal is ignoring them.

Common scenarios where patching is delayed or infeasible:

  • Vendor-certified configurations — An engineering application or ERP system is certified by the vendor to run on a specific OS version and patch level. Applying a newer patch may break certification, void support, or cause application failures. The vendor has not yet certified the patch.
  • Operational equipment — A CNC machine, a test bench controller, or an industrial system runs embedded software that cannot be updated without the manufacturer's involvement. Downtime for the update requires scheduling weeks in advance and may affect production commitments.
  • Legacy systems pending decommission — An older server running Windows Server 2012 R2 is scheduled for decommission in 90 days. It is still in scope. Patches for the OS are no longer available. The system cannot be upgraded without replacing the application it runs.
  • Patch stability concerns — A recently released patch has known issues reported by other organizations. Deploying it immediately risks system instability. The IT team is waiting for a revised patch or for stability confirmation before deploying.

In each case, the correct response is the same: document the exception, define a compensating control, set a review date, and track it.

The compensating control is what makes the exception defensible. "We can't patch this server" is not a compensating control. "This server is isolated on a VLAN with no internet access, no inbound connections from user endpoints, and restricted admin access — and the exception is reviewed monthly" is a compensating control. The assessor will ask what you did instead of patching — and the answer must be specific, technical, and documented.

Exception Tracking and POA&M Interaction

Patch exceptions — systems that cannot meet the defined SLA — must be tracked in a register that the assessor can review. For some organizations, this register is a standalone document. For others, it is integrated into the vulnerability remediation tracker. Either approach works, as long as the information is complete and current.

Each exception entry should include:

Field 01 Asset & CVE
The specific system (hostname, IP, role) and the specific vulnerability (CVE ID, severity, affected component) that cannot be remediated within the standard SLA.
Field 02 Justification
Why the patch cannot be applied on schedule — vendor certification dependency, operational impact, patch stability issue, or system pending decommission. Must be specific, not generic.
Field 03 Compensating Control
The specific technical measure in place to reduce the risk while the vulnerability remains open — network isolation, enhanced monitoring, restricted access, application whitelisting, or disabling the affected service.
Field 04 Target Resolution Date
When the patch will be applied, the system will be decommissioned, or the exception will be re-evaluated. Open-ended exceptions without a target date are findings.
Field 05 Approver
The name and role of the individual who approved the exception. This should be someone with authority to accept risk on behalf of the organization — not the IT technician who discovered the problem.
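The five fields above map directly onto a register schema. A minimal sketch, with illustrative field names; the structure, not the naming, is what matters:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PatchException:
    """One row of the exception register (field names are illustrative)."""
    asset: str                  # hostname, IP, and role
    cve_id: str                 # the specific vulnerability
    justification: str          # specific, not generic
    compensating_control: str   # isolation, monitoring, restricted access
    target_resolution: date     # fix, decommission, or re-evaluation date
    approver: str               # name and role with risk-acceptance authority

    def is_past_due(self, today: date) -> bool:
        # Stale entries with past-due target dates and no update are findings.
        return today > self.target_resolution
```

A weekly report of entries where `is_past_due` is true is the cheapest way to keep the register from going stale between reviews.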

POA&M interaction: If a patch exception results in a CMMC assessment objective being scored Not Met, the finding may be placed on a Plan of Action and Milestones — provided the control is not one of the practices excluded from POA&M eligibility. The POA&M must include the same information as the exception register plus the specific CMMC practice affected and the milestone plan for closure. Under CMMC rules, POA&M items must be closed within 180 days of the Conditional CMMC Status Date. A patch exception that has been open for longer than that — or that has no realistic path to closure — cannot be placed on a POA&M. It is a hard failure.

The exception register and the POA&M are related but not identical. The exception register tracks operational patch deferrals — these are internal risk management decisions. The POA&M tracks assessment findings that affect CMMC certification status. Not every exception becomes a POA&M item, but every POA&M item related to patching should trace back to an exception with full documentation.
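The 180-day closure window is plain date arithmetic, but it is worth computing explicitly so nobody discovers the deadline after it has passed. A sketch:

```python
from datetime import date, timedelta

def poam_deadline(conditional_status_date: date) -> date:
    """POA&M items must close within 180 days of the
    Conditional CMMC Status Date under CMMC rules."""
    return conditional_status_date + timedelta(days=180)

def poam_item_viable(conditional_status_date: date, today: date) -> bool:
    """An item past the 180-day window cannot remain on a POA&M."""
    return today <= poam_deadline(conditional_status_date)
```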

Evidence Model for Repeatable Patch Governance

The assessment evidence package for patch management must demonstrate the entire lifecycle: policy, execution, verification, and exception handling. Here is what a complete package looks like — without overbuilding.

Evidence 01 Patch Policy
Documented policy that defines: patch categories covered (OS, third-party, firmware, BIOS), SLA tiers by severity, standard and emergency pathways, exception process, and roles responsible. Must reference the vulnerability scanning process that feeds it.
Evidence 02 Deployment Records
Reports from your patch deployment tool — WSUS, Intune, SCCM, or third-party tool — showing which patches were deployed, to which systems, on what date, and whether the deployment succeeded. Must cover at least the prior 90 days. Include both OS and third-party patches.
Evidence 03 Compliance Dashboard
A current-state view showing the patch compliance percentage across in-scope systems — exported from Intune, Defender Vulnerability Management, or your scanning tool. The assessor uses this to identify systems that are behind. 100% compliance is not expected. A documented process for closing gaps is.
Evidence 04 Verification Scans
Post-deployment vulnerability scan results showing that remediated CVEs no longer appear on patched systems. This is the "close the loop" artifact — it proves the patch worked, not just that it was attempted. Can be a full scan or a targeted rescan.
Evidence 05 Exception Register
All current patch exceptions with asset, CVE, justification, compensating control, target date, and approver. Must be current — stale entries with past-due target dates and no update are findings.
Evidence 06 Emergency Patch Record
If an emergency deployment occurred in the evidence period, the record of the trigger (CVE, CISA alert, vendor advisory), the approval, the deployment, and the verification. If no emergency occurred, the assessor may ask a hypothetical — "walk me through what would happen if a KEV-listed zero-day dropped tomorrow" — and the policy must provide the answer.
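The "close the loop" check in Evidence 04 reduces to a set comparison between the pre- and post-deployment scans. A sketch, assuming CVE IDs can be exported from both scan reports:

```python
def verify_remediation(pre_scan: set[str], post_scan: set[str],
                       targeted: set[str]) -> tuple[set[str], set[str]]:
    """Split the CVEs a deployment targeted into (verified closed,
    still present) by comparing before-and-after scan results."""
    closed = (targeted & pre_scan) - post_scan
    still_open = targeted & post_scan
    return closed, still_open
```

Saving both scan exports alongside this comparison is precisely the before-and-after evidence the failure paragraph below says contractors routinely neglect to keep.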
The most common patch management assessment failure: The contractor patches reliably but cannot prove it. WSUS shows patches deployed, but nobody exports the compliance report. Intune shows devices are current, but nobody screenshots the dashboard. The scans show remediation, but nobody saves the before-and-after reports. The patches happened. The evidence did not. And without evidence, the control is Not Met.

The Bottom Line

Patch management for CMMC is not about patching speed. It is about governed, severity-differentiated, evidence-producing patch operations that align what your policy says with what your systems show. The assessor evaluates the chain: policy defines the SLAs, vulnerability scans identify what needs patching, deployment records show what was patched, verification scans confirm the fix, and exceptions are documented with compensating controls and target dates.

Every link in that chain must produce evidence. A missing link — undefined SLAs, untracked third-party software, unverified deployments, or undocumented exceptions — is a finding. The patch itself is one step. The program around it is the control.

"Timely" does not mean fast. It means governed. It means you defined the timeline, you met the timeline, and you can prove it. And when you could not meet the timeline, you documented why, implemented a compensating control, and tracked the exception to closure. That is what a patch management program looks like for CMMC.