CMMC Scoring Explained
Met vs Not Met, Assessment Objectives, and Why One Gap Fails the Requirement
CMMC Level 2 findings are captured at the assessment objective level — one NOT MET objective causes the entire security requirement to be NOT MET. Per 32 CFR 170.4, an assessment objective is a set of determination statements expressing the desired outcome for a specific security requirement. Those determinations are reached through three validation methods: examine, interview, and test.
There is no curve. There is no "mostly compliant" outcome. Each of the 110 CMMC Level 2 control practices is decomposed into discrete assessment objectives — 320 in total across the full Level 2 practice set — and each objective receives an independent binary finding: Met or Not Met. A practice is Met only when every one of its objectives is individually Met. One unaddressed objective produces a Not Met finding for the entire practice, regardless of how many other objectives were satisfied.
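The aggregation rule above is simple enough to state in a few lines of code. The following is a minimal sketch, not an official tool; `Finding` and `practice_finding` are hypothetical names chosen for illustration.

```python
from enum import Enum

class Finding(Enum):
    MET = "Met"
    NOT_MET = "Not Met"
    NOT_APPLICABLE = "N/A"

def practice_finding(objective_findings):
    """A practice is Met only when every objective is individually Met.
    A single Not Met objective fails the entire practice."""
    if any(f is Finding.NOT_MET for f in objective_findings):
        return Finding.NOT_MET
    return Finding.MET

# A practice that decomposes into four objectives; one gap fails it all.
findings = [Finding.MET, Finding.MET, Finding.NOT_MET, Finding.MET]
print(practice_finding(findings))  # → Finding.NOT_MET
```

Note that three Met objectives out of four carry no partial credit: the result is identical to a practice with no preparation at all.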
Met, Not Met, and Not Applicable — The Only Three Outcomes
The DoD CIO CMMC Level 2 Assessment Guide defines exactly three possible findings for each assessment objective. These are the only permitted outcomes — there is no partial credit, no conditional met, and no "in progress" status that carries scoring value.
Met: All Determination Statements Satisfied
The assessor has verified — through at least one of the three assessment methods — that the objective's determination statements are all satisfied by the current state of the environment. The evidence is adequate (right type) and sufficient (enough of it to demonstrate consistent implementation).
Not Met: One or More Statements Unsatisfied
The assessor has determined that at least one determination statement within the objective is not satisfied — whether because evidence is missing, the live system contradicts the documentation, or the control owner cannot demonstrate understanding of the requirement. The entire practice is Not Met.
Not Applicable: Objective Does Not Apply, With Documentation
The control objective does not apply to the assessed environment because the activity or technology it addresses is absent. N/A is rarely accepted and requires affirmative documentation: a policy prohibiting the activity, a technical control enforcing the prohibition, and an SSP statement explaining both. N/A is a claim that must be evidenced, not a blank field.
How Practices Decompose Into Objectives — A Live Example
The structure that makes one missed objective consequential is the practice-to-objective decomposition defined in NIST SP 800-171A. Each practice title is an umbrella for a set of specific determination statements that together constitute full implementation. Take AC.L2-3.1.1, which limits system access to authorized users, processes, and devices: SP 800-171A decomposes it into six objectives — authorized users are identified; processes acting on behalf of authorized users are identified; authorized devices (including other systems) are identified; and system access is limited to each of those three identified sets. An organization that diligently restricts user access but has never formally identified authorized devices fails one objective, and the practice is Not Met. Satisfying the practice title in spirit is not the same as satisfying each determination statement in evidence.
How Assessors Validate a Finding: Examine, Interview, and Test
NIST SP 800-171A defines three assessment methods. Formally, satisfying one of the three methods for a given objective is sufficient to mark it Met. In practice, C3PAO assessors cross-validate across at least two methods — typically Examine plus either Interview or Test — because a policy that exists on paper but is not understood by the control owner, or not enforced by the live system, is a policy that does not satisfy the objective.
Document Review
The assessor reviews documentation — policies, procedures, SSP implementation statements, configuration exports, activity records — and evaluates whether the evidence demonstrates the objective is satisfied.
Personnel Verification
The assessor interviews the specific individual named as control owner in the SSP. The interview verifies that the control owner understands the requirement, can describe how it is implemented, and gives an account consistent with the documented procedures.
Live System Verification
The assessor exercises a live system and observes whether it behaves as documented — watching a session lock, triggering a SIEM alert, reviewing active ACL enforcement, or confirming MFA is active on CUI accounts in production.
| Object Type | What It Is | Examples | Method |
|---|---|---|---|
| Specifications | Documented statements of policy, procedure, standard, or requirement | Access control policy, incident response procedure, acceptable use policy, SSP implementation statement | Examine |
| Mechanisms | Hardware, software, and firmware implementing or enforcing a requirement | Firewall ACL rules, MFA configuration, SIEM alert policy, AD group policy, EDR settings | Examine / Test |
| Activities | Operational processes — records demonstrating a control is consistently executed | Dated log review records, patch remediation tickets, training completion logs, signed scan reports | Examine / Test |
| Individuals | The specific people responsible for implementing and maintaining each control | Network administrator, HR director, ISSO, facilities manager, sampled end users | Interview |
Where Point Values Fit: The 5/3/1 Scoring Model
The DoD NIST SP 800-171 Assessment Methodology assigns a point weight to each of the 110 control practices. These weights determine two things: how much each Not Met practice reduces the SPRS score, and whether a failing practice is eligible to be placed on a POA&M or must be fully remediated before conditional status can be issued.
5-Point Practices
Highest-consequence security capabilities. A Not Met 5-point practice cannot be placed on a POA&M. It must be fully implemented before Phase 2 begins. Examples: FIPS-validated cryptography (SC.L2-3.13.11), multifactor authentication (IA.L2-3.5.3), incident response (IR.L2-3.6.1). An open 5-point practice at Phase 2 close blocks conditional status entirely.
3-Point Practices
Significant security capabilities whose absence is serious. A Not Met 3-point practice reduces the SPRS score by 3. Under 32 CFR 170.21, however, requirements weighted above 1 point are generally not POA&M-eligible; the one carve-out is SC.L2-3.13.11, which may be placed on the POA&M when encryption is employed but is not FIPS-validated. POA&M items must be closed out within the 180-day window, and the total score must still meet the 88/110 threshold for conditional status.
1-Point Practices
Minor deficiencies — often documentation issues such as a missing signature, an undated policy, or an incomplete procedure statement. One-point failures qualify for the 5-day Limited Practice Deficiency correction window. If not corrected within 5 days, they move to the POA&M. A large number of 1-point failures can still add up to a meaningful SPRS score reduction.
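The point arithmetic behind these tiers can be sketched in a few lines. This is an illustrative simplification, not an official calculator: `sprs_score` and `conditional_eligible` are hypothetical names, and the eligibility check deliberately omits the finer POA&M rules.

```python
MAX_SCORE = 110        # score with all 110 practices Met
MIN_CONDITIONAL = 88   # 88/110 floor for conditional status

def sprs_score(not_met_weights):
    """not_met_weights: point values (5, 3, or 1) of each Not Met practice.
    The score starts at 110 and each Not Met practice subtracts its weight."""
    return MAX_SCORE - sum(not_met_weights)

def conditional_eligible(not_met_weights):
    """Simplified check: conditional status requires the 88/110 floor and
    no open 5-point practice. (The regulation imposes further limits on
    which items may sit on the POA&M; those are omitted here.)"""
    return (sprs_score(not_met_weights) >= MIN_CONDITIONAL
            and all(w < 5 for w in not_met_weights))

print(sprs_score([5, 3, 1, 1]))         # 110 - 10 = 100
print(conditional_eligible([1, 1, 1]))  # True: score 107, no 5-point gaps
print(conditional_eligible([5]))        # False: one 5-point gap blocks it
```

The asymmetry is the point: a single 5-point failure is disqualifying no matter how high the remaining score is, while many 1-point failures erode the score gradually toward the 88 floor.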
"Almost Met" — The Four Failure Patterns That Appear After Preparation
The majority of assessment failures occur not in organizations with no security program, but in organizations whose preparation was real but whose evidence — for specific objectives — was inadequate, insufficient, or internally inconsistent. These four patterns account for most Not Met findings in otherwise well-prepared environments.
The Interview Contradicts the Policy
A correctly written, signed access control policy satisfies the Examine method for several objectives. The assessor then interviews the network administrator who owns those controls. The administrator describes a process that contradicts the policy, cannot identify where the policy is stored, or is unaware that the policy assigns them specific implementation responsibilities. The interview produces a Not Met finding for objectives that the Examine method had satisfied.
The critical insight: each assessment method is evaluated independently. A policy that satisfies Examine does not carry over to satisfy Interview. The control owner must be able to describe the same implementation the policy documents.
The Live System Contradicts the SSP
The SSP states that user accounts lock after three consecutive failed authentication attempts. The assessor tests the live system and observes that the group policy actually enforces a five-attempt threshold. The policy exists and the SSP documents the requirement. The test produces a different result than the documentation claims. The objective is Not Met because the test demonstrates the control is not implemented as documented — regardless of whether three attempts or five attempts is the "right" security posture.
Not Applicable Without Evidence
An organization that bans remote access marks the remote access control objectives as Not Applicable. The assessor asks for the policy explicitly prohibiting remote access, the technical control that enforces the prohibition, and the procedure for verifying that new systems are deployed without remote access capability. None of these exist — the organization simply does not allow remote access and has never formalized the prohibition. N/A without supporting evidence is the same as Not Met in practice, because the assessor has no basis to accept the N/A claim.
The Unsigned Policy
A policy document that is complete in content, correctly scoped, and aligned with the assessment objectives it supports, but that lacks an authorizing signature and date from a senior official, is not a valid policy for assessment purposes. Without an authorizing signature, the assessor cannot confirm the policy is operative rather than a draft. The finding lands on the Limited Practice Deficiency list and triggers the 5-day correction window. If not corrected, it moves to the POA&M.
How Asset Categories Determine Which Objectives Apply
The 320 assessment objectives do not apply equally to every asset in the environment. The DoD's five asset categories determine which practices must be demonstrated against which systems — and understanding this mapping is what makes enclave scoping a meaningful cost-reduction strategy rather than merely an organizational convenience.
Pre-Assessment Scoring Readiness Checklist
The binary nature of objective-level scoring makes pre-assessment preparation a completeness exercise rather than a best-effort exercise. Every objective must be individually satisfied. These five preparation checks prevent single-objective failures from producing practice-level Not Met findings in a well-prepared environment:

1. Map adequate and sufficient evidence to every assessment objective individually, not just to every practice.
2. Align SSP and policy language with each objective's exact vocabulary.
3. Test live system behavior against every documented parameter (lockout thresholds, MFA coverage, session locks) before the assessor does.
4. Confirm each named control owner can describe, unprompted, how every control they own is implemented.
5. Back every Not Applicable claim with a prohibiting policy, an enforcing technical control, and an SSP statement explaining both.
The Bottom Line
CMMC scoring is a completeness test applied at the most granular level the framework defines. A practice is the unit of scoring in terms of point impact on the SPRS total. An assessment objective is the unit of evaluation in terms of what the assessor must verify. The gap between the two is where "mostly compliant" organizations discover they are not certified — because every objective within every practice must be individually Met, and one gap in one objective produces the same finding as no preparation at all.
The preparation implication is equally simple: map evidence to every objective, align documentation language to every objective's exact vocabulary, verify live system behavior matches every documented control, and confirm every control owner can describe every practice they own. The assessment does not evaluate security on an impression. It evaluates it on a checklist of 320 determination statements, each one binary.
"Mostly compliant" is not an assessment outcome. It is a description of a Not Met finding that was close. The gap between Met and Not Met is the gap between a certification and a POA&M — and in some cases, the gap between a conditional status and a full reassessment from scratch. Prepare for every objective. Not for most of them.