Adequate vs Sufficient Evidence
How CMMC Assessments Really Fail — and How to Prevent It
Assessors make findings at the assessment objective level: a single Not Met objective fails the entire control practice. They use judgment to decide when adequate and sufficient evidence has been presented to support a finding. Having a policy is not enough. Here is what that judgment actually looks for.
CMMC assessments are not graded on a holistic impression of your security posture. They are scored against discrete assessment objectives — granular sub-requirements labeled [a], [b], [c], [d] within each control practice — using a binary determination for each one: Met or Not Met. A control practice with four objectives where three are Met and one is Not Met is scored as Not Met for the entire practice. There is no partial credit.
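The scoring rule above is mechanical enough to express in a few lines. A minimal sketch; the objective labels and results are illustrative, not real assessment data:

```python
# Hypothetical sketch of the CMMC scoring rule described above: a practice
# is Met only when every one of its assessment objectives is individually Met.

def score_practice(objective_results: dict[str, bool]) -> str:
    """Return 'Met' only if every assessment objective is Met; no partial credit."""
    return "Met" if all(objective_results.values()) else "Not Met"

# A practice with objectives [a]-[d]: one Not Met objective fails the whole practice.
results = {"a": True, "b": True, "c": True, "d": False}
print(score_practice(results))  # Not Met
```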
The governing standard for how those determinations are made comes from the Cyber AB CMMC Assessment Process: assessors must verify the adequacy and sufficiency of the evidence presented. Both criteria must be satisfied. And when either is missing, assessors are directed to increase their sampling — which directly extends assessment duration and billable hours.
What "Adequate" and "Sufficient" Actually Mean in Practice
The two terms address different dimensions of evidence quality. Assessors evaluate both independently, and an evidence submission can fail either test — even when it appears superficially complete.
Right Type of Evidence
Adequacy is about relevance. Does the evidence submitted actually address the specific assessment objective being evaluated? Assessors are not permitted to interpret, infer, or translate — the evidence must speak directly to the objective's language.
A network diagram is adequate evidence for a network boundary objective. It is not adequate evidence for a user access management objective — even if both are relevant to your overall security posture.
Enough Evidence
Sufficiency is about completeness and consistency. Even if the evidence type is correct, a single example, an undated record, or a two-page document covering 110 controls tells the assessor that the control is not systematically implemented — it was performed once, for the audit.
A signed log review record from one week is adequate. Two years of consistently dated records are sufficient. The gap between them is the gap between "this happened" and "this practice is maintained."
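For a recurring activity, the sufficiency test largely reduces to continuity of dated records. A rough sketch of that check, using hypothetical weekly log-review dates (the seven-day threshold is an assumption for illustration):

```python
from datetime import date, timedelta

def gaps_over(record_dates: list[date], max_gap_days: int = 7) -> list[int]:
    """Return the day-gaps between consecutive records that exceed max_gap_days."""
    ordered = sorted(record_dates)
    gaps = [(later - earlier).days for earlier, later in zip(ordered, ordered[1:])]
    return [g for g in gaps if g > max_gap_days]

# Eight consecutive weekly records: no gaps, consistent with a maintained practice.
records = [date(2024, 1, 1) + timedelta(weeks=w) for w in range(8)]
print(gaps_over(records))  # []

# A missing month in the middle shows up as a single oversized gap.
spotty = [date(2024, 1, 1), date(2024, 1, 8), date(2024, 2, 12)]
print(gaps_over(spotty))  # [35]
```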
Why CMMC Is Scored at the Assessment Objective Level
Each NIST SP 800-171 control practice is broken into multiple assessment objectives in NIST SP 800-171A — the individual sub-requirements that together define what it means to fully implement the control. Assessors evaluate each objective independently. A practice is only Met when every one of its objectives is individually Met.
This is the structural reason why evidence mapping matters so much. A broad policy that addresses a control in general terms may satisfy some objectives while leaving others unaddressed — and the ones left unaddressed are findings.
Assessment Objects: What Actually Counts as Evidence
Not all evidence is the same type. NIST SP 800-171A organizes evidence into four assessment object categories — and each assessment method (Examine, Interview, Test) targets specific object types. Understanding which object type an objective requires tells you exactly what to prepare.
Specifications
Documented statements of policy, procedure, standard, or requirement — the "what shall be done" layer of your security program.
Mechanisms
Hardware, software, and firmware that implement or enforce a security requirement — the technical controls operating in the live environment.
Activities
Operational processes and behaviors — what people actually do to implement and maintain a control, evidenced through records of execution.
Individuals
The specific people responsible for implementing and maintaining each control — the control practice owners identified in your organizational chart.
The practical implication: for every assessment objective, you need evidence from the correct object type. An objective that targets an activity — "audit logs are reviewed" — requires a record of that activity happening, not a policy stating it should happen. Policies are specifications. Records are activities. An assessor cannot substitute one for the other.
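The substitution rule can be stated as a lookup. A minimal sketch; the method-to-object mapping restates NIST SP 800-171A, while the function and variable names are our own:

```python
# Evidence counts toward an objective only when its category matches the
# category the objective targets; a policy (specification) cannot substitute
# for a record (activity).

OBJECT_TYPES = {"specification", "mechanism", "activity", "individual"}

# Which object types each assessment method reaches, per NIST SP 800-171A.
METHOD_TARGETS = {
    "examine": {"specification", "mechanism", "activity"},
    "interview": {"individual"},
    "test": {"mechanism", "activity"},
}

def evidence_matches(objective_type: str, evidence_type: str) -> bool:
    """True only when the evidence is the object type the objective targets."""
    assert {objective_type, evidence_type} <= OBJECT_TYPES
    return objective_type == evidence_type

# "Audit logs are reviewed" targets an activity; a policy alone does not match.
print(evidence_matches("activity", "specification"))  # False
print(evidence_matches("activity", "activity"))       # True
```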
Evidence Mapping: Pointing Assessors to the Exact Sentence
The most direct way to accelerate an assessment and reduce billable hours is to make it trivially easy for the assessor to find the evidence for each objective. The tool that does this is an Evidence Mapping File — a spreadsheet that connects every assessment objective to its supporting evidence at the exact document, page, and paragraph level.
The mental model is arithmetic: the assessor's job is to confirm that one piece of evidence plus one assessment objective equals Met. Your mapping file sets up the equation. An assessor following a map does not search — and searching is what extends engagements.
| Practice | Obj. | Pointer |
|---|---|---|
| AC.L2-3.1.1 | [a] | Access Policy §3.2 ¶1 |
| AC.L2-3.1.1 | [d] | AD screenshot → Group Policy export |
| AU.L2-3.3.1 | [e] | SIEM config + log review record 2024-01 |
| IA.L2-3.5.3 | [a] | MFA Config doc p. 7 + live demo |
| SC.L2-3.13.11 | [a] | FIPS CMVP Cert #4127 |
| CA.L2-3.12.4 | [a] | SSP §2.1 + Network Diagram Rev B |
| CM.L2-3.4.1 | [b] | Baseline Config doc + firewall export |
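A mapping file in this shape is also easy to validate before the assessor arrives. A minimal sketch, assuming the table is exported as CSV; the column names and objective list are hypothetical:

```python
import csv
import io

# Illustrative Evidence Mapping File export, mirroring the table above.
MAPPING_CSV = """practice,objective,pointer
AC.L2-3.1.1,a,Access Policy §3.2 ¶1
AC.L2-3.1.1,d,AD screenshot + Group Policy export
AU.L2-3.3.1,e,SIEM config + log review record 2024-01
"""

def unmapped(required: set[tuple[str, str]], csv_text: str) -> set[tuple[str, str]]:
    """Return (practice, objective) pairs with no evidence pointer on file."""
    mapped = {
        (row["practice"], row["objective"])
        for row in csv.DictReader(io.StringIO(csv_text))
        if row["pointer"].strip()
    }
    return required - mapped

# Objective [b] of AC.L2-3.1.1 has no pointer: a gap to close before assessment day.
required = {("AC.L2-3.1.1", "a"), ("AC.L2-3.1.1", "b"), ("AC.L2-3.1.1", "d")}
print(unmapped(required, MAPPING_CSV))  # {('AC.L2-3.1.1', 'b')}
```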
Interview Readiness: How Control Owners Create or Destroy Sufficiency
The Interview method is not a general security awareness check. Assessors target the specific control practice owner identified in your organizational chart — the person whose name appears against each control in your SSP. That person is responsible for explaining how the control is implemented, not just that it exists.
A well-written policy owned by someone who cannot explain it creates an immediate sufficiency gap. The documentation says the control is implemented; the interview says it is not understood. Assessors treat that discrepancy as a signal to test more deeply.
Answer the question and nothing but the question. An employee who mentions they are mid-deployment on a new SIEM will immediately prompt the assessor to ask for change management logs, configuration documentation, and implementation status for that project. Every unrequested detail is a potential new line of scrutiny. Brief every control owner on this rule individually — the ones who understand it protect your assessment; the ones who do not can extend it by weeks.
Common Evidence Failures — and What Assessors Actually Do When They Find Them
The single most common documentation failure: evidence that addresses the right topic but uses different vocabulary than the assessment objective. Assessors are not permitted to translate, interpret, or infer equivalence. If the assessment objective states "authorized users are identified" and your policy states "access rosters are maintained," those phrases do not match — and the assessor cannot mark the objective Met on the basis of inferred equivalence.
Assessment objective: "Authorized users of the system are identified."

Evidence that fails adequacy: "Access rosters are maintained. Recovery Time Objectives are defined per system tier. Periodic access reviews are scheduled quarterly."

Evidence that passes: "Authorized users are identified via Active Directory security groups documented in Asset Inventory Appendix A. Account provisioning requires ISSO approval per Access Control Procedure §3.1."
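The vocabulary-matching rule above can be roughed out as a self-check before submission. A crude sketch, assuming a naive word-overlap heuristic; the tokenizer and stopword list are simplifications for illustration, not an assessor's actual method:

```python
import re

# Small stopword list so only the objective's substantive terms are compared.
STOPWORDS = {"the", "of", "are", "is", "a", "an", "and", "or", "to", "via", "in"}

def missing_terms(objective: str, policy: str) -> set[str]:
    """Objective terms that never appear in the policy text: a vocabulary gap."""
    def tokens(text: str) -> set[str]:
        return set(re.findall(r"[a-z]+", text.lower())) - STOPWORDS
    return tokens(objective) - tokens(policy)

objective = "Authorized users of the system are identified."

# Right topic, wrong vocabulary: every key objective term is missing.
print(sorted(missing_terms(objective, "Access rosters are maintained.")))

# Policy that echoes the objective's own language: no gap to infer across.
print(missing_terms(objective, "Authorized users of each system are identified in AD."))
```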
Organizations frequently attempt to mark controls as Not Applicable — remote access controls for a company that forbids remote access, for example. Assessors and the DoD very rarely accept N/A designations without extensive supporting evidence. A control marked N/A still requires a policy explicitly prohibiting the activity, a documented procedure for how the prohibition is enforced on new systems, and a control owner who can explain the enforcement in an interview.
N/A is not a bypass — it is a claim that requires as much documentation as Met, and carries more scrutiny because assessors are trained to verify that the claimed prohibition actually holds.
A policy document that is otherwise complete — correct vocabulary, adequate depth, correct ownership assignment — but lacks an authorizing signature from a senior official is an immediate limited practice deficiency. The signature is evidence that the policy has been formally reviewed, approved, and is currently in effect. Without it, the assessor cannot confirm the policy is operative — it may be a draft, a legacy document, or an aspirational statement with no organizational backing.
A policy stating "logs are reviewed weekly" satisfies the specification object type. It does not satisfy the activity object type — which requires records demonstrating that logs were actually reviewed weekly. Assessors distinguish between documentation of intent and documentation of execution. A control that appears in the SSP but has no corresponding activity records is a policy that exists on paper and nowhere else.
The Bottom Line
CMMC assessments fail at the level of individual assessment objectives, not control domains or program-level impressions. One inadequate or insufficient evidence submission for one objective can fail one control practice — and a failed practice is a scored deficiency against your SPRS total and potentially a POA&M obligation with a 180-day closeout deadline.
The entire evidence preparation process is oriented toward a single outcome: making it as easy as possible for an assessor to say "Met" for each objective. That means the right type of evidence (adequate), enough of it to prove consistent implementation (sufficient), organized so the assessor can find it without searching (mapped), and supported by control owners who can explain it without creating new threads of inquiry (prepared).
The assessor's job is to verify — not to assist, interpret, or infer. Every hour they spend searching for evidence you should have mapped is a billable hour you paid for. Build the mapping file. Brief the control owners. Review the signatures. Then give the assessor the clearest possible path to Met.