4.7 How to Understand evidence_level
In Treeify, some generated results include a field called evidence_level.
This field tells you how directly the current content is supported by your input materials.
It helps you quickly distinguish between:
- information that is clearly stated in the source document
- information that is implied by the document
- information inferred from requirement type or common testing practice
Understanding evidence_level helps you review AI-generated content more efficiently and decide which parts can be trusted directly, which parts should be checked, and which parts may require additional confirmation.
Why evidence_level matters
When AI participates in test design, not every generated item comes from the same level of source certainty.
Some content is directly written in the requirement document.
Some content is not explicitly written, but can be reasonably derived from context.
Some content reflects common testing logic for a certain requirement type, but is not actually stated in the source materials.
Treeify uses evidence_level to make this difference visible.
This helps you:
- review generated results more quickly
- identify which content is directly grounded in the source
- spot content that may require confirmation or supplementation
- reduce the risk of treating inferred content as confirmed requirement facts
Available values of evidence_level
Treeify currently uses the following values:
- explicit
- implied
- inferred_type
- domain_common
Each value represents a different level of source support.
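Conceptually, the four values form an ordered scale, from strongest to weakest source grounding. The sketch below models that ordering as a Python enumeration; the class name and structure are illustrative assumptions, not Treeify's actual API:

```python
from enum import Enum

class EvidenceLevel(Enum):
    """Hypothetical model of the four evidence_level values,
    listed from strongest to weakest source grounding."""
    EXPLICIT = "explicit"            # directly stated in the input materials
    IMPLIED = "implied"              # reasonably derived from source context
    INFERRED_TYPE = "inferred_type"  # inferred from the requirement type
    DOMAIN_COMMON = "domain_common"  # based on common domain/testing knowledge

# Parsing the string value carried by a generated item:
level = EvidenceLevel("implied")
```

Keeping the values in one ordered definition makes it easy to sort or filter generated items by grounding strength later.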
1. explicit
explicit means the content is directly stated in the input materials.
This is the strongest level of evidence.
If a requirement, rule, field, role, state, condition, or constraint is clearly written in the source document, the generated result should usually be marked as explicit.
Example
Source document says:
After a leave request is submitted, the status changes to "Pending Approval".
Only managers can approve or reject the request.
Generated result:
- Leave request status becomes Pending Approval after submission → explicit
- Only managers can perform approval actions → explicit
How to use it
When reviewing content marked as explicit, you should still check whether the extraction is accurate, but in general this type of content is directly traceable back to the input.
2. implied
implied means the content is not directly written as a standalone statement, but can be reasonably derived from the source context.
This usually happens when the document describes behavior indirectly, through flow, UI text, structure, examples, or related rules.
Example
Source document says:
Employees can withdraw a leave request before approval is completed.
The document may not explicitly say:
Withdraw is only available in Pending Approval status.
But based on the described business flow, Treeify may generate:
- Withdrawal is only valid before approval is completed → implied
How to use it
When reviewing content marked as implied, check whether the conclusion is truly supported by the source context.
If the reasoning is reasonable and consistent with the document, it can usually be kept.
If the implication feels uncertain, you should revise it or supplement the missing requirement information.
3. inferred_type
inferred_type means the content is not directly stated in the source, but is generated based on the nature of the requirement type.
This usually appears when Treeify recognizes a common structure such as:
- state transitions
- approval flows
- role-based operations
- form input validation
- CRUD operations
- upload/download behavior
- API request/response handling
In these cases, Treeify may infer likely testing directions that are typically relevant for this kind of requirement.
Example
Source document says:
Users can submit a reimbursement form.
The document may not explicitly mention validation rules, but Treeify may generate:
- Check required field validation before submission → inferred_type
- Check invalid amount format handling → inferred_type
These are not directly written in the requirement, but they are commonly relevant for form-based functionality.
How to use it
Content marked as inferred_type should be reviewed more carefully.
It may be useful and relevant for testing, but it should not be treated as a confirmed requirement fact unless the source document or business owner confirms it.
This type of content is often valuable for expanding coverage, but it may also need refinement.
4. domain_common
domain_common means the content is based on common domain knowledge or common testing practice, rather than direct evidence from the current source document.
This is the weakest level of source grounding among the four values.
It is useful for suggesting possible coverage directions, but it should be reviewed with the highest caution.
Example
For a payment-related requirement, Treeify may generate:
- Check duplicate submission handling in payment scenarios → domain_common
- Check timeout and retry behavior for transaction processing → domain_common
These may be important testing considerations in many payment systems, but if they are not described or implied in the actual requirement materials, they are not confirmed project facts.
How to use it
When reviewing content marked as domain_common, do not assume it is part of the confirmed requirement scope.
Treat it as a possible testing suggestion that may or may not apply to the current project.
If it is relevant, keep it and refine it.
If it is outside the current project scope, remove it or mark it for later clarification.
How to review different evidence_level values
A practical review order is:
1. Review explicit content first. Make sure directly stated requirements are extracted correctly.
2. Review implied content next. Confirm that the conclusion is truly supported by the document context.
3. Review inferred_type content after that. Decide whether the inferred testing direction fits the actual requirement.
4. Review domain_common content last. Treat it as optional, or confirm it before using it as formal requirement-based output.
This approach helps you separate document-grounded facts from AI-extended testing suggestions.
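The review order above can be sketched as a simple sort over generated items. The item structure below is an assumption; the section only specifies that each item carries an evidence_level field:

```python
# Hypothetical generated items; only the evidence_level key comes from the docs.
items = [
    {"text": "Check concurrent status update conflicts", "evidence_level": "domain_common"},
    {"text": "Request enters Pending Approval after submission", "evidence_level": "explicit"},
    {"text": "Withdrawal is only allowed before approval completion", "evidence_level": "implied"},
    {"text": "Check duplicate approval action handling", "evidence_level": "inferred_type"},
]

# Review priority: strongest source grounding first.
REVIEW_ORDER = ["explicit", "implied", "inferred_type", "domain_common"]

items_in_review_order = sorted(
    items, key=lambda item: REVIEW_ORDER.index(item["evidence_level"])
)

for item in items_in_review_order:
    print(f'{item["evidence_level"]:>13}  {item["text"]}')
```

Sorting by grounding strength lets a reviewer clear the well-supported items quickly and spend the remaining attention on the weaker ones.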
How evidence_level helps during editing
When you see generated content that feels questionable, evidence_level can help you decide what to do next.
If the content is explicit
Go back to the source document and check whether the extraction is accurate.
If the content is implied
Check whether the context truly supports the conclusion.
If not, revise or delete it.
If the content is inferred_type
Decide whether this testing direction should be kept as a useful extension, or removed because it is not relevant to the actual requirement.
If the content is domain_common
Treat it as optional guidance.
Keep it only if it fits your project, domain, and current test scope.
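The per-level editing decisions above can be summarized as a lookup table. The mapping and helper function below are an illustrative sketch, not part of Treeify itself:

```python
# Hypothetical mapping from evidence_level to the review action described above.
REVIEW_ACTION = {
    "explicit": "verify the extraction against the source document",
    "implied": "check that the context truly supports the conclusion; revise or delete if not",
    "inferred_type": "keep as a useful extension, or remove if not relevant",
    "domain_common": "optional guidance; keep only if it fits project, domain, and scope",
}

def suggested_action(item: dict) -> str:
    # Fall back to manual review for unknown or missing values.
    return REVIEW_ACTION.get(item.get("evidence_level"), "review manually")
```

A table like this can also serve as a shared checklist so that different reviewers handle each evidence level consistently.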
Example: reviewing one requirement with different evidence_level values
Source requirement
After a leave request is submitted, the request enters Pending Approval status.
Managers can approve or reject the request.
Employees may withdraw the request before approval is completed.
Possible generated results
- Request enters Pending Approval after submission → explicit
- Only managers can approve or reject requests → explicit
- Withdrawal is only allowed before approval completion → implied
- Check duplicate approval action handling → inferred_type
- Check concurrent status update conflicts → domain_common
How to interpret this
The first two items are directly supported by the requirement text.
The third item is not written in exactly that wording, but it is reasonably derived from the business rule.
The fourth item reflects a common testing direction for approval workflows.
The fifth item reflects a broader testing concern that may be relevant, but is not clearly grounded in the source.
This distinction helps you review the output with the right expectations.
Important reminder
evidence_level does not mean:
- whether the generated content is correct or incorrect
- whether the generated content is useful or not useful
- whether the generated content should always be kept or deleted
It only indicates how strongly the content is supported by the input materials.
A lower evidence level does not automatically mean the content is wrong.
A higher evidence level does not automatically mean the content needs no review.
The purpose of evidence_level is to make the source grounding more transparent, so you can review the result more efficiently and more safely.
Recommended review strategy
When using Treeify, you can apply the following strategy:
- Treat explicit as requirement-grounded content
- Treat implied as context-grounded content
- Treat inferred_type as a likely useful testing extension
- Treat domain_common as an optional domain/testing suggestion
If your project requires strict traceability, you may want to keep a stronger focus on explicit and implied content.
If your goal is broader test coverage exploration, inferred_type and domain_common may also provide useful input.
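These two modes can be expressed as a simple filter over generated items. As in the earlier sketch, the item structure is an assumption; only the evidence_level field and its four values come from this section:

```python
# Hypothetical generated items carrying the documented evidence_level field.
generated = [
    {"text": "Only managers can approve or reject requests", "evidence_level": "explicit"},
    {"text": "Withdrawal is only allowed before approval completion", "evidence_level": "implied"},
    {"text": "Check timeout and retry behavior", "evidence_level": "domain_common"},
]

# Strict-traceability mode: keep only content grounded in the document itself.
TRACEABLE = {"explicit", "implied"}
traceable_items = [g for g in generated if g["evidence_level"] in TRACEABLE]

# Coverage-exploration mode: route weaker-grounded items to a separate
# backlog for confirmation instead of discarding them outright.
to_confirm = [g for g in generated if g["evidence_level"] not in TRACEABLE]
```

Routing weaker-grounded items to a confirmation backlog preserves their coverage value without letting them masquerade as confirmed requirements.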
Summary
evidence_level helps you understand how each generated item relates to your source materials.
- explicit = directly stated in the source
- implied = reasonably derived from source context
- inferred_type = inferred from requirement type
- domain_common = based on common domain or testing knowledge
By reading evidence_level correctly, you can review Treeify results with clearer expectations, stronger traceability, and better control over what should be kept, confirmed, revised, or removed.