PRD-STD-005: Documentation Requirements
Standard ID: PRD-STD-005 Version: 1.1.0 Status: Active Compliance Level: Level 3 (Optimized) Effective Date: 2025-01-15 Last Reviewed: 2026-02-18
1. Purpose
This standard defines documentation requirements for AI-assisted development. As AI-generated code proliferates across codebases, organizations face a growing risk of knowledge erosion: the gradual loss of understanding of why code exists, how it works, and what constraints shaped its design. When code is generated by AI, the reasoning behind implementation choices often lives only in the prompt session and is lost when that session ends.
This standard ensures that AI-assisted development produces adequate documentation to preserve institutional knowledge, enable effective maintenance, and support future development by engineers who were not involved in the original generation.
2. Scope
This standard applies to:
- All production code generated, modified, or substantially influenced by AI coding assistants
- Architecture and design decisions informed by AI-generated proposals
- Prompt engineering artifacts used to generate production code
- Knowledge artifacts created to support long-term maintenance of AI-generated codebases
3. Definitions
| Term | Definition |
|---|---|
| AI-Generated Section | A contiguous block of code that was substantially generated by an AI coding assistant |
| Architecture Decision Record (ADR) | A document that captures an important architectural decision along with its context and consequences |
| Prompt Documentation | Records of the prompts, constraints, and context used to generate production code |
| Knowledge Artifact | Any document, comment, diagram, or record that preserves understanding of code purpose and behavior |
| Inline Documentation | Comments and documentation strings embedded directly in source code |
| AI Interaction History | A concise record of prompt iterations, rejected alternatives, and rationale for selected AI-assisted implementation choices |
| Documentation Drift | Any mismatch between current system behavior and the documentation that describes it |
| D.O.C.S Coverage | Documentation coverage across Domain context, Operations context, Change context, and Support context |
4. Requirements
4.1 Code Comments for AI-Generated Sections
REQ-005-01: AI-generated code sections MUST include an annotation indicating AI involvement. The annotation MUST include: (a) the AI tool used, (b) the date of generation, and (c) the name of the engineer who reviewed and approved the code.
Example:
// AI-Generated: GitHub Copilot | 2025-06-15 | Reviewed by: J. Smith
// Purpose: Rate limiter middleware with sliding window algorithm
REQ-005-02: AI-generated functions and classes MUST include documentation comments (docstrings, JSDoc, Javadoc, or language-equivalent) that describe:
- The purpose and behavior of the function/class
- Parameter descriptions and types
- Return value description
- Exceptions or errors that may be thrown
- Any non-obvious constraints or assumptions
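For example, a Python function satisfying REQ-005-01 and REQ-005-02 might be documented as follows; the function name, exception name, and ADR reference are illustrative, not prescribed:

```python
# AI-Generated: GitHub Copilot | 2025-06-15 | Reviewed by: J. Smith
def check_rate_limit(client_id: str, window_seconds: int = 60) -> bool:
    """Return True if the client is within its sliding-window request quota.

    Args:
        client_id: Unique identifier of the authenticated calling client.
        window_seconds: Length of the sliding window, in seconds.

    Returns:
        True if the request is allowed, False if the quota is exhausted.

    Raises:
        QuotaStoreError: If the backing quota store is unreachable.

    Note:
        Quota counts are approximate under concurrent access; see ADR-0002
        for the rationale behind the sliding-window algorithm.
    """
    ...
```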
REQ-005-03: AI-generated code that implements business logic MUST include inline comments explaining the business rules, not just the implementation mechanics. Comments MUST answer "why" rather than "what."
REQ-005-04: Organizations SHOULD establish a standard annotation format for AI-generated code and enforce it through linting rules.
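Enforcement can be as simple as a script run in CI or as a pre-commit hook. The following is a minimal sketch, assuming the annotation format from REQ-005-01; it is illustrative rather than a prescribed tool:

```python
import re
import sys
from pathlib import Path

# Matches: "AI-Generated: <tool> | YYYY-MM-DD | Reviewed by: <name>"
ANNOTATION = re.compile(
    r"AI-Generated:[^|\n]+\|\s*\d{4}-\d{2}-\d{2}\s*\|\s*Reviewed by:\s*\S+"
)

def has_annotation(path: Path) -> bool:
    """Return True if the file contains a well-formed AI annotation."""
    return bool(ANNOTATION.search(path.read_text(encoding="utf-8")))

if __name__ == "__main__":
    # The CI pipeline passes the paths of files flagged as AI-assisted.
    missing = [p for p in map(Path, sys.argv[1:]) if not has_annotation(p)]
    for path in missing:
        print(f"{path}: missing or malformed AI-Generated annotation")
    sys.exit(1 if missing else 0)
```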
REQ-005-05: Comments SHOULD reference the original requirements, user stories, or specifications that prompted the code generation.
REQ-005-06: Complex AI-generated algorithms SHOULD include a brief explanation of the algorithmic approach and its time/space complexity.
4.2 Architecture Decision Records
REQ-005-07: When AI-generated proposals influence architecture decisions (e.g., technology selection, design patterns, system decomposition), an Architecture Decision Record (ADR) MUST be created that documents:
- The decision context and problem statement
- The options considered (including AI-proposed options)
- The selected option and rationale
- Consequences and trade-offs
- Which aspects were AI-generated versus human-determined
REQ-005-08: ADRs MUST be stored in the project repository alongside the code they describe (e.g., docs/adr/ directory) and version-controlled.
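A typical repository layout (file names illustrative):

docs/adr/
  ADR-0001-record-architecture-decisions.md
  ADR-0002-adopt-sliding-window-rate-limiting.md
  ADR-0003-select-message-broker.md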
REQ-005-09: ADRs SHOULD follow a standardized template such as the Michael Nygard format or MADR (Markdown Any Decision Records).
REQ-005-10: ADRs SHOULD be reviewed and approved by at least one architect or senior engineer.
4.3 Prompt Documentation
REQ-005-11: For production-critical code generation tasks (as defined by the organization's risk classification), the prompts used MUST be preserved as project artifacts. This includes: the full prompt text, relevant context provided, constraints specified, and the AI tool version used.
REQ-005-12: Prompt documentation MUST NOT include any sensitive data (credentials, PII, proprietary business data) even if the original prompt contained such data. Sensitive data MUST be redacted before documentation.
REQ-005-13: Organizations SHOULD maintain a prompt log that tracks which prompts were used for which code sections, enabling traceability from requirements to prompts to generated code.
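An illustrative prompt log entry (field names are not prescribed by this standard):

# Prompt Log Entry: PL-[NUMBER]
- Requirement/ticket: [ID]
- Code section: [file or module + commit SHA]
- Prompt artifact: [path, or redacted reference per REQ-005-12]
- Tool and version: [tool + version]
- Generated: [date]
- Reviewer: [name]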
REQ-005-14: Prompts that produce high-quality outputs SHOULD be contributed to the organization's prompt library per PRD-STD-001.
REQ-005-15: Teams SHOULD document failed prompt approaches alongside successful ones to build institutional knowledge about effective prompting strategies.
4.4 Knowledge Preservation
REQ-005-16: Projects with more than 30% AI-generated code (by volume) MUST maintain a knowledge preservation document that describes:
- Which components are AI-generated
- Key design decisions and their rationale
- Known limitations or technical debt in AI-generated sections
- Maintenance guidance specific to the AI-generated components
REQ-005-17: Knowledge preservation documents MUST be updated when significant changes are made to AI-generated components.
REQ-005-18: Teams SHOULD conduct periodic "knowledge audits" (at least semi-annually) to verify that documentation remains accurate and sufficient for a new team member to understand the AI-generated sections.
REQ-005-19: Teams SHOULD maintain architecture diagrams (e.g., C4 model) for systems with significant AI-generated components, showing the boundaries between AI-generated and human-written code.
REQ-005-20: Code walkthroughs or recorded explanations SHOULD be created for complex AI-generated subsystems to supplement written documentation.
4.5 Lifecycle Documentation Coverage (D.O.C.S)
REQ-005-21: Projects with AI-assisted production components MUST maintain D.O.C.S coverage for material components:
- Domain context: purpose, business constraints, assumptions, and non-goals
- Operations context: dependencies, integration points, runtime requirements, monitoring/alert signals, and failure modes
- Change context: architecture decision references, change log, and links to relevant AI interaction records
- Support context: troubleshooting steps, known issues, escalation path, and owner/contact information
REQ-005-22: API, service, and integration documentation for AI-assisted components MUST include:
- Input/output contracts and versioning expectations
- Error semantics and retry/idempotency behavior where applicable
- At least one valid request/response example for externally consumed interfaces
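For instance, an externally consumed HTTP endpoint might document its contract as follows; the endpoint, fields, and status codes are illustrative:

POST /v1/rate-limits/check (contract version: v1; backward-compatible within the major version)
Request:  {"client_id": "abc-123", "window_seconds": 60}
Response 200: {"allowed": true, "remaining": 42}
Response 429: {"allowed": false, "retry_after_seconds": 12}
Error semantics: a 429 response indicates quota exhaustion and is safe to retry after retry_after_seconds.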
REQ-005-23: Organizations SHOULD standardize D.O.C.S documentation templates so teams can produce consistent lifecycle documentation with low overhead.
REQ-005-24: Lifecycle documentation SHOULD be co-located with source code and linked from the owning component's README or service catalog entry.
4.6 AI Interaction History
REQ-005-25: For high-risk or production-critical AI-assisted changes, teams MUST preserve an AI Interaction History record that captures:
- Major prompt/refinement iterations (not every autocomplete event)
- Rejected alternatives and rejection rationale
- Final accepted approach and why it was chosen
REQ-005-26: AI Interaction History records MUST link to applicable provenance metadata and ADRs. Where prompt content cannot be stored, records MUST include a redaction note and a stable reference (hash/ID/ticket) for auditability.
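Where prompt text must be withheld, a stable reference can be derived by hashing the original content before redaction, so identical prompts always map to the same identifier without the text being stored. A minimal sketch (the function name and ID prefix are illustrative):

```python
import hashlib

def prompt_reference(prompt_text: str) -> str:
    """Return a stable, non-reversible reference ID for a prompt.

    The hash is computed over the original (unredacted) prompt, so the
    same prompt always yields the same ID, but the text itself is never
    stored alongside the interaction record.
    """
    digest = hashlib.sha256(prompt_text.encode("utf-8")).hexdigest()
    return f"prompt-sha256:{digest[:16]}"
```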
REQ-005-27: Teams SHOULD capture common failure patterns from interaction history (for example, recurring hallucination patterns or unsafe defaults) and feed them into prompt templates and reviewer checklists.
4.7 Documentation Review Workflow
REQ-005-28: Documentation updates for AI-assisted changes MUST pass a documented review workflow:
- Stage 1: Author self-review for completeness, correctness, and redaction compliance
- Stage 2: Technical peer review for accuracy versus implementation
- Stage 3: Consumer-perspective validation for medium/high-risk changes (review by a likely consumer or an engineer not involved in the implementation)
REQ-005-29: Pull requests with AI-assisted changes MUST include explicit documentation review status in the PR template/checklist.
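An illustrative PR template fragment (checklist wording is not prescribed):

- [ ] AI-assisted changes are annotated per REQ-005-01
- [ ] Affected documentation is updated, or a linked follow-up exists (REQ-005-32)
- [ ] Documentation review Stages 1-3 are completed and recorded (REQ-005-28)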
REQ-005-30: Material documentation defects identified in review (missing operational runbook data, incorrect interface contracts, unsafe guidance, or stale compliance statements) MUST block merge until resolved.
REQ-005-31: Teams SHOULD maintain a rotating documentation reviewer roster for critical domains to avoid documentation blind spots.
4.8 Currency and Drift Controls
REQ-005-32: Documentation impacted by behavior, interface, deployment, or operational changes MUST be updated in the same pull request or in a linked, pre-release follow-up with clear ownership and due date.
REQ-005-33: Teams MUST define freshness SLAs for critical documentation classes (for example, runbooks, integration contracts, and incident response guides) and track compliance with those SLAs.
REQ-005-34: Teams SHOULD implement automated drift checks where practical (examples: contract test references, broken-link checks, schema compatibility checks, and runbook validation drills).
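As one example of an automated drift check, the sketch below flags relative links in markdown documentation that no longer resolve; the docs/ path and file pattern are assumptions, not requirements:

```python
import re
import sys
from pathlib import Path

LINK = re.compile(r"\[[^\]]*\]\(([^)]+)\)")  # markdown link targets

def broken_relative_links(doc: Path) -> list[str]:
    """Return relative link targets in a markdown file that do not resolve."""
    broken = []
    for target in LINK.findall(doc.read_text(encoding="utf-8")):
        if target.startswith(("http://", "https://", "#", "mailto:")):
            continue  # external URLs and in-page anchors are out of scope here
        if not (doc.parent / target.split("#")[0]).exists():
            broken.append(target)
    return broken

if __name__ == "__main__":
    failed = False
    for doc in Path("docs").rglob("*.md"):
        for target in broken_relative_links(doc):
            print(f"{doc}: broken link {target}")
            failed = True
    sys.exit(1 if failed else 0)
```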
REQ-005-35: Teams SHOULD run monthly stale-documentation reviews and create remediation items for any artifact that violates freshness SLAs.
4.9 Documentation Effectiveness Metrics
REQ-005-36: Organizations at Maturity Level 3 or higher MUST measure documentation effectiveness at least quarterly using, at minimum:
- Documentation Currency Rate
- Documentation Comprehension Validation Rate
- Documentation Reference Frequency
- Documentation Maintenance Efficiency
- Support Ticket Deflection attributable to documentation
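As a worked example of one common formulation (this standard does not mandate a specific formula): Documentation Currency Rate = (critical documents within their freshness SLA ÷ total critical documents) × 100, so a team with 18 of 20 critical documents within SLA reports a currency rate of 90%.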
REQ-005-37: Documentation effectiveness metrics MUST be reviewed in team retrospectives or operational reviews and linked to corrective actions when trends degrade.
REQ-005-38: Organizations SHOULD include documentation effectiveness targets in engineering KPI dashboards and calibrate targets by maturity level.
5. Implementation Guidance
AI Annotation Standards by Language
| Language | Annotation Format | Example |
|---|---|---|
| Python | Module/function docstring + comment | `# AI-Generated: Claude Code \| 2025-06-15 \| Reviewed: J. Smith` |
| JavaScript/TypeScript | JSDoc + comment | `/** @ai-generated Claude Code 2025-06-15 @reviewer J. Smith */` |
| Java/Kotlin | Javadoc + annotation | `@AIGenerated(tool="Copilot", date="2025-06-15", reviewer="J. Smith")` |
| Go | Comment block | `// ai:generated copilot 2025-06-15 reviewed:jsmith` |
| C#/.NET | XML doc + attribute | `[AIGenerated("Copilot", "2025-06-15", Reviewer = "J. Smith")]` |
ADR Template
# ADR-[NUMBER]: [TITLE]
## Status
[Proposed | Accepted | Deprecated | Superseded]
## Context
[Describe the problem or decision to be made]
## AI Involvement
[Describe how AI was used: generated proposals, evaluated options, etc.]
## Options Considered
1. [Option 1] - [Brief description]
2. [Option 2] - [Brief description]
3. [Option 3] - [Brief description]
## Decision
[Which option was selected and why]
## Consequences
- [Positive consequence 1]
- [Negative consequence / trade-off 1]
- [Risk or technical debt introduced]
## Review
- Approved by: [Name, Role]
- Date: [Date]
Documentation Quality Checklist
For each AI-generated component, verify:
- AI involvement is annotated in the code
- All public functions/classes have documentation comments
- Business logic has "why" comments, not just "what" comments
- Architecture decisions are captured in ADRs
- Critical prompts are preserved as project artifacts
- D.O.C.S lifecycle coverage exists for affected components
- AI interaction history exists for high-risk/production-critical changes
- Documentation review workflow stages are completed and recorded
- Critical docs meet freshness SLA and no known drift is unresolved
- Knowledge preservation document is up to date (if applicable)
- Documentation is reviewed as part of the code review process
D.O.C.S Lifecycle Template
# [Component/System] Documentation
## D: Domain Context
- Purpose and business objective
- Key constraints, assumptions, and non-goals
- Data classification and compliance notes
## O: Operations Context
- Runtime dependencies and integration points
- Required resources (CPU/memory/storage/queue limits)
- Monitoring signals, alerts, and SLO/SLA references
- Failure modes and fallback behavior
## C: Change Context
- Relevant ADR links
- AI interaction history references
- Recent change log and pending deprecations
## S: Support Context
- Troubleshooting playbook
- Known issues and workarounds
- Escalation path, owner, and on-call handoff references
AI Interaction History Template
# AI Interaction History: [Change/Feature]
- Ticket/PR: [ID]
- Risk tier: [Low/Medium/High]
- Tool/model: [tool + version]
## Iteration Summary
1. Prompt iteration 1: [goal + outcome]
2. Prompt iteration 2: [goal + outcome]
3. Final iteration: [accepted outcome]
## Rejected Alternatives
- [Alternative A] -- rejected because [...]
- [Alternative B] -- rejected because [...]
## Final Rationale
- Why selected approach was chosen
- Link to ADR / provenance record / tests
## Redaction Notes (if applicable)
- [Prompt content redacted due to policy X; reference hash/ticket]
Documentation Effectiveness Metrics (Minimum Set)
| Metric | Definition | Suggested Reporting Cadence |
|---|---|---|
| Documentation Currency Rate | % of critical docs within freshness SLA | Monthly |
| Comprehension Validation Rate | % of sampled engineers/consumers who can complete key tasks using docs without ad hoc help | Quarterly |
| Reference Frequency | Number of accesses/citations of critical docs per release cycle | Monthly |
| Maintenance Efficiency | Median elapsed time to produce/update required docs for a change | Monthly |
| Support Ticket Deflection | % reduction in repeated support issues after documentation updates | Quarterly |
6. Exceptions & Waiver Process
Exceptions MAY be granted for:
- AI annotation requirements (REQ-005-01) for trivial code changes (fewer than 5 lines) where AI assistance was minimal; in such cases, the code review record serves as sufficient documentation
- Prompt documentation (REQ-005-11) for code generated through IDE inline completions (autocomplete), as individual prompts are impractical to capture
- Knowledge preservation documents (REQ-005-16) for projects below the 30% AI-generated code threshold
Waivers MUST be approved by the engineering lead and documented in the project's documentation strategy.
7. Related Standards
- PRD-STD-001: Prompt Engineering -- Prompt structure and library standards
- PRD-STD-002: Code Review Standards -- Documentation is reviewed as part of code review
- PRD-STD-006: Technical Debt Management -- Documentation of known debt
- Code Provenance & Attribution -- Source-of-truth linkage for AI interaction metadata
- Productivity Metrics -- KPI implementation guidance for documentation effectiveness measures
- Pillar 3: People & Skills -- Knowledge preservation supports team skill development
- Maturity Model -- Documentation maturity assessment
8. Revision History
| Version | Date | Author | Changes |
|---|---|---|---|
| 1.0 | 2025-01-15 | AEEF Standards Committee | Initial release |
| 1.0.1 | 2026-01-15 | AEEF Standards Committee | Added language-specific annotation table; expanded ADR template |
| 1.1.0 | 2026-02-18 | AEEF Standards Committee | Added D.O.C.S lifecycle coverage, AI interaction history requirements, documentation review workflow, drift controls, and documentation effectiveness metrics |