
AI Standards Crosswalk

This crosswalk compares AEEF with the major AI engineering and governance frameworks that organizations commonly use for policy and audit programs. It identifies where AEEF is already strong, where adapters are required, and which controls should be implemented first.

Assessment date: February 18, 2026

Framework Scope and AEEF Coverage

| Framework | Primary Focus | Current AEEF Coverage | Remaining Gap | Priority |
| --- | --- | --- | --- | --- |
| ISO/IEC 42001:2023 | AI management system (AIMS) requirements | Strong governance process coverage in Pillar 2 and transformation tracks | Clause-level certification evidence package and internal audit cadence | High |
| ISO/IEC 23894:2023 | AI risk management guidance | Strong security and risk controls in Security Risk Framework | Explicit risk treatment register format aligned to 23894 vocabulary | Medium |
| NIST AI RMF 1.0 | Govern/Map/Measure/Manage lifecycle | Strong policy + measurement baseline in Pillar 2 and KPI framework | Formal RMF function-to-control traceability matrix | Medium |
| NIST SP 800-218 (SSDF) | Secure software development practices | Strong SDLC controls across PRD-STD-002/003/004/007/008 | Explicit SSDF practice-level evidence mapping | Medium |
| OWASP Top 10 for LLM Applications | LLM app-layer threats | Partial coverage through prompt governance and secure coding standards | Direct mapping for LLM-specific controls (prompt injection, output handling, agent misuse) | High |
| EU AI Act (Regulation (EU) 2024/1689) | Legal obligations for AI systems in the EU | Baseline governance controls exist | Article-by-article legal mapping and role-specific operational controls | High |
| KSA PDPL + NCA controls + DGA controls | Saudi legal/privacy/cyber/government obligations | Strong regional profile with 10 KSA controls, data sovereignty, Arabic requirements, PDPL Article 22 | Ongoing: NCA control-by-control evidence matrix depth | Medium |
| SDAIA AI Ethics Principles | 12 ethical principles for AI systems in Saudi Arabia | Strong principle-by-principle traceability in SDAIA Ethics Traceability | Operational fairness and interpretability supplementary controls | Low |
| SDAIA AI Adoption Framework | 4-level maturity model for AI adoption | Strong crosswalk in KSA Regulatory Profile | None — crosswalk is complete | Low |
| SDAIA National AI Risk Management Framework | Risk-based classification for AI systems | Strong alignment in SDAIA Risk Framework Alignment | None — mapping is complete | Low |
| SAMA Cyber Security Framework | Financial-sector cybersecurity controls | Strong domain-level mapping in SAMA CSF Integration | Ongoing: sub-control evidence refinement | Low |
| SDAIA National AI Index (NAII) | National AI readiness measurement dimensions | Strong metrics mapping in NAII Metrics Mapping | None — mapping is complete | Low |
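
Teams that want to track this crosswalk programmatically, for example to feed an audit evidence register, can capture each table row as a structured record. The sketch below is illustrative only; the `CrosswalkEntry` type, its field names, and the `open_gaps` helper are assumptions rather than AEEF artifacts or requirements of any referenced standard.

```python
from dataclasses import dataclass, field

@dataclass
class CrosswalkEntry:
    """One row of the framework-to-AEEF crosswalk (illustrative schema, not an AEEF artifact)."""
    framework: str            # external standard or regulation
    primary_focus: str        # what the framework governs
    aeef_coverage: str        # current AEEF coverage summary
    remaining_gap: str        # work still required for full alignment
    priority: str             # "High", "Medium", or "Low"
    evidence_refs: list[str] = field(default_factory=list)  # links to AEEF docs or audit artifacts

# Example entry taken from the table above
iso_42001 = CrosswalkEntry(
    framework="ISO/IEC 42001:2023",
    primary_focus="AI management system (AIMS) requirements",
    aeef_coverage="Strong governance process coverage in Pillar 2 and transformation tracks",
    remaining_gap="Clause-level certification evidence package and internal audit cadence",
    priority="High",
)

def open_gaps(entries: list[CrosswalkEntry]) -> list[CrosswalkEntry]:
    """Return entries with unresolved gaps, highest priority first."""
    order = {"High": 0, "Medium": 1, "Low": 2}
    return sorted(
        (e for e in entries if not e.remaining_gap.startswith("None")),
        key=lambda e: order.get(e.priority, 3),
    )
```

A record like this makes the "Remaining Gap" and "Priority" columns queryable, which is useful when the same crosswalk feeds both a remediation backlog and an audit report.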

Where AEEF Is Already Strong

  1. Change governance and human review controls.
  2. Provenance, audit retention, and gate-based deployment control.
  3. Secure SDLC controls: SAST/SCA/secrets scanning and remediation SLAs (a remediation SLA check is sketched after this list).
  4. Policy and maturity model structure that supports adaptation by profile.
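
To make the remediation SLA point in item 3 concrete, scanner findings can be checked against age thresholds before a gate decision. The snippet below is a minimal sketch under assumed inputs; the finding format, the SLA values, and the `overdue_findings` helper are hypothetical and not prescribed by AEEF.

```python
from datetime import datetime, timezone

# Hypothetical SLA windows per severity, in days (AEEF does not prescribe these values)
REMEDIATION_SLA_DAYS = {"critical": 7, "high": 30, "medium": 90}

def overdue_findings(findings: list[dict], now: datetime | None = None) -> list[dict]:
    """Return SAST/SCA/secrets findings whose age exceeds the assumed SLA for their severity."""
    now = now or datetime.now(timezone.utc)
    overdue = []
    for f in findings:
        sla = REMEDIATION_SLA_DAYS.get(f["severity"])
        if sla is None:
            continue  # severities without an SLA are not gated here
        age_days = (now - f["opened_at"]).days
        if age_days > sla:
            overdue.append(f)
    return overdue

# Example: a deployment gate could fail if any finding is overdue
findings = [
    {"id": "SCA-101", "severity": "high", "opened_at": datetime(2026, 1, 2, tzinfo=timezone.utc)},
]
assert overdue_findings(findings, now=datetime(2026, 2, 18, tzinfo=timezone.utc))
```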

Priority Enhancements Introduced

This release adds profile-oriented governance extensions in the following pages:

  1. ISO 42001 Certification Readiness
  2. KSA Regulatory Profile — expanded with SDAIA maturity crosswalk, Arabic language requirements, data sovereignty, and PDPL Article 22
  3. Government (Middle East) Profile — expanded with Arabic and cultural controls
  4. SDAIA Ethics Traceability — principle-by-principle mapping to SDAIA AI Ethics Principles with self-assessment template
  5. SDAIA Risk Framework Alignment — mapping to SDAIA National AI Risk Management Framework
  6. SAMA CSF Integration — deep mapping for financial-sector AI engineering
  7. NAII Metrics Mapping — alignment with Saudi National AI Index dimensions

Implementation Sequence

  1. Implement ISO 42001 readiness controls and evidence model (an illustrative evidence record is sketched after this list).
  2. Apply KSA profile for PDPL/NCA/DGA alignment, including data sovereignty and Arabic language requirements.
  3. Complete SDAIA ethics traceability and self-assessment.
  4. Align risk controls to SDAIA National AI Risk Management Framework.
  5. Apply SAMA CSF integration for financial-sector implementations.
  6. Establish NAII-aligned metrics reporting.
  7. Apply government profile overlay for public-sector delivery.
  8. Add legal-jurisdiction overlays (for example, detailed EU AI Act controls) as needed by deployment geography.
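
As a starting point for the evidence model in step 1, each management-system requirement can be tied to the artifact that demonstrates it and to a review date that feeds the internal audit cadence. The record shape below is a sketch only; the `EvidenceItem` type, its fields, and the example values are assumptions, not an ISO/IEC 42001 or AEEF schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class EvidenceItem:
    """Links one management-system requirement to its evidence (illustrative only)."""
    clause: str          # label for the ISO/IEC 42001 requirement being evidenced
    control: str         # the AEEF control or standard that satisfies it
    artifact: str        # where the evidence lives (document, pipeline run, register entry)
    owner: str           # accountable role
    last_reviewed: date  # feeds the internal audit cadence

evidence = [
    EvidenceItem(
        clause="Internal audit",          # placeholder label, not a verbatim clause citation
        control="PRD-STD-002",            # an AEEF standard named in the crosswalk; pairing is illustrative
        artifact="audit-schedule.md",     # hypothetical artifact path
        owner="AI governance lead",
        last_reviewed=date(2026, 2, 18),
    ),
]

# Flag items whose review is older than an assumed audit cadence (180 days here)
stale = [e for e in evidence if (date(2026, 2, 18) - e.last_reviewed).days > 180]
```

A register built from records like this gives the "clause-level certification evidence package" gap in the table a concrete shape that an internal audit can sample against.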

Evidence and Source Anchors