
SDAIA Risk Framework Alignment

The Saudi Data and Artificial Intelligence Authority (SDAIA) has published a National AI Risk Management Framework that classifies AI systems by risk. This document maps AEEF risk controls to the SDAIA framework, giving organizations a clear compliance path for AI-assisted engineering deployed in Saudi Arabia.

Applicability

Apply this alignment when any of the following are true:

  1. AI systems or AI-assisted engineering outputs are deployed within Saudi Arabia.
  2. The organization is subject to SDAIA regulatory oversight.
  3. Contracts require demonstrated alignment with Saudi AI risk management requirements.

SDAIA Risk Classification Mapping

SDAIA's framework classifies AI systems into risk levels based on their potential impact. The following table maps SDAIA risk levels to AEEF risk tiers and specifies the corresponding governance requirements.

| SDAIA Risk Level | Criteria | AEEF Risk Tier | AEEF Review Requirements | Governance Gate |
|---|---|---|---|---|
| Low Risk | AI systems with minimal potential for harm; limited autonomy; reversible outputs | Tier 1 (Standard) | Peer review + automated scanning | Automated gate with human spot-check |
| High Risk | AI systems affecting rights, safety, critical infrastructure, or public services; significant autonomy; difficult-to-reverse outputs | Tier 2 (Elevated) or Tier 3 (High) | Full code review + AI checklist + architecture review + security review | Full manual Governance Gate with multi-stakeholder approval |
**Warning:** SDAIA's risk classification applies to the AI system as a whole, not to individual code changes. When AI-assisted engineering contributes to a system classified as High Risk by SDAIA, all AI-assisted contributions to that system MUST follow Tier 2 or Tier 3 review requirements, regardless of individual change size.
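
To make the escalation rule concrete, here is a minimal sketch; the `SDAIARiskLevel` and `AEEFTier` enums and the `required_tier` function are illustrative names, not identifiers defined by either framework:

```python
from enum import Enum

class SDAIARiskLevel(Enum):
    LOW = "low"
    HIGH = "high"

class AEEFTier(Enum):
    TIER_1 = 1  # Standard: peer review + automated scanning
    TIER_2 = 2  # Elevated: full code review + AI checklist + architecture review
    TIER_3 = 3  # High: Tier 2 controls plus security review

def required_tier(system_risk: SDAIARiskLevel, proposed: AEEFTier) -> AEEFTier:
    """Escalate a per-change review tier to the floor implied by the
    SDAIA classification of the system as a whole."""
    if system_risk is SDAIARiskLevel.HIGH and proposed is AEEFTier.TIER_1:
        # High-risk systems require at least Tier 2 review for every
        # AI-assisted change, regardless of change size.
        return AEEFTier.TIER_2
    return proposed
```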

Risk Classification Decision Criteria

Organizations MUST classify AI systems using the following criteria aligned to the SDAIA framework:

| Factor | Low Risk Indicator | High Risk Indicator |
|---|---|---|
| Impact on individuals | No direct effect on rights or welfare | Affects employment, financial standing, access to services, or legal rights |
| Autonomy level | Human makes final decision | System makes or materially influences decisions |
| Reversibility | Outputs easily reversed or corrected | Outputs difficult or impossible to reverse |
| Scale | Limited user base, internal use | Citizen-scale deployment, public services |
| Sector | Non-regulated sector | Healthcare, finance, critical infrastructure, government |
| Data sensitivity | Public or internal data only | Personal data, confidential data, or restricted data |
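
The table leaves the aggregation rule implicit. The sketch below assumes the conservative reading that any single High Risk indicator classifies the whole system as High Risk; organizations should confirm that rule against their own SDAIA-aligned policy. All names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class RiskFactors:
    """One boolean per decision factor; True means the High Risk
    indicator in the table above applies."""
    affects_rights_or_welfare: bool      # Impact on individuals
    system_influences_decisions: bool    # Autonomy level
    hard_to_reverse: bool                # Reversibility
    citizen_scale: bool                  # Scale
    regulated_sector: bool               # Sector (healthcare, finance, ...)
    sensitive_data: bool                 # Data sensitivity

def classify(factors: RiskFactors) -> str:
    # Conservative aggregation (assumption): any one High Risk indicator
    # is sufficient to classify the system as High Risk.
    return "High Risk" if any(vars(factors).values()) else "Low Risk"
```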

Pre-Deployment Impact Assessment

SDAIA requires pre-deployment impact assessments for AI systems. AEEF maps this requirement across the Operating Model Lifecycle:

| SDAIA Requirement | AEEF Implementation Point | Evidence |
|---|---|---|
| Pre-deployment risk assessment | Stage 1: Business Intent — risk tier assignment | Business Intent Document with risk classification |
| Impact analysis | Stage 2: AI Exploration — feasibility assessment with risk documentation | Exploration report with risk section |
| Mitigation planning | Stage 3: Human Hardening — security review and quality hardening | Hardened code with security clearance |
| Approval before deployment | Stage 4: Governance Gate — multi-stakeholder approval | Gate approval record with SDAIA risk classification |
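
One way to operationalize this mapping is an evidence completeness check at the Governance Gate. The sketch below is a hypothetical helper; the stage keys and artifact names are placeholders for an organization's actual evidence catalog, not AEEF identifiers:

```python
# Hypothetical evidence requirements per lifecycle stage, keyed to the
# mapping table above (all names are illustrative placeholders).
REQUIRED_EVIDENCE = {
    "stage_1_business_intent": ["business_intent_doc", "risk_classification"],
    "stage_2_ai_exploration": ["exploration_report_with_risk_section"],
    "stage_3_human_hardening": ["security_clearance"],
    "stage_4_governance_gate": ["gate_approval_record"],
}

def missing_evidence(collected: dict[str, list[str]]) -> dict[str, list[str]]:
    """Return, per stage, the evidence artifacts not yet collected.

    An empty result means the pre-deployment assessment trail is complete."""
    gaps = {}
    for stage, needed in REQUIRED_EVIDENCE.items():
        have = set(collected.get(stage, []))
        missing = [artifact for artifact in needed if artifact not in have]
        if missing:
            gaps[stage] = missing
    return gaps
```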

Post-Deployment Monitoring

SDAIA mandates post-deployment monitoring for AI systems. AEEF addresses this through:

| SDAIA Requirement | AEEF Control | Evidence |
|---|---|---|
| Continuous performance monitoring | PRD-STD-012 Inference Reliability — SLOs and observability | Monitoring dashboards; SLO compliance reports |
| Incident detection and response | Incident Response — AI-specific incident classification | Incident reports; MTTR metrics |
| Outcome measurement | Stage 6: Post-Implementation Review — outcome assessment | PIR reports; production metrics |
| Periodic reassessment | Maturity Assessment — quarterly reassessment | Assessment records; trend data |
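
As an illustration of what continuous SLO monitoring can reduce to, the sketch below checks an availability-style objective over a reporting window. The 99.9% target is an assumed placeholder; actual objectives come from PRD-STD-012 Inference Reliability:

```python
def slo_compliant(success_count: int, total_count: int,
                  target: float = 0.999) -> bool:
    """Check an availability-style SLO for one reporting window.

    The default 99.9% target is illustrative only."""
    if total_count == 0:
        return True  # no traffic in the window; nothing to breach
    return success_count / total_count >= target

# Example: 99 950 successful inferences out of 100 000 meets a 99.9% SLO.
assert slo_compliant(99_950, 100_000)
```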

Safety Testing Alignment

SDAIA emphasizes safety testing, including stress testing, edge-case scenarios, and red-team exercises. The following table maps these to AEEF controls:

| SDAIA Testing Requirement | AEEF Control | Implementation |
|---|---|---|
| Stress testing | PRD-STD-003 Testing — load and performance testing | Performance test results under extreme conditions |
| Edge-case scenarios | PRD-STD-003 Testing — boundary value analysis, mutation testing | Edge-case test suites; mutation testing scores |
| Red-team exercises | PRD-STD-010 AI Product Safety — abuse evaluation | Red-team reports; abuse scenario documentation |
| Adversarial testing | Security Risk Framework — threat modeling | Threat model documents; adversarial test results |
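
For edge-case scenarios, boundary value analysis means testing exactly at and just beyond input limits. The sketch below shows the pattern against a hypothetical post-processing function; the function and its bounds are invented for illustration, not taken from PRD-STD-003:

```python
import unittest

def clamp_confidence(score: float) -> float:
    """Hypothetical model-output post-processor under test: constrains
    a raw score to the [0.0, 1.0] interval."""
    return max(0.0, min(1.0, score))

class BoundaryValueTests(unittest.TestCase):
    """Edge-case tests in the boundary-value style the table calls for:
    values at, just inside, and just outside each limit."""

    def test_boundaries(self):
        for raw, expected in [(-0.001, 0.0), (0.0, 0.0), (0.5, 0.5),
                              (1.0, 1.0), (1.001, 1.0)]:
            self.assertEqual(clamp_confidence(raw), expected)

    def test_non_finite_inputs(self):
        self.assertEqual(clamp_confidence(float("-inf")), 0.0)
        self.assertEqual(clamp_confidence(float("inf")), 1.0)

if __name__ == "__main__":
    unittest.main()
```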

Transparency and Disclosure

SDAIA requires transparency artifacts including model factsheets and user notices. AEEF maps these through:

| SDAIA Requirement | AEEF Control | Artifact |
|---|---|---|
| Model factsheets | PRD-STD-005 Documentation; Code Provenance | AI model cards; provenance metadata with model version |
| User notices | PRD-STD-010 AI Product Safety — transparency controls | User-facing AI disclosure notices |
| Decision rationale | PRD-STD-005 Documentation — architecture decision records | ADRs with AI-specific rationale |
| Audit trail | Retention & Audit Evidence | Complete provenance chain from generation to deployment |
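
A model factsheet backed by provenance metadata can be as simple as a structured record serialized alongside the artifact. The field names below are assumptions for illustration and not the normative AEEF provenance schema:

```python
from dataclasses import asdict, dataclass
import json

@dataclass
class ProvenanceRecord:
    """Illustrative provenance metadata for a model factsheet."""
    model_name: str
    model_version: str
    generated_by: str   # tool or agent that produced the artifact
    reviewed_by: str    # accountable human reviewer
    risk_level: str     # SDAIA classification: "low" or "high"
    commit_sha: str     # links the record to the audited change

record = ProvenanceRecord(
    model_name="example-code-assistant",
    model_version="2025.01",
    generated_by="ai-assistant",
    reviewed_by="j.doe",
    risk_level="high",
    commit_sha="abc123",
)
print(json.dumps(asdict(record), indent=2))
```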

Cybersecurity and Model Security

SDAIA requires protection of AI models and training data against tampering and theft:

| SDAIA Requirement | AEEF Control | Evidence |
|---|---|---|
| Model integrity protection | PRD-STD-004 Security Scanning — SAST, SCA, secret detection | Scan results; integrity verification logs |
| Training data governance | PRD-STD-011 Model & Data Governance — data lineage and rights | Data lineage records; rights verification |
| Access control | Pillar 2 Data Classification — classification-based access restrictions | Access control configurations; tool approval records |
| Supply chain security | PRD-STD-008 Dependency Compliance — dependency scanning | Dependency audit reports; license compliance records |
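
Model integrity protection typically rests on comparing artifact digests against a trusted manifest. A minimal sketch, assuming a SHA-256 manifest maintained outside the deployment path:

```python
import hashlib
from pathlib import Path

def verify_model_integrity(model_path: Path, expected_sha256: str) -> bool:
    """Compare a model artifact's SHA-256 digest against a trusted
    manifest value to detect tampering (illustrative control only)."""
    digest = hashlib.sha256()
    with model_path.open("rb") as f:
        # Stream the artifact in 1 MiB chunks so large models fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```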

Accountability and Documentation

SDAIA requires standardized logs, model cards, and audit trails:

| SDAIA Requirement | AEEF Control | Evidence |
|---|---|---|
| Standardized logs | Retention & Audit Evidence — retention policy with defined periods | Log archives; retention compliance reports |
| Model cards | Code Provenance — provenance metadata schema | AI provenance records per provenance schema |
| Audit trails | Pillar 2 Production Gating — five mandatory gates | Gate approval records; deployment logs |
| Human accountability mapping | PRD-STD-009 Autonomous Agent Governance — agent-to-human ownership | Agent contracts; human owner registry |
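
Standardized logs and human accountability mapping combine naturally: every agent action is logged as a structured record that resolves to a named human owner. A sketch under that assumption, with an invented registry and field names:

```python
import json
from datetime import datetime, timezone

# Hypothetical agent-to-human ownership registry in the PRD-STD-009 style.
OWNER_REGISTRY = {"deploy-agent-7": "a.rahman"}

def audit_log_entry(agent_id: str, action: str) -> str:
    """Emit a standardized, JSON-structured audit record that names the
    accountable human owner for the acting agent."""
    owner = OWNER_REGISTRY.get(agent_id)
    if owner is None:
        # No action without a named human owner on record.
        raise LookupError(f"No human owner registered for agent {agent_id!r}")
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "human_owner": owner,
        "action": action,
    })

print(audit_log_entry("deploy-agent-7", "promote-to-production"))
```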

Implementation Checklist

  • AI systems classified by SDAIA risk level (Low/High) with documented rationale.
  • High-risk systems assigned AEEF Tier 2 or Tier 3 review requirements.
  • Pre-deployment impact assessments completed for all in-scope systems.
  • Post-deployment monitoring active with SLOs defined and dashboarded.
  • Safety testing (stress, edge-case, red-team) completed for high-risk systems.
  • Transparency artifacts (model factsheets, user notices) published for high-risk systems.
  • Model and data security controls verified and evidence retained.
  • Accountability mapping documented with named human owners for all AI systems.
  • Audit trail retention verified per AEEF retention policy.
  • Periodic reassessment schedule established (minimum quarterly for high-risk systems).
