SDAIA Ethics Traceability

The Saudi Data and Artificial Intelligence Authority (SDAIA) publishes AI Ethics Principles that apply to all AI stakeholders designing, developing, deploying, or using AI systems within Saudi Arabia. This document provides a principle-by-principle traceability matrix demonstrating how AEEF controls satisfy each SDAIA ethics principle, and includes an ethics self-assessment template aligned to SDAIA vendor qualification requirements for public-sector tenders.

Applicability

Apply this traceability when any of the following are true:

  1. AI-assisted engineering outputs are deployed within Saudi Arabia.
  2. The organization seeks vendor qualification for Saudi government or government-linked tenders.
  3. Contracts or policies require demonstrated alignment with SDAIA AI Ethics Principles.

SDAIA Ethics Principles Traceability Matrix

The following matrix maps each of SDAIA's twelve AI Ethics Principles to specific AEEF controls. The Coverage column indicates whether existing AEEF controls fully satisfy the principle (Full), substantially satisfy it with minor operational additions (Substantial), or require supplementary controls (Partial).

| # | SDAIA Principle | Definition | AEEF Control(s) | Evidence | Coverage |
|---|---|---|---|---|---|
| 1 | Integrity | AI systems must operate with honesty and adherence to ethical standards throughout their lifecycle | Code Provenance & Attribution; Human-in-the-Loop Review; PRD-STD-002 Code Review | Provenance metadata records; PR review logs; reviewer qualification records | Full |
| 2 | Fairness | AI systems must avoid bias and discrimination, ensuring equitable treatment | AI Output Verification; Engineering Quality Standards; PRD-STD-003 Testing | Verification test results; quality gate evidence; bias-aware test cases | Substantial |
| 3 | Privacy | AI systems must protect personal data and respect individual privacy rights | Pillar 2 Data Classification; Prompt Governance; KSA Regulatory Profile (PDPL controls) | Data classification records; prompt sanitization logs; PDPL compliance evidence | Full |
| 4 | Security | AI systems must be protected against threats, vulnerabilities, and unauthorized access | Security Risk Framework; PRD-STD-004 Security Scanning; PRD-STD-008 Dependency Compliance | SAST/SCA scan results; threat model documents; dependency audit logs | Full |
| 5 | Reliability | AI systems must perform consistently and predictably under expected conditions | PRD-STD-003 Testing; PRD-STD-007 Quality Gates; PRD-STD-012 Inference Reliability | Test coverage reports; quality gate pass records; SLO compliance dashboards | Full |
| 6 | Safety | AI systems must not cause harm and must include safeguards against unintended consequences | PRD-STD-010 AI Product Safety & Trust; Incident Response | Safety gate records; kill-switch readiness evidence; incident response logs | Full |
| 7 | Transparency | AI systems must be open about their capabilities, limitations, and decision-making processes | Code Provenance & Attribution; Retention & Audit Evidence; PRD-STD-005 Documentation | Provenance metadata; audit trail records; published documentation | Full |
| 8 | Interpretability | AI systems must produce outputs that can be understood and explained by humans | PRD-STD-005 Documentation; Human-in-the-Loop Review; Architecture Decision Records | Documentation artifacts; review decisions with rationale; ADR records | Substantial |
| 9 | Accountability | Clear assignment of responsibility for AI system outcomes to identifiable human roles | Human-in-the-Loop Review; Pillar 2 Governance Roles; PRD-STD-009 Autonomous Agent Governance | Reviewer assignment records; governance role matrix; agent-to-human ownership mapping | Full |
| 10 | Responsibility | Organizations must take ownership of AI system impacts and provide remediation when harm occurs | Incident Response; Operating Model Lifecycle; PRD-STD-010 AI Product Safety | Incident reports; PIR documents; corrective action records | Full |
| 11 | Humanity | AI systems must serve human well-being and augment rather than replace human judgment | Culture & Mindset; Human-in-the-Loop Review (non-negotiable human judgment) | Cultural health surveys; human review gate evidence; developer experience scores | Full |
| 12 | Social & Environmental Benefit | AI systems must contribute positively to society and minimize environmental impact | KPI Framework (productivity + financial dimensions); Training & Skill Development | ROI reports; workforce development metrics; Saudization metrics | Substantial |
Principles marked Substantial require organizations to supplement AEEF controls with operational practices. For Fairness, add bias-aware test cases to AI output verification. For Interpretability, ensure architecture decision records include AI-specific rationale. For Social & Environmental Benefit, track workforce impact and environmental metrics alongside standard KPIs.
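For teams that automate evidence collection, the matrix above can be held in a machine-readable form. The sketch below is illustrative only: the structure, field names, and helper function are assumptions for tooling, not part of AEEF or SDAIA publications, and only two of the twelve principles are shown.

```python
# Illustrative machine-readable form of the traceability matrix.
# Field names ("controls", "coverage") are assumptions, not SDAIA or AEEF terms.
TRACEABILITY = {
    "Fairness": {
        "controls": ["AI Output Verification", "Engineering Quality Standards",
                     "PRD-STD-003 Testing"],
        "coverage": "Substantial",
    },
    "Security": {
        "controls": ["Security Risk Framework", "PRD-STD-004 Security Scanning",
                     "PRD-STD-008 Dependency Compliance"],
        "coverage": "Full",
    },
    # ... remaining ten principles follow the same shape
}

def principles_needing_supplement(matrix):
    """Return principles whose coverage is not Full and so need operational additions."""
    return sorted(p for p, entry in matrix.items() if entry["coverage"] != "Full")

print(principles_needing_supplement(TRACEABILITY))  # prints ['Fairness']
```

A structure like this lets the supplement list in the note above be generated rather than maintained by hand.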

Ethics Self-Assessment Template

The following self-assessment is aligned to SDAIA vendor qualification requirements for public-sector tenders. Organizations SHOULD complete this assessment annually and before submitting responses to Saudi government procurement.

Assessment Instructions

  1. For each principle, rate your organization's compliance on a 1-5 scale.
  2. Provide evidence references for each rating.
  3. A score of 3 or above on all principles is the minimum for vendor qualification.
  4. Submit the completed assessment as part of your governance evidence package.

Self-Assessment Checklist

| # | SDAIA Principle | Assessment Question | Score (1-5) | Evidence Reference |
|---|---|---|---|---|
| 1 | Integrity | Do all AI-assisted engineering outputs have full provenance tracking and mandatory human review? | ___ | |
| 2 | Fairness | Are AI outputs systematically tested for bias, discrimination, and equitable behavior? | ___ | |
| 3 | Privacy | Is personal data prohibited from AI prompts unless the tool is approved for the data classification level, with PDPL compliance verified? | ___ | |
| 4 | Security | Are all AI-generated outputs scanned with SAST, SCA, and secret detection before deployment? | ___ | |
| 5 | Reliability | Do AI-assisted systems meet defined SLOs with automated quality gates enforced in CI/CD? | ___ | |
| 6 | Safety | Are safety gates, abuse evaluations, and kill-switch mechanisms in place for AI-powered products? | ___ | |
| 7 | Transparency | Is a complete audit trail maintained from AI generation through deployment, accessible for regulator review? | ___ | |
| 8 | Interpretability | Are AI-assisted architectural and design decisions documented with human-readable rationale? | ___ | |
| 9 | Accountability | Is every AI-assisted output attributable to a named human reviewer who approved it for production? | ___ | |
| 10 | Responsibility | Are incident response procedures in place that specifically address AI-generated defects, with defined remediation SLAs? | ___ | |
| 11 | Humanity | Does the organization maintain human judgment as non-negotiable in all AI-assisted workflows, with cultural health tracked? | ___ | |
| 12 | Social & Environmental Benefit | Does the organization track workforce development, Saudization, and positive societal impact from AI adoption? | ___ | |

Scoring Guide:

| Score | Meaning |
|---|---|
| 1 | No controls in place |
| 2 | Informal or ad-hoc controls |
| 3 | Documented controls with evidence (minimum acceptable) |
| 4 | Automated controls with dashboarded metrics |
| 5 | Continuously optimized controls with demonstrated improvement trends |

Minimum Qualification Threshold: Score of 3 or above on all twelve principles. Any principle scoring below 3 MUST have a documented remediation plan with target date.
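The threshold rule above reduces to a simple check. The sketch below is illustrative: the function, data shape, and status strings are assumptions, not part of the SDAIA qualification process.

```python
# Minimal sketch of the qualification threshold: every principle must score 3 or above,
# and any principle below 3 requires a documented remediation plan with a target date.
MINIMUM_SCORE = 3

def qualification_status(scores, remediation_plans=None):
    """scores: dict of principle name -> self-assessed score (1-5).
    remediation_plans: set of principle names with a documented remediation plan."""
    remediation_plans = remediation_plans or set()
    below = {p for p, s in scores.items() if s < MINIMUM_SCORE}
    if not below:
        return "qualified"
    missing = below - remediation_plans
    if missing:
        return f"blocked: remediation plan missing for {sorted(missing)}"
    return "remediation in progress"

scores = {"Integrity": 4, "Fairness": 2, "Privacy": 3}
print(qualification_status(scores, remediation_plans={"Fairness"}))
# prints "remediation in progress"
```

In practice a full run would pass all twelve principle scores from the completed checklist, not the three shown here.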

Integration with AEEF Governance Gates

Ethics compliance checks integrate into the AEEF Operating Model Lifecycle at two stages:

Stage 2: AI Exploration

  • Ethics impact screening SHOULD be performed during AI exploration to identify principles at risk early.
  • For high-impact systems (citizen-facing, decision-making), a preliminary ethics assessment MUST be documented.

Stage 4: Governance Gate

  • The Governance Gate MUST include an ethics compliance check as a mandatory gate criterion for Saudi-deployed systems.
  • The gate reviewer MUST verify that the ethics self-assessment for the relevant system is current (within 12 months) and all principles score 3 or above.
  • Systems scoring below 3 on any principle MUST NOT pass the Governance Gate without a documented waiver from the Engineering Director and a remediation plan approved by the Compliance Officer.
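The gate criteria above can be sketched as a single decision function. This is a minimal sketch, assuming a simple record shape; the function name, the boolean waiver flag, and the 365-day reading of "within 12 months" are assumptions, not AEEF-specified interfaces.

```python
from datetime import date, timedelta

# Assumption: "current (within 12 months)" is read as 365 days.
ASSESSMENT_VALIDITY = timedelta(days=365)

def ethics_gate(assessment_date, scores, waiver_approved=False, today=None):
    """Return (passed, reason) for the ethics portion of the Governance Gate.
    waiver_approved models an Engineering Director waiver plus a remediation
    plan approved by the Compliance Officer (illustrative flag)."""
    today = today or date.today()
    if today - assessment_date > ASSESSMENT_VALIDITY:
        return False, "ethics self-assessment older than 12 months"
    below = sorted(p for p, s in scores.items() if s < 3)
    if below and not waiver_approved:
        return False, f"principles below 3 without waiver: {below}"
    return True, "ethics gate criteria met"

# Example: assessment is current, but one principle scores below 3 and no waiver exists.
passed, reason = ethics_gate(date(2025, 1, 10), {"Safety": 2, "Privacy": 4},
                             today=date(2025, 6, 1))
print(passed, "-", reason)
```

Encoding the gate this way makes the waiver path explicit: a below-threshold score can never pass silently.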

Stage 6: Post-Implementation Review

  • The PIR MUST include an ethics outcome review assessing whether the deployed system maintains compliance with all twelve principles in production.
  • Any ethics-related incidents MUST be logged and traced to the relevant principle for trend analysis.
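Tracing incidents to principles, as required above, lends itself to a simple tally for trend analysis. The record shape below is an assumption for illustration; real incident records would carry more fields.

```python
from collections import Counter

def incidents_by_principle(incidents):
    """Count ethics-related incidents per SDAIA principle for trend analysis.
    incidents: iterable of dicts with a 'principle' key (illustrative shape)."""
    return Counter(i["principle"] for i in incidents)

log = [
    {"id": "INC-101", "principle": "Privacy"},
    {"id": "INC-102", "principle": "Privacy"},
    {"id": "INC-103", "principle": "Safety"},
]
print(incidents_by_principle(log).most_common(1))  # prints [('Privacy', 2)]
```

A recurring top principle in this tally is a signal to revisit the corresponding AEEF controls at the next PIR.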

External Sources