Bias & Fairness

https://taxonomy.eticas.ai/risk/bias-fairness

Maturity: established

The risk that an AI system produces outcomes that systematically advantage or disadvantage individuals or groups based on protected or sensitive attributes, leading to unequal treatment, reduced accuracy, or unjust impacts. This includes biases introduced through data, model design, or deployment context, and covers both measurable disparities and perceived unfairness in decision-making.
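To make "measurable disparities" concrete, the sketch below computes the disparate impact ratio (the ratio of favorable-outcome rates between groups) in plain Python. The group labels, the sample decisions, and the 0.8 "four-fifths rule" threshold are illustrative assumptions, not part of this taxonomy entry.

```python
# Hedged sketch: disparate impact ratio between two groups.
# Group names, decisions, and the 0.8 threshold are illustrative.

def selection_rate(outcomes):
    """Fraction of positive (favorable) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(unprivileged, privileged):
    """Ratio of selection rates; values below ~0.8 are often flagged
    under the "four-fifths rule" heuristic."""
    return selection_rate(unprivileged) / selection_rate(privileged)

# Hypothetical binary decisions (1 = favorable outcome)
group_a = [1, 0, 1, 1, 0, 1, 1, 0]   # rate 5/8 = 0.625
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # rate 3/8 = 0.375

ratio = disparate_impact_ratio(group_b, group_a)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.375 / 0.625 = 0.60
```

A ratio this far below 0.8 would usually prompt closer auditing of the data and model; a single metric alone does not establish or rule out unfairness.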

Also known as: Fairness · Bias & Discrimination · Algorithmic fairness

Applies to: ALL
Lifecycle stages: Pre-processing, In-processing, Post-processing


Mappings to external frameworks

Compliance

| Framework | Concept |
|---|---|
| EU AI Act (Regulation 2024/1689) | Data and data governance |
| ISO/IEC 42001:2023 — AI Management System | Quality of data for AI systems |
| AIUC-1 — AI Underwriting Company Standard | Prevent customer-defined high-risk outputs |

Reference frameworks

| Framework | Concept |
|---|---|
| NIST AI 600-1 — Generative AI Risk Profile | Harmful Bias & Homogenization |
| NIST AI Risk Management Framework (AI 100-1) | Fair with Harmful Bias Managed |
| OECD AI Principles | Human rights, rule of law, fairness & privacy |

Taxonomies & vocabularies

| Framework | Concept |
|---|---|
| MIT AI Risk Repository | Discrimination & Toxicity |
| W3C Data Privacy Vocabulary — AI Extension | AI Bias |
| AIR 2024 / AIR-Bench 2024 | Legal & Rights-Related Risks → Discrimination & Bias |
| IBM AI Risk Atlas | Output → Fairness dimension |