Version 0.3.0
A unified AI risk taxonomy for use across Eticas audit methodologies, assessment frameworks, and reporting outputs.
Bias & Discrimination — status: established
The risk that an AI system produces outcomes that systematically advantage or disadvantage individuals or groups based on protected or sensitive attributes, leading to unequal treatment, reduced accuracy, or unjust impacts.
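One facet of this risk, unequal treatment across groups, can be made concrete with a simple disparity measure. The sketch below is illustrative only (it is not an Eticas method, and all names are hypothetical): it computes the demographic parity gap, i.e. the largest difference in favourable-outcome rates between groups.

```python
# Illustrative sketch (hypothetical names, not an Eticas method):
# measuring one facet of bias risk, the gap in favourable-outcome
# rates between groups (demographic parity difference).

from collections import defaultdict

def selection_rates(outcomes, groups):
    """Favourable-outcome rate per group, given parallel lists of
    binary outcomes (1 = favourable) and group labels."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for outcome, group in zip(outcomes, groups):
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(outcomes, groups):
    """Largest difference in selection rates across groups;
    0.0 means identical rates, larger values signal disparity."""
    rates = selection_rates(outcomes, groups)
    return max(rates.values()) - min(rates.values())

outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(parity_gap(outcomes, groups))  # group a: 0.75, group b: 0.25 → 0.5
```

In practice a bias audit would combine several such metrics (error-rate gaps, calibration, accuracy per group) rather than rely on a single rate comparison.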
Privacy & Confidentiality — status: established
The risk that an AI system collects, processes, or infers personal information in ways that infringe on individuals’ rights to control their data (privacy), or that sensitive information is exposed, accessed, or shared without authorization (confidentiality).
Accuracy & Reliability — status: established
The risk that an AI system produces false, fabricated, or misleading outputs (hallucinations), spreads inaccurate or deceptive information (misinformation), or delivers inconsistent results across similar inputs and contexts.
Governance & Accountability — status: established
The risk that an AI system lacks adequate structures, policies, or accountability mechanisms to oversee its design, deployment, and use.
Security — status: established
The risk that an AI system is exposed to AI-specific vulnerabilities, attacks, or misuse that compromise its integrity, availability, or confidentiality.
Environmental Impact — status: established
The risk that an AI system’s development, deployment, or use causes negative environmental effects, such as excessive energy or water consumption, carbon emissions, or unsustainable use of hardware and resources.
Transparency & Explainability — status: established
The risk that stakeholders cannot understand how an AI system works, what it does, or why it produces specific outputs.
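Because the taxonomy is meant to be reused across audit methodologies, assessment frameworks, and reporting outputs, tooling benefits from a single machine-readable encoding of it. The sketch below is one hypothetical way to do that; the short identifiers and summaries are illustrative, not official Eticas names.

```python
# Hypothetical sketch: the taxonomy as a small data structure so audit
# tooling can tag findings consistently. Identifiers and summaries are
# illustrative condensations of the definitions above, not official names.

from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    ESTABLISHED = "established"

@dataclass(frozen=True)
class RiskCategory:
    id: str
    summary: str
    status: Status = Status.ESTABLISHED

TAXONOMY = [
    RiskCategory("bias", "Systematic advantage or disadvantage based on protected attributes"),
    RiskCategory("privacy", "Personal data collected, inferred, or exposed without authorization"),
    RiskCategory("reliability", "Hallucinated, misleading, or inconsistent outputs"),
    RiskCategory("governance", "Missing oversight structures, policies, or accountability"),
    RiskCategory("security", "AI-specific vulnerabilities, attacks, or misuse"),
    RiskCategory("environment", "Excessive energy, water, carbon, or hardware use"),
    RiskCategory("transparency", "Stakeholders cannot understand how the system works"),
]
```

A frozen dataclass keeps each entry immutable, so every tool referencing the taxonomy sees the same seven categories; a version bump (here, 0.3.0) would be the only way definitions change.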