CSET AI Harm Taxonomy

Harms Taxonomies

Classification: AI Risk Assessment & Governance Frameworks

Overview

The CSET AI Harm Taxonomy is a structured framework developed by the Center for Security and Emerging Technology (CSET) to categorize and analyze harms arising from artificial intelligence systems. It organizes AI-related harms into defined types, such as privacy violations, discrimination, security breaches, and societal disruptions, giving stakeholders a common language for assessing and mitigating risk. The taxonomy is valuable for policymakers, developers, and auditors because it enables consistent risk identification, prioritization, and reporting across diverse AI applications. It has limitations, however: it may not capture novel or emergent harms as AI technologies evolve, and some harms overlap or fall into multiple categories, which complicates assessment. Its academic origins also mean it may require adaptation for practical implementation in specific regulatory or industry contexts.
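To make the idea of a shared harm vocabulary concrete, the minimal Python sketch below models a few harm categories as an enumeration and tags an incident with one or more of them. The category names are taken from the examples in this overview rather than from the official CSET category list, and the `Incident` record is a hypothetical structure used for illustration only.

```python
from dataclasses import dataclass, field
from enum import Enum


class HarmCategory(Enum):
    """Example harm categories drawn from this overview; the official
    CSET taxonomy defines its own, more detailed category set."""
    PRIVACY_VIOLATION = "privacy violation"
    DISCRIMINATION = "discrimination"
    SECURITY_BREACH = "security breach"
    SOCIETAL_DISRUPTION = "societal disruption"


@dataclass
class Incident:
    """Hypothetical incident record tagged with taxonomy categories.

    A single incident may carry multiple tags, reflecting the
    overlapping-categories limitation noted above."""
    description: str
    harm_categories: set[HarmCategory] = field(default_factory=set)


incident = Incident(
    description="Model leaked training data containing personal records",
    harm_categories={HarmCategory.PRIVACY_VIOLATION,
                     HarmCategory.SECURITY_BREACH},
)
print(sorted(c.value for c in incident.harm_categories))
```

Tagging the incident with two categories at once also illustrates the overlap limitation: a single event can plausibly count as both a privacy harm and a security harm, and an assessment workflow needs to accommodate that.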

Governance Context

The CSET AI Harm Taxonomy supports compliance with governance frameworks such as the EU AI Act and the NIST AI Risk Management Framework (AI RMF) by mapping AI system risks to specific harm categories, which aids risk assessment and mitigation planning. The EU AI Act, for example, obligates providers to conduct risk assessments and maintain documentation of potential harms; both processes can be structured using the taxonomy's categories. Similarly, the NIST AI RMF requires organizations to identify, assess, and manage risks throughout the AI lifecycle, which can be operationalized by referencing the taxonomy's harm types. The taxonomy also aligns with ISO/IEC 23894:2023, which calls for systematic risk identification and evaluation. Concrete obligations include: (1) conducting and documenting risk assessments that explicitly categorize potential AI harms, and (2) implementing mitigation controls tailored to each identified harm type (e.g., bias audits for discrimination harms, security testing for privacy breaches). Using the taxonomy fosters transparency, accountability, and traceability in AI governance, but organizations must update their harm mappings regularly to keep pace with evolving risks and regulatory requirements.
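As a sketch of how obligations (1) and (2) might be operationalized, the example below pairs each identified risk with a harm category and a category-specific mitigation control, producing a documented assessment record. The control mapping follows the examples in the text (bias audits for discrimination, security testing for privacy breaches); the record structure itself is a hypothetical illustration, not a format prescribed by the EU AI Act, NIST AI RMF, or ISO/IEC 23894.

```python
from dataclasses import dataclass

# Example mitigation controls per harm category, following the examples
# above; a real organization would maintain its own, fuller mapping.
MITIGATION_CONTROLS = {
    "discrimination": "bias audit",
    "privacy violation": "security and privacy testing",
}


@dataclass
class RiskAssessmentEntry:
    """Hypothetical documentation record for obligation (1): a
    documented risk assessment that explicitly categorizes the harm."""
    ai_system: str
    identified_risk: str
    harm_category: str
    mitigation_control: str  # obligation (2): harm-specific control


def assess(ai_system: str, identified_risk: str,
           harm_category: str) -> RiskAssessmentEntry:
    """Pair an identified risk with the control mapped to its category."""
    control = MITIGATION_CONTROLS.get(harm_category, "control to be defined")
    return RiskAssessmentEntry(ai_system, identified_risk,
                               harm_category, control)


entry = assess(
    ai_system="resume-screening model",
    identified_risk="systematically lower scores for a protected group",
    harm_category="discrimination",
)
print(entry)
```

Keeping the category-to-control mapping explicit and versioned is one simple way to satisfy the regular-update requirement noted above: as new harm types or regulatory expectations emerge, the mapping is the single place to revise.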

Ethical & Societal Implications

The CSET AI Harm Taxonomy highlights the ethical imperative to systematically identify and address AI harms, promoting fairness, safety, and accountability. Its use can help prevent discrimination, privacy violations, and unintended societal impacts. However, rigid reliance on predefined categories risks overlooking nuanced or intersectional harms and may inadvertently deprioritize less quantifiable impacts, such as erosion of agency or trust. Ethical governance therefore demands ongoing review and contextual adaptation of harm classifications to ensure comprehensive protection of affected stakeholders.

Key Takeaways

- The taxonomy provides a structured approach to categorizing AI-related harms.
- It supports compliance with major governance frameworks by clarifying risk types.
- Limitations include possible gaps for novel or overlapping harms.
- Practical use requires regular updates to address emerging risks.
- It facilitates communication among stakeholders but must be adapted for specific contexts.
- Concrete obligations include risk assessment documentation and harm-specific mitigation controls.