
Risk-based vs Rights-based

AI Governance Frameworks and Principles

Overview

The distinction between risk-based and rights-based approaches is fundamental in AI governance. A risk-based approach classifies AI systems according to their potential risks to individuals, society, or organizations, and tailors regulatory requirements accordingly. This method aims for proportionality, concentrating regulatory resources on the most hazardous applications. In contrast, a rights-based approach centers on upholding fundamental human rights, such as privacy, equality, and freedom from discrimination, regardless of the assessed risk level. While the risk-based model offers flexibility and scalability, it may under-protect rights in scenarios classified as low-risk or where risks are hard to foresee. The rights-based model ensures strong protection but can be rigid, potentially stifling innovation or imposing broad restrictions. The two approaches can complement one another, but they conflict wherever risk management is prioritized over non-negotiable rights, or vice versa, which makes their integration a nuanced challenge.
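
To make the structural difference concrete, the following minimal Python sketch contrasts the two models. All tier names and obligation labels are illustrative simplifications of the ideas above, not any statute's actual categories or duties: the risk-based path returns obligations proportional to an assessed tier, while the rights-based path returns the same checks unconditionally.

    from enum import Enum

    class RiskTier(Enum):
        MINIMAL = 1
        LIMITED = 2
        HIGH = 3
        UNACCEPTABLE = 4

    # Risk-based model: duties scale with the assessed tier (labels are hypothetical).
    RISK_BASED_OBLIGATIONS = {
        RiskTier.MINIMAL: [],
        RiskTier.LIMITED: ["transparency_notice"],
        RiskTier.HIGH: ["conformity_assessment", "post_market_monitoring",
                        "incident_reporting"],
        RiskTier.UNACCEPTABLE: ["prohibition"],
    }

    # Rights-based model: the same checks apply to every system, whatever its tier.
    RIGHTS_BASED_OBLIGATIONS = [
        "human_rights_impact_assessment",
        "non_discrimination_review",
        "privacy_safeguards",
        "redress_mechanism",
    ]

    def required_obligations(tier, rights_based):
        """Return the duties a system must satisfy under the chosen model."""
        if rights_based:
            return list(RIGHTS_BASED_OBLIGATIONS)   # unconditional
        return list(RISK_BASED_OBLIGATIONS[tier])   # proportional to risk

    # A minimal-risk system exposes the gap described above:
    # the risk-based list is empty, the rights-based list is not.
    print(required_obligations(RiskTier.MINIMAL, rights_based=False))  # []
    print(required_obligations(RiskTier.MINIMAL, rights_based=True))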

Governance Context

In practice, the EU AI Act exemplifies a risk-based approach by categorizing AI systems into unacceptable, high, limited, and minimal risk, imposing stricter controls on higher-risk categories. Obligations include mandatory conformity assessments for high-risk systems and transparency requirements for limited-risk systems. Meanwhile, the UNESCO Recommendation on the Ethics of Artificial Intelligence illustrates a rights-based framework, mandating respect for human dignity, non-discrimination, and privacy throughout the AI lifecycle. It requires impact assessments and redress mechanisms to protect individual rights. Both frameworks impose concrete obligations: the EU AI Act mandates post-market monitoring and incident reporting, while UNESCO calls for participatory governance and inclusive stakeholder engagement. The interplay between these approaches is evident in national AI strategies, which often need to reconcile risk classification with non-negotiable rights protections.
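
Reading the two frameworks together suggests the integration pattern just mentioned: a non-negotiable rights floor combined with tier-dependent duties. The sketch below expresses that as a simple set union; the tier names mirror the EU AI Act's four categories, but the duty labels are hypothetical shorthand rather than statutory text.

    # Tier-dependent duties, keyed by the EU AI Act's four risk categories.
    TIER_DUTIES = {
        "minimal": set(),
        "limited": {"transparency_notice"},
        "high": {"conformity_assessment", "post_market_monitoring",
                 "incident_reporting"},
        "unacceptable": {"prohibition"},
    }

    # Rights-based floor applied to every system, whatever its tier
    # (echoing the UNESCO Recommendation's lifecycle-wide requirements).
    RIGHTS_BASELINE = {"impact_assessment", "non_discrimination_review",
                       "redress_mechanism"}

    def combined_duties(tier):
        """Union of the non-negotiable rights floor and the tier's risk-based duties."""
        return RIGHTS_BASELINE | TIER_DUTIES[tier]

    # The floor holds even at minimal risk, reconciling the two approaches.
    assert "redress_mechanism" in combined_duties("minimal")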

Ethical & Societal Implications

The choice between risk-based and rights-based approaches has significant ethical and societal implications. A risk-based model may leave gaps in rights protection, especially where risks are underestimated or emerge only over time. Conversely, a rights-based model ensures robust protection but may hinder beneficial innovation or create legal uncertainty. Balancing the two is essential for fair, inclusive, and trustworthy AI systems that respect individual autonomy while enabling societal progress. At the societal level, an overemphasis on measurable risk may marginalize vulnerable groups whose harms are systematically underestimated in risk assessments, while an inflexible rights-based stance could slow technological adoption in areas of broad social benefit.

Key Takeaways

- Risk-based approaches tailor regulatory requirements to the level of risk posed by AI systems.
- Rights-based approaches prioritize the protection of fundamental human rights in all AI applications.
- The EU AI Act and the UNESCO Recommendation exemplify risk-based and rights-based frameworks, respectively.
- Exclusive reliance on one approach can result in under- or over-regulation.
- Effective AI governance often requires integrating both approaches to address nuanced risks and rights.
- Concrete obligations include conformity assessments, transparency, impact assessments, and redress mechanisms.
- The balance between flexibility and rights protection is central to responsible AI policy.
