Two Principles (Human-centric; Explainable, Transparent, Fair)

AI Ethics

Classification: AI Ethics and Governance Principles

Overview

The 'Two Principles' (Human-centric; Explainable, Transparent, Fair) serve as foundational guidelines for the governance of artificial intelligence systems. The Human-centric principle emphasizes that AI should be designed, developed, and deployed in ways that prioritize human well-being, dignity, and autonomy. The Explainable, Transparent, Fair principle requires that AI decisions and processes be understandable to stakeholders, open to scrutiny, and free from unjust bias. These principles underpin frameworks such as the Monetary Authority of Singapore's (MAS) FEAT framework, which operationalizes fairness, ethics, accountability, and transparency in financial AI applications. A limitation is that the principles are high-level and may be interpreted differently across contexts, which complicates enforcement and measurement. Trade-offs may also arise, such as balancing transparency against the protection of proprietary information or security.

Governance Context

In practice, the 'Two Principles' are embedded in regulatory and voluntary frameworks. For example, the MAS FEAT framework requires financial institutions to demonstrate fairness by conducting bias assessments and human-centricity by involving human oversight in critical AI decisions. The EU AI Act mandates transparency obligations such as documentation of AI system logic and user information disclosures, while also requiring risk assessments to ensure fairness and human-centricity. Concrete obligations and controls include:

1) Regular bias audits to identify and mitigate discriminatory outcomes;

2) Explainability documentation that details how AI decisions are made and can be communicated to stakeholders;

3) Mechanisms for human intervention, such as the ability for a human to override or review automated decisions; and

4) Maintenance of detailed records for transparency and accountability.

Such obligations ensure that organizations are accountable for both the design and outcomes of their AI systems, but they also introduce complexity in implementation, especially when technical explainability is limited.
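As a minimal illustration of what the first obligation (a bias audit) can look like in code, the sketch below computes a demographic parity gap: the difference in positive-decision rates between groups. All names, data, and the tolerance threshold here are hypothetical, for illustration only; real audits use richer metrics and governed datasets.

```python
def selection_rate(outcomes):
    """Fraction of positive (e.g., approved) decisions in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(outcomes_by_group):
    """Largest gap in selection rates across groups; 0 means parity."""
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical audit data: 1 = approved, 0 = declined
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # rate 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # rate 0.375
}

gap = demographic_parity_difference(decisions)
THRESHOLD = 0.2  # illustrative tolerance set by governance policy
print(f"parity gap = {gap:.3f}; flagged for review = {gap > THRESHOLD}")
```

A governance process would typically run such a check periodically, log the result as part of the record-keeping obligation, and escalate flagged gaps to human reviewers, connecting the bias-audit control to the human-intervention and documentation controls above.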

Ethical & Societal Implications

The Two Principles address critical ethical concerns such as bias, discrimination, and loss of human agency in AI-driven decision-making. Ensuring explainability and transparency helps build public trust and facilitates accountability, while human-centricity safeguards individual rights and well-being. However, operationalizing these principles can be technically challenging, particularly for complex models, and may require trade-offs with innovation or efficiency. Societally, failure to adhere to these principles can result in exclusion, injustice, or erosion of democratic values. Additionally, the need for explainability may slow down deployment of advanced AI models, and strict transparency requirements may expose sensitive business information or create new security risks.

Key Takeaways

The Two Principles underpin many AI governance frameworks globally.
Human-centricity ensures AI serves human interests and upholds dignity.
Explainability, transparency, and fairness are essential for trust and accountability.
Implementation challenges include subjective interpretation and technical limitations.
Effective controls include bias audits, explainability documentation, and human oversight.
Regulatory frameworks like the EU AI Act and MAS FEAT operationalize these principles.
Trade-offs may be necessary between transparency and proprietary or security concerns.