
Veritas Framework

Singapore Framework

Classification

AI Governance, Risk & Compliance

Overview

The Veritas Framework is a governance toolkit developed in Singapore to guide the responsible adoption of artificial intelligence (AI) and machine learning (ML) solutions in financial institutions. Its primary focus is to operationalize the FEAT principles (Fairness, Ethics, Accountability, and Transparency) by providing structured processes, assessment methodologies, and documentation templates. The framework supports self-assessment and documentation of an AI system's alignment with these principles, promoting responsible AI use in high-stakes domains such as finance. While Veritas offers practical guidance, it is tailored to the regulatory and business context of Singapore's financial sector, which may limit its direct applicability elsewhere. The framework also relies heavily on self-assessment, which can introduce subjectivity or inconsistent interpretation. Nevertheless, Veritas is a pioneering model for sector-specific AI governance and is cited globally as an example of applied ethical AI, and it has prompted other regions and sectors to consider similar operational approaches to responsible AI adoption.
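As a rough illustration of what such self-assessment documentation might look like in practice, the sketch below defines a hypothetical assessment record in Python. The class and field names (FeatAssessmentRecord, fairness_findings, and so on) are assumptions for illustration only; Veritas defines its own documentation templates and terminology.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class FeatAssessmentRecord:
    """Hypothetical self-assessment record for one AI/ML system.

    Field names are illustrative; the actual Veritas documentation
    templates define their own structure and terminology.
    """
    system_name: str
    use_case: str
    assessment_date: date
    fairness_findings: list[str] = field(default_factory=list)
    ethics_findings: list[str] = field(default_factory=list)
    accountability_owner: str = ""
    transparency_notes: str = ""

    def to_json(self) -> str:
        # Serialize the record so it can be filed for audit or review.
        record = asdict(self)
        record["assessment_date"] = self.assessment_date.isoformat()
        return json.dumps(record, indent=2)

# Example usage: documenting a (fictional) credit-scoring model assessment.
record = FeatAssessmentRecord(
    system_name="credit-scoring-v2",
    use_case="Retail loan approval",
    assessment_date=date(2024, 6, 1),
    fairness_findings=["Approval-rate gap between groups within tolerance"],
    ethics_findings=["No prohibited attributes used as direct model inputs"],
    accountability_owner="Head of Retail Credit Risk",
    transparency_notes="Reason codes surfaced to applicants on rejection",
)
print(record.to_json())
```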

Governance Context

Within the governance landscape, the Veritas Framework serves as a practical extension of Singapore's Model AI Governance Framework (MGF) and aligns with the Monetary Authority of Singapore's (MAS) expectations for responsible AI in financial services. Two concrete obligations are: (1) conducting regular fairness assessments using prescribed metrics and documentation, as required by MAS guidelines; and (2) maintaining explainability records for AI-driven decisions to support auditability and regulatory review. Organizations are also expected to establish AI governance committees to oversee implementation and ensure accountability, and to integrate risk controls, such as bias detection and mitigation steps, into the model development lifecycle. These obligations echo controls found in international frameworks such as the EU AI Act (e.g., transparency and human oversight) and the OECD AI Principles (e.g., accountability and robustness), emphasizing the translation of high-level AI ethics principles into actionable controls. Veritas also requires ongoing monitoring and review of deployed AI systems to ensure sustained compliance.
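To make the fairness-assessment obligation concrete, the following Python sketch computes two generic group-fairness metrics, demographic parity difference and equal opportunity difference, on binary predictions. These are common illustrative metrics, not necessarily the ones prescribed by Veritas or MAS; the function names and toy data are assumptions.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-outcome rates between two groups (coded 0 and 1)."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def equal_opportunity_difference(y_true: np.ndarray, y_pred: np.ndarray,
                                 group: np.ndarray) -> float:
    """Absolute gap in true-positive rates (recall) between the two groups."""
    tpr = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)
        tpr.append(y_pred[mask].mean())
    return abs(tpr[0] - tpr[1])

# Toy example: binary loan-approval predictions for two demographic groups.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(f"Demographic parity difference: {demographic_parity_difference(y_pred, group):.2f}")
print(f"Equal opportunity difference:  {equal_opportunity_difference(y_true, y_pred, group):.2f}")
```

In a Veritas-style assessment, metric values like these would typically be recorded alongside the chosen thresholds and the rationale for accepting or mitigating any observed gaps.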

Ethical & Societal Implications

The Veritas Framework aims to mitigate ethical risks such as discrimination, lack of accountability, and opacity in AI systems, especially in sectors where decisions can significantly affect individuals' financial well-being. By embedding FEAT principles, it promotes social trust and responsible innovation. However, overreliance on self-assessment and limited external validation may allow ethical blind spots or inconsistent application. Additionally, the framework's sector-specific design may overlook broader societal impacts outside finance, underlining the need for adaptation and oversight when applied in other domains. There is also a risk that organizations may prioritize compliance over genuine ethical reflection, potentially missing emerging societal concerns.

Key Takeaways

- Veritas is a sector-specific framework for operationalizing ethical AI in finance.
- It translates the FEAT principles (Fairness, Ethics, Accountability, Transparency) into actionable processes.
- Documentation and regular fairness and explainability assessments are central to Veritas compliance.
- Self-assessment provides flexibility but risks bias or inconsistent application without external review.
- Veritas aligns with international AI governance trends but is tailored to Singapore's regulatory context.
- Ongoing monitoring and governance structures are required for sustained compliance.
- The framework is a reference point for sectoral AI governance globally.
