
Risk Scoring Formula

Overview

The Risk Scoring Formula is a foundational concept in risk management, typically expressed as Risk = Severity × Probability. This formula quantifies risk by assessing how severe the impact of an adverse event could be and how likely it is to occur. In AI governance, this approach is used to prioritize risks associated with AI systems, such as data breaches, model failures, or ethical harms. While the formula provides a structured way to compare and rank risks, its simplicity can be a limitation: both severity and probability are often subjective, context-dependent, and hard to estimate accurately, especially for novel or complex AI risks. The formula also may not capture interdependencies or cascading effects, and it can underweight rare but catastrophic events. Despite these nuances, the risk scoring formula remains a widely used tool for initial risk screening and resource allocation.
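To make the formula concrete, here is a minimal sketch in Python. It assumes ordinal 1–5 scales for severity and probability and illustrative screening thresholds; neither the scales nor the band cutoffs come from the source and real frameworks define their own.

```python
# Risk = Severity x Probability, sketched on assumed 1-5 ordinal scales.
# Band thresholds below are illustrative, not prescribed by any framework.

def risk_score(severity: int, probability: int) -> int:
    """Multiply severity by probability; max score is 25 on a 1-5 scale."""
    if not (1 <= severity <= 5 and 1 <= probability <= 5):
        raise ValueError("severity and probability must be in 1..5")
    return severity * probability

def risk_band(score: int) -> str:
    """Map a raw score to a coarse screening band for prioritization."""
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Example: a data breach judged severe (5) but unlikely (2) scores 10.
score = risk_score(5, 2)
print(score, risk_band(score))  # 10 medium
```

Note how the banding step exposes one of the limitations mentioned above: a rare but catastrophic event (severity 5, probability 1) scores only 5 and lands in the "low" band, which is why such scores are usually paired with qualitative review.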

Governance Context

The risk scoring formula is embedded in several AI and information security governance frameworks. For example, the NIST AI Risk Management Framework (AI RMF), a voluntary framework, calls on organizations to assess and prioritize risks using systematic approaches, often involving risk scoring, to inform mitigation strategies. Similarly, ISO/IEC 27005:2018 on information security risk management recommends quantifying risks by evaluating both the potential impact (severity) and likelihood (probability) of threats. Concrete obligations include documenting risk assessment methodologies and maintaining evidence of periodic risk evaluations. Controls may require organizations to justify risk scores, especially for high-impact AI applications, and to reassess risks when systems are updated or when new threat intelligence emerges. Additional obligations can include ensuring that risk scores are reviewed and approved by relevant governance bodies and that any changes to severity or probability inputs are traceable and auditable.

Ethical & Societal Implications

Using the risk scoring formula in AI governance can help prioritize resources and interventions for the most pressing risks, supporting responsible AI deployment. However, if severity or probability is underestimated, significant ethical harms, such as discrimination, safety incidents, or privacy violations, may go unaddressed. Over-reliance on quantitative scores can obscure qualitative factors, such as societal values or stakeholder perspectives, and may lead to a false sense of security. Transparent documentation and inclusive risk assessment processes are essential to mitigate these concerns. There is also a risk that organizations may manipulate inputs to downplay certain risks, underscoring the need for independent review and stakeholder engagement.

Key Takeaways

- The risk scoring formula is a core tool for prioritizing AI risks.
- Severity and probability estimates are often subjective and context-dependent.
- Frameworks such as the NIST AI RMF and ISO/IEC 27005 call for systematic risk scoring.
- Limitations include difficulty capturing rare events and interdependencies.
- Ethical use requires transparency, stakeholder input, and periodic review.
- Documentation and justification of risk scores are key governance obligations.
- Risk scoring should be complemented by qualitative assessments and scenario analysis.
