Classification
AI System Types & Capabilities
Overview
Classic AI models, often called discriminative models, are designed to perform specific, well-defined tasks such as classification, regression, or rule-based decision-making. These systems, such as decision trees or logistic regression, behave deterministically: a given input yields a predictable output, and no novel content is generated. In contrast, generative AI models, such as GPT or Stable Diffusion, create new data samples, including text, images, or audio, by learning underlying data distributions. Generative models can produce highly realistic and contextually appropriate outputs, but they also introduce challenges related to unpredictability, data provenance, and control. A key nuance is that while classic models are generally easier to audit and validate, generative models require more sophisticated monitoring and guardrails to prevent misuse or unintended outputs.
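To make the distinction concrete, here is a minimal Python sketch (using NumPy and scikit-learn, chosen purely for illustration). The labeling rule and the Gaussian "generator" are illustrative assumptions, not a real generative architecture; the point is only the contrast between deterministic prediction and stochastic sampling.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Classic (discriminative) model: maps inputs to a fixed label set.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # simple illustrative labeling rule
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.5, 0.5]]))          # same input -> same output, every time

# Generative stand-in: fit a distribution, then *sample* new points.
# A Gaussian fit plays the role of a large generative model here; the
# essential difference is that outputs are drawn stochastically, so
# repeated calls produce novel, never-before-seen results.
mu, sigma = X.mean(axis=0), X.std(axis=0)
print(rng.normal(mu, sigma))              # a new sampled data point
```

The same asymmetry drives the governance gap: the classifier can be validated against a fixed test set, while the sampler's output space can only be monitored, not enumerated.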
Governance Context
Governance of classic AI typically involves controls such as model validation, bias assessment, and clear documentation, grounded in frameworks such as ISO/IEC 22989 (AI Concepts and Terminology) and required by the EU AI Act, which mandates risk assessment and transparency for high-risk systems. For generative AI, additional obligations arise, including content provenance tracking, watermarking (as encouraged by the US Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence), and robust human oversight. Both types must comply with data privacy regulations such as GDPR, but generative systems may also need mechanisms to prevent the creation of harmful or misleading content, reflecting obligations in the NIST AI RMF's 'Govern' and 'Map' functions. Concrete obligations include:

1) Implementing model validation and bias monitoring for classic AI.
2) Enforcing content provenance tracking and watermarking for generative AI (sketched in code after this list).
3) Establishing human-in-the-loop review for high-impact generative outputs.
4) Maintaining audit trails and transparency documentation for both types.
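As one way to operationalize obligations 2 and 4 above, the Python sketch below attaches a provenance record to a generated artifact and appends it to an append-only audit log. The field names, model_id value, and JSONL layout are hypothetical illustrations; production systems would follow an established provenance schema such as C2PA rather than this ad-hoc record.

```python
import hashlib
import json
from datetime import datetime, timezone

def tag_provenance(content: bytes, model_id: str, prompt_hash: str) -> dict:
    """Build a provenance record binding a generated output to its origin."""
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "model_id": model_id,                 # hypothetical identifier
        "prompt_sha256": prompt_hash,         # hash, not raw prompt (privacy)
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "human_reviewed": False,              # flipped after HITL sign-off
    }

def append_audit_log(record: dict, path: str = "audit_log.jsonl") -> None:
    """Append-only JSONL log supporting later transparency audits."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, sort_keys=True) + "\n")

output = b"Model-generated summary text..."
record = tag_provenance(
    output,
    model_id="gen-model-v1",
    prompt_hash=hashlib.sha256(b"user prompt").hexdigest(),
)
append_audit_log(record)
```

Hashing content and prompt, rather than storing them verbatim, keeps the audit trail verifiable while limiting the personal data retained, which helps reconcile the transparency and GDPR obligations above.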
Ethical & Societal Implications
Classic AI systems raise issues of fairness, transparency, and accountability, but their deterministic behavior makes them more amenable to audit and control. Generative AI amplifies concerns around misinformation, intellectual property, and consent, as it can produce convincing but false or unauthorized content. Both types must address data privacy and bias, but generative AI's ability to create new, potentially harmful outputs introduces heightened ethical risks: societal manipulation, erosion of trust, and challenges in attribution. The amplification of deepfakes and synthetic misinformation, in particular, can undermine democratic processes and public safety.
Key Takeaways
- Classic AI provides deterministic, predictable outputs; generative AI creates novel content.
- Generative AI introduces unique governance challenges, including content provenance and misuse risks.
- Both system types must meet data privacy and transparency obligations.
- Auditing generative AI is more complex due to output unpredictability and potential for harm.
- Frameworks such as the EU AI Act and NIST AI RMF address both types, with additional controls aimed at generative models.
- Human oversight and technical safeguards are essential for mitigating generative AI risks.
- Classic AI is easier to validate and explain, while generative AI demands ongoing monitoring and robust guardrails.