
Amending Existing Laws

Regulation Approaches

Classification: Legal and Regulatory Frameworks

Overview

Amending existing laws refers to the process by which governments and regulatory bodies update or revise established legal frameworks, such as privacy, consumer protection, or sector-specific regulations, to address new challenges posed by artificial intelligence (AI) technologies. This approach builds on pre-existing statutes, adapting definitions, compliance requirements, enforcement mechanisms, and penalties to account for AI-specific risks and opportunities. For example, older data protection laws may be revised to clarify how automated decision-making or profiling by AI systems is regulated. While this strategy can be more efficient than drafting entirely new legislation, it has limitations: legacy laws may lack the conceptual flexibility to address the distinctive features of AI, and amendments can produce patchwork compliance obligations or regulatory uncertainty if they are not carefully harmonized. The process can also be slow, creating a risk that regulation lags behind technological advances.

Governance Context

In AI governance, amending existing laws is a pragmatic approach adopted by several jurisdictions. For instance, the European Union's General Data Protection Regulation (GDPR) has been interpreted, and supplemented by national legislation, to address automated decision-making and AI-driven profiling. Brazil's General Data Protection Law (LGPD) was amended to clarify obligations around data subject rights in the context of AI. Concrete obligations arising from such amendments include: (1) mandatory impact assessments for high-risk AI applications (such as the Data Protection Impact Assessments required by GDPR Article 35), (2) enhanced transparency and explainability requirements for automated decisions affecting individuals (as seen in updates to the California Consumer Privacy Act), and (3) requirements for explicit consent when AI processes sensitive data. These amendments often require organizations to update their compliance programs, retrain staff, implement technical and organizational controls tailored to AI use cases, and regularly review AI deployments for legal compliance.
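
As a purely illustrative sketch (not legal guidance), the following hypothetical Python snippet shows one way an organization might encode a first-pass screening step that flags AI use cases potentially requiring an impact assessment. The AIUseCase fields, the requires_impact_assessment function, and the screening criteria are assumptions chosen for illustration; they do not reproduce any statutory test.

```python
# Hypothetical screening sketch: flag AI use cases that may warrant a
# DPIA-style impact assessment. Criteria names are illustrative only.
from dataclasses import dataclass


@dataclass
class AIUseCase:
    name: str
    automated_decision_making: bool   # decisions with legal or similarly significant effects
    processes_sensitive_data: bool    # e.g. health, biometric, or other special-category data
    large_scale_profiling: bool       # systematic profiling of individuals at scale


def requires_impact_assessment(use_case: AIUseCase) -> bool:
    """Return True if any high-risk criterion applies (conservative default)."""
    return any([
        use_case.automated_decision_making,
        use_case.processes_sensitive_data,
        use_case.large_scale_profiling,
    ])


if __name__ == "__main__":
    screening = AIUseCase(
        name="credit-scoring model",
        automated_decision_making=True,
        processes_sensitive_data=False,
        large_scale_profiling=True,
    )
    if requires_impact_assessment(screening):
        print(f"{screening.name}: schedule an impact assessment and document the outcome")
    else:
        print(f"{screening.name}: record the screening decision for audit purposes")
```

In practice, such a screening step would feed into a documented review by legal and privacy teams rather than determine compliance on its own; the value of the sketch is simply to show how amended obligations can be translated into repeatable internal controls.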

Ethical & Societal Implications

Amending existing laws to include AI can enhance protections for individuals by ensuring established rights (e.g., privacy, non-discrimination) are upheld in new technological contexts. However, if amendments are too narrow or poorly integrated, they may fail to address systemic risks such as algorithmic bias, lack of transparency, or new forms of harm. There is also a risk that rapid technological change will outpace legal updates, resulting in regulatory gaps or unintended consequences. Society may benefit from continuity and familiarity, but must remain vigilant against complacency and the erosion of rights through incremental or insufficient reform. Stakeholder engagement and public consultation are crucial to ensure amendments are effective and equitable.

Key Takeaways

- Amending existing laws is a common, pragmatic approach to AI regulation.
- This method builds on established legal frameworks but may introduce complexity or uncertainty.
- Concrete obligations often include transparency, impact assessments, and explainability for AI systems.
- Amendments must be carefully crafted to avoid regulatory gaps and ensure effective oversight.
- Limitations include potential incompatibility with AI-specific risks and the risk of regulatory lag.
- Organizations must adapt compliance programs and staff training to meet new obligations.
- Stakeholder engagement is vital to ensure amendments are practical and protective.
