Systematic comparative analysis of AI rules across the globe

The accelerating pace of AI regulation demands international coordination. The Digital Policy Alert has documented over 600 regulatory developments targeting AI providers since January 2023. Governments often share regulatory objectives, such as safety and transparency, which enables international alignment under the OECD AI Principles. Translating that alignment into national AI rules, however, requires a common factual base. This report provides a comprehensive comparative analysis of AI rules across the globe.

Key Findings

AI rules diverge across three layers:

  1. Prioritisation of OECD AI Principles: Governments prioritise different principles, such as accountability and fairness.
  2. Regulatory requirements: Governments employ different regulatory requirements to implement the same principles, creating a patchwork of regulations.
  3. Granular differences: Even where governments use the same regulatory requirements, detailed differences remain, affecting interoperability.

Opportunities and Challenges

Divergence in AI rules presents both opportunities and challenges:

  • Learning from diversity: Governments can learn from each other's approaches to develop effective regulations.
  • Risk of fragmentation: Without coordination, a fragmented regulatory landscape can emerge, mirroring the current patchwork of data transfer rules.

Call for International Collaboration

The report advocates proactive international collaboration and further research into how technological innovations affect society. Designing a sustainable and ethical future for AI requires:

  • Child-centred regulations
  • Educating all stakeholders, including regulators, developers, parents, educators, and children, about responsible AI use

By providing a common language and detailed analysis, this report aims to facilitate international alignment on AI rules, helping governments learn from one another and develop effective, interoperable regulations.
