AI Watch: Global regulatory tracker – OECD: The OECD's Recommendation on Artificial Intelligence encourages Member States to uphold principles of trustworthy AI. – JD Supra


White & Case LLP
The OECD Council’s Recommendation on Artificial Intelligence1 (the “Recommendation”), adopted by 49 adherents2 as of April 2026 (the “Adherents”), contains:
The OECD’s AI Principles (the “Principles”), which were the first intergovernmental standard on AI and formed the basis for the G20’s AI Principles; and
On February 28, 2025, the OECD published a policy paper entitled, “Towards a common reporting framework for AI incidents”, which proposes a common framework for AI incident reporting. The proposed framework consists of a detailed set of criteria for reporting AI incidents (e.g., “description of incident”, “date of first known occurrence”, “severity”, and “harm type”). The OECD considers that these criteria “summarize the information needed to understand an AI incident”, while recognizing that additional criteria may be necessary to align with specific reporting.5 It remains to be seen whether governments will adopt these criteria.
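To make the proposed reporting criteria concrete, the sketch below models an incident record around the criteria the policy paper names (“description of incident”, “date of first known occurrence”, “severity”, and “harm type”), plus a slot for the additional criteria the OECD anticipates jurisdictions may require. This is purely illustrative: the field names, types, severity scale, and `to_record` helper are assumptions, not part of the OECD framework, which does not prescribe a data format.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Severity(Enum):
    # Illustrative three-level scale; the OECD paper names "severity" as a
    # criterion but does not prescribe specific values.
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AIIncidentReport:
    """Hypothetical record mirroring criteria named in the OECD policy paper."""
    description: str              # "description of incident"
    first_known_occurrence: date  # "date of first known occurrence"
    severity: Severity            # "severity"
    harm_types: list = field(default_factory=list)      # "harm type"
    extra_criteria: dict = field(default_factory=dict)  # jurisdiction-specific additions

    def to_record(self) -> dict:
        """Flatten to a plain dict, e.g. for submission as JSON."""
        return {
            "description": self.description,
            "date_of_first_known_occurrence": self.first_known_occurrence.isoformat(),
            "severity": self.severity.value,
            "harm_types": self.harm_types,
            **self.extra_criteria,
        }


report = AIIncidentReport(
    description="Chatbot produced defamatory statements about a private individual",
    first_known_occurrence=date(2025, 3, 1),
    severity=Severity.MEDIUM,
    harm_types=["reputational"],
)
print(report.to_record()["severity"])  # prints "medium"
```

The open `extra_criteria` mapping reflects the paper’s point that the common criteria summarize the minimum information needed, while specific reporting regimes may layer additional fields on top.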
On May 2, 2025, the OECD published a report entitled, “The Adoption of Artificial Intelligence in Firms”, which offers a detailed analysis of AI uptake across businesses in the G7 and Brazil. The report indicates that 83% of responding businesses expressed a desire to receive more information regarding current and forthcoming regulations around data or AI, or on expected returns from investment in AI. Many businesses also indicated that specific policies, including tax incentives, partnerships with educational institutions, and public sector initiatives, would help strengthen AI uptake. The report concludes with recommendations for governments and policymakers to facilitate the continued adoption of AI.6
On February 19, 2026, the OECD published Due Diligence Guidance for Responsible AI. This Guidance aims to support enterprises in implementing the OECD Guidelines for Multinational Enterprises and the Recommendation. Chapter 1 introduces the Responsible Business Conduct (RBC) due diligence concept together with a six-step framework: (i) embedding RBC into policies and management systems; (ii) identifying and assessing actual and potential adverse impacts; (iii) ceasing, preventing, and mitigating adverse impacts; (iv) tracking implementation and results of due diligence activities; (v) communicating actions taken to address impacts; and (vi) providing for or cooperating in remediation when appropriate. Chapter 2 lays out the RBC due diligence framework and provides practical implementation examples for enterprises involved in the development and use of AI systems.7
The OECD’s Expert Group on AI Futures explores potential AI impacts to guide policymakers in crafting forward-looking policies.4 Key benefits identified include accelerating scientific progress, improving economic growth, reducing inequality, enhancing decision-making, and empowering citizens. However, the group also highlights significant risks, such as cyber threats, disinformation, AI safety lapses, concentration of power, and privacy violations. Its report suggests ten policy priorities, including establishing clear AI liability rules, restricting harmful AI uses, ensuring AI transparency, and promoting international cooperation to manage competitive race dynamics. Governments are urged to implement these strategies to maximize AI’s benefits and mitigate its risks; ongoing initiatives indicate progress, but the report emphasizes the need for more concrete action. Nevertheless, it remains unclear whether such urgings will be sufficient to stem the divergence of AI regulatory approaches from one jurisdiction to the next.
In July 2024, the Global Partnership on Artificial Intelligence (GPAI) and the OECD joined forces to create an integrated partnership on AI under the GPAI brand. The integrated partnership aims to “advance an ambitious agenda for implementing human-centric, safe, secure and trustworthy [AI]”.8 Prospective GPAI members must satisfy the following criteria:
The GPAI currently includes 46 member countries and the European Union.9
The Adherents have agreed to promote, implement, and adhere to the Recommendation. The Principles contribute to other AI initiatives, such as the G7’s Hiroshima AI Process Comprehensive Policy Framework (including the International Guiding Principles on AI for Organizations Developing Advanced AI Systems and the International Code of Conduct for Organizations Developing Advanced AI Systems).
While certain OECD instruments can be legally binding on members, most are not. However, OECD recommendations (including the Recommendation) represent a political commitment to the principles they contain and entail an expectation that Adherents will endeavor to implement them.10 Separately, a non-exhaustive list of OECD guidance that does not directly seek to regulate AI, but that may affect the development or use of AI, includes:
The OECD’s definition of “AI system” was revised on November 8, 2023 to ensure that it continues to accurately reflect technological developments, including with respect to generative AI.11 AI is defined in the Recommendation using the following terms:
The 49 Adherents (who are expected to promote and implement the Recommendation – see above) include OECD members and non-members, and the European Union.
Adherents implementing the Recommendation would place specific obligations on AI actors. However, the term “AI actors” is not defined in the Recommendation by reference to territory.
The Recommendation is not sector-specific. As discussed above, Adherents are expected to promote and implement the Recommendation and, in doing so, to place specific obligations on AI actors. However, the term “AI actors” is not defined in the Recommendation by reference to sector.
Adherents are expected to comply with the Recommendation, although the Recommendation does not explicitly govern compliance or regulatory oversight. Certain Principles relating to human-centered values and fairness, transparency, and accountability are applicable to AI actors. Whether and to what extent AI actors must comply with the Principles depends on the relevant Adherent state’s approach to implementation.
The OECD’s AI instruments are intended to help shape a stable policy environment at the international level that promotes a human-centric approach to trustworthy AI, fosters research, and preserves economic incentives to innovate.12
AI is not categorized according to risk in the Recommendation.
In order to promote a stable policy environment with regard to AI risk frameworks, the OECD has stated that it intends to analyze the criteria that should be included in a risk assessment and how to best aggregate such criteria, taking into account that different criteria may be interdependent.13
The Adherents are expected to promote and implement the following five Principles:14
The Adherents are also expected to promote and implement the following five recommendations:15
The OECD does not regulate the implementation of the Recommendation, although it does monitor and analyze information relating to AI initiatives through its AI Policy Observatory. The AI Policy Observatory includes a live database of AI strategies, policies and initiatives that countries and other stakeholders can share and update, enabling the comparison of their key elements in an interactive manner. It is continuously updated with AI metrics, measurements, policies and good practices that lead to further updates in the practical guidance for implementation.16
The Recommendation does not stipulate how Adherents should regulate the implementation of the Principles in their own jurisdictions.
As the Recommendation is not legally binding, it does not confer enforcement powers or give rise to any penalties for non-compliance. The OECD relies on Adherents to implement the Recommendation and enforce the Principles in their own jurisdictions.
1 Read the OECD’s “Recommendation of the Council on Artificial Intelligence” here. The Recommendation was first adopted in May 2019 and amended in May 2024.
2 OECD Members: Australia, Austria, Belgium, Canada, Chile, Colombia, Costa Rica, Czechia, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Israel, Italy, Japan, Korea, Latvia, Lithuania, Luxembourg, Mexico, Netherlands, New Zealand, Norway, Poland, Portugal, Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Republic of Türkiye, United Kingdom, United States; and Non-Members: Argentina, Brazil, Egypt, Malta, Peru, Romania, Saudi Arabia, Singapore, Ukraine and Uruguay; and Other: European Union.
3 Read about the Principles here.
4 See OECD policy paper “Assessing future AI risks, benefits, and policy imperatives” here.
5 See OECD policy paper “Towards a common reporting framework for AI incidents” here.
6 See the OECD report “The Adoption of Artificial Intelligence in Firms” here.
7 See OECD Due Diligence Guidance for Responsible AI here.
8 See an overview of the GPAI here.
9 See a description of the GPAI membership here.
10 “Decisions are adopted by the Council and are legally binding on all Members except those which abstain [whereas] Recommendations are adopted by the Council and are not legally binding [but do] represent a political commitment to the principles they contain and entail an expectation that Adherents will do their best to implement them.” See the OECD Legal Framework here.
11 Read the Recommendation here.
12 “RECOGNISING that given the rapid development and implementation of AI, there is a need for a stable policy environment that promotes a human-centric approach to trustworthy AI, that fosters research, preserves economic incentives to innovate, and that applies to all stakeholders according to their role and the context.” See the Recommendation, ‘Introduction’, here.
13 “The OECD Experts Working Group, with members from across sectors and professions, plans to conduct further analysis of the criteria to include in a risk assessment and how best to aggregate these criteria, taking into account that different criteria may be interdependent.” See the “OECD Framework for the Classification of AI systems” here, pg.67.
14 See the Recommendation, Section 1 (1.1 – 1.5), here.
15 See the Recommendation, Section 2 (2.1 – 2.5), here.
16 See the OECD’s Policy Observatory here.
DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.
© White & Case LLP