Commentary

Small yards, big tents: How to build cooperation on critical international standards

Cameron F. Kerry, Ann R. and Andrew H. Tisch Distinguished Visiting Fellow - Governance Studies, Center for Technology Innovation

March 11, 2024


  • Government can and should take steps to enable greater participation from a wider range of stakeholders, but if the system of standards development for AI and other critical emerging technologies is to remain led by key standards development organizations (SDOs), those organizations will have to do the most to broaden participation.
  • Governments can heighten awareness of standards and participation among their own personnel and the public. Elevating the level of leadership involved in standards-related activities will help increase their visibility.
  • The U.S., EU, China, and other governments and international bodies have recognized a need for international engagement on standards.

Technical standards can be easy to overlook. They are arcane, granular, and full of jargon. But as the world grapples with the risks and opportunities of artificial intelligence (AI) and other emerging technologies, mastering these details will be essential. 

The United States, China, and the European Union (EU), three major players in the global economy, have all recently identified standards as critical to their strategies for AI and other emerging technologies and have propounded strategies to increase their engagement accordingly. This paper analyzes these strategies in the context of the system of international standards development and examines both the need to improve the system to address the broad societal implications of AI and the ways government engagement can undermine the bottom-up, research-driven, and adaptive features that make this system an effective tool of technology policy. The paper concludes with a series of recommendations both for strengthening standards development and for avoiding harm.

At a recent roundtable convened as part of a joint project of The Brookings Institution and the Centre for European Policy Studies (the Forum for Cooperation on AI), participants were asked in which of six international channels cooperation on AI is most needed. More than 40% identified international AI standards as the top priority. Indeed, standards help enhance safety, improve management, enable interoperability for businesses and consumers, and provide coherent frameworks that can improve compliance across national borders and differing systems of law and governance. The World Trade Organization (WTO) recognizes that differing national standards can operate as barriers to trade and encourages adoption of international standards.

The roles of the U.S., EU, and China in international standards

The leading international standards development organizations (SDOs)—especially when it comes to standards for AI—are the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), the International Telecommunication Union (ITU), and the IEEE Standards Association (IEEE). Most notably, ISO and IEC have formed a joint subcommittee, ISO/IEC JTC 1/SC 42, which has issued 17 standards and reports, with 27 more in progress. IEEE adopted one of the first AI ethics frameworks in 2015 and has since developed some 20 standards on AI and machine learning. The broad societal impact of AI changes the nature of standards development; unlike purely technical issues such as the design of a mechanical part or a mobile device chip, AI involves standards that are described as “socio-technical systems,” combining the technical with a range of broader considerations.

These SDOs operate with broad participation and consultation from industry and experts and adopt standards primarily on the basis of voluntary consensus. The resulting standards are not inherently binding, but individual governments can adopt or adapt them into law and regulation. The decisionmakers in these bodies vary: IEEE has members, while ISO and IEC operate through national standards bodies designated by governments, which in some instances are government agencies and in others are independent. In either case, it is usually the stakeholders who lead the development of standards based primarily on their technical soundness, and the main test of a standard’s quality is its adoption in the marketplace.

THE UNITED STATES 

The U.S. has a well-established, industry-led system that relies on stakeholders to establish voluntary consensus standards through a wide variety of mostly sectoral standards development organizations under the umbrella of the American National Standards Institute, an independent nonprofit. Under this approach, the government operates mainly as a convenor and a stakeholder, a role led by the National Institute of Standards and Technology (NIST), which operates adeptly as a proponent of standards development while remaining a facilitator. The White House issued a strategy on standards development for AI and other critical emerging technologies such as quantum, biotechnology, and semiconductors that calls for (1) increased funding for fundamental and pre-standardization research; (2) increased U.S. stakeholder participation in international SDOs; (3) greater workforce skills development; and (4) increased engagement with international partners and greater diversity of interests among stakeholders.

THE EUROPEAN UNION 

The EU has a 2012 regulation on standards development, founded on WTO principles, that relies primarily on voluntary standards developed through a multistakeholder approach. It establishes an EU-wide system by authorizing the European Commission to request the development of “harmonised standards” by recognized European standards organizations; these override any standards developed at the EU member-state level. The European standards organizations can and often do adopt ISO/IEC standards, sometimes with modifications, and member-state standards bodies participate in ISO/IEC. Commission-initiated harmonized standards make up only 20% of standards in the EU, with the balance based on industry proposals in member-state or European bodies.

In 2022, the Commission issued a standards strategy that includes increasing transparency and participation in standards organizations, pre-standardization research, and education on standards. A major element of the strategy is also increasing the Commission’s role in standards development and in international standards bodies, assisted by an advisory body, along with planned legislation to enlarge the Commission’s authority to prescribe “common specifications” in lieu of standards if harmonized standards do not meet Commission requests or the Commission deems them “inadequate.” The Commission’s proposal for the EU Artificial Intelligence Act contained similar authority for AI standards, which was adopted in narrower form in the final agreement released in January 2024.

CHINA 

The PRC has a state-driven, dual-track approach led by the Standardization Administration of China under the State Administration for Market Regulation and the State Council, with various ministries leading in the sectors they administer—subject to “Xi Jinping Thought” and the “comprehensive leadership” of the Communist Party of China.

These agencies organize stakeholders—primarily industry—to conduct the actual work of standards development. China’s State Council outlined a standards strategy in 2021 that, like the U.S. and EU strategies, identified AI along with quantum, biotechnology, and other areas as key areas of focus. Its goals include expanding standardization research and incentives for participation in standards development, with an emphasis on adoption of international standards, involvement in international SDOs, standards partnerships, and engagement on standards through regional organizations, the Belt and Road Initiative to build a network of alliances, and the Brazil-Russia-India-China-South Africa (BRICS) group. In October 2023, China announced a “Global AI Governance Initiative” aimed at “AI governance frameworks, norms and standards based on broad consensus.” These elements reflect a dual strategy: increased stakeholder involvement and international cooperation alongside a China-centered effort to promote standards based on Chinese interests.

Despite their differing approaches, the EU, U.S., and Chinese standards strategies all include international engagement as a component. A number of bilateral and multilateral initiatives have identified standards development as a key area for collaboration. These include the U.S.-EU Trade and Technology Council (TTC); the G7; technology dialogues between the U.S. and Singapore, Singapore and the EU, and the U.S. and the U.K.; and the Quadrilateral Security Dialogue among Australia, India, Japan, and the U.S. The U.K.’s AI Safety Summit in November 2023 spurred the U.K. and U.S., and likely others, to establish research bodies for AI safety, and the New Zealand-U.K. Free Trade Agreement includes reference to industry-led standards for regulation of emerging technology, including AI. The TTC in particular has produced concrete results, with agreement on terminology, comparison of risk assessment approaches, and agreement to develop a code of conduct for AI, which resulted in the G7’s adoption of a code of conduct for AI.

Acknowledgements and disclosures

I am grateful to Mishaela Robison for her research assistance throughout the evolution of this paper. Jack Malamud provided editorial assistance. I am also grateful to my colleagues in the Forum for Cooperation on AI, Joshua Meltzer and Andrew Wyckoff at Brookings, and Andrea Renda and Clément Perarnaud at the Centre for European Policy Studies, for their help and wisdom in exploring the impact of standards development on digital policy. In addition, various participants in FCAI with deep experience in standards development reviewed a draft of this paper. Because FCAI discussions are conducted under the Chatham House rule, they are not identified here, but they should know I am especially grateful for the time and thought they put into helpful comments.

The National Science Foundation is a donor to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the author and not influenced by any donation.