The Critical Need for International Cooperation on AI
The Quest to Harness AI's Promise While Avoiding Its Perils Demands Unprecedented Global Cooperation
Artificial intelligence (AI) is advancing at a blistering pace. As systems become more capable and autonomous, their impacts promise to be transformative. But without coordination across borders, AI may exacerbate global risks as much as it helps humanity flourish.
International cooperation is essential to steer AI toward benefits and away from pitfalls. New institutions for global AI governance can set norms, enable technology sharing, build consensus, and accelerate safety research. But what forms could such cooperation take? And how can it overcome the obstacles of competing interests and national security concerns?
I recently read an influential paper on AI governance by researchers at organizations including Google DeepMind, Stanford, and Oxford, with lead author Lewis Ho. Its wide-ranging analysis sheds light on the critical need for global cooperation and the governance models that could fulfill it.
Why International Cooperation is Essential
Most technologies' impacts are confined largely within national borders. But advanced AI systems may be different due to two key characteristics:
1. High Barriers to Development and Utilization
Cutting-edge AI requires massive data sets, computing power, and specialized expertise. This means only a handful of technology giants and elite academic labs sit at the frontier.
"The resources required to develop advanced systems make their development unavailable to many societies," notes Ho.
This concentration of AI progress makes international cooperation essential. Without it, advances may not reflect diverse priorities or benefit humanity equally.
2. Potential for Cross-Border Use and Effects
Many potential beneficial and harmful applications of AI inherently cross borders. Language translation, disinformation campaigns, autonomous cyberweapons, drones, and unmanned vehicles all operate without regard to national boundaries.
As Ho argues, "cross-border access to AI products and cross-border effects of misuse and accidents suggests that national regulation may be ineffective even within states."
This further necessitates international coordination. Otherwise, states may hesitate to regulate domestically due to competitiveness concerns or inability to control external impacts.
Four Functions of Global AI Governance
Given these realities, Ho and colleagues propose four key functions of international AI governance:
1. Spread Beneficial Technology
International collaboration could develop and distribute advanced AI to benefit underserved communities. For example, it could build systems designed specifically for developing-world challenges in healthcare, agriculture, and education.
"A failure to coordinate or harmonize regulation may also slow innovation," notes Ho. International coordination on governance can support global access.
2. Coordinate Regulation
By setting standards and norms, international institutions can steer states toward effective and harmonized governance. This reduces the friction of discordant rules across borders while still allowing regulatory diversity.
"Inconsistent national regulations could slow the development and deployment of AI," says Ho, as companies hesitate to export to inconsistent regimes. Coordination alleviates this.
3. Manage Shared Risks
Global cooperation can also directly address downside risks of misuse and accidents. By supporting safety research, implementing best practices, monitoring high-stakes development, and controlling dangerous inputs, international collaboration bolsters resilience.
"Advanced AI capabilities may create negative global externalities," argues Ho. "International efforts aimed at managing these risks could be worthwhile."
4. Reduce Geopolitical Sources of Risk
Finally, international institutions can mitigate risks arising from AI's geopolitical impacts, such as arms races and widening gaps between national capabilities. For example, incentives to participate in governance regimes or in collective development may alleviate competitive pressures.
As Ho notes, "The significant geopolitical benefits of rapid AI development decreases the likelihood of adequate AI governance without international cooperation."
Four Models for International AI Institutions
To fulfill these governance functions, Ho and colleagues propose four institutional models:
1. Commission on Frontier AI
An intergovernmental commission modeled after the IPCC could build consensus on AI's opportunities, risks, and policy implications through rigorous scientific assessments. Diverse, impartial experts would conduct regular studies and reviews.
"Consensus among an internationally representative group of experts could expand our confidence in responding to technological trends," contends Ho.
However, he notes challenges like politicization and the relative lack of existing research on advanced AI risks. Careful scoping and governance are essential.
2. Advanced AI Governance Organization
A multistakeholder organization could set guidelines and standards for responsible AI development, support implementing them globally, and potentially monitor compliance. It may build on related existing bodies like the International Telecommunication Union.
"Standard setting facilitates widespread adoption by reducing the burden on domestic regulators," explains Ho. Monitoring compliance where feasible also mutually reinforces commitments.
But Ho believes "the rapid and unpredictable nature of frontier AI progress may require more rapid international action" than typical bureaucratic processes allow. The organization's membership and flexibility will be key.
3. Frontier AI Collaborative
This public-private partnership would develop and distribute beneficial AI systems to reach underserved communities. It could draw on the expertise of leading labs to build inclusive technologies and support local capacity building.
"Pooling resources towards these ends could potentially achieve them more quickly and effectively," argues Ho. But a major challenge is "managing the proliferation of dangerous systems" in any distribution of cutting-edge AI.
4. AI Safety Project
Finally, an ambitious international initiative could accelerate AI safety research. It would provide leading researchers access to massive compute, data, and models to collaborate on technical risk mitigation.
Ho argues this could "significantly expand safety research through greater scale, resources and coordination." However, it may divert efforts from industry and face obstacles to sharing proprietary models.
The Promise and Perils of Technology Collaboration
A number of Ho's proposals hinge on collaboration to develop and share advanced AI technology, whether for safety, beneficial applications, or managing geopolitical impacts. This echoes historical efforts at technology cooperation.
The atomic age spawned major initiatives centered on pooling knowledge and potentially hazardous technologies:
The Baruch Plan proposed international control of nuclear technology under a United Nations Atomic Development Authority. Though it failed, it influenced later nonproliferation efforts.
Organizations like CERN and ITER successfully facilitate scientific collaboration on particle colliders and nuclear fusion.
The IAEA operates uranium banks to supply fuel for civilian nuclear programs without spreading enrichment technology.
Technologies like AI have dual-use potential for both benefit and harm, so cooperation that shares general capabilities risks spreading dangerous knowledge. Still, the benefits may outweigh the risks.
Ho believes an ambitious AI collaboration could reduce global tensions: "The existence of a technologically empowered neutral coalition may mitigate the destabilizing effects of an AI race between states."
But he cautions any such effort must carefully restrict membership and exports to manage proliferation dangers. And the dual-use nature of AI may make this intrinsically more difficult than for nuclear technology.
The Critical Role of AI Safety Research
A common thread across Ho's proposals is international cooperation on AI safety research and best practices. Whether conducted directly in a dedicated project or indirectly via a governance organization, improving safety is critical.
"Technical progress on how to increase the reliability of advanced AI systems and protect them from misuse will likely be a priority in AI governance," argues Ho.
AI safety research remains nascent and underfunded relative to its importance. Pooling global expertise and resources could scale its impact dramatically.
Ho believes solutions like tiered access and secure enclaves can enable companies to share proprietary models safely for research: "It may be possible to structure model access and design internal review processes in such a way that meaningfully reduces this risk while ensuring adequate scientific scrutiny."
But he recognizes that accelerating safety R&D without compromising commercial secrets or diverting internal researchers presents tradeoffs. Striking the right balance is key.
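As a concrete illustration of what tiered access might look like, here is a minimal Python sketch. It is a toy under stated assumptions: the tier names, the review-board rule, and the ModelGateway class are all hypothetical, invented for this example rather than drawn from Ho's paper or any real lab's API.

```python
from dataclasses import dataclass
from enum import Enum


class AccessTier(Enum):
    """Hypothetical access tiers for an external safety-research program."""
    QUERY_ONLY = 1  # black-box API queries, rate-limited
    LOGITS = 2      # outputs plus token-level probabilities
    GRADIENTS = 3   # gradient access inside a secure enclave
    WEIGHTS = 4     # full weights, on-site hardware only


@dataclass
class Researcher:
    name: str
    tier: AccessTier
    approved_by_review_board: bool


class ModelGateway:
    """Toy gateway that enforces tier checks before serving model access."""

    def __init__(self, model_name: str):
        self.model_name = model_name

    def request(self, researcher: Researcher, needed: AccessTier) -> bool:
        # Higher tiers additionally require explicit review-board approval.
        if needed.value >= AccessTier.GRADIENTS.value and not researcher.approved_by_review_board:
            return False
        # A researcher may only use capabilities at or below their own tier.
        return researcher.tier.value >= needed.value


gateway = ModelGateway("frontier-model-v1")
alice = Researcher("Alice", AccessTier.LOGITS, approved_by_review_board=False)
print(gateway.request(alice, AccessTier.QUERY_ONLY))  # True: within her tier
print(gateway.request(alice, AccessTier.GRADIENTS))   # False: tier too low, no approval
```

In a real program, such checks would sit in front of secure-enclave infrastructure; the sketch only shows the policy logic of granting researchers the least access sufficient for their work.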
Realism About the Obstacles to Cooperation
AI exemplifies the tension between technology's benefits and risks. And the dynamics of geopolitical competition introduce major obstacles to international cooperation.
States may resist anything that limits perceived national advantages from AI. Hence "arguments about national competitiveness are already raised against AI regulation," notes Ho.
They also fear setting dangerous precedents for access and transparency. As Ho argues, "information security protocols" will be essential to mitigate state concerns about exposing secrets.
There are no easy answers here, but creative incentives and confidence-building measures may help. For instance, access to valuable technology and resources could be tied to governance commitments.
And starting with more aligned states can demonstrate benefits before expanding cooperation. As Ho says, "aligned countries may seek to form governance 'clubs', as they have in other domains."
The realities of AI require unprecedented collaboration. But it likely must proceed gradually with patience and pragmatism. Preventing a vicious cycle of mounting risk is paramount.
Subscribing to Pragmatic Optimism
International cooperation on AI won't be easy. But it is necessary to steer this transformative technology toward humanity's interests rather than parochial nationalism.
Groups like DeepMind Ethics & Society are advancing the thoughtful conversations essential to this endeavor. And their research makes a compelling case that the wise course lies in greater global coordination and governance.
Turning those ambitious visions into pragmatic reality will demand statesmanship, creative incentives, confidence-building, and moral courage on all sides. Technology shapes the future, but human choices shape technology.