The Tragedy of AI Governance

By Simon Chesterman

Despite hundreds of guides, frameworks, and principles intended to make artificial intelligence (AI) ‘ethical’ or ‘responsible’, ever more powerful applications continue to be released ever more quickly. Safety and security teams are being downsized or sidelined to bring AI products to market. And a significant portion of AI developers apparently believe there is a real risk that their work poses an existential threat to humanity.

This contradiction between statements and action can be attributed to three factors that undermine the prospects for meaningful governance of AI. The first is the shift of power from public to private hands, not only in deployment of AI products but in fundamental research. The second is the wariness of most states about regulating the sector too aggressively, for fear that it might drive innovation elsewhere. The third is the dysfunction of global processes to manage collective action problems, epitomised by the climate crisis and now frustrating efforts to govern a technology that does not respect borders.

Resolving these challenges requires either rethinking the incentive structures or waiting for a crisis that brings the need for regulation and coordination into sharper focus.

The Turn to Industry

In 2014, most machine learning models were released by academic institutions; in 2022, of the dozens of significant models tracked by Stanford's AI Index, all but three were released by industry. In 2021, the U.S. government allocated US$1.5 billion to non-defence academic research into AI; Google spent that much on DeepMind alone. Talent has followed. Two decades ago, only about twenty percent of graduates with a PhD in AI went to industry; today around seventy percent do.

The fact that pure as well as applied research is now being undertaken primarily within industry is shortening the lead-time from investigation to application. That may be exciting in terms of the launch of new products — epitomised by ChatGPT reaching a hundred million users in less than two months. When combined with the downsizing of safety and security teams mentioned earlier, however, it suggests that those users are both beta-testers and guinea pigs.

It is too early to judge what impact this will have on the application side of AI, but there are already suggestions that the emphasis will be on monetising human attention and replacing human labour rather than augmenting human capacities.

The Hesitation of the State

States, meanwhile, are more wary of overregulating than underregulating AI. With the notable exception of the European Union’s new legislative regime and episodic intervention by the Chinese government, most states have limited themselves to nudges and soft norms — or inaction. This is a rational approach for smaller jurisdictions, necessarily rule-takers rather than rule-makers in a globalised environment.

Yet there are risks. Half a century ago, David Collingridge observed that any effort to control new technology faces a double bind. In the early stages of innovation, exercising control would be easy — but not enough is known about the potential harms to warrant slowing development. By the time those consequences are apparent, however, control has become costly and slow.

Most states focus on the first horn of the Collingridge dilemma: predicting and averting harms. In addition to conferences and workshops, research institutes have been established to evaluate the risks of AI, with some warning apocalyptically about the threat of general AI. If general AI truly poses an existential threat to humanity, this might lead to calls for restrictions, analogous to those on research into biological and chemical weapons, or a ban like that on human cloning.

It is telling, however, that no major jurisdiction has imposed a ban, either because the threat does not seem immediate or due to concerns that it would merely push that research elsewhere. If regulation targets more immediate threats, of course, the pace of innovation means regulators must play an endless game of catch-up. Technology can change exponentially, while legal, social, and economic systems change incrementally.

Collingridge himself argued that, instead of trying to anticipate the risks, more promise lies in laying the groundwork to address the second aspect of the dilemma: ensuring that decisions about technology are flexible or reversible. This is also challenging, not least because it risks the 'barn door' problem: attempting to shut the door only after the horse has bolted.

An International Artificial Intelligence Agency?

In the face of such governance challenges — states being weak relative to industry, and unable or unwilling to cooperate with one another — the obvious solution is some kind of global initiative to coordinate or lead a response.

Yet the geopolitical tensions that are hindering national action can stymie international cooperation completely. Perhaps the greatest problem, however, is that the structures of international organisations are ill-suited to — and often vehemently opposed to — the direct participation of private sector actors.

If technology companies are the dominant actors in this space but cannot get a seat at the table, it is hard to see much progress being made. On the other hand, some companies have operated through governments as a kind of proxy, which is arguably the definition of regulatory capture.

That leaves two possibilities: broaden the table or shrink the companies.

The Coming Crisis

The tragedy of AI governance is that those with the greatest leverage to regulate AI have the least interest in doing so, while those with the greatest interest have the least leverage.

Industry standards will be important for managing risk, but companies have every incentive to develop and deploy ever more powerful models with few guardrails in place. To the extent that the largest companies are calling for action by regulators, this is at least partly in the hope that friendly regulation will consolidate their position and raise costs for competitors.

Countries have the tools to regulate, but face invidious choices between overregulation that drives innovation elsewhere and underregulation that exposes their populations to harm.

The hypothetical International Artificial Intelligence Agency proposed in the full version of my paper (forthcoming in an edited volume in 2024, and first published in Just Security ahead of the global AI Safety Summit convened by British Prime Minister Rishi Sunak) is one means of addressing these structural barriers to coordination and cooperation.

Perhaps the greatest flaw in the analogy with the International Atomic Energy Agency (IAEA) is that the IAEA was negotiated when the effects of the nuclear blasts on Hiroshima and Nagasaki were still being felt.

There is no such threat from AI at present and no comparably visceral evidence of its destructive power. It is conceivable that such concerns are overblown, or that AI itself will help solve the problems raised here. If it does not, global institutions that might have prevented the first true AI emergency will need to be created swiftly to avert the next one.

Keywords:  Artificial intelligence, ethics, law, regulation, markets, compliance, competition, antitrust

AUTHOR INFORMATION

Professor Simon Chesterman is David Marshall Professor and Vice Provost (Educational Innovation) at the National University of Singapore, where he is also the founding Dean of NUS College. He serves as Senior Director of AI Governance at AI Singapore and Editor of the Asian Journal of International Law.

Email:  chesterman@nus.edu.sg
