The history of modern technology has often been a history of technological races. A race starts when a new technology becomes cost-effective: companies, states, and other actors then hurry to develop the technology in hopes of capturing market share before others can, gaining a strategic advantage over competitors, or otherwise benefiting from a first-mover position \citep*{gottinger2013innovation}. Some notable examples of recent technological races include races over rockets, personal computers, and DNA sequencing \citep*{collins1999space,cringely1996accidental,davies2002cracking}. Although most of these races have been broadly beneficial for society, quickly increasing productivity and expanding the economy, others, like races over weapons, have made us less safe. In particular, the race to build nuclear weapons dramatically increased humanity’s capability to extinguish itself and exposed us to new existential risks that we previously did not face \citep*{powaski2000,bostrom2013}. This means that technological races can harm as much as they can help, and nowhere is that more true than in the burgeoning race to build AI \citep*{shulman2013}.
In particular, we may be near the start of a race to build artificial general intelligence (AGI) thanks to recent advances in deep learning \citep{Silver_2017}. Unlike existing narrow AI, which outperforms humans only on very specific tasks, AGI will be as good as or better than humans at all tasks, such that an AGI could replace a human in any context \citep{legassick2017}. The promise of replacing humans with AGI is extremely appealing to many organizations since AGI could be cheaper, more productive, and more loyal than humans, so the incentives to race to build the first AGI are strong \citep{hanson2001}. Yet the very capabilities that make AGI so compelling also make it extremely dangerous, so we may find we would have been better off not building AGI at all if we cannot build it safely \citep{bostrom2002}.
The risks of AGI have been widely discussed, but we may briefly summarize them as follows: AGI will eventually become more capable than humans, AGI may not necessarily share human values, and so AGI may eventually act against humanity’s wishes in ways that we will be powerless to prevent \citep{M_ller_2014,amodei2018}. This means AGI presents a new existential risk similar to, but far more unwieldy than, the one created by nuclear weapons, and unlike nuclear weapons, which can be controlled with relatively prosaic methods, controlling AGI demands solving the much harder problem of value-aligning an “alien” agent \citep{matters2016}\citep*{Turchin_2018}\citep{yudkowsky2016}. Thus it’s especially dangerous if there is a race for AGI, since such a race creates incentives to build out capabilities in advance of our ability to control them due to a likely tradeoff between capabilities and safety \citep{iii2018}.
This all suggests that building safe AGI requires, in part, resolving the coordination problem of avoiding an AGI race. To that end, we consider the creation of a self-regulatory organization for AGI to help coordinate AGI research efforts in order to ensure safety and avoid a race.

An SRO for AGI

Self-regulatory organizations (SROs) are non-governmental organizations (NGOs) set up by companies and individuals in an industry to serve as voluntary regulatory bodies. Although they are sometimes granted statutory power by governments, they usually operate as free associations that coordinate to encourage participation by actors in their industries, often by shunning those who do not participate and conferring benefits on those who do \citep*{hagerty2005}. They are especially common in industries where there is either a potentially adversarial relationship with society, like advertising and arms, or a safety concern, like medicine and engineering \citep{shuchman1981}. Briefly reviewing the form and function of some existing SROs:
Currently, computer programmers, data scientists, and other IT professionals are largely unregulated except insofar as their work touches other regulated industries \citep{engineers}. There are professional associations like the IEEE and ACM and best-practice frameworks like ITIL, but otherwise there are no SROs overseeing the work of companies and researchers pursuing either narrow AI or AGI. Yet, as outlined above, narrow AI and especially AGI are areas where there are many incentives to build capabilities that may unwittingly violate societal preferences and damage the public commons. Consequently, there may be reason to form an AGI SRO. Some reasons in favor:
Some reasons against:
On the whole, this suggests an SRO for AGI would be net positive so long as it were well managed, focused on promoting safety, and responsive to developments in AGI safety research. In particular, it may offer a way to avoid an AGI race by changing the incentives behind the game-theoretic equilibria that cause races.

Using an SRO to Reduce AGI Race Risks

To see how an SRO could reduce the risk of an AGI race, let’s first consider a game-theoretic description of the AGI race.
Suppose that there are two entities trying to build AGI: company A and company B. It costs \$1 trillion to develop AGI, a cost both companies must pay, and the market for AGI is worth \$4 trillion. If one company beats the other to market, it will capture the entire market thanks to its first-mover advantage, netting it \$3 trillion in profit, while the company that is last to market earns no revenue and loses \$1 trillion. If the companies tie, though, they split the market and each earns \$1 trillion. This scenario yields the payout matrix in Table \ref{tab_game_round_1}.
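For concreteness, these payouts follow directly from the stated revenues and the \$1 trillion development cost; the following is a quick sketch of that arithmetic, with the full matrix given in Table \ref{tab_game_round_1}.
\begin{align*}
\text{first to market:} &\quad \$4\,\text{trillion} - \$1\,\text{trillion} = \$3\,\text{trillion} \\
\text{last to market:} &\quad \$0 - \$1\,\text{trillion} = -\$1\,\text{trillion} \\
\text{tie:} &\quad \tfrac{1}{2}\left(\$4\,\text{trillion}\right) - \$1\,\text{trillion} = \$1\,\text{trillion}
\end{align*}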