Avoiding AGI races through self-regulation
G Gordon Worley III
Phenomenological AI Safety Research Institute

Corresponding Author: [email protected]

Abstract

The first group to build artificial general intelligence (AGI) stands to gain a significant strategic and market advantage over its competitors, so companies, universities, militaries, and other actors have strong incentives to race to build AGI first. Such a race would be dangerous, though, because it would prioritize capabilities over safety and increase the risk of existential catastrophe. A self-regulatory organization (SRO) for AGI may be able to change these incentives to favor safety over capabilities and to encourage cooperation rather than racing.