We see that the AI race game is symmetric. If each company assumes the other is equally likely to race or to cooperate, the expected value of racing is \(0.5(1)+0.5(3)=2\) and the expected value of cooperating is \(0.5(-1)+0.5(1)=0\), so both companies are inclined to race lest they lose by cooperating while the other races, and an easy way to get ahead in the race is to ignore safety in favor of capabilities. What this game ignores, of course, are the large externalities that the development of unsafe AGI imposes upon humanity, especially the negative externalities of ignoring safety \citep*{flynn2019}. If we were to incorporate both the large positive expected value of the safe AGI that might be developed in the cooperate-only case and the large negative expected value of unsafe AGI in the race cases, the payoff matrix for the game would more resemble that given in Table \ref{tab_game_humanity}, making clear that the incentives in the race game are poorly aligned with the interests of humanity.
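A short derivation (a sketch using the payoffs implicit in the expected-value terms above: \(1\) when both race, \(3\) for racing against a cooperator, \(-1\) for cooperating against a racer, and \(1\) when both cooperate) shows that this conclusion does not hinge on the assumed probability of \(0.5\). Writing \(p\) for the probability a company assigns to its rival racing,
\begin{align*}
  \mathbb{E}[\text{race}] &= p(1) + (1-p)(3) = 3 - 2p,\\
  \mathbb{E}[\text{cooperate}] &= p(-1) + (1-p)(1) = 1 - 2p,
\end{align*}
so racing beats cooperating by a margin of \(2\) for every \(p \in [0,1]\): racing strictly dominates, and the \(p = 0.5\) assumption only fixes the particular expected values \(2\) and \(0\).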