Against AI: The Case for Regulating AI like Nuclear Technology

Note: I’m not writing in reference to any specific fire alarm event, nor am I in a position to assert what the AGI timeline is. My aim is to convey the disconnect between what a lot of folks are saying (AGI is an imminent threat) and what they are doing (speeding it up or writing theory that isn’t being implemented).
A dramatically different AI safety approach is needed

If you truly believe that Artificial General Intelligence (AGI) poses an existential threat to humanity (good primer on this here) and that AGI timelines are short, then a new strategy is needed for AI safety. The focus must shift from the type of alignment work done at MIRI and elsewhere toward regulation and prohibition, with the aim of preventing the creation of AGI or at least extending the timeline. This essay explores a regulatory framework modeled on how we regulate nuclear technology; how to get such a framework implemented will be explored another time.

[Image: fence outside a nuclear facility]

AI alignment efforts won’t work in time

Existing AI safety organizations have done admirable theoretical work on alignment frameworks but have made little progress in getting actual AI practitioners interested in (or even aware of!) that work. To quote the person who arguably invented the field of AI safety: “This situation you see when you look around you is not what a surviving world looks like.”

Groups actively pursuing AGI, such as DeepMind and OpenAI, regularly publish updates showing rapid, seemingly exponential progress toward generalized intelligence, and they are giving those agents access to massive datasets. In short, the agents are progressing and are being handled in ways that seem unsafe. We find ourselves in a situation where, within an unknown amount of time, private entities will possess the equivalent of nuclear or biological weapons AND will be unable to control or contain them. We must pivot toward lobbying governments around the world to regulate AI and stop reckless private actors from endangering the world and humanity’s future.

Why hasn’t a regulatory approach been tried?

If the above is true, and the AI safety community believes it to be true, one must ask why they have not pursued a regulatory strategy and have instead favored a research-driven, academic approach with limited engagement with governments. I believe it is because of an ideological bias against centralized, government-led solutions among people concerned about the existential risk of AGI. I’m sympathetic to this general preference, but centralized, coordinated efforts are very effective tools against national and global threats. Furthermore, governments are the only actors with the scale and incentives to manage AGI risk and cooperate globally, and they have proven capable of managing other existential (or at least civilizational) risks.

Government alignment is easier than AI alignment

Many AI safety researchers, and people broadly worried about AI, dismiss a regulatory approach by saying that getting governments to care about and act according to their objectives is impossible, or at least less likely than getting every private actor and the resulting AGIs to do so. If that is their position, they should probably give up on both.

Governments will intervene no matter what

Another key reason for early government intervention is that, were AGI to be created by a private entity, governments would undoubtedly attempt to take it by force. I’ve discussed this with several people at AI research projects who dismissed it out of hand. (It’s odd which risks people choose to forecast.) If there’s one thing I’m certain of, it’s that if the US government thought there was a superintelligence behind the doors of DeepMind, they would kick in the doors and take it. That would be a very dangerous proposition, given that it seems unlikely an AGI would want to be controlled by force. The moment when JSOC decides to move in on an AGI seems like the most predictable moment of existential risk. The only real choice is the timing of government intervention. It is not an if, but a when.

What a regulatory framework might look like

Nuclear regulation provides a starting framework for how we can regulate AI at national and international scales. We must control the inputs, monitor industrial uses, and allow democracies to decide whether to pursue AGI. Using specialized AI for private purposes is analogous to using nuclear materials for power or medicine: it would be treated as a dangerous but highly valuable technology, used carefully by private parties and stringently monitored by governments. The pursuit of AGI would be akin to developing nuclear weapons. AGI research would need to be halted initially, until the international community could agree on whether to proceed. If it did, non-state actors would remain banned from this research, and only states that acted in accordance with international treaties, subject to third-party monitoring, would be allowed to pursue it.

Control the inputs

As with nuclear regulation, we would need to rapidly monitor and control the supply chains of AI inputs, with the goal of preventing any non-state actor from developing a system capable of pursuing AGI without oversight. This would involve aggressive oversight of the semiconductor supply chain to prevent anyone from acquiring enough GPUs to build an unauthorized data center. We would need to heavily regulate or nationalize cloud computing resources and insist on international monitoring of all advanced data centers. Governments would have to partner with cloud companies to continue operations, upgrades, and so on, and create a win-win revenue-sharing model. We can think of these as public utilities, like nuclear power plants.

Monitor industrial uses

Specialized AI has many valuable uses, so we would need to regulate private actors’ use of it. We would define acceptable uses of AI, machine learning, neural networks, reinforcement learning, and so on, and monitor private deployments. Because cloud computing would be under government control, this surveillance would be relatively straightforward. Enterprises like the FAANGs, which plausibly have the resources to develop AGI, would be under shareholder pressure to play ball, and startups would be unlikely to pose real threats.

Allow democracies to choose to pursue AGI or not

If AGI truly is an existential threat, then we need to treat it like one. This means immediately placing those closest to it (DeepMind, OpenAI, etc.) under government/military control. With their cooperation, we would need to assess progress to date. If AGI is to exist, democracies (educated about the risks and benefits) must make that choice and exert guidance, and ideally control, over it. To that end, an initial outright ban on AGI research, combined with work toward an international ban, makes the most sense. (These treaties would no doubt be complicated, but arms control experts have already begun thinking about them.) We have seen success with the outright and permanent ban on human cloning; making AGI research similarly taboo would likely be the safest course of action. However, depending on the decisions of other nations and the international community, it seems plausible that AGI research could continue in a manner similar to nuclear weapons development.

If AGI is a serious risk, it’s time to act like it

It is time to stop writing white papers on theoretical alignment solutions. It is time to start educating the public, communicating with the military and intelligence communities, lobbying lawmakers, and drafting bills on how we can stop Artificial General Intelligence before it is too late. We must treat specialized AI like the dangerous but valuable tool that it is, and we must treat AGI like the apocalyptic threat that it is. We have done this before with nuclear technology; we can do it again.

 
