Regulating Artificial Intelligence

Artificial intelligence (AI) is advancing at a breakneck pace that experts speculate could rival previous technological revolutions. AI offers economic and social benefits, but could bring social, political, and national security risks. The United States needs to weigh the costs and benefits of different approaches to AI regulation.

Students will understand that artificial intelligence (AI) is a rapidly developing technology that carries enormous potential benefits, but also poses significant risks to society, politics, and even national security. 

The Situation

Artificial intelligence (AI) is advancing at a breakneck pace that experts speculate could rival previous technological revolutions like the advent of the internet. AI’s rise will transform the global economy, potentially driving trillions of dollars in growth worldwide. Yet governments, nongovernmental organizations, and AI developers themselves have warned that, along with the many opportunities it offers, AI poses serious risks to society, politics, and national security. Consequently, policymakers are deeply engaged in discussions about how to regulate AI.

Artificial intelligence refers to computer systems that can perform tasks that typically require human intelligence, such as recognizing patterns, making predictions, and generating language. The technology has existed in some form for decades, but its development has accelerated dramatically with the recent emergence of so-called generative AI models, such as ChatGPT. These models analyze vast amounts of data to make predictions and generate increasingly human-like text, images, and audio.

As AI advances, its use has become increasingly widespread. The technology has propelled leaps such as advances in driverless cars and the development of new medical treatments. AI has also allowed businesses to dramatically improve their operations: it helps banks analyze data to make trading or lending decisions, allows companies to target advertising more precisely, and can almost entirely automate some processes, such as customer service. The rapid expansion of AI offers immense societal and economic benefits; one 2023 study predicted that AI could add up to $4.4 trillion annually to the global economy. As the technology continues to develop, its uses will become more varied and complex. Experts predict that AI could help produce strategies to address climate change, improve healthcare processes, and facilitate breakthroughs on new technologies such as nuclear fusion.

AI has military applications as well, such as improving autopilot systems and precision targeting. Militaries are also researching lethal autonomous weapons systems (LAWS), which could make the decision to shoot or launch a missile without human intervention. LAWS could increase military power without the cost of recruiting and training more soldiers.

The risks posed by AI are nearly as expansive as its benefits. While boosting economic growth overall, the technology could disrupt industries and alter or eliminate jobs. AI also threatens to reshape politics and society beyond its impact on industry. For example, it has facilitated increasingly sophisticated misinformation campaigns and enabled repressive governments to crack down more effectively on dissent. In Iran, for instance, the government has used AI facial-recognition and web-tracking tools to suppress protests for democracy and women’s rights. In the military arena, critics warn about the ethics of employing LAWS, as they take the decision to kill out of human hands. AI models can also perpetuate socioeconomic and racial biases, with far-reaching consequences for hiring, law enforcement, healthcare, financial decision-making, and more.

Experts have also warned that AI poses national security risks. Governments, criminal organizations, and terrorist groups could use relatively low-cost AI tools to compromise classified information or financial systems, and even to assist in creating weapons of mass destruction. For many observers, these risks underscore the urgent need to regulate the technology carefully.

AI regulation is still in its early stages, with a patchwork of national approaches at varying degrees of development. In the United States, President Joe Biden’s administration released a Blueprint for an AI Bill of Rights suggesting guidelines for AI developers and users, and Biden issued an executive order requiring AI developers to adopt safety and transparency practices. China, meanwhile, has developed detailed rules to govern AI, including restrictions on its use in recommendation algorithms and on AI-generated text. The European Union (EU) has also adopted legislation imposing transparency requirements and restrictions on AI-powered surveillance.

No widespread multilateral AI regulations exist yet. In November 2023, the United Kingdom hosted an AI Safety Summit gathering world leaders with the hope of laying a foundation for an international AI-regulation framework. So far, however, that goal has not been realized.

AI regulation poses several tradeoffs for policymakers. Internationally agreed rules could establish standards at the widest scale, yet they would require lengthy negotiations and cooperation with adversaries that may not share U.S. values on AI use. International regulations could also be difficult to enforce. National regulations, on the other hand, could govern some uses of AI without the need for negotiations, although they would not promote a globally coordinated approach. Since many AI developers are U.S.-based, national regulations could have some international effects, but they may not cover the development of technologies in China and elsewhere. National regulations could provide an important baseline of protection, but they would not address all of the global risks of AI, such as its malicious use against U.S. interests or its adoption by repressive governments abroad.

Regulations themselves carry risks. AI can be difficult to regulate in practice, and industry advocates warn that regulation could stymie innovation and progress. AI developers could lose opportunities to develop the technology further and explore new uses, and businesses could forgo some of AI’s economic benefits. Uneven or weakly enforced regulations, moreover, could cause economic disruption by giving some AI developers and users more leeway than others. Rules that restrict innovation could also harm overall U.S. economic competitiveness: if AI can revolutionize the global economy as experts predict, barriers to progress could put the United States at a significant disadvantage in this rapidly developing field. Accordingly, any attempts to regulate AI will need to be considered with great care.

Decision Point

The rapid rise of artificial intelligence offers untold economic and social benefits, but also threatens grave social, political, and national security risks. As the technology develops and becomes increasingly integrated in societies worldwide, public attention has turned to policymakers and how, if at all, they will regulate the technology. The president has convened the National Security Council (NSC) to discuss how the United States should approach AI regulation. As they deliberate, NSC members will need to carefully consider the costs and benefits of AI regulations, the time that their implementation would require, and the risk of regulations restricting positive innovations in AI technology.

NSC members should consider the following policy options:

  • Pursue a multilateral governing regime for AI. A multilateral framework could produce widely adopted, responsible AI regulations. These regulations could ensure privacy protections, safeguard against bias and repression, and guard against economic disruptions. However, such a framework could be difficult to negotiate. It could force the United States to compromise by accepting limitations it deemed harmful to innovation or allowing practices it deemed contrary to its interests or values. Moreover, slow negotiations could fail to keep pace with the speed of AI development.
  • Continue to pursue national measures. This option could entail continued executive action to set standards for AI use among U.S. companies. The president could also endorse legislation to establish more robust regulations, although congressional approval for such legislation is not guaranteed. National regulations would grant the United States ultimate flexibility in regulating AI within its borders. National regulations could have some measure of global influence but they would not provide the same comprehensive governance as a multilateral treaty. They would do little to address the harmful uses of the technology abroad.
  • Hold off on regulations for the moment. This option would entail accepting the risk of AI’s development and use in unforeseen or undesirable ways, at least for the time being. However, it could also enable maximal room for new innovation in the field that may end up benefiting the United States. Allowing time for the AI revolution to progress before adopting regulations could also allow the United States to craft more carefully considered and easily enforceable regulations.
