Artificial Intelligence Priorities for the Next Administration
from Digital and Cyberspace Policy Program

A super computing center featuring technologies from Nvidia during the Foxconn Tech Day 2024, in Taipei. Daniel Ceng/Anadolu/Getty Images

The rise of generative artificial intelligence (AI) has been breathtaking, and American firms are leading the way in showing the potential of a new AI-propelled world. But rivals like China are gaining ground, with major consequences for the U.S. economy and security.

November 26, 2024 11:12 am (EST)

Expert Brief
CFR scholars provide expert analysis and commentary on international issues.

Sebastian Elbaum is the Technologist-in-Residence at the Council on Foreign Relations. Adam Segal is the Ira A. Lipman Chair in Emerging Technologies and National Security and Director of the Digital and Cyberspace Policy program at CFR.

By many measures, the United States dominates the AI landscape: it is home to more top AI models and more leading companies and invests more in AI development than China and Europe. The U.S. market is dominated by a handful of private companies producing foundational models—large models trained on vast data sets that can perform many tasks—but there is a rapidly growing ecosystem of smaller companies building specialized systems, often on top of foundational ones.  

The hardware sustaining these systems is also led by U.S. companies, primarily Nvidia along with Advanced Micro Devices (AMD) and Intel, but Chinese firms like Huawei are becoming increasingly competitive in semiconductor design and manufacturing. Longer term, supply chain issues, especially Nvidia’s dependence on Taiwan’s manufacturing capacity, and questions about the availability of the massive amounts of energy needed to train models, cloud the future of American AI. Failure to address these and other issues, and to create a regulatory environment that balances opportunities and risks, could slow American innovation and threaten U.S. economic and national security.  

What have been the main federal government responses to the emergence of AI? 

Over the last decade, the U.S. federal government has made strides toward AI adoption. Most agencies have appointed chief AI officers, collected potential use cases of AI, adjusted their compliance mechanisms, and integrated AI usage guidelines into their practices. The newly established AI Safety Institute has adapted the risk framework developed by the National Institute of Standards and Technology to work on AI and set up a voluntary assessment program with the major technology companies building the most sophisticated AI systems. The National Science Foundation is spearheading the National Artificial Intelligence Research Resource Pilot to provide additional computing power, datasets, and models for researchers and educators, thus building a broader base for technology innovation and diffusion. 

The incoming Donald Trump administration has signaled that it will prioritize accelerating AI innovation and dismantling some of the regulatory barriers put in place by its predecessor that it believes could hamper innovation. Still, there are a number of steps that the administration should take to ensure both the safety of AI systems and the competitiveness of the U.S. AI ecosystem.     

What laws govern AI? 

So far, there is no comprehensive federal AI law. Congress has considered hundreds of bills that touch on AI, as well as several that directly address its risks, but the most consequential AI law it has passed is not even specifically an AI law. The CHIPS and Science Act of 2022 funded the investments necessary to boost the semiconductor manufacturing capacity and scientific research that will support the next wave of AI advances. Beyond that, Congress has produced several AI provisions embedded in the annual National Defense Authorization Acts. In 2021, for example, the act created the National Artificial Intelligence Initiative Office and provided a structure to coordinate research and development (R&D) between the defense and intelligence communities and civilian federal agencies.

Still, despite bipartisan support for some type of AI-specific regulation, the prospect for congressional action is uncertain. Republicans and Democrats differ on the purpose and proposed methods of legislation: the former have so far focused more on AI and content moderation, the latter on AI's impact on equity and economic inclusion. As a result, the executive branch has taken the lead on AI governance.

During his first term, President Donald Trump issued two executive orders on artificial intelligence. Executive Order (EO) 13859, signed in February 2019, highlighted the importance of AI leadership to the United States. It outlined a coordinated federal strategy to prioritize AI research and development, develop standards to reduce barriers to deployment and ensure safety, equip workers with AI skills, foster public trust in its technologies, and engage with international partners. EO 13960, signed in December 2020, directed AI’s use in the federal government to enhance the efficiency and effectiveness of operations and services. It also established principles for AI applications and mandated that agencies create inventories of AI use cases, ensure compliance with the principles, and promote interagency coordination to foster public trust in AI technologies.  

In July 2023, President Joe Biden announced voluntary commitments from leading AI companies to advance the safe, secure, and transparent development of AI technology. In October 2023, the president signed EO 14110, which focused on the responsible development and deployment of emerging AI technology, mandating risk assessments, robust evaluations, standardized testing, and safeguards for AI systems to provide a broad range of user protections. The order also supported job training and education for the AI era and sought to enhance the federal government's capacity to use AI responsibly, including by attracting and retaining talent. Finally, it promoted U.S. global leadership through engagement with international partners to develop a framework for managing AI risks and benefits.

Biden followed the EO with the first National Security Memorandum (NSM) on AI in October 2024. The NSM is intended to accelerate the development of cutting-edge AI tools, protect AI R&D from theft and exploitation, and promote the adoption of advanced AI capabilities to address national security needs, while ensuring that defense and intelligence uses of AI protect human rights and advance democratic values.

Do U.S. states have any rules governing AI use? 

In the absence of legislation at the federal level, more than a dozen states, including California, Colorado, New York, Texas, and Virginia, have approved bills supporting various forms of consumer protection, stakeholder participation in the development and monitoring of AI systems, and accountability in the face of violations. For example, the California state legislature has passed bills protecting individuals from having their voice or likeness copied and requiring watermarks on AI-generated images. SB 1047 would have mandated safety testing for companies that spend more than $100 million training frontier models and required AI developers to take "reasonable care" to avoid critical harms, such as mass casualties or cyberattacks causing catastrophic damage to critical infrastructure. It was vetoed by Governor Gavin Newsom after vocal opposition from AI researchers, tech firms, and venture capitalists, as well as members of Congress. There is a real risk that state-level initiatives, particularly those providing user protections from AI systems, could create a patchwork of conflicting laws that technology companies will struggle to navigate, slowing innovation.

Is there any international coordination on establishing regulatory norms? 

The United States internationalized the voluntary commitments signed by the tech companies through what is known as the Group of Seven (G7) Hiroshima AI process. The United States also participated in the United Kingdom AI Safety Summit and signed the Bletchley Declaration, which encouraged transparency and accountability from actors developing frontier AI technology. In September 2024, the United States signed the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law. This document aims to ensure AI systems align with human rights, democracy, and the rule of law. 

What should an AI governing roadmap look like? 

Set up an AI Commission. While the specifics of AI policy under Trump’s second term are not yet clear, the president-elect has said he would revoke EO 14110, lifting regulations so the United States can compete more effectively with China. The incoming administration has also been clear about its desire to limit the federal government and cut its spending.  

Still, the Trump administration should create an AI Commission to ensure the safety of AI systems. Private technology companies are driving the AI revolution, but they are unable to ensure that the benefits of their products outweigh the risks. Moreover, the market often incentivizes them to be first to introduce a product, before they can assure the product's safety. Currently, no existing government agency has the expertise, resources, and authority required to formulate federal AI policies and regulations, conduct inspections of AI systems, verify that those systems are fit for their intended uses, and levy penalties when systems fail.

The AI Commission could be modeled on the National Highway Traffic Safety Administration, which oversees the safety, security, and efficiency of vehicles. Such a commission would also collect, analyze, and investigate incidents, and could order the recall of an AI system if necessary. These measures would incentivize companies to improve and accelerate their quality control practices.

Spur AI investment in universities. The Trump administration should also support investment in AI systems outside of the big tech players, especially in federal labs and universities. The cutting-edge AI landscape is now dominated by a handful of private companies that invest in and rely on massive computing resources to advance the technology. Universities have the talent but cannot match that level of computing investment. The National Science Foundation's annual funding for AI-related research (over $800 million in 2023) is comparable to the cost of training a couple of foundational AI models, and an order of magnitude smaller than the total training budget of the top AI companies; OpenAI's training and inference costs could reach $7 billion in 2024.

This lack of investment curtails universities' ability to develop the next generation of technology and to train the young researchers and engineers who will create and staff new companies. More fundamentally, if the trend continues, universities will no longer be in a credible position to independently judge the strengths and weaknesses of emerging technology against social goals such as job creation or educational access, rather than the market value that drives private companies.

Align AI and energy policy. As AI systems become more sophisticated, they require data centers with more computing power, which in turn require more energy. One study found that AI could account for 0.5 percent of worldwide electricity use by 2027. Companies can reduce energy costs by developing more efficient chips, architectures, algorithms, and models, but it is unclear whether those savings will ultimately lower energy consumption or simply encourage more usage. The government can help ensure sufficient energy capacity by streamlining the regulatory process, incentivizing private sector investment in grid modernization, supporting research and demonstration, and easing the deployment and permitting of advanced nuclear projects.

This work represents the views and opinions solely of the author. The Council on Foreign Relations is an independent, nonpartisan membership organization, think tank, and publisher, and takes no institutional positions on matters of policy.

This work is licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License.
