A Geopolitical Competition Revived
As technology advances, artificial intelligence software has become increasingly prevalent in efforts to optimize nearly every aspect of societal operations. The economic and military incentives for efficiency have produced what many call the "global A.I. race." The term "race" captures the inherent drive of competition, as featured in President Trump's recent A.I. Action Plan for America, subtitled "Winning the Race." The plan targets the ongoing technological rivalry between the United States and China, two leading powers in the geopolitical, economic, and military spheres that are now locked in a head-to-head contest to dominate the global artificial intelligence industry.
Trump’s A.I. action plan is categorized into three main pillars: “Accelerate A.I. Innovation,” “Build American Infrastructure,” and “Lead in International A.I. Diplomacy and Security.” In its introductory page, the plan discusses the potential for a “new golden age of human flourishing, economic competitiveness, and national security for the American people.” Using terms like “industrial revolution” and “renaissance,” the action plan describes a technological utopia, made possible only by the dominance of American A.I.
This global competition presents a striking parallel between the current global agenda and the Cold War's infamous rivalry. The introduction of Trump's action plan makes the comparison directly: "Just like we won the space race, it is imperative the United States and its allies win this race." Beyond the incentives of technological hegemony and control, both the space race and today's A.I. race carry geopolitical and cultural weight. The drive to develop the most advanced A.I. software reflects the same motivation that fueled the pursuit of superior spaceflight technology: global dominance. Just as the Cold War embodied a zeitgeist of ideological rivalry, the current era, marked by new forms of unmanned combat, deepening economic globalization, and rapidly advancing technologies, revives that same contest in a new arena.
Artificial Intelligence: The Good, the Bad, and the Deadly
In its discussion of the myriad opportunities that artificial intelligence software can create, Trump's action plan mentions a variety of potential benefits: discovering new materials, synthesizing new chemicals, manufacturing new drugs, unraveling ancient scrolls, making breakthroughs in scientific and mathematical theory, and creating new kinds of digital and physical art. However, nowhere does the document mention the integration of A.I. into combat operations, where the true stakes of this global competition lie. It makes some reference to equipping armed forces and intelligence agencies with advanced high-security data centers and surveillance systems, but it avoids any mention of automated weaponry, a global military development that is actively reshaping the landscape of international conflict. This is yet another parallel to the paradigm-shifting Cold War race: instead of a competition to develop nuclear warheads, it is a competition to integrate A.I. into warfare, deploying automated weapons and fundamentally altering the nature of combat.
In an essay published by the Georgetown University Journal of International Affairs, Kristian Humble, Associate Professor of International Law at the University of Greenwich, defines and discusses the implications of advancing automated weapons. Humble states that while most current drone technology operates within a strict, predetermined set of circumstances with significant input from human operators, the ultimate goal of the global A.I. arms race is to develop completely automated weaponry that requires minimal human control. The advancement of this technology would turn the optimizing capacities of A.I. into a humanitarian threat, reducing violence to a simple algorithmic output.
A.I.-powered surveillance systems are another developing technology that blurs ethical and legal boundaries. What are the possible implications of designating robots as digital spectators? What rights do we surrender if we allow A.I. to surveil us in all aspects of life, from grocery stores to war zones? With regard to military surveillance, a 2019 Jane's Markets Forecast report projected that 80,000 surveillance drones and 2,000 combat drones would be purchased globally within the next decade. By developing fully automated military surveillance, we hand the task of distinguishing civilians from combatants to a non-human system, removing what little room for humanity remains in an already inhumane enterprise.
Aside from the humanitarian risks of A.I. warfare, the rapid advancement of artificial intelligence poses significant environmental and social risks. For instance, large A.I. data centers require extensive hardware manufacturing and energy consumption—in particular, electricity for general operations and water for cooling systems. Although large-scale data centers have been around for decades, generative A.I. requires substantially more energy to operate. An article by MIT News explains the environmental impacts of the engineering, operations, and commercialization of generative A.I. According to the article, researchers have estimated that a single ChatGPT prompt consumes about five times more electricity than a simple web search. "The demand for new data centers cannot be met in a sustainable way. The pace at which companies are building new data centers means the bulk of the electricity to power them must come from fossil fuel-based power plants," says Noman Bashir, lead author of a 2024 research paper that examined the climate and sustainability implications of generative A.I. Bashir is also a Computing and Climate Impact Fellow at the MIT Climate and Sustainability Consortium (M.C.S.C.) and a postdoc in the Computer Science and Artificial Intelligence Laboratory (C.S.A.I.L.).
The sociological threats of artificial intelligence are far-reaching as well. The expansion of A.I. software has already begun to influence, and will increasingly influence, employment in a wide range of industries: customer service, cashiering, graphic design, journalism, software development, truck driving, administrative work, and more. A 2024 Pew Research Center report found that 30% of media jobs could be automated by 2035. Similarly, a 2025 World Economic Forum report states that 40% of programming tasks could be automated by 2040. Some claim that automating routine development or content-generation tasks will free individuals for higher-level work, but that notion remains highly speculative. Moreover, as generative A.I. advances, its capacity to replicate various forms of media could dramatically accelerate the spread of digital misinformation. While online spaces have been flooded with bots and deepfakes for years, digital media could become far harder to trust, especially without sufficient regulatory standards requiring online platforms to combat misinformation.
The Finish Line
For the past several years, individual states have had the jurisdiction to regulate artificial intelligence industries, setting standards to enforce ethical and legal boundaries. The Trump administration's proposed regulatory rollbacks, however, would prevent, in the plan's words, "climate dogma and bureaucratic red tape" from slowing the development of artificial intelligence: "Simply put, we need to 'Build, Baby, Build!'" Yet by freeing the private sector to expand the capabilities of A.I., the government opens the door for the artificial intelligence industry to transform every aspect of human endeavor, creating ethical and legal gray areas. This could significantly harm the environment, worldwide employment, and, most importantly, human lives as targets of automated combat. While unbinding the industry from regulation will allow for speedier development of A.I. technologies, it raises serious ethical and moral questions.
Whether we like it or not, A.I. is the name of the game. The first country to cross the finish line, in Vladimir Putin's words, "will become the ruler of the world." But is beating China in this technological race more important than restraining artificial intelligence from becoming so powerful that it effectively takes over the lives of its creators? Or is this merely an inevitable outcome of the global trajectory? Must we simply accept the fate of the new A.I.-powered paradigm? Or is there a way to cross the A.I. finish line without abandoning humanitarian and environmental values? These are incredibly nuanced questions that will take time to answer. It is imperative that we not permit competition to eclipse what truly matters: human lives. To that end, we need a legal framework that preserves morality and empathy at a technological scale. If we allow the industry to accelerate the development of A.I. technology without regard for its potential dangers, we risk abandoning our own humanity in the name of dominating a global competition.