Artificial Intelligence and the Evolving Landscape of Nuclear Strategy

March 4, 2024 | 10:48 am
President Joe Biden delivers remarks at an Executive Order signing on Artificial Intelligence, Monday, October 30, 2023, in the East Room of the White House. (The White House/Flickr)
Dr. Silky Kaur

Artificial intelligence (AI), engineered to emulate human cognitive functions such as learning, problem-solving, perception, and decision-making, is advancing at an unprecedented pace and rapidly becoming a focal point in both personal and political discussions. Amid this escalating discourse, countries are working to regulate its uses.

For example, President Biden’s pivotal executive order on “safe, secure, and trustworthy” AI, signed on October 30, 2023, establishes crucial standards for its use. The order requires major AI developers to share safety test results and other critical information with the government to safeguard public well-being. Dialogues between global leaders, such as the talks between Presidents Biden and Xi on the sidelines of the Asia-Pacific Economic Cooperation (APEC) meeting, underscore the pressing need to address the risks associated with AI. In November 2023, more than 40 countries joined the United States in endorsing a political declaration on the responsible use of AI in military contexts. And in the European Union, a landmark political agreement on comprehensive AI governance rules, reached on December 8, 2023, positioned the EU at the forefront of global AI governance.

The increasing global focus on regulating AI corresponds with mounting concern among national security experts and scholars about its implications for nuclear strategy. As AI advances, there is growing recognition of the need to comprehensively evaluate its impact on nuclear stability, deterrence doctrines, command and control systems, and crisis management protocols. Policymakers and strategic thinkers must address this complex intersection to ensure global security and stability in an era increasingly defined by AI.

No existing treaty or international agreement addresses AI advancements in these areas, so questions persist about its appropriate use. Establishing clear guidelines for the ethical and responsible use of AI is increasingly imperative to mitigate potential risks and guard against unintended consequences.

Effect of AI on nuclear deterrence 

The intersection of AI with nuclear deterrence creates a complex landscape fraught with heightened risks. Some experts speculate that integrating AI into nuclear strategy could enhance the efficiency and effectiveness of nuclear deterrence by improving early warning systems, enhancing command and control capabilities, and facilitating rapid decision-making. AI could analyze vast amounts of data to detect and respond to potential threats more quickly and accurately than human operators. Adam Lowther and Curtis McGiffin stress the imperative of integrating AI into the US Nuclear Command, Control, and Communications (NC3) system, citing the rapidly evolving threat landscape posed by countries like China and Russia. They argue that this integration is not merely a technological advancement but a strategic necessity, vital for enhancing detection capabilities and decision-making processes and for ensuring a prompt, effective response to nuclear threats.
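
To make the “rapid detection, human decision” idea concrete, here is a minimal, purely illustrative sketch of automated sensor triage. Nothing here reflects any real early-warning or NC3 system; the sensor name, baseline values, threshold, and escalation labels are all hypothetical. The point is the division of labor: the software screens incoming data quickly, but anything consequential is routed to a human operator rather than acted on autonomously.

```python
import statistics
from dataclasses import dataclass

@dataclass
class SensorReading:
    sensor_id: str
    value: float  # e.g., signal intensity from a hypothetical launch-detection sensor

def anomaly_score(reading: SensorReading, baseline: list[float]) -> float:
    """Z-score of a reading against historical baseline values."""
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(reading.value - mean) / stdev

def triage(reading: SensorReading, baseline: list[float], threshold: float = 4.0) -> str:
    """Fast automated screening. The software never acts on an anomaly itself;
    it only decides whether a human needs to look."""
    if anomaly_score(reading, baseline) < threshold:
        return "log"                     # routine: archive for later analysis
    return "escalate_to_human_operator"  # anomalous: a person decides, not the model

baseline = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7]
print(triage(SensorReading("ir-sat-07", 10.2), baseline))  # -> log
print(triage(SensorReading("ir-sat-07", 42.0), baseline))  # -> escalate_to_human_operator
```

The speed advantage experts point to lives in the first branch: routine data is cleared in microseconds. The risk debates below turn on what happens at the second branch, and on who, or what, gets to act once an anomaly is flagged.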

On the other hand, there are significant concerns about the impact of AI integration on the stability of existing nuclear deterrence. The proliferation of AI-enabled technologies in the nuclear realm raises questions about transparency, accountability, and the potential for arms races among nuclear-armed states. AI systems could also be vulnerable to cyberattacks or outside manipulation, which could undermine the reliability and credibility of nuclear deterrence mechanisms. Another concern is inadvertent escalation: AI systems could misinterpret or misattribute signals, leading to miscalculation or unintended consequences, a dynamic known as artificial escalation. The short film Artificial Escalation, produced by Space Film & VFX for The Future of Life Institute, offers a glimpse into the alarming potential of AI integration into weapons of mass destruction, highlighting the real threat it poses. Dr. James Johnson, author of AI and the Bomb, outlines how advances in AI may give adversaries new means to target nuclear assets, including AI-powered cyber weapons launched against NC3 systems.
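
The manipulation concern has a well-studied technical core: machine-learning classifiers can be fooled by adversarial perturbations, tiny input changes deliberately crafted to flip the output. The toy model below is entirely hypothetical (a random linear “threat classifier,” not any fielded system), but the mechanism it demonstrates, a small per-feature nudge aligned with the model’s weights, is a standard result in the adversarial machine learning literature.

```python
import numpy as np

# Toy linear "threat classifier": score = w . x + b; positive => "threat".
rng = np.random.default_rng(0)
w = rng.normal(size=100)
b = 0.0

x = rng.normal(size=100)                           # a benign input...
x -= w * (np.dot(w, x) + b + 1.0) / np.dot(w, w)   # ...shifted so its score is exactly -1.0

score = np.dot(w, x) + b
print(f"original score:  {score:+.2f} -> {'threat' if score > 0 else 'no threat'}")

# Adversarial perturbation: move each feature by epsilon in the direction
# that increases the score. Each individual change is tiny (0.02), but the
# effects add up across all 100 features and flip the classification.
epsilon = 0.02
x_adv = x + epsilon * np.sign(w)

score_adv = np.dot(w, x_adv) + b
print(f"perturbed score: {score_adv:+.2f} -> {'threat' if score_adv > 0 else 'no threat'}")
```

An input indistinguishable to a human reviewer now reads as a threat. In an early-warning context, that failure mode is precisely what makes artificial escalation more than a hypothetical.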

Overall, while AI has the potential to enhance certain aspects of nuclear deterrence, its integration into strategic decision-making also presents significant challenges and risks that policymakers and stakeholders must carefully consider and address.

AI, emotions, ethics, and guidelines

In exploring the dynamics of AI in nuclear decision-making, it is important to recognize that humans rely heavily on intuition, emotions, and feelings to make choices, qualities that AI lacks. AI may therefore make decisions differently from humans, which cuts both ways: it might decide more quickly and without emotional interference, but it might also miss important factors a human would weigh.

The debate over AI’s potential role in decisions to end human lives, with human agents kept ‘out of the killing loop,’ intensifies discussions of its moral permissibility. AI might exhibit more rational decision-making in the heat of war, free of the emotional nuances inherent in humans, but those emotions matter. Compassion and empathy are not always rational choices, and they are difficult to sustain during conflict, yet they can bring out the best of humanity. It is therefore crucial to keep humans at the center of decision-making processes.

For example, Soviet naval officer Vasili Arkhipov played a critical role in defusing the Cuban missile crisis in 1962, when the Soviet submarine B-59, armed with a nuclear torpedo, was cornered by 11 US destroyers. As the US ships dropped depth charges around the submerged sub, the crew, cut off from communication and believing they were under attack, faced the imminent decision of launching their nuclear weapon. Despite intense pressure from two senior officers, including the captain, who favored the launch, Arkhipov, the second captain and brigade chief of staff, refused to give his assent. Relying on intuition, reasoned analysis, and past experience, he rejected the launch of the 10-kiloton nuclear torpedo; his emotional intelligence and ethical considerations averted a nuclear disaster. While human beings can also make terrible decisions, Arkhipov’s composure, his prioritization of human lives over immediate retaliation, and his navigation of the Cold War’s geopolitical tensions highlight the importance of human judgment and moral reasoning in critical moments.

For another example, consider Stanislav Petrov, a Soviet officer stationed at a secret bunker near Moscow during the Cold War. In 1983, Petrov faced a harrowing situation when the Soviet early-warning system reported what appeared to be incoming American missiles. Tensions were high, and the pressure to respond swiftly was immense. Yet Petrov trusted his intuition and reported the warning as a false alarm, despite the system indicating a real attack. His decision, based on gut feeling and rational analysis, proved correct: the system had misinterpreted sunlight reflecting off clouds as missile launches. Petrov’s actions prevented a potential nuclear war and saved millions of lives, further illustrating the irreplaceable role of human judgment in navigating the complexities of nuclear decision-making.
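
The design principle behind both episodes can be stated in code. Below is a minimal sketch, with entirely hypothetical action names and policy, of a human-in-the-loop gate: software may dismiss or monitor on its own, but a high-consequence action requires an affirmative human decision no matter how confident the model is. Petrov’s 1983 call is exactly the case the gate exists for.

```python
from enum import Enum, auto

class Action(Enum):
    DISMISS = auto()
    MONITOR = auto()
    RETALIATE = auto()  # high-consequence: never automated under this policy

# Hypothetical policy: which actions software may take on its own.
AUTOMATABLE = {Action.DISMISS, Action.MONITOR}

def decide(recommendation: Action, confidence: float,
           human_approves: bool = False) -> Action:
    """Gate high-consequence actions behind an explicit human judgment.

    The model's confidence is deliberately ignored by the gate itself: in 1983
    the early-warning system reported an attack with high confidence, and it
    was Petrov's human judgment that overrode it.
    """
    if recommendation in AUTOMATABLE:
        return recommendation
    if human_approves:       # a person must say yes; silence means no
        return recommendation
    return Action.MONITOR    # default to the least destructive option

# The 1983 scenario, stylized: the system is confident, the human is not.
print(decide(Action.RETALIATE, confidence=0.97, human_approves=False))
# -> Action.MONITOR
```

The essential choice is that the gate fails safe: absent an explicit human “yes,” the system falls back to the least destructive option rather than the model’s recommendation.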

AI and a nuclear arms race

There is growing unease about the destructive potential of countries deploying AI in a nuclear arms race. The pursuit of AI-enhanced nuclear capabilities by certain states could exacerbate existing inequalities in the international security landscape, and this global divide in military technologies could push AI ‘have-nots’ to adopt unconventional strategies in response. An arms race over AI-nuclear integration could thus trigger sky-high investments, a lack of transparency, mutual suspicion, and an underlying fear of losing ‘the race,’ any of which could provoke an avoidable or accidental conflict.
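
This racing logic is the classic security dilemma, and a stylized two-player payoff model makes the trap explicit. The numbers below are illustrative assumptions, not empirical estimates; what matters is the structure: racing is each state’s best reply regardless of what the other does, so both end up worse off than under mutual restraint.

```python
# Stylized security-dilemma payoffs (higher is better) for two states choosing
# to "restrain" or "race" on AI-nuclear integration. Numbers are illustrative.
RESTRAIN, RACE = "restrain", "race"
PAYOFFS = {  # (my_choice, their_choice) -> (my_payoff, their_payoff)
    (RESTRAIN, RESTRAIN): (3, 3),  # mutual restraint: stable and cheap
    (RESTRAIN, RACE):     (0, 4),  # unilateral restraint: fall behind
    (RACE,     RESTRAIN): (4, 0),
    (RACE,     RACE):     (1, 1),  # mutual racing: costly, destabilizing
}

def best_response(their_choice: str) -> str:
    """Each state's payoff-maximizing reply to the other's choice."""
    return max((RESTRAIN, RACE),
               key=lambda mine: PAYOFFS[(mine, their_choice)][0])

for theirs in (RESTRAIN, RACE):
    print(f"if the other state chooses {theirs!r}, best response is {best_response(theirs)!r}")
# Racing dominates either way, so both states race and land on (1, 1),
# even though mutual restraint (3, 3) would leave both better off.
```

That structure is why the confidence-building measures recommended below matter: verification and regular communication change the payoffs by reducing both the cost of restraint and the fear of unilateral disadvantage.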

Recommendations 

Dialogue and Negotiation: Thoughtful dialogue is needed during this window of policy flexibility: strategic discussions with realistic goals, along with bilateral and multilateral confidence-building measures, pursued through increased strategic dialogue and the potential negotiation of arms control agreements. Establishing regular channels of communication between key stakeholders can also facilitate the exchange of information, promote mutual understanding, and enhance the prospects for successful negotiations and sustained peace.

Regulation: Traditional arms control regimes were not designed for AI, making regulation challenging. Despite significant investment in AI research and development, attention to legislative and regulatory change is lacking. A comprehensive, cohesive approach to AI legislation and regulation must replace the current fragmented one, addressing issues that include bias in the data used to train AI and accountability for both government and corporate users of AI. Timely implementation of appropriate regulations is crucial for realizing AI’s benefits.

These measures aim to manage the integration of AI into nuclear capabilities. By fostering open communication and cooperation, they can contribute to a more secure and controlled implementation of AI technologies, reinforcing global efforts to ensure the responsible use of advanced technologies in the sensitive domain of nuclear weapons.

Dr. Silky Kaur holds a Ph.D. from Jawaharlal Nehru University and has served as an Associate Fellow at the Centre for Air Power Studies. Throughout her career, Dr. Kaur has made substantial contributions to academic discourse through her publications, presentations, and workshops. Her expertise lies in nuclear issues, emerging technologies, artificial intelligence, international relations, and strategic studies.

The UCS Science Network is an inclusive community of more than 25,000 scientists, engineers, economists, and other experts, focused on changing the world for the better. The views expressed in Science Network posts are those of the authors alone.