How Disruptive Technologies Are Changing Peace and Security

KEY TAKEAWAYS

  • Disruptive technologies are posing complicated challenges to peace and security.
  • To devise effective regulation, peacebuilders and policymakers need to enhance their tech literacy.
  • Regulation should focus on how to mitigate technology’s negative impacts on conflict.

The global landscape of violence and conflict is transforming at a rapid pace, as disruptive technologies revolutionize how wars are waged. For years, security forces and intelligence agencies have been steeped in the dynamic threats posed by new technologies and they regularly use advanced tools to respond to those threats. Diplomats and peacebuilders, however, may often neglect threats from disruptive technologies due to an overreliance on historical power dynamics; a lack of creative thinking fostered by elite, risk-averse cultural pressures; and a disconnect from local communities where violence occurs. Tech illiteracy hampers understanding of how emerging technologies are used and how they can exacerbate conflicts.

The Kratos XQ-58 unmanned combat aerial vehicle at Eglin Air Force Base, July 2023. The drone uses artificial intelligence and has the capability to carry weapons, although it has not yet been used in combat. (Edmund D. Fountain/The New York Times)

Until now, much of the peacebuilding community has reflexively emphasized how to use technology for peace but has not fully considered how to regulate technology in order to mitigate its negative impacts on conflict. Understanding and analyzing new technologies’ effects on violence is crucial for developing interventions that promote peace and security.

The Problem

The idea that technological advancements can destabilize societies and make human violence more deadly is nothing new. So, what makes today’s digital developments so different?

In their recent book, “The Coming Wave,” Microsoft AI CEO Mustafa Suleyman and writer Michael Bhaskar discuss the power of the convergence of advanced technologies, particularly those that merge digital “bits” (the realm of data processing) with physical “atoms” (the realm of manipulating our environment across space and time). They argue that new developments uniting the control of big data using AI, cyber and information warfare technologies, nanotechnology, biotechnology and robotics will create a new conflict paradigm. Traditional distinctions between war and peace, combatants and civilians, and physical and digital security will blur. While previous technological revolutions have had major destabilizing effects, Suleyman believes the scale, speed and nature of changes from merging bits and atoms will be qualitatively different.

Understanding and analyzing new technologies’ effects on violence is crucial for developing interventions that promote peace and security.

But just how fast is this wave approaching, and what does it mean for the future of global security?

Dario Amodei, CEO of the Amazon-backed AI company Anthropic, argues that AI technology could reach concerning capabilities very soon. He cites “scaling laws,” which suggest that AI capabilities improve predictably as injections of capital expand computational resources and the size and quality of training datasets.
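The “scaling laws” Amodei cites are usually stated as empirical power laws fit from training runs, not literal exponentials. A common formulation in the research literature, offered here as an illustrative sketch rather than Amodei’s exact claim, relates a model’s loss (lower is more capable) to its parameter count and dataset size:

```latex
% Empirical neural scaling law (illustrative, Kaplan/Hoffmann-style
% formulation): test loss L falls as a power law in parameter count N
% and dataset size D. The constants N_c, D_c and the exponents
% \alpha_N, \alpha_D are fitted empirically, not derived from theory.
L(N, D) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}
              + \left(\frac{D_c}{D}\right)^{\alpha_D}
```

On this view, more capital buys more compute and data, which predictably drives loss down and capability up, which is the basis for the timeline argument that follows.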

Based on these assumptions, Amodei suggests that AI systems could pose substantial bioweapon and cyber threats as early as this year or next. Between 2025 and 2028, AI systems could significantly enhance state actors' offensive capabilities and potentially replicate autonomously. For Amodei, the geopolitical implications are very real and could exacerbate global instability. He calls for democratic countries to lead AI development and for international cooperation in governance and responsible deployment.

Suleyman and Amodei’s perspectives are part of a growing chorus voicing techno-pessimism. Policymakers are increasingly paying attention to new technologies' disruptive potential, as evidenced by the EU, the U.N., and the U.S. Congress’s recent efforts. With increasing support and resources, peace institutions within governments, the U.N. and NGOs must devise interventions to address the challenges posed by rapidly evolving technologies. The call to action from tech CEOs is particularly concerning given their vested interest in limiting regulation, raising questions about the motivations behind their advocacy and the potential conflicts of interest in shaping technology governance.

Of course, some others offer more optimistic perspectives. Sundar Pichai, the CEO of Google, emphasizes AI's transformative potential, highlighting its ability to enhance creativity, productivity and problem-solving across sectors such as health care and climate change. Astrophysicist Neil deGrasse Tyson expresses optimism over new technologies' potential for good. He believes that AI does not inherently pose a unique threat and should be controlled by humans, particularly when it comes to critical military decisions.

When talking about this latest popular cultural phenomenon, which mixes fact, fiction and prediction, it is important to cut through some of the hype and confusion and adopt a first-principles approach that tackles fundamental issues of concern at the intersection of technology and today’s violent conflicts.

First, we must consider the impact of new weapons and finance technologies on rates of violence and the lethality of conflicts. Second, we must acknowledge how emerging technologies are creating new cybersecurity threats — the newest, and largely unregulated, domain of war. Finally, we need to analyze how today’s digital information platforms influence conflict narratives, which can be a source of public tension leading to violence and influence support for or against armed conflict.

New Weapons and Finance Technologies

New weapons technologies raise significant concerns over the increasing lethality and frequency of conflicts. The Office of the Director of National Intelligence (DNI) warns that the integration of advanced technologies — such as improved sensors, automated decision making, AI and hypersonic weapons — will lead to the development of more powerful and more lethal weapons. Many of these technologies, like drones, are cheap and semi-autonomous, leaving ample room for error in loosely monitored, large-scale engagements involving hundreds of machines. This escalation heightens the risks of more deadly combat without necessarily ensuring quick or decisive outcomes — prolonging suffering and destruction.

Among states, the erosion of multilateral arms control frameworks exacerbates the threats. The 2019 expiration of the Intermediate-Range Nuclear Forces Treaty, the uncertain future of the New Strategic Arms Reduction Treaty set to expire in 2026, and the deadlock in negotiations at the U.N. Conference on Disarmament heighten the risks associated with the deployment of new weapons technologies without international guardrails and new norms to regulate their usage. Most recently, the U.N. Security Council failed twice to adopt a resolution on preventing an arms race in outer space. This deadlock not only reflects differing views on space security, but it is also indicative of the general regulatory vacuum resulting from increasing geopolitical tensions.

Non-state actors can pose significant threats by exploiting low-cost, commercially available technologies and automated systems, such as AI, digital platforms and unmanned systems. Groups like ISIS, the Houthis and Hezbollah exploit these technologies to conduct attacks, reduce costs and evade detection. In January, the first instance of American soldiers being killed by an enemy drone attack occurred when an Iran-aligned group, Kataib Hezbollah, detonated an explosive-laden drone on an American outpost in Jordan. The rapid advancements and accessibility of these technologies increase the potential for misuse, creating substantial security risks.

Rogue state and non-state armed groups increasingly leverage new fintech tools and cryptocurrencies to finance their violent aims, exploiting the relative anonymity and lack of regulation. The U.N. Security Council has voiced grave concern that innovations in financial technologies, such as prepaid cards, mobile payments or virtual assets, present a great risk of being misused for terrorist financing. Cryptocurrencies enable malevolent actors to bypass traditional banking systems and international sanctions, facilitating untraceable transactions and fundraising. They use blockchain technology for secure, decentralized fundraising and money laundering, while fintech platforms offer innovative ways to crowdsource funds.

Terrorist organizations and insurgent groups can use encrypted messaging apps and social media to solicit cryptocurrency donations to sustain operations, purchase weapons and recruit new members. The ability to circumvent sanctions and traditional financial oversight poses significant challenges for regulators and law enforcement agencies striving to curtail the financial lifelines of violent actors.

Improved international cooperation is essential to regulate arms proliferation, financial transactions and dual-use technologies, helping to prevent their misuse in warfare.

From a governance perspective, improved international cooperation is essential to regulate arms proliferation, financial transactions and dual-use technologies, helping to prevent their misuse in warfare. While military and regulatory institutions primarily combat threats from new weapons and financial technologies, peacebuilders and diplomats also need to understand these technological impacts to effectively prevent, resolve or mitigate conflicts.

Cybersecurity Threats

Diplomats and peacebuilders must recognize that cyberspace is not just a virtual domain but a battlefield with real-world consequences. The domain is best described by the Open Systems Interconnection model, which organizes network communications into seven distinct layers, ranging from physical hardware like fiber optic lines, to the application layer, which interacts directly with end-users. Each layer has distinct functions but also potential vulnerabilities that hackers can exploit to disrupt or infiltrate digital resources.
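The seven OSI layers described above can be sketched as a simple lookup table. The layer names and ordering follow the standard model; the example technologies and threats listed alongside them are illustrative assumptions chosen for this sketch, not an exhaustive mapping.

```python
# The seven OSI layers, bottom (1) to top (7). Layer names are standard;
# the example-technology and example-threat columns are illustrative only.
OSI_LAYERS = {
    1: ("Physical",     "fiber optic lines, radio links", "cable tapping, signal jamming"),
    2: ("Data Link",    "Ethernet, Wi-Fi (802.11)",       "MAC spoofing"),
    3: ("Network",      "IP, BGP routing",                "route hijacking"),
    4: ("Transport",    "TCP, UDP",                       "SYN-flood denial of service"),
    5: ("Session",      "TLS sessions, RPC",              "session hijacking"),
    6: ("Presentation", "encryption, serialization",      "malformed-input exploits"),
    7: ("Application",  "HTTP, email, DNS",               "phishing, website defacement"),
}

def describe(layer: int) -> str:
    """Return a one-line summary of an OSI layer and a sample attack surface."""
    name, tech, threat = OSI_LAYERS[layer]
    return f"Layer {layer} ({name}): e.g. {tech}; example threat: {threat}"

if __name__ == "__main__":
    for n in sorted(OSI_LAYERS):
        print(describe(n))
```

Reading the table bottom-up mirrors the article’s point: an attacker who cannot break the application layer may still cut a fiber line or hijack a route several layers below it.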

Cyberattacks and threats are a cornerstone of "hybrid warfare," which combines conventional military tactics with irregular methods and gray-zone activities intended to intimidate or threaten adversaries without escalating to open conflict. Adversaries like Russia and China have significantly invested in cyber capabilities, often employing non-state hackers to increase deniability. FP Analytics categorizes cyber threat actors into five main types: state operatives, cybercriminals, hacktivists, terrorist groups and cyber mercenaries.

Cyberattacks, state-sponsored or otherwise, are not always complex. Widely available consumer tools, like Flipper Zero, Raspberry Pi or “malicious” USB drives, can all be repurposed for malignant ends. The only limitations are the technical capabilities and creativity of the perpetrator. While the motivations and methods of these actors may differ, the lines between them are increasingly blurring.

Evidence of Russian-sponsored cyber gray-zone activities dates back to at least 2004, with a notable escalation in the 2007 cyberattacks on Estonia, known as “Web War One.” These attacks, triggered by the removal of a Soviet-era war memorial in Tallinn, employed botnets and sophisticated techniques to cripple Estonia’s digital infrastructure, including banking, media and government services. This marked a turning point in international relations, highlighting how cyberattacks could threaten national security and political autonomy.

Diplomats and peacebuilders must recognize that cyberspace is not just a virtual domain but a battlefield with real-world consequences.

Meanwhile, FBI Director Christopher Wray has warned that Chinese government-linked hackers, through an ongoing campaign called Volt Typhoon, have infiltrated critical U.S. infrastructure, including telecommunications, energy, water and pipeline operators, potentially positioning themselves to strike a devastating blow at an opportune moment. These gray-zone threats escalate the risk of catastrophic circumstances between major powers.

Since at least 2015, cyberattacks have been standard in the Russian hybrid warfare toolkit, complementing more traditional acts of military aggression. Cyber sabotage of the Ukrainian power grid has caused extensive outages for hundreds of thousands of civilians. During the 2022 illegal invasion, one of the earliest attacks left untold Ukrainian civilians in a blackout amid a devastating missile strike. In addition to attacks on infrastructure, cyberattacks are being integrated into kinetic strategies and tactics to disrupt opponents’ weapons systems and operations by hacking surveillance and guidance systems, command and control networks, and logistics operations.

The evolving threat landscape of cyberterrorism highlights the concerning intent and proven capabilities of terrorist groups to conduct cyberattacks. Groups like ISIS and al-Qaida have long aspired to exploit cyberspace, as evidenced by operations like the Tunisian Fallaga Team’s defacement of the U.K.’s National Health Service websites and the Islamic State hacking division’s 2015 release of a “kill list.” Although these cyber offensives are often of a low technical order, they reflect a resourcefulness and growing sophistication in leveraging digital platforms to further their agendas.

In cyberwarfare, any digital asset is a potential military target. Peacebuilders and peacemakers must grasp the overarching concepts of these rapidly evolving threats.

In cyberwarfare, any digital asset is a potential military target. Peacebuilders and peacemakers must grasp the overarching concepts of these rapidly evolving threats. Cyberwarfare represents one of the most technical and challenging areas for non-experts to engage with. Responding effectively requires coordinated political, social and economic efforts through multi-stakeholder dialogues. Peacebuilders can facilitate this coordination and ensure the security of their own institutions.

Conflict Narratives

Finally, to effectively address the challenges posed by new disruptive technologies, peacebuilders and diplomats must understand how conflict narratives shape perceptions and realities in conflict environments. Critically, they must understand that the rapidly evolving digital information ecosystem is complex and dynamic and transcends the linear ways we typically see conflict narratives develop. 

Since the beginning of human history, warfare has always had a psychological dimension. Today, the tactics employed by state and non-state actors have evolved to exploit the speed, reach and addictive qualities of digital platforms, with AI-powered algorithms, micro-targeting of populations and the amplification of information campaigns. The highly malleable information spaces that dictate the "stories" people tell themselves about conflict are central to our understanding of human behavior and societal dynamics.

Philosophers and psychologists — from Carl Jung to Yuval Noah Harari to many post-structuralist thinkers — have emphasized the power and influence of grand narratives on human behavior. These stories tap into deeply rooted feelings and widespread symbols and compel both violence and peace. The internet — full of symbols and stories — represents a grand explosion in narrative that has generated discordant, chaotic developments in human behavior.

People, both unwittingly and in bad faith, structure this ecosystem by amplifying biases, spreading falsehoods and creating echo chambers, thereby influencing models of reality, and fueling violence and conflict.

In this context, misinformation refers to false information shared without harmful intent, disinformation involves the deliberate spread of false information to deceive, and malinformation entails the use of true information to cause harm or mislead. These categories highlight the diverse ways in which information can be manipulated, contributing to the complexity and instability of the digital landscape.
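The three categories defined above reduce to a clean two-axis taxonomy: whether the information is false, and whether the spreader intends harm. A minimal sketch of that taxonomy (the function name and the label for the benign case are assumptions for illustration):

```python
def classify(information_is_false: bool, intent_to_harm: bool) -> str:
    """Classify content along the two axes the taxonomy uses:
    falsity of the information and intent of the spreader."""
    if information_is_false:
        # False content: intent separates deliberate deception from honest error.
        return "disinformation" if intent_to_harm else "misinformation"
    # True content: weaponizing it to harm or mislead is malinformation.
    return "malinformation" if intent_to_harm else "ordinary information"
```

For example, `classify(True, False)` yields `"misinformation"` (a falsehood shared in good faith), while `classify(False, True)` yields `"malinformation"` (a truth deployed to cause harm).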

When it comes to armed conflict, the most insidious component is disinformation, derived from dezinformatsiya, a term coined by Joseph Stalin. At the local level, state-sponsored disinformation campaigns can cynically manipulate genuine grass-roots movements to sow discord. They can also be used to shape larger geopolitical narratives. A classic example is the 1929 Russian forgery of the “Tanaka Memorial,” which purported to outline Japan’s plans for world domination. The document’s “truthiness” helped it shape narratives about Japan for decades, and it is still cited on official Chinese government sites to this day.

The type of contested global information environment seen during the build-up to World War II and throughout the Cold War is back, except now there are a thousand domains of information: big, small, messy and unmonitored. Nonetheless, U.S. adversaries like China, Russia, Iran and others are deploying familiar tactics across these multiple information domains as part of sophisticated strategies that destabilize many parts of the world. Despite their deepening partnership, Russia and China employ different strategies. Moscow exploits chaos, while Beijing promotes a new global order on its terms.

Both, however, use disinformation operations that exploit technology, political divisions and media platforms, orchestrating both large-scale and micro-campaigns to undermine democratic institutions and advance geopolitical agendas. In this space, non-state actors, including terrorist organizations and fringe political groups, work on their own or at the behest of state sponsors, using social media and other digital platforms to spread their ideologies and incite violence.

The same psychological and storytelling tools that have been used in warfare for millennia are now being propagated through digital means, tailored to individuals using advanced AI and big data analytics. AI algorithms analyze vast amounts of data to craft personalized narratives, making influence operations more precise and pervasive than ever before. By leveraging big data, these digital tools can exploit personal preferences and vulnerabilities on a grand scale, enabling actors to manipulate public opinion, sow discord and influence behaviors with unprecedented efficiency and scope.

How to Move Forward

Navigating the challenges disruptive technologies pose in conflict settings requires a multifaceted approach encompassing responsible public and private sector governance. While AI regulation is a pressing concern, a comprehensive understanding of the broader technological landscape is crucial for effective governance frameworks. Peacebuilding practitioners must first understand emerging problems in conflict zones before deploying solutions — particularly those that harness new technologies — ensuring these solutions are practical and appropriate for the local context.

As bad actors continue to exploit disruptive technologies, the international community must remain vigilant, adaptable and united in promoting peace and security in the digital age.

The U.S. military's cautious approach to generative AI exemplifies the need for careful consideration in adopting new technologies. Despite rapid advancements, generative AI's tendencies toward conflict escalation and its inherent security vulnerabilities necessitate extensive testing and evaluation to ensure responsible deployment.

Developing a shared global understanding of how to manage these technologies involves inclusive dialogue and collaboration among diverse stakeholders, including governments, civil society, academia and the private sector. This includes fostering international partnerships to enhance cybersecurity, implementing global standards for emerging technologies, countering digital disinformation, and promoting ethical standards for technological deployment.

USIP’s newly formed Disruptive Technologies and Artificial Intelligence team aims to contribute to these efforts. As state, non-state and hybrid actors continue to exploit disruptive technologies, the international community must remain vigilant, adaptable and united in promoting peace and security in the digital age. Proactive measures, such as investing in resilient infrastructure and fostering responsible innovation, are essential. By embracing a multistakeholder approach, investing in research and development, and promoting international cooperation, peacebuilders can help constrain the destructive capacities of rapidly evolving new technologies.



The views expressed in this publication are those of the author(s).

PUBLICATION TYPE: Analysis