Discussions about the negative effects of online communication on society — including its potential to contribute to violent conflict — tend to focus primarily on misinformation and disinformation. The former refers to factually incorrect information that manages to reach audiences at scale, whereas the latter refers to inaccurate information that is spread deliberately and maliciously by some actor or agent in order to produce specific perceptions and outcomes in physical or digital space.

"We need democracy" written on a street to protest the coup that ousted the civilian government in Yangon, Myanmar. February 20, 2021. (The New York Times)

There is also an enduring perception in many quarters that the internet is an inherently liberating and self-organizing medium, one that is separate and distinct from the “real world.” In this narrative, misinformation and disinformation are the bad parts of the good internet. This is a holdover from the early days of the internet when discussions tended to emphasize the internet’s potential for educating and uniting people.

This is an outdated and misleading view of what the internet is today.

For most people today, the internet is not the democratizing force that some hoped it would be during events like the Arab Spring. Rather, most people experience the internet as a rigid, highly organized and closely monitored medium of expression and connection dominated by corporate tech giants and — perhaps somewhat counterintuitively — state actors.

Much of the communication we talk about today as happening on “the internet” (which technically refers to nothing more than a specific protocol for exchanging data between networked computers) actually occurs via a relatively small number of digital platforms (e.g., Discord, TikTok, WhatsApp, Telegram, Instagram, Twitter, Facebook), all of which are governed by algorithms designed to prioritize certain content, shape social interactions and gather data in ways that maximize their commercial potential — a model sharply at odds with most understandings of “community.” These platforms are also increasingly similar to each other, since in order to compete they must each adopt the most effective features of the others — as demonstrated by YouTube’s “Shorts” feature, a close reproduction of TikTok.

Meanwhile, these platforms have grown more pervasive and woven into the rhythms of everyday life, leading to a progressive collapse in the distinctions people draw between different sources of media and information — for many, the lines between “online” and “offline” are getting blurrier. 

For example, to many younger people, there is little difference between receiving news in person, through a Discord message or via a meme on Instagram. The emotional, symbolic and psychological weight can be equivalent regardless of the medium. For those working in the peacebuilding field, this insight carries enormous significance for how we think about and classify the modalities and causes of conflict stemming from digital mediums.

More specifically, a compelling and up-to-date understanding of violence organized on digital mediums cannot artificially create a divide between “online” spaces and a “real world,” or sharply distinguish "real world communities" from "online communities." For many, they are one and the same. To understand the evolving relationship between digital communication platforms and violence in a smaller, angrier internet, the peacebuilding field must move beyond such binaries with roots in outdated conceptions of the internet.

The Limit of Misinformation and Disinformation in Peace and Conflict

One particularly well-known example of a clear link between digital platforms and violence is Facebook’s “failure to prevent its platform from being used to ‘foment division and incite offline violence’” in Myanmar. Military officials in the Southeast Asian nation were behind a systematic campaign to target the Rohingya Muslim minority that resulted in murder, rape and large-scale forced migration.

One solution that has been widely adopted — the use of digital warnings attached to posts flagging them as misinformation or state-sponsored media — can actually serve to deepen suspicion among people already predisposed toward such content. Simply by virtue of being flagged “dangerous or untrue” on platforms assumed to be hostile to any number of groups, such content paradoxically comes to be perceived as “truer” than unflagged content. The flag itself functions to draw attention and heighten excitement over people saying “dangerous things” rather than to neutralize falsehoods or slow the spread of misinformation.

Studies of misinformation and disinformation tend to treat fact-checking and journalism as natural and obvious solutions: tools to assess claims and, where appropriate, produce rational and cogent challenges to articulations of political and extremist violence.

The fact-checking approach to misinformation and disinformation comes with stark limitations. Two symptoms point to the failure of this paradigm: the popularization of “post-truth” as a pithy summary of our declining capacity to agree on basic facts, and the spread of articles — such as this one — that shift the burden of countering misinformation and disinformation to individuals. The paradigm fails to account for malignant state, non-state and corporate actors, who collaborate to create rigid, small digital landscapes highly dependent on advertising revenue that financially incentivizes the rapid spread of all information, regardless of its status as misinformation or disinformation.

Misinformation and disinformation are not flaws in the system; they are part and parcel of the fundamental structure of digital mediums. By design, algorithms cannot and do not differentiate information by quality. Powerful actors in digital mediums have no incentive to police or remove misinformation or disinformation either, as doing so would fundamentally undermine the reach and spread of their platforms.

Furthermore, what the misinformation and disinformation framework (and its prescription of fact-checking as remedy) fails to appreciate is that violence organized on digital mediums is as much about group self-expression and identity affirmation as it is about people behaving violently due to incorrect or deliberately false information they find online. People commit acts of violence not simply because they are ill-informed but because they want to hurt people they dislike and find a convenient pretext for doing so.

For example, across the Middle East and North Africa, gender and sexual minorities are targeted by state authorities for posts on social media that simply express who they are, without any explicit political content or advocacy. Misinformation and disinformation are not behind this kind of state violence. Even if government authorities hold misconceptions about gender and sexual minorities (in theory “correctable” through exposure to better information), the violence would likely continue because this population is seen as a threat simply by virtue of its identity and is so weak that it can be targeted without consequence.

The same holds true for peace activists in many countries around the world: State and non-state actors often commit acts of violence against peaceful protestors based on a wholly accurate understanding of viewpoints they perceive as wrong or dangerous, not in response to rumors and propaganda. This is the point at which the misinformation and disinformation approach, at least in studies of peace and conflict, fails to capture the ways digital media can generate violence.

Expanding our Imagination

As we have been arguing throughout, in many respects our current peacebuilding language falls short of capturing the contemporary digital experience, and this is one possible reason our policy prescriptions suffer the same fate. The terminology we use to discuss digital media remains optimistic and often speaks of the consequences of using technology — when in fact it would be more accurate to say that we live technology in nearly every domain of life, including war and peace.

Some options for improving the peacebuilding field’s approach to digital mediums, including the field’s response to misinformation and disinformation, among other malignant digital phenomena, include:

Update our understanding of the internet and rapid technology change as a form of “global shock.”

The utopian idea of the internet is a long-gone fantasy. The internet is a rigid, tightly controlled, monitored and tracked space. State and corporate actors are powerful and active in intervening across digital communities, for good and ill. The unchecked optimism and artificial barriers we often still assume to exist between the digital and the physical worlds are both gone. We can no longer speak of “online communities” but must rather think in terms of communities with both digital and physical components. Peacebuilding analysis and practice that fails to appreciate this shift will be painfully limited in its capacity to have enduring relevance and offer insight.

Furthermore, many digital spaces that encompass a malignant dimension (such as spreading misinformation and disinformation) often serve more benign and sometimes highly valuable functions within their communities. Social clubs and gaming or entertainment channels can become sites of recruitment or indoctrination for specific political and ideological agendas and function as platforms for extremist groups to generate financial and material support. The distinction between entertainment and terrorism is far less clear-cut than we might think.

Generate better understanding of national and transnational variations in internet cultures and their implications for conflict and peacebuilding.

Across different countries, regions and language groups, we see huge diversity in internet landscapes and cultures of information consumption. Too often, expertise on a country, region or thematic issue, such as gender or religion, underappreciates these variations in digital landscapes. Understandings of such contexts are also often generated from specific user experiences rather than from comprehensive studies of distinctive and often idiosyncratic practices, injecting a degree of bias into research and writing.

In addition, approaches to misinformation and disinformation specifically vary considerably between non-state and state actors, and there is only limited research exploring the various strategies adopted by different types of organizations — and even less on effective peacebuilding strategies to counter them. Radical groups may use disinformation to alienate people from society as part of recruitment efforts; meanwhile, state actors may use disinformation to harm morale in targeted societies or misdirect enemy resources. These are different tactics, serving different strategies, and they require different solutions — all of which must move beyond “add truth and stir” to explore new forms of policy, programming and regulation.

Create digital media programming specific to peacebuilding.

Investing in programs and research specifically focused on the role of digital media in peace and conflict can generate the field-specific knowledge and insight necessary to build out new, technology-sensitive approaches to peacebuilding. Ensuring these programs and tools closely track but remain independent of the key digital platforms will be vital to ensuring that they develop an unbiased capacity to assess how corporate, state and non-state actors enable and facilitate violence across digital and physical spaces.

