Author: Professor Tom Buchanan
In February 2022, Russia invaded its neighbour Ukraine, sparking a bloody conflict. At the time of writing, the full horror of the war is still unfolding, and the eventual outcome is unknown. Social media has been widely used to share information about the situation. However, not all of that information has been true.
Disinformation is defined by the UK government as “the deliberate creation and sharing of false and/or manipulated information that is intended to deceive and mislead audiences, either for the purposes of causing harm, or for political, personal or financial gain”. This is closely related to misinformation, which refers to the inadvertent sharing of false information. The related term ‘fake news’ is still widely used, but it is best avoided due to its politicisation and loss of meaning. The same piece of false information can be either disinformation or misinformation depending on who is sharing it, and why. If I tell you a lie, that might be disinformation. If you believe it and pass it on to your friends, that would be misinformation.
Russia and its proxies are noted for their use of information warfare tactics, including disinformation. In fact, the very name ‘disinformation’ is derived from a Russian word, dezinformatsiya. Disinformation itself has a much longer history, of course, with documented examples dating back at least to Roman times.
Russia is certainly not alone in its use of social media ‘information operations’ of the type described below. The Oxford Internet Institute lists 81 countries, including our own, where social media has been used to manipulate public opinion. However, Russia has a significant track record of using such operations for strategic objectives – whether in attempts to influence Western political processes, or in actions closer to its own borders.
Since 2015, the EU’s East StratCom Task Force has collated and published information about “the Russian Federation’s ongoing disinformation campaigns affecting the European Union, its Member States, and countries in the shared neighbourhood”. Ukraine, and key themes in the current strategic narratives, have featured regularly in these campaigns. The attack on Ukraine began long before the 2022 invasion.
In general, disinformation campaigns work like this: false information is posted on social media (often repurposed from other media, such as websites or government-controlled news services). It is then spread by a combination of ‘bots’ and human activity, extending its influence to a wider audience. The goal may be either to cause disruption in other countries, or to manipulate domestic public opinion (e.g., by increasing support for the war).
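To give a sense of how much difference automated amplification can make, here is a minimal sketch in Python. It is a toy model, not a description of any real platform or campaign: the network size, audience sizes and sharing probabilities are all invented assumptions for illustration.

```python
import random

random.seed(42)

# Toy model: a false message is seeded onto a platform and then spreads
# through bot amplification (persistent resharing) and ordinary users
# (occasional resharing). Every number here is an invented assumption,
# not a measurement of any real platform or campaign.
N_USERS = 10_000
N_BOTS = 50                # assumed size of a coordinated bot network
AUDIENCE_PER_SHARE = 20    # assumed users reached by each reshare
P_HUMAN_SHARE = 0.05       # assumed chance an ordinary user passes it on

def simulate(rounds: int, with_bots: bool) -> int:
    """Return how many ordinary users have seen the message."""
    exposed = set()
    sharers = N_BOTS if with_bots else 5  # initial posters
    for _ in range(rounds):
        new_sharers = 0
        for _ in range(sharers):
            # Each share exposes a random sample of users; a few reshare.
            for user in random.sample(range(N_USERS), AUDIENCE_PER_SHARE):
                if user not in exposed:
                    exposed.add(user)
                    if random.random() < P_HUMAN_SHARE:
                        new_sharers += 1
        sharers = new_sharers
        if with_bots:
            sharers += N_BOTS  # bots keep re-injecting the message
    return len(exposed)

print("reach without bots:", simulate(rounds=5, with_bots=False))
print("reach with bots:   ", simulate(rounds=5, with_bots=True))
```

In this toy model, the bot-amplified version reaches far more users, because the bots keep re-injecting the message after organic interest fades – the amplification pattern described above.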
Social media disinformation should not be considered in isolation. There is a very real crossover between actions in the virtual and physical worlds. For example, even prior to the war, there were concerns that Russia was staging physical-world provocations in order to provide ‘evidence’ for the narratives being woven in the information space and justification for its actions. Conversely, genuine historical material is misrepresented and repurposed in support of these narratives, and footage of real events may be portrayed as false. Such material is then spread through traditional media outlets and social media. In fact, there is considerable crossover between different vectors for false information. A report published by Demos in 2019 argues that we need to consider the role traditional media organisations play in both spreading false information and selectively amplifying genuine material.
A pertinent example of this is the false claim, reported in the Guardian, that the US was operating bioweapons facilities in Ukraine. The story originated in the US QAnon conspiracy movement, and seems to be rooted in a long-running Russian disinformation narrative. The claim was then repeated by the Fox News TV presenter Tucker Carlson. Carlson’s broadcasts were then picked up and repeated by Russian media outlets, as urged by the Russian government.
So, social media is only part of a bigger ecosystem in which false information circulates. However, it is still an important part, given that many people now get their news from social media platforms. One of the ways false content spreads on social media is through human action. One mechanism is users directly sharing material to their networks of friends and followers. Another is algorithmic propagation: by engaging with a post, for example by ‘liking’ it, we can cause the platform’s algorithms to show it more widely.
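The second mechanism can be sketched in code. Real platform ranking systems are proprietary and far more complex, so the weights and fields below are invented purely to illustrate the general principle: interactions raise a post’s score, and the score determines how widely it is shown.

```python
from dataclasses import dataclass

# Sketch of engagement-weighted feed ranking. The weights are invented
# assumptions; real platforms use far more signals than these.
@dataclass
class Post:
    text: str
    likes: int
    shares: int
    clicks: int

def engagement_score(post: Post) -> float:
    # Assumed weights: shares count more than likes, likes more than clicks.
    return 3.0 * post.shares + 1.0 * post.likes + 0.5 * post.clicks

feed = [
    Post("sober news report", likes=40, shares=5, clicks=60),
    Post("outrage-bait falsehood", likes=90, shares=50, clicks=200),
]

# The falsehood outranks the accurate post purely on engagement, which
# is why even clicking on it to scoff helps it reach more people.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):6.1f}  {post.text}")
```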
So, how does this affect you personally? What can you believe, and what can you do?
One of the techniques of disinformation operatives is to flood the information space, creating uncertainty with so many conflicting narratives that one does not know what to believe. If we want to reduce the problem, we can avoid amplifying those narratives. As I have previously argued in an article published on this blog, you should not engage with anything you think may be untrue. Don’t share it, or even click on it. Don’t quote it, even if you are trying to argue that it is false – that spreads it further.
But how do we know what is true and what is not? Journalists and fact-checking organisations do a great job of debunking false information, but their efforts lag behind the spread of falsehoods: they are always playing catch-up. Furthermore, as noted above, biased media outlets may actually make the problem worse, and disinformation creators sometimes present themselves as fact-checkers. Therefore, we all need to be careful about what we share and interact with online. The UK government’s SHARE checklist is useful guidance. As individuals, we’ll never get it completely right – but being careful about what we click can reduce the chance that we will amplify harmful lies.
Author’s biography:
Tom Buchanan is a Professor of Psychology at the University of Westminster, School of Social Sciences. He has been conducting research on how people interact online for many years. Since around 2016, much of his research has focused on why ordinary social media users share false information online. He is currently Principal Investigator on a 2-year Leverhulme Trust project examining people’s motivations for sharing political information on social media.