This explainer was updated in May 2023. The original version was published in 2019.
Disinformation. Fake news. Propaganda. Whatever you call it, one of the biggest challenges facing societies all over the world is the spread of false or misleading information through social media and communications platforms, often causing very real harms. While this issue is not new, the ease with which information can now be spread to large numbers of people, often anonymously and privately, has increased the stakes, and made the need to address it more urgent.
A wide range of stakeholders around the world are trying to work out how to respond, with approaches ranging from industry-led initiatives like the EU’s Code of Practice on Disinformation, to multi-stakeholder projects like the Verified Campaign by the United Nations and Purpose, to data-sharing initiatives such as the Disinformation Analysis & Risk Management Framework by the DISARM Foundation.
However, when it comes to national level laws and policies on disinformation—such as Singapore’s Protection from Online Falsehoods and Manipulation Act or Ethiopia’s Proclamation to Prevent the Spread of Hate Speech and False Information—many of the proposals being put forward raise real risks to freedom of expression.
In this blog post, we set out why a human rights-based approach can be helpful, both in terms of identifying harms to be addressed, and in devising appropriate responses. We also share relevant tools and guidance, including our framework for analysing disinformation laws from a human rights perspective, the wealth of analysis on anti-disinformation laws and actions in Sub-Saharan Africa captured in LEXOTA, and the insights, guidance and news collated in our disinformation resource hub.
WHAT IS DISINFORMATION?
While the term “fake news” is commonly used to describe false or inaccurate information, it can be an unhelpful way of referring to the problem, in part because it covers a range of very different types of false or misleading information. First Draft has developed a taxonomy of seven types of online content that could be labelled “fake news”:
- Satire or parody: There is no intention to cause harm, but there is the potential to fool.
- False connection: When headlines, visuals or captions don’t support the content.
- Misleading content: The misleading use of information to frame an issue or individual.
- False context: When genuine content is shared with false contextual information.
- Imposter content: When genuine sources are impersonated.
- Manipulated content: When genuine information or imagery is manipulated to deceive.
- Fabricated content: When content is 100% false and designed to deceive and do harm.
Using this taxonomy, something which is called “fake news” might not be fake at all—nor would it necessarily even be what you’d ordinarily call “news”—meaning the term is inaccurate in and of itself. A further problem is that the term “fake news” has been appropriated by many politicians and their supporters to denigrate coverage or reporting which they simply dislike. For these and other reasons, there is a growing acknowledgment that other terms should be used in its place. Two of the most commonly put forward are “disinformation” and “misinformation”. While neither has a universally accepted and used definition, example definitions include:
- Disinformation: False, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm or for profit.
- Misinformation: The inadvertent or unintentional spread of false or inaccurate information without malicious intent.
These terms have many advantages over “fake news”, in that they more clearly set out the scope of the particular type of information, the harm caused, and the relevant intent (or lack thereof).
WHY IS DISINFORMATION A HUMAN RIGHTS ISSUE?
Disinformation is a human rights issue because, simply put, its spread can harm a range of human rights, including:
- The right to free and fair elections (Article 25, International Covenant on Civil and Political Rights [ICCPR]): For an election to be free and fair, voters need to have accurate information about the parties, candidates and other factors when they vote. Incorrect information may influence the way that individuals vote, and there are numerous resources (such as the National Democratic Institute’s Playbook for Combating Information Manipulation during Elections) which highlight how the results of elections and referendums may be influenced by disinformation.
- The right to health (Article 12, International Covenant on Economic, Social and Cultural Rights): Inaccurate information about health care and disease prevention, such as false information on risks associated with vaccines, may deter people from taking healthcare decisions that protect their health, putting them (and others) at greater risk. For example, during the COVID-19 pandemic, health-based mis- and disinformation increased vaccine hesitancy and negatively impacted public health in nearly every country worldwide.
- The right to freedom from unlawful attacks upon one’s honour and reputation (Article 17, ICCPR): Disinformation often relates to a particular individual—particularly political and public figures, as well as journalists—and is designed to harm that person’s reputation.
- The right to non-discrimination (Articles 2(1) and 26, ICCPR): Disinformation sometimes focuses on particular groups in society—such as migrants, or certain ethnic groups—and is designed to incite violence, discrimination or hostility.
However, inappropriate policy responses to disinformation can, themselves, also pose risks to human rights, particularly the right to freedom of expression (Article 19, ICCPR).
The Protection from Online Falsehoods and Manipulation Act (POFMA) in Singapore, for example, criminalises the publication and circulation of “fake news” which may be detrimental to: national security; public health, safety, tranquillity or finances; friendly international relations with other countries; electoral integrity; inter-group relationships; or public confidence in national authorities. In 2019, the UN Special Rapporteur on freedom of expression raised concerns about the law’s overbroad definition of falsehood, its lack of guidance around how to assess whether a statement is detrimental to public interests, and the severe penalties it introduces for individuals for sharing false news. The Special Rapporteur also noted that such provisions may have a chilling effect on freedom of expression, and may be used to criminalise criticism of the government or the expression of unpopular, controversial or minority opinions. A number of laws and policies in other regions, like Sub-Saharan Africa, raise similar concerns.
WHAT WOULD A HUMAN RIGHTS-BASED APPROACH LOOK LIKE?
Trying to tackle disinformation by identifying and removing all forms of information that are false and misleading is an impossible task—and, by casting the net so wide, it’s inevitable that legitimate forms of expression will end up being removed.
A human rights-based approach to disinformation would, by contrast, be designed and targeted towards addressing the adverse human rights impacts caused by disinformation, rather than all disinformation itself.
For policymakers considering developing a policy or law on disinformation, one of the first steps should be making sure that their approach is in line with international human rights law and standards, as laid out in our analytical framework for assessing disinformation laws from a human rights perspective. This might mean, for example, articulating clear legal requirements for objective harm to be caused before liability is attached to a particular piece of disinformation, or ensuring that determinations of what information would fall in scope are made by an independent judicial authority.
Policymakers should also consider alternatives to general legislative prohibitions on disinformation: approaches that counter its social drivers, such as providing media and information literacy programmes, or encouraging independent and verified fact-checking. Policymakers should also consider how they can address business models of online platforms which exacerbate the problem: for example, by implementing effective data protection legislation which tackles micro-targeting and surveillance advertising based on user data, or mandating greater transparency around advertising and political campaigning, particularly at election times.
WHAT CAN HUMAN RIGHTS DEFENDERS DO?
The challenge that disinformation poses to human rights, coupled with legislative proposals which themselves threaten the right to freedom of expression, necessitates engagement by human rights defenders. Human rights defenders are, after all, uniquely able to promote a human rights approach when developing responses to policy challenges, including disinformation.
While models of best practice are few and far between, those engaging on the issue from a human rights perspective can draw upon support from resources such as the UN Human Rights Council’s 2022 Resolution on the role of States in countering the negative impact of disinformation on the enjoyment and realisation of human rights; the 2017 Joint Declaration of the Special Rapporteurs on freedom of expression of the UN, OSCE, OAS and AU; and the joint report by Access Now, the Civil Liberties Union for Europe, and European Digital Rights (EDRi) on good policymaking when it comes to disinformation.
Those working in Sub-Saharan Africa can use LEXOTA—a tool which tracks and analyses government responses to disinformation across the region using our human rights-based analytical framework—to support their advocacy with governments and policymakers for more rights-respecting responses to disinformation.
Following the debate around disinformation? Sign up to our Digest for monthly commentary and analysis in your inbox.