A human rights-based approach to disinformation

15 Oct 2019

By Richard Wingfield

Disinformation. Fake news. Propaganda. Whatever you call it, one of the biggest challenges facing societies all over the world is the spread of false or misleading information through social media and communications platforms, often causing very real harms. While this issue is not new, the ease with which information can now be spread to large numbers of people, often anonymously and privately, has increased the stakes, and made the need to address it more urgent. 

From the EU’s Code of Practice on Disinformation to Malaysia’s Anti-Fake News Act, policymakers around the world are trying—and struggling—to work out how to respond. Unfortunately, many of the proposals being put forward raise real risks to freedom of expression. In this blog post, we set out why a human rights-based approach can be helpful, both in terms of identifying harms to be addressed, and in devising appropriate responses.

*

What is disinformation?

While commonly used, the term “fake news” is not the most helpful when trying to identify the particular challenge to be addressed. Part of the problem with the term is that it covers a range of very different types of false or misleading information. First Draft has developed a taxonomy of seven different types of such information and online content that could be considered as “fake news”:

  • Satire or parody: There is no intention to cause harm, but there is the potential to fool.
  • False connection: When headlines, visuals or captions don’t support the content.
  • Misleading content: The misleading use of information to frame an issue or individual.
  • False context: When genuine content is shared with false contextual information.
  • Imposter content: When genuine sources are impersonated.
  • Manipulated content: When genuine information or imagery is manipulated to deceive.
  • Fabricated content: When content is 100% false and designed to deceive and do harm.

Using this taxonomy, something which is called “fake news” might not be fake at all—nor would it necessarily even be what you’d ordinarily call “news”—meaning the term is inaccurate in and of itself. A further problem is that the term “fake news” has been appropriated by many politicians and their supporters to denigrate coverage or reporting which they simply dislike. For these and other reasons, there is a growing acknowledgment that other terms should be used in its place. Two of the most commonly put forward are “disinformation” and “misinformation”. While neither has a universally accepted and used definition, example definitions include:

  • Disinformation: False, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm or for profit.
  • Misinformation: The inadvertent or unintentional spread of false or inaccurate information without malicious intent.

These terms have many advantages over “fake news”, in that they more clearly set out the scope of the particular type of information, the harm caused, and the relevant intent (or lack thereof).

*

Why is disinformation a human rights issue?

Disinformation is a human rights issue because, simply put, the spread of disinformation can cause harm to a range of human rights, some more than others:

  • The right to free and fair elections (Article 25, International Covenant on Civil and Political Rights [ICCPR]): For an election to be free and fair, voters need to have accurate information about the parties, candidates and other factors when they vote. Incorrect information may influence the way that individuals vote, and there are numerous reports (such as NYU Stern’s report on the upcoming 2020 elections in the USA) which highlight how the results of elections and referendums may be influenced by disinformation.
  • The right to health (Article 12, International Covenant on Economic, Social and Cultural Rights): Inaccurate information about health care and disease prevention, such as false information on risks associated with vaccines, may deter people from taking healthcare decisions that protect their health, putting them (and others) at greater risk. For example, one of the reasons that the Ebola outbreak in West Africa proved so challenging to contain was the amount of disinformation and misinformation spread about the virus.
  • The right to freedom from unlawful attacks upon one’s honour and reputation (Article 17, ICCPR): Disinformation often relates to a particular individual—particularly political and public figures, as well as journalists—and is designed to harm that person’s reputation.
  • The right to non-discrimination (Articles 2(1) and 26, ICCPR): Disinformation sometimes focuses on particular groups in society—such as migrants, or certain ethnic groups—and is designed to incite violence, discrimination or hostility.

However, inappropriate policy responses to disinformation can, themselves, also pose risks to human rights, particularly the right to freedom of expression (Article 19, ICCPR).

The Anti-Fake News Act in Malaysia, for example, criminalises the publication and circulation of “fake news”, defined to include any information which is “wholly or partly false”—even if no harm is caused—leading the UN Special Rapporteur on freedom of expression to observe that it would lead to “censorship and the suppression of critical thinking and dissenting voices”. Indeed, the law has already seen a Danish citizen imprisoned for “inaccurate” criticism of the Malaysian police. Despite this, the law has inspired similar legislative proposals elsewhere in the region, notably in Singapore and the Philippines.

*

What would a human rights-based approach look like?

Many of the current approaches to tackling disinformation aim to identify and remove all forms of information that are false and misleading. It’s an impossible task—by casting the net so wide, it’s inevitable that legitimate forms of expression will end up being removed.

A human rights-based approach to disinformation would, by contrast, be designed and targeted towards addressing the adverse human rights impacts caused by disinformation, rather than all disinformation itself.

What would this mean in practice? It might mean, for example, clear legal requirements for objective harm to be caused before liability is attached to a particular piece of disinformation. Or greater efforts to improve the digital literacy and critical thinking of internet users, reducing the impacts that disinformation has, rather than resorting to legislation at all.

For policymakers considering developing a policy or law, one of the first steps should be making sure that their approach is in line with international human rights law and standards. Some initial guiding questions might be:

  • Does your policy include any restrictions on particular forms of speech or content? If so, are these restrictions set out in law? Restrictions which have no legal basis will not comply with international human rights law and standards.
  • Is there clarity over the precise scope of the law? General prohibitions based on vague or ambiguous ideas such as “false news” or “non-objective information”, for example, would fail this test.
  • Is speech or content restricted only where it is in pursuance of a legitimate aim? That is, only where it causes a particular harm to an individual’s human rights, or to a society’s legitimate interest (such as the protection of democracy).
  • Do any restrictions in the law account for instances where the individual reasonably believed the information to be true?
  • Are determinations of whether speech or content is disinformation made by an independent and impartial judicial authority?
  • Are any responses or sanctions proportionate? Heavy fines, imprisonment, and the blocking of websites, for example, are all likely to be disproportionate.
  • Are intermediaries liable for third party content? This should only be permitted in instances where (a) an intermediary specifically intervenes in that content; or (b) an intermediary refuses to obey an order adopted in accordance with due process guarantees by an independent, impartial, authoritative oversight body (such as a court) to remove it, and they have the technical capacity to do so.

*

What can human rights defenders do?

The challenge that disinformation poses to human rights, coupled with legislative proposals which themselves threaten the right to freedom of expression, necessitates engagement by human rights defenders. Human rights defenders are, after all, uniquely able to promote a human rights approach when developing responses to policy challenges, including disinformation. 

While models of best practice are few and far between, those engaging on the issue from a human rights perspective can draw upon support from resources such as GPD’s booklet on disinformation and human rights; the 2017 Joint Declaration of the Special Rapporteurs on freedom of expression of the UN, OSCE, OAS and AU; and the joint report by Access Now, the Civil Liberties Union for Europe, and European Digital Rights (EDRi) on good policymaking when it comes to disinformation.

*

Following the debate around disinformation? Sign up to our Digest for monthly commentary and analysis in your inbox.