Global Partners Digital has submitted a contribution to the Forum on Information and Democracy’s Working Group on Infodemics.
The Working Group issued a call asking experts, academics and jurists to provide recommendations addressing: the meta-regulation of content moderation; platforms’ design and the reliability of information; rules for messaging systems; and the transparency of structuring platforms. Its purpose is to draw up legal recommendations that address these structural challenges in order to tackle disinformation, misinformation and infodemics.
In our contribution, we propose several specific recommendations, including:
- States should avoid content-based restrictions on disinformation, particularly through criminal laws, which should be reserved for the most severe circumstances where there is an intention to cause a clear, objective public harm.
- Platforms should not be expected to make determinations on the legality of content under national law. It should be up to platforms to decide what terms of service and content moderation policies they apply to content that is legal (even if harmful), including disinformation. These policies should be clearly set out, and consistently enforced, with independent oversight to ensure this.
- States should require certain platforms to submit transparency reports or relevant information on their advertising, targeting practices, and algorithmic decision-making, particularly as they relate to political advertising and public health crises. Such requirements should only apply to platforms where there is clear evidence of harm being caused or facilitated by their services.
- States should consider measures that facilitate appropriate data sharing by platforms to designated third parties. We recommend the data trust model as one solution, with researchers given the ability to independently assess and report findings.
- States need to develop and effectively enforce data protection legislation that tackles the micro-targeting and surveillance of users. Policymakers should examine existing frameworks, such as the GDPR, as useful models for potential legislation.
- States should consider legislation that requires platforms to enable users to understand what controls or algorithms are in place, including the data points used to make recommendations. This information should be provided in an accessible format.
- States should consider legislation that requires platforms to let users have a say in whether algorithmic recommender systems are applied, and have the option to turn them off, or instead only have them base recommendations on specific data points agreed to by the user.
- Platforms should adopt measures that limit the virality of false or misleading content shared on messaging apps, and research further options to quell the spread of such content without undermining privacy or freedom of expression.
- States should consider measures that encourage companies to allow their users to report disinformation, even on private or encrypted channels. They should also encourage companies to conduct further research on limiting the virality of disinformation on their services in a rights-respecting manner.
The submissions to this call will serve as a basis for the Working Group’s ongoing work, which aims ultimately to encourage both the adoption of regulation by governments and international organisations, and self-regulation by platforms.
For regular updates on this process, and other important stories in the online content regulation debate, sign up to our monthly Digest.