Online platforms and spaces are critical to the enjoyment of freedom of expression in the digital age. But they can also facilitate the spread of harmful content, such as disinformation and hate speech.
In recent years, this has prompted increasing regulation by governments and stricter moderation by platforms. These responses tend to be developed in closed spaces and often fail to reflect human rights standards.
At GPD, our focus is twofold: making the spaces where content policies are developed more open, inclusive and transparent; and engaging directly to ensure approaches to content regulation and moderation are rights-respecting.
- NEWS, 22 Mar 2023: GPD inputs to Ofcom’s call for evidence on protecting children from legal content that is harmful to them
- PUBLICATION, 9 Mar 2023: Evading accountability through internet shutdowns: Trends in Africa and the Middle East
- NEWS, 1 Mar 2023: GPD calls on UK government not to expand criminal liability for social media managers in Online Safety Bill
- NEWS, 8 Feb 2023: GPD inputs to upcoming UNSR report on FoE and sustainable development
- NEWS, 23 Jan 2023: GPD provides comments on UNESCO content regulation guidance
- POST, 20 Dec 2022: The return of the UK’s Online Safety Bill: what’s changed and what’s next
- PUBLICATION, 10 Oct 2022: Engaging Tech Companies on Human Rights: A How-To Guide for Civil Society
- POST, 26 Sep 2022: The ITU: a brief explainer