Online platforms and spaces are critical to the enjoyment of freedom of expression in the digital age. But they can also facilitate the spread of harmful forms of content—like disinformation and hate speech.
In recent years, this has prompted increasing regulation by governments and stricter moderation by platforms. These responses tend to be developed in closed spaces and often fail to sufficiently reflect human rights standards.
At GPD, our focus is twofold: making the spaces where content policies are developed more open, inclusive and transparent; and engaging directly to ensure approaches to content regulation and moderation are rights-respecting.
- POST, 14 Jul 2022: The EU Digital Markets Act: is interoperability the way forward?
- POST, 5 Jul 2022: How will the Digital Markets Act affect human rights? Four likely impacts
- SERIES, 27 Jun 2022: Disinformation Resource Hub
- EVENT, 6 Apr 2022: Government responses to online disinformation across Sub-Saharan Africa: A new tool for human rights defenders
- POST, 4 Apr 2022: The UK’s Online Safety Bill poses serious risks to human rights
- POST, 16 Mar 2022: Ukraine: what are the likely implications for norms and discussions in cyberspace?
- POST, 2 Mar 2022: Marginalised languages and the content moderation challenge
- NEWS, 28 Feb 2022: GPD and other organisations submit joint input to OHCHR on tech and BHR