In recent years, we have seen growing pressure on platforms from governments to ‘do more’ to remove unlawful and harmful content – whether by removing it more rapidly or by introducing algorithms and other tools to detect it automatically.
So far, these calls have received a mixed response from platforms, which prefer to deal with these problems on their own or, less frequently, through multistakeholder partnerships. But seemingly arbitrary and non-transparent decision-making by platforms has drawn further criticism, not only from governments but also, in particular, from those concerned about potential adverse impacts on freedom of expression.
There is a clear need for a model of content regulation by platforms that is both consistent with their responsibility to respect their users’ right to freedom of expression and responsive to the legitimate concerns of governments, rendering more interventionist proposals unnecessary. This white paper proposes such a model – one which respects human rights and meets governments’ legitimate interest in having unlawful and harmful content removed.