13 Feb 2019

Facebook’s Oversight Board: what’s at stake, and what we know so far

Toward the end of 2018, Mark Zuckerberg set out a new blueprint for content moderation policies at Facebook. Among the proposals in the blueprint was a new “Oversight Board”, which would enable both users and Facebook to submit content-related decisions to independent review. At the end of last month, Facebook provided some additional detail on the Board, and opened a consultation on some of the key questions the company is still thinking through.

In this post, we look at the context and background to Facebook’s Board, explain why human rights defenders should care, share some practical advice on engaging in the consultation and set out our initial thoughts on the content of the proposal.

*

Why is Facebook doing this?

The background to this proposal is, of course, the intense public scrutiny and criticism of Facebook’s content moderation policies over the last year. Different groups of users have variously argued that Facebook removes too much content, too little content, and is insufficiently transparent in its decision-making – while governments around the world are threatening to regulate online platforms directly unless more is done to tackle illegal and harmful content. Zuckerberg and his team are likely hoping that this new, independent body will – as well as improving decision-making – take some of the heat off Facebook.


Why should we care?

Facebook and the other tech giants are some of the main gatekeepers of freedom of expression online. More than almost anything else, it is their rules and codes of conduct – until now, developed, implemented and enforced in relative secrecy – which determine what types of speech and other content are acceptable in the digital environment. This means that restrictions of freedom of expression are relatively common, yet it is difficult for those affected to seek redress.

Facebook’s proposal therefore represents a potentially significant change to the rules of the game. If realised, it would mean that the role of “gatekeeper” would, in some aspects, shift from the platform itself to an independent oversight body, with the power to arbitrate on controversial or disputed decisions around content. If this model were replicated by other platforms, the digital environment would come to more closely mirror the offline world – where the rules of what is and isn’t permissible speech are established by legislation, but their enforcement can be challenged and reviewed (typically by courts). Over time, review mechanisms such as these might be able to encourage platforms to develop and implement policies which more fully respect the right to freedom of expression.

This is therefore a real opportunity for human rights defenders. But strong engagement is critical. If Facebook’s proposal ends up being watered down, for example, or is implemented ineffectively (e.g. with low transparency or a non-diverse Board composition), there’s a risk that the overall model of independent oversight could be discredited. On the other hand, a diverse and transparent Board, which had the necessary contextual and cultural expertise and recognised freedom of expression as a core value, could greatly enhance the enjoyment of that right by Facebook’s two billion users.


What is the consultation asking, and how can I engage?

The questions being asked at this stage cover two key areas: the composition of the Board and how it will make its decisions.

On the composition side, the questions are relatively practical. How many individuals should be on the Board? How should they be selected? How long should their terms last?

The questions around how the Board will make its decisions are, by contrast, trickier. How should decisions be brought to the Board’s attention? How should the Board choose which cases to review? How will it acquire the necessary expertise to make decisions, and how will its independence be ensured? This section also asks how the risk of inconsistent decisions being reached can be mitigated.

Despite asking some critical questions, the consultation is somewhat vague on several points – notably the actual process by which answers to the questions it poses can be shared. All we know so far is that there will be workshops over the next six months in a range of cities across the world (including Singapore, Delhi, Nairobi, Berlin, New York and Mexico City). We don’t, however, know when they’ll take place, or whether they will be open or closed (although it’s implied that they will be invite-only). Facebook has also said that it’s interested in reaching other stakeholders, and will be announcing more on how proposals can be submitted “in the coming weeks”.


Our first thoughts

For the reasons set out above, at GPD we think this is an important exercise – and one which all organisations concerned about freedom of expression online should engage with. We’re pleased to see that the consultation is soliciting views on the full range of issues that need to be considered. In our own response, we’ll be making the case for as broad and diverse a Board as possible, so that decisions are informed by a wide range of experience and expertise. This model, we intend to argue, should also have provision for seeking specialist knowledge, evidence and insight when needed.

Above all, we’ll be arguing that freedom of expression should be a core consideration when the Board makes its decisions, and that the Board should provide advice to Facebook on how its policies could be further revised and improved in the future. We also want to see as much meaningful transparency as possible, with reasons provided for any decisions reached, and efforts made to ensure that underrepresented groups are not sidelined in the Board’s composition and outreach.

We’ll be following this process, and continuing to engage ourselves, so stay posted for more updates. And if you’re interested in collaborating, get in touch with us at richard@gp-digital.org.