08 Jan 2020

Online Safety Legislative Reform in Australia: our first thoughts


2020 is likely to be a big year for legal developments when it comes to social media platforms.

While governments have, in the last few years, made sporadic efforts to tackle what they consider to be “harmful” content, these have largely been focused on specific types of content and limited in scope. In the near future, however, we are likely to see legislation brought forward which sets out efforts to tackle a much wider range of online “harms” through single regulatory frameworks in the EU, Canada, Australia and the UK.

While plans are still at an early stage in the EU and Canada, in Australia and the UK the process is gathering pace. Last year, the UK published a White Paper setting out its proposals (see our reaction to it here), and a response and legislation are expected imminently. In Australia, the government launched a consultation on a new Discussion Paper shortly before Christmas.

The proposals in Australia seek to build on the government’s efforts in recent years (there have been four pieces of legislation dealing with online safety since 2015), setting out a comprehensive new framework that deals with a wide range of different harms. GPD recognises the legitimate desire of governments worldwide to tackle unlawful and harmful content online, and many of the proposals put forward in the Discussion Paper are reasonable and sensible. However, some could pose risks to individuals’ right to freedom of expression online. Below, we set out what we welcome in the Discussion Paper, as well as areas where we think more thought is needed.

What we welcome

  • Different responses for different problems: While the UK has proposed a single solution (namely, a duty of care on social media platforms to prevent harm), the Australian Discussion Paper recognises that the different harms people face online require different responses. Rather than taking a “one size fits all” approach, it breaks the harms down into three categories, each with its own proposals.
    • For the most serious harms, namely terrorist material and extreme violent material, Australia’s eSafety Commissioner would be able to block certain domains containing such material for a time-limited period. It is likely that this power would be used in situations comparable to the Christchurch attack, where the perpetrator live-streamed the attack on social media.
    • For other serious harms, such as online abuse which amounts to a criminal offence, the non-consensual sharing of intimate images, and child sexual abuse imagery, the eSafety Commissioner would have the power to order the removal of content. These types of serious harm are generally clearly and narrowly defined in Australian law.
    • For other forms of illegal and harmful content (such as pornography or violent imagery), companies would be required to develop industry codes of practice setting out how they will address them, with a particular focus on protecting children. Only where the eSafety Commissioner concluded that a code was ineffective would the Commissioner have the power to set a code of their own.
  • Space for self-regulatory initiatives: The Discussion Paper recognises the efforts that some social media platforms are already making to address online harms, including initiatives with the eSafety Commissioner and other agencies. As a result, much of the Discussion Paper is focused on supporting these continued efforts, rather than directly regulating platforms. For example, the Discussion Paper highlights the voluntary Safety by Design principles being developed by the eSafety Commissioner and the government’s own Online Safety Charter. The Discussion Paper proposes that the government be able to set out (voluntary, non-binding) basic online safety expectations (BOSE), based on these two initiatives. The eSafety Commissioner would have the power to order transparency reports on what social media companies are doing to comply with the BOSE, but the only sanction for non-compliance would be a civil fine. Similarly, for those types of harms where the eSafety Commissioner would not have the power to order removal of content, platforms themselves would be responsible for developing and enforcing codes of practice to address them.
  • A focus on addressing, rather than removing, content: With the exception of a small range of content which would already be unlawful under Australian law, the Discussion Paper uses the language of companies needing to “address” unlawful and harmful content, rather than simply “preventing” or “removing” it. This gives companies much-needed discretion in how harmful content is managed on their platforms. It could mean, for example, that users have greater control over whether they opt in to or out of accessing certain forms of content, or that content is preceded by a warning rather than simply removed.
  • Decision making remaining with public bodies: The Discussion Paper does not propose making social media platforms decide whether a particular piece of content is unlawful. Instead, it would be the eSafety Commissioner, a public body, that would make this decision and then notify the platform. This is a welcome approach, as it avoids the privatisation of law enforcement that results when platforms are made to judge legality themselves. Decision making by a public body means a far greater level of transparency and accountability, and the potential to challenge decisions in court if necessary.


Where more thought is needed

  • Minimal recognition of the need to ensure freedom of expression: The Discussion Paper does recognise, albeit cursorily, that its proposals will have an impact on the right to freedom of expression; and the new Online Safety Act would include a statement of regulatory policy expressing an intention to “balance the competing objectives of user safety and freedom of expression”. However, the proposals say nothing else about how risks to freedom of expression would be mitigated in practice. Given that Australia has no constitutional or legislative protection of freedom of expression, it is critical that this new online safety legislation contains effective protections, particularly when it comes to expectations on social media platforms to address content which, while potentially harmful, is not illegal. The government should consider how risks to freedom of expression can be mitigated in the new framework.
  • Potential for scope expansion: Under the proposals, the eSafety Commissioner would have the power to order the removal of only a limited range of content: cyberbullying of children, unlawful cyber abuse of adults, the non-consensual sharing of intimate images, and “seriously harmful content” (which would include child sexual abuse imagery, abhorrent violent material, and content promoting or inciting serious crime). All of these terms are relatively clearly defined in Australian law, and, as noted above, decisions about whether content fell within a definition would lie with the eSafety Commissioner, rather than with platforms. However, the Discussion Paper would allow the government to determine new forms of “seriously harmful content”, with no guarantee that any new forms would have to be unlawful or defined equally clearly. There should therefore be a requirement that any new form of “seriously harmful content” be already unlawful and clearly defined in Australian law.
  • Lack of appeal mechanisms: The Discussion Paper expands the types of harmful content which the eSafety Commissioner can order to be removed. However, the proposals do not suggest that users would be able to challenge the Commissioner’s decisions to remove content. It is important that a simple and accessible mechanism for challenging decisions, beyond judicial review, is available to users.


Next steps

We’ll be responding in full to the proposals in the Discussion Paper in our formal submission to the three-month consultation, which ends on 19 February. That submission will then be published on the GPD website.