20 Jan 2022

The Irish Online Safety and Media Regulation Bill: our thoughts

Last week, the Irish government published the Online Safety and Media Regulation Bill, which aims to update regulations for broadcasting and video-on-demand services and create a new regulatory framework for online safety in Ireland.

The Bill would establish a new regulator, the Media Commission, and provide for the appointment of an Online Safety Commissioner responsible for overseeing the online safety framework, including compliance with new binding online safety codes. The Commission would also be able to impose sanctions on services for non-compliance, including financial sanctions of up to €20 million or 10% of turnover.

The proposals have been in development for several years, and follow a series of workshops and public consultations with external stakeholders (read our input into them here). But with the final version of the Bill now published, how does it measure up from a human rights perspective? 


Scope of content 

The Bill divides “harmful online content” into two distinct groups. The first group covers the most serious types of illegal material, with the Bill exhaustively and clearly listing the relevant criminal offences, such as those relating to child sexual abuse material, encouraging terrorism and inciting hatred. This is welcome from a human rights point of view.

However, the second kind of “harmful online content” under the proposal covers categories of content which are currently lawful and less clearly defined, posing potential risks to freedom of expression. One particularly concerning category is “online content by which a person bullies or humiliates another person”. The Bill does not define bullying or humiliation, making it difficult to determine whether particular content falls within this category. This is somewhat mitigated by provisions making clear that the content must either give rise to a reasonably foreseeable risk to a person’s life or cause significant harm to a person’s physical or mental health. Even with this caveat, however, it will remain difficult to know whether an individual piece of content falls within scope.


Scope of services

Under the provisions of the Bill, the Commission would be able to determine which online services and categories of services fall within the scope of the proposed framework. We are pleased that this power is grounded in a risk-based approach, requiring the Commission to consider the nature and scale of the service, the levels of availability of harmful online content on it, and the rights of users (which would include their human rights).

At the same time, we remain concerned that the scope of services fails to adequately carve out exemptions for encrypted services, posing risks to individuals’ right to privacy. The Bill provides that “an online safety code applying to an interpersonal communications service or a private online storage service applies to that service only in so far as it relates to content that falls within one of the offence-specific categories of online content”. This means that while these services may be within scope, they can only be expected to take measures in relation to illegal content, as opposed to the categories of lawful but harmful content. This is not much of a safeguard, however: obligations may still be placed on services that use end-to-end encryption, which limits their ability to filter or monitor content. If the online safety codes were to include requirements to monitor or filter content, it is difficult to see how services that use end-to-end encryption could comply without removing or weakening encryption, which is critical for the safety of many who rely on the privacy it provides, such as human rights defenders, journalists and vulnerable groups.


Duties and responsibilities

The online safety codes under the Bill would set out the measures that service providers should take to “minimise the availability of harmful online content and risks arising from the availability of and exposure to such content”, as well as “other measures that are appropriate to protect users of their services from harmful online content”. These could include “standards, practices or measures relating to the moderation of content”, “the assessment by service providers of the availability of harmful online content on services, of the risk of it being available, and of the risk posed to users by harmful online content”, and the “handling by service providers of communications from users raising complaints or other matters”.

We are pleased that the proposals in relation to the online safety codes reflect a balanced and proportionate approach to online platform regulation by primarily focusing on systems and processes. The Commission would be required to consider a variety of matters when preparing codes, including the desirability of services having transparent decision-making processes in relation to content delivery and content moderation, as well as the rights of providers of designated online services and of the users of those services. The Commission would also be required to develop a code requiring services to report at intervals on their handling of communications from users raising complaints. These codes would be complemented by additional guidance materials and advisory notices from the Commission to further support services and promote online safety.

Depending on what the online safety codes ultimately contain, these types of obligations may be beneficial from a human rights perspective, providing greater clarity on the standards and processes employed by designated services. They could also provide information which is helpful to the Commission in determining whether human rights, particularly freedom of expression, are being upheld online, or what changes may be needed. 

We do, however, have some concerns surrounding the potential for proactive and general monitoring of content, and an increased use of automated processes, as a result of the measures contained in the final online safety codes. The Bill requires the Commission to factor in the impact of automated decision-making when preparing codes, but any codes which compel or incentivise the proactive monitoring of content or the use of automated processes would pose risks to human rights, particularly the rights to privacy and freedom of expression. Such tools invariably result in the removal of permissible content and in discriminatory implementation, and should be used only where they are sufficiently accurate, and always accompanied by transparency around the standards applied and appropriate appeal mechanisms.


Concerns moving forward

We are pleased that the Bill incorporates a number of the recommendations put forward by the Joint Oireachtas Committee on Tourism, Culture, Arts, Sport and Media, including more clearly defining harmful online content and establishing reporting requirements for designated online services on complaint handling. But we remain concerned that recommendations which pose risks to human rights and were not included in this version (e.g. adding disinformation as a category of harmful content) may still be incorporated at a later stage. The government has already signalled an openness to amending the Bill by announcing the establishment of an expert group to consider the development of an individual complaints mechanism for harmful content. Any amendments will need to be scrutinised and evaluated for their potential impacts on human rights.


Next steps

The Bill will soon be introduced into the Houses of the Oireachtas. We’ll be following the Bill’s progress closely—sign up to our monthly Digest for regular updates and analysis.