The long-awaited Online Safety Act (“the Act”) is finally here. Following its difficult road to Royal Assent, the Act promises significant change in the way online platforms and services deal with child safety online.  

Towards the end of last year, Ofcom published the first of its four consultations – “Protecting people from illegal harms online” – as part of its work to establish the new regulatory regime over the next 18 months. This consultation proposes guidance and draft codes of practice on how internet services that enable the sharing of user-generated content (“user-to-user services”) and search services should approach their new duties relating to illegal content.

We highlight some key points and initial comments on the draft guidance below. 

1. Service risk assessment guidance 

Ofcom provides draft guidance on how to carry out the risk assessments that the Act requires of all services in scope. It has to be said that this section is rather long-winded and repetitive in places, which is arguably not very business-friendly. Summarising it in full here would be impractical, so we have instead picked out what we consider to be the most interesting and salient points below.

The key takeaways are: 

  • There will not be a ‘one size fits all approach’ when it comes to risk assessments. A risk assessment by a major platform will look very different to a risk assessment by an SME. Even between two services of a similar size, the risk assessments they carry out could take a very different form depending on the type of service they provide, their audience and so on. 
  • Ofcom’s draft guidance is useful in some respects, but it does not provide a ‘ready to go’ template for carrying out risk assessments. Businesses will have further work to do in deciding how they will translate that guidance into an actual risk assessment process, which can be rolled out for their services.
  • If you have several products, or one product which has multiple distinct user-to-user services, you may have to do separate risk assessments for each one. In-house teams will need to consider how best to resource this. If a business does not already have an internal online safety role (or team) one may need to be designated. Given that the workload is not inconsequential, in some cases this may require new hires. 
  • Risk assessments will not be a cursory tick box exercise. They will need to be structured and recorded in writing. 
  • Decisions made following a risk assessment will need to be backed up by reasoning and based on demonstrable evidence, some of which may need to come from outside the business. 
  • Hitting a threshold of “low risk” for certain types of priority harms, such as child sexual abuse material (“CSAM”) and grooming, may be challenging, unless extensive work has been done to put systems in place to address these. 

Ultimately, Ofcom’s goal is to ensure that risk assessments are specific and tailored to the service, that service’s risk profile, and the harms involved. It advocates a four-step process to achieve this: 

Step 1: Understand the harms 

Step 2: Assess the risk of harm 

Step 3: Decide on measures, implement them, and keep a record 

Step 4: Report, review and update 

Taking a brief look at each of these in turn: 

Step 1: Understand the harms 

The first step is to identify the harms that need to be assessed. The guidance sets out 15 types of “priority” illegal harm as a useful starting point – these must be considered first. 

Businesses must consider the risk of the presence of illegal content, the risk of commission of priority offences, and the risk of the facilitation of those offences. Businesses will then have to consider non-priority illegal content. 

Having done this, businesses are asked to consult the Risk Profiles set out in the guidance, consider which of the risk factors apply to them, and make a record of this. 

Step 2: Assess the risk of harm 

The next step is to assess the risk of harm. For services already in scope, the first risk assessment will need to be carried out within three months of the final guidance being published. A risk assessment must also be carried out within three months of starting a new service, and before making significant changes to an existing service.

In doing so, businesses are required to: 

  • consider if there are any additional characteristics of the service which could increase risks of harm (not already present in the Risk Profiles mentioned above);
  • assign a risk level of High, Medium or Low to each of the 15 priority harms; and
  • consider the likelihood and impact of each risk, based on what Ofcom calls Core Inputs and Enhanced Inputs.

Core Inputs are evidence which will form the primary basis of a business’s decision-making when conducting a risk assessment and which (at least in theory) should be more readily available. Examples of Core Inputs include: risk factors identified through the relevant Risk Profile (as part of Step 1 above); user complaints and reports; user data; retrospective analysis of past incidents of harm; and “other relevant information”, including any other characteristics of the service that may increase or decrease risks of harm.

Enhanced Inputs are evidence which may take more work to obtain, possibly requiring input from third parties. Examples of Enhanced Inputs include: results of product testing; results of content moderation systems; consultation with internal experts on risks and technical mitigations; results of previous interventions to reduce online safety risks; views of independent experts; internally and externally commissioned research; outcomes of external audits or other risk assurance processes; consultation with users; and results of engagement with relevant representative groups.

When considering the likelihood of a harm, the basic guidance is: “the more risk factors, the more likely the harm”. Core evidence pointing to a harm actually occurring is also considered a tell-tale sign (e.g. frequent user complaints). If the likelihood of a harm is still unclear, businesses are required to look at Enhanced Inputs to help them reach a decision.  

When considering the impact of a harm, businesses are required to consider the reach of the service (more users = more potential impact), whether the use of recommender systems could increase the number of users seeing the content, the demographics of the service (for example, women and girls may be more affected by certain types of harm, such as intimate image abuse), and whether the service’s revenue model and commercial profile influence the way in which harm is experienced. It is not entirely clear what the last of these means in practice, or exactly which revenue models would be considered to increase the impact of harm relative to others.

For CSAM harms, there is separate and additional guidance, which we will not cover here in detail. 

The draft guidance mentions that having over 7 million monthly UK users could be an indicator of high impact (and therefore high risk), whereas between 700,000 and 7 million monthly UK users could be an indicator of medium impact, but both must be considered alongside other factors. 
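Purely by way of illustration (and not as a substitute for the guidance itself), the following Python sketch shows how these user-number thresholds might be encoded alongside a generic likelihood/impact matrix. The combination rule and function names are our own assumptions, not Ofcom’s methodology, and the guidance stresses that user numbers must be weighed alongside other factors.

```python
# Illustrative only: encoding the user-number impact indicators described in the
# draft guidance, combined with a generic likelihood/impact matrix. The matrix
# below is a common risk-assessment convention, not Ofcom's prescribed method.

def impact_indicator(monthly_uk_users: int) -> str:
    """Map average monthly UK users to an indicative impact level."""
    if monthly_uk_users > 7_000_000:     # over 7 million: possible indicator of high impact
        return "high"
    if monthly_uk_users >= 700_000:      # 700,000 to 7 million: possible indicator of medium impact
        return "medium"
    return "low"                         # reach alone does not suggest elevated impact

def risk_level(likelihood: str, impact: str) -> str:
    """Hypothetical combination of likelihood and impact into High/Medium/Low."""
    order = {"low": 0, "medium": 1, "high": 2}
    score = order[likelihood] + order[impact]            # 0..4
    return ["Low", "Low", "Medium", "Medium", "High"][score]

# Example: 2 million monthly UK users (medium impact indicator) combined with a
# "high" likelihood (e.g. frequent user complaints about the harm in question).
print(risk_level("high", impact_indicator(2_000_000)))   # -> "Medium" under this illustrative matrix
```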

It also states that if “there are comprehensive and effective systems and processes in place, or other factors which reduce risks of harm to users” this could be an indicator of low risk, meaning that services which have begun implementing systems ahead of the Act fully coming into force should be at an advantage when it comes to conducting risk assessments. 

Interestingly, where a service is “a file-storage and file-sharing service”, the guidance suggests that it should automatically be categorised as high risk for CSAM harms. The added footnote specifies that this type of service is “a user-to-user service that enables users to upload, store, manage and distribute digital media... A key characteristic of file storage and file sharing services is the provision of link sharing, allowing users to generate and share unique URLs or hyperlinks that directly lead to the stored content...This encompasses sharing files and also embedding stored content (such as images and videos) into external services.” 

Conversely, it appears that hitting the “low risk” threshold for CSAM is going to be difficult, as according to the guidance this will require businesses to show that they have “adopted measures that demonstrably ensure that image-based CSAM is highly unlikely to occur on the service” (emphasis added). 

There is also specific guidance and an accompanying risk table for risks related to grooming. Indications of high risk of grooming include: 

  • where a service includes child users when users are prompted to expand their networks, including through network recommender systems;
  • where a service allows users to view child users in the lists of other users’ connections; and 
  • where a service has user profiles or user groups which may allow other users to determine whether an individual user is likely to be a child. 

Step 3: Decide on measures, implement them, and keep a record 

The next step in the risk assessment is for the business to decide what measures it needs to implement to deal with the identified risks, implement them, and keep a record of what it has done.

The suggested measures, which businesses will need to consider, are published as a separate tear sheet alongside the consultation, at Table 1 (see here). The measures are sub-categorised into six bands by the size of the service and the nature of its risks, ranging from smaller services with low risk, to large multi-risk services. 

A “large” service is any service which has an average user base greater than 7 million per month in the UK, approximately equivalent to 10% of the UK population. All other services are considered “small”. 

It’s interesting to note that “large” but low-risk services are expected to adopt a significant portion of the proposed measures, purely by virtue of their size. This may be seen as a burden by services which fundamentally do not pose a significant risk to users, but happen to be popular.

The full table is relatively easy to read and worth consulting. As you might expect, as services move up in size and risk level, they will be required to adopt more onerous measures – for example, setting performance targets for content moderation functions and measuring whether those targets are being met (required for small multi-risk services and above).

Once the relevant measures are implemented, the outcomes of the risk assessment(s) must be recorded, as well as how the relevant duties have been met. The guidance sets out a list of items which the record must include. We will not reproduce it here in full, but it includes the following: when the risk assessment was done, who completed and approved the risk assessment, confirmation that the business has consulted Ofcom’s Risk Profiles which are relevant to its service, a list of evidence which has informed the assessment, the levels of risk assigned to each of the 15 priority illegal harms and any non-priority illegal harms (with an explanation of why), and information on how the business is taking appropriate steps to keep the risk assessment up to date. 
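As an illustration of what such a record might capture, the sketch below sets out an assumed data structure reflecting the items listed above; the class and field names are our own and this is not an Ofcom template.

```python
# Assumed structure for the written record described above; not an Ofcom template.
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class HarmAssessment:
    harm: str            # one of the 15 priority illegal harms, or a non-priority harm
    risk_level: str      # "High", "Medium" or "Low"
    reasoning: str       # explanation of why this level was assigned

@dataclass
class RiskAssessmentRecord:
    completed_on: date                      # when the risk assessment was done
    completed_by: str                       # who completed it
    approved_by: str                        # who approved it
    risk_profiles_consulted: bool           # confirmation that Ofcom's relevant Risk Profiles were consulted
    evidence: List[str] = field(default_factory=list)             # Core and Enhanced Inputs relied upon
    harm_assessments: List[HarmAssessment] = field(default_factory=list)
    keeping_up_to_date: str = ""            # how the assessment will be kept up to date
```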

Step 4: Report, review and update 

As the title suggests, Ofcom recommends internally reporting the outcome of risk assessments through the relevant governance channels in the business. For smaller businesses, which may not have formal governance channels as such, the recommendation is to report the outcome to a senior manager with responsibility for online safety.

Businesses will need to monitor the effectiveness of the measures they implement and keep the risk assessment up to date by reviewing it annually. This does not mean that a new risk assessment needs to be carried out each year, but businesses must regularly check that the latest risk assessment still accurately reflects the risks on their service. However, as mentioned above, a new risk assessment may be triggered by a significant change to the service’s Risk Profiles, and one must be carried out before making a significant change to the design or operation of the service.
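As a rough sketch (our own framing, using assumed parameter names rather than anything prescribed by Ofcom), the review and update triggers described above could be thought of as follows:

```python
# Rough sketch of the review/update triggers described above; assumed names.
from datetime import date, timedelta
from typing import Optional

def next_action(last_assessment: date,
                new_service: bool,
                significant_change_planned: bool,
                risk_profiles_changed: bool,
                today: Optional[date] = None) -> str:
    """Decide whether a fresh risk assessment or a routine review is due."""
    today = today or date.today()
    if new_service or significant_change_planned or risk_profiles_changed:
        return "carry out a new risk assessment"
    if today - last_assessment >= timedelta(days=365):
        return "review the existing risk assessment (at least annually)"
    return "keep monitoring the effectiveness of the measures in place"
```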

2. Illegal content codes of practice – user to user services 

The draft illegal content Codes of Practice for providers of user-to-user services contain various “recommended measures”. These fall into the following categories: Governance and Accountability, Content Moderation, Reporting and Complaints, Terms of Service, Default Settings and Support for Child Users, Recommender Systems, Enhanced User Controls and User Access. Several of the measures only apply to large or multi-risk services. Some particularly notable points from the Codes follow below.

In each case, there is little detailed information about what the measure entails or exactly what is expected to show compliance with the Act. In terms of governance, for example, the draft Codes require that a provider names a person accountable to the most senior governance body for compliance with the illegal content safety duties and the reporting and complaints duties. There is no guidance, however, on the relevant qualifications this person should possess. The governance measures also include a recommendation that a provider should track evidence of new kinds of illegal content and unusual increases in particular kinds of illegal content, which suggests at least some level of ongoing proactive monitoring.

With regards to content moderation, providers must have systems or processes in place designed to swiftly take down illegal content of which they are aware. In some cases, this involves making an “illegal content judgement”, and there is separate Illegal Content Judgements Guidance (about which, see below). Some of the other measures specified in respect of content moderation include: setting internal content policies, setting and recording performance targets for content moderation, applying a policy for the prioritisation of content for review, ensuring people working in content moderation receive training and materials, hash matching for CSAM, and using technology to analyse relevant content to assess whether it consists of or includes CSAM URLs and taking such content down.

When it comes to reporting and complaints, the provider must have complaints processes and systems which are easy to find, access and use. Even though this has not previously been a legal obligation, many providers will already have complaints and reporting procedures in place. It will be for providers to decide whether they need to make changes to their existing processes to ensure compliance with the Act by, for example, speeding up the time taken to remove material or making the complaints process easier to find.

The Codes do envisage an appeals process in respect of complaints, but it appears to relate only to a decision taken by a provider against a user to remove content, rather than the other way around (i.e. an appeal in respect of content which the provider has not determined to be illegal, and has therefore left up, is not covered). However, there are paths of redress outlined more broadly, including making a complaint to Ofcom if you believe a platform is not complying with its duties in relation to content.

Terms of Service must now specify how individuals are to be protected from illegal content and, like the complaints processes, must be easy to find.

The further provisions in the draft Codes relating to default settings and user support for child users, recommender system testing and enhanced user controls only apply in more specific circumstances. The recommendations relating to default settings and user support for child users apply where there is a risk of grooming, while those relating to recommender system testing and enhanced user controls apply where there are risks of particular kinds of illegal harm.

3. Illegal content codes of practice – search services 

The consultation also contains draft Codes of Practice for search services, aimed at protecting users from illegal online harms. The categories of “recommended measures” are similar to those for user-to-user services, albeit there are fewer of them and the moderation requirements relate to search results rather than user-generated content. For example, the Codes require that search providers have systems or processes designed to deindex or downrank illegal content of which they are aware.

4. Content communicated “publicly” and “privately”  

The consultation offers guidance on content communicated “publicly” and “privately” under the Act. It is intended to assist providers looking to comply with duties which relate specifically to content communicated “publicly”.   

The key question is whether the communication of the content is public or private, rather than whether the content itself may be of a private nature. Ofcom expects a pragmatic approach to be taken, with an examination of the relevant factual context, taking into account the following statutory factors: the number of individuals in the UK able to access the content, any restrictions on access, and the sharing or forwarding of content.

The Codes also refer to the relevant statutory framework, whereby the Act provides Ofcom with the power to set a measure describing the use of “proactive technology” as a way (or one of the ways) of complying with some of the duties set out in the Act. However, it is noted that there are constraints on Ofcom’s power to include proactive technology measures in a Code of Practice.

5. Judgement for illegal content 

A new legal concept of “illegal content” has been created under the Act. Annex 10 of Ofcom’s consultation provides guidance in relation to making judgements regarding whether online content is illegal for the purpose of the Act.  

Section 192 of the Act sets out the approach services must take when determining whether content is illegal (a simple sketch of the test follows the list below). Services must treat content as illegal content if, having considered all reasonably available information: 

  • they have reasonable grounds to infer that all elements necessary for the commission of a particular criminal offence, including mental elements, are present or satisfied; and  
  • they do not have reasonable grounds to infer that a defence to the offence may be successfully relied upon. 
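
Expressed very schematically, and only as an aid to understanding (“reasonable grounds to infer” is a judgement made on all reasonably available information, not a pre-computed flag), the two-limb test might be sketched as:

```python
# Schematic sketch of the section 192 test set out above; illustrative only.
def treat_as_illegal_content(grounds_to_infer_all_elements: bool,
                             grounds_to_infer_defence: bool) -> bool:
    """Content must be treated as illegal if there are reasonable grounds to
    infer that all elements of an offence (including mental elements) are
    present, and no reasonable grounds to infer that a defence may
    successfully be relied upon."""
    return grounds_to_infer_all_elements and not grounds_to_infer_defence
```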

The starting point for services therefore is to investigate whether the proper elements for an offence have been made out. The relevant offences are listed and particularised in the Code of Practice.  Anything which was already illegal is still illegal if it takes place online. The Act requires platforms or relevant entities to give thought themselves to whether the elements of an offence have been made out with reference to the content they are hosting.  

Concepts to bear in mind will include the standard of proof to be applied for the offence. Decisions will need to be taken by providers with reference to the guidance and possibly also with some initial general input from criminal lawyers or experts. However, we may see that the threshold for removals is lower than the usual criminal standard, as platforms seek to prove their compliance.  

The “priority” offences have been illegal for a long time. Not only does Ofcom make clear which offences are relevant to online content, but the Act has also created new online offences. “Other” offences are those which are not “priority” offences and arise where the victim or intended victim(s) of the offence is an individual. They include:  

  • Epilepsy trolling offence: the offence of sending a flashing image with intention that it would be seen by a person with epilepsy or where it was reasonably foreseeable that this would be the case.  
  • ‘Cyberflashing’ offence: the offence of sending or giving a photograph or video of genitals with the intent of causing alarm, distress or humiliation, or for the purpose of obtaining sexual gratification on the part of the sender while being reckless as to whether alarm, distress or humiliation would be caused.
  • Self-harm offence: the offence of assisting or encouraging ‘serious’ acts of self-harm.  
  • False communications offence: sending a message conveying information that the sender knows to be false, with the intent of causing non-trivial psychological or physical harm to the likely audience (and without reasonable excuse for sending it).  

The creation of the false communications offence is particularly interesting given that, for content to amount to an offence, it is not necessary that the user posting the content directed the message at a specific person. The number of potential victims of an offence of this nature is therefore impossible to quantify.

6. Enforcement guidance

Where Ofcom finds that a service provider has contravened its obligations under the Act, Ofcom has the power to impose a penalty of up to 10% of qualifying worldwide revenue or £18 million (whichever is the greater) and to require remedial action to be taken. In addition to these statutory powers, Ofcom also has a range of non-statutory tools, such as issuing warning letters. Ofcom proposes to take a staged approach to enforcement, starting with an initial assessment, followed by investigation stages and various options in terms of outcome, similar to the approach currently taken by other regulators, such as the ICO. It is clear from the proposal that businesses will not be enforced against out of the blue, and that a framework will be in place.
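For a sense of scale, the headline maximum is simply the greater of the two figures. A trivial illustrative calculation follows (the function name is our own, and the actual penalty imposed would of course depend on Ofcom’s assessment of the contravention):

```python
# The statutory maximum is the greater of 10% of qualifying worldwide revenue
# and £18 million; illustrative calculation only.
def maximum_penalty(qualifying_worldwide_revenue_gbp: float) -> float:
    return max(0.10 * qualifying_worldwide_revenue_gbp, 18_000_000)

print(maximum_penalty(500_000_000))   # £50m cap for £500m qualifying worldwide revenue
print(maximum_penalty(100_000_000))   # £18m floor applies (10% would only be £10m)
```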

In the proposed guidance, Ofcom acknowledges that it cannot open investigations into every potential compliance issue and sets out a “priority framework” prioritising matters depending on risk of harm or seriousness and on strategic significance (e.g. whether enforcement action would help clarify the regulatory or legal framework, or whether the issue directly relates to Ofcom’s broader strategic goals or priorities). The proposal also sets out the various stages of enforcement and what to expect, including how Ofcom plans to gather information (taking a proportionate approach). Following completion of an investigation, Ofcom may issue a publication notice, similar to the processes of other regulators (such as the ICO).

In certain situations, Ofcom may issue a provisional notice of contravention or a confirmation decision to both the service provider and another entity related to the service provider, or to an individual or individuals controlling the service provider. Where Ofcom does so, the related entity or controlling individual(s) will be jointly and severally liable with the service provider for any contravention, broadening the scope of enforcement beyond the regulated service provider itself.

Ofcom welcomes responses to the consultation by Friday 23 February 2024. 

We are closely monitoring the progress of the Act as it gradually moves closer towards becoming fully enforceable. If you would like to discuss any of the issues raised above, please get in touch with our Online Safety specialists listed below, or your usual H&L contact.