By Kate Young, 5th January 2022
The Government has confirmed that any incentives for social media firms to over-remove people’s legal online content will be taken out of the Online Safety Bill due to ramifications for free speech.
Firms will still need to protect children and remove content that is illegal or prohibited in their terms of service; however, the Bill will no longer define specific types of legal content that companies must address.
Dropping this requirement means that social media companies can continue to allow an environment in which internet users are bombarded with harmful messages. Users are currently being exposed to dangerous content encouraging or inciting them to harm themselves, or glorifying eating disorders, racism, anti-Semitism and misogyny. The change is particularly worrying for vulnerable users, including children, who will continue to be exposed to harmful messages online without proper intervention by social media firms.
This is a watered-down version of the Bill, which would otherwise have placed further responsibility and accountability upon technology firms.
The NSPCC statement
The NSPCC has stated that “further delay or watering down of the legislation that addresses preventable harm to our children would now be inconceivable to parents across the UK. Any review of the duties in the Bill protecting adults from harmful content must not impact the safety duties protecting children, and we want to see a strengthening of the protections in the children’s safety duties, by ensuring they apply to all services in scope of legislation, not just services likely to be accessed by children”.
What has changed?
Under the revised Bill, firms will have no power to remove a user's account unless that user has broken the service's terms or the law, giving users free rein to continue sending harmful content to other users. Firms will be required to publish risk assessments of the potential harm to children on their sites, show how they enforce user age limits, and publish details of any enforcement action taken against them by Ofcom. Unfortunately, this relies on users vetting sites by reviewing that information; it does not ensure appropriate blocking of content for vulnerable users, and there is no set standard for how age will be checked. Many age-restricted sites currently require only a tick box rather than formal identification.
Even though the Bill is being curtailed, there will be some specific action on self-harm messages: the Government has recently announced its intention to create a new offence of encouraging self-harm, to be included within the Online Safety Bill. This would require technology companies to take action, and individuals could face prosecution. It shows that the Government recognises the need for further regulation around exposure to harmful content, but also that it is only willing to go so far.