The UK government is taking a hard line on online safety, appointing what it claims is the world’s first independent regulator to keep social media companies in check. Companies that fail to meet the requirements will face huge fines, and senior directors proven to have been negligent in their responsibilities could be held personally liable. Offending companies may also find access to their sites blocked.
The new measures, designed to make the internet a safer place, were announced jointly by the Home Office and the Department for Digital, Culture, Media and Sport. The introduction of the regulator is the central recommendation of the highly anticipated government white paper, published early Monday morning in the UK.
The regulator will be tasked with ensuring social media companies are tackling a range of online problems, including:
• Inciting violence and spreading violent content (including terrorist content)
• Encouraging self-harm or suicide
• The spread of disinformation and fake news
• Cyber bullying
• Children accessing inappropriate material
• Child exploitation and abuse content
As well as applying to the major social networks, such as Facebook, YouTube and Twitter, the requirements will also have to be met by file-hosting sites, online forums, messaging services and search engines.
“For too long these companies have not done enough to protect users, especially children and young people, from harmful content,” said Prime Minister Theresa May in a statement. “We have listened to campaigners and parents, and are putting a legal duty of care on internet companies to keep people safe.”
The government is still deciding whether to hand the job to an existing regulator or to create a brand-new body purely for this purpose. The regulator will initially be funded by the tech industry, and the government is considering a levy on social media companies.
“The era of self-regulation for online companies is over,” said the government’s Digital Secretary Jeremy Wright in a statement. “Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough.”
The global move toward regulation
The measures announced by the UK on Monday are part of a larger global push toward greater regulation of big tech, one that originated in Europe but is gaining traction in the US, as well as with the leaders of tech companies, including Mark Zuckerberg and Tim Cook.
It comes at a time of great political upheaval in the UK, which is attempting to stand up to Silicon Valley tech companies while hoping they will continue to create local jobs after it departs the EU. Some elements of the new regulatory process also remain up for debate.
Damian Collins, chair of Parliament’s Digital, Culture, Media and Sport Committee, which recently published a report into fake news branding social media companies “digital gangsters,” said that it was important the regulator had the power to launch investigations when necessary.
“The Regulator cannot rely on self-reporting by the companies,” he said. “In a case like that of the Christchurch terrorist attack for example, a regulator should have the power to investigate how content of that atrocity was shared and why more was not done to stop it sooner.”
Vinous Ali, head of policy for industry body techUK, welcomed the publication of the white paper, but said in a statement that some elements of the government’s approach remained “too vague,” and that the government will have to be clear about exactly what it wants the regulator to achieve. The “duty of care” the government believes social media companies owe their users is not clearly defined and is open to broad interpretation, she added.
The Internet Association, which represents many of the world’s biggest tech companies, including Facebook, Google and Twitter, said it is important that any proposals be practical for platforms of all sizes to implement.
A spokeswoman for Twitter said in a statement that the company is committed to prioritising the safety of users, pointing to over 70 changes the platform made last year. “We will continue to engage in the discussion between industry and the UK Government, as well as work to strike an appropriate balance between keeping users safe and preserving the internet’s open, free nature,” she said.