The European Commission will propose new laws on Wednesday giving Google, Facebook, Twitter and other internet companies one hour to remove extremist content or face fines. The Commission told such companies in March that they had three months to show they were removing extremist content more rapidly or face legislation forcing them to do so. The Commission wants content inciting or advocating extremist offences, promoting extremist groups, or showing how to commit such acts to be removed from the web within an hour of an order from national authorities.
In a proposal that will need backing from EU countries and the European Parliament, internet platforms will also be required to take proactive measures, such as developing new tools to weed out abuse and ensuring human oversight of content.
Service providers will have to publish annual transparency reports showing their efforts in tackling abuse. Providers that systematically fail to remove extremist content could face fines of up to 4 percent of annual global turnover.
Content providers, however, will have the right to challenge removal orders.
In turn, the proposal asks national governments to put in place the capacity to identify extremist content online, along with sanctions and an appeals procedure. The industry has also been working since December 2015 in a voluntary partnership to stop the misuse of the internet by international extremist groups, later creating a "database of hashes" to better detect extremist content.
The Commission will retain a voluntary code of conduct on hate speech agreed with Facebook, Microsoft, Twitter and YouTube in 2016. Other companies have since announced plans to join it.