Europe’s New “Code of Conduct” to Censor Hate Speech
Several large companies – Facebook, Twitter, YouTube and Microsoft – as well as the European Commission introduced a new code of conduct earlier this week in an effort to remove hate speech from their platforms.
These companies are enacting the code in an attempt to combat hate and terrorism; however, the move introduces an element of censorship on previously uncensored platforms. Under the code, the companies will review and, at their discretion, delete any “suspected” hate speech on their social media sites. As described by TechCrunch:
“The recent terror attacks have reminded us of the urgent need to address illegal online hate speech,” Věra Jourová, EU Commissioner for Justice, Consumers and Gender Equality, wrote in the European Commission press release. “Social media is unfortunately one of the tools that terrorist groups use to radicalise young people and racists use to spread violence and hatred. This agreement is an important step forward to ensure that the internet remains a place of free and democratic expression, where European values and laws are respected.”
The code also includes communications commitments: the companies will inform users that posting hateful content is not permitted, share best practices, and flag questionable content.
While the move is intended as a positive change online, it has also sparked concerns among privacy advocates, some of whom worry that “overreaching enforcement will violate users’ rights.” The concern, as Ms. Massé puts it, is “that companies may monitor for and remove content merely because it’s controversial and they feel they face a liability by leaving it online.” Both Access Now and EDRi have withdrawn from discussions of the code of conduct because of their objections to it.
The code, which is non-binding, will be reviewed by EU justice ministers next week but is otherwise finalized.
Sources: TechCrunch and CS Monitor