
Announcement

EJC joins forces with DataScouting to counter hate speech directed at journalists online

Adam Thomas — Director
May 22, 2018

We’re building open-source detection models, databases, training curricula and platform policy recommendations

Hate speech is not a new phenomenon, but in recent years its volume has increased significantly. Just look at the rise after Brexit documented by the think tank Demos, or the 600% increase in Canada reported by Cision.

Fast-emerging networking platforms and online comment sections have encouraged free and open discourse on the Internet. But they have also accelerated and scaled the dissemination of hate speech.

As Danah Boyd said in Google and Facebook Can’t Just Make Fake News Disappear, “we have a cultural problem, one that is shaped by disconnects in values, relationships, and social fabric. Our media, our tools, and our politics are being leveraged to help breed polarization by countless actors who can leverage these systems for personal, economic, and ideological gain. Sometimes, it’s for the lulz. Sometimes, the goals are much more disturbing.”

Social media platforms are among those facing the most scrutiny over how they manage hateful content. In an attempt to deal with the issue, Facebook appears to be internally testing new automatic detection via a “hate speech button” that briefly allowed users to report hate speech on individual posts before the button was removed. Twitter has begun to automatically hide posts from certain accounts that “detract from conversation.”

As for the EU, the European Commissioner for Justice, Consumers and Gender Equality, Věra Jourová, is examining how to have hateful content removed swiftly by social media platforms, with tough legislation being one option that could replace the current system.

While the social media companies Facebook, Twitter and YouTube have accelerated their removal of online hate speech in the past year, there is still a long way to go to ensure effective and prompt removal of hate speech across platforms. In fact, Mark Zuckerberg has admitted that it will take at least five to ten years for Facebook to have AI tools that reliably identify online hate speech.

Five to ten years seems like a long time. Together with DataScouting, we will try to accelerate this process over the next 18 months by carrying out the EU-funded Data-driven Approach to Counter Hate Speech (DACHS) project.

Focus on journalists

Given the scale of the issue, many actors are trying to take the fight against hate speech to the next level by testing artificial intelligence (AI) solutions. Last year, Google launched an AI tool that identifies abusive comments online. The Online Hate Prevention Institute (OHPI) has spent the past six years both tackling specific cases and working on the problem of measurement using AI approaches. More recently, Stop PropagHate has received funding from the Digital News Initiative (DNI) to use artificial intelligence to help detect and reduce hate speech in online news media.

For the DACHS project, we will focus on journalism as a test case.

Given the importance of free speech and digital communication to their work, journalists are often unofficial moderators or direct targets within the platforms that enable these conversations. Journalists need better tools and techniques to do their job online effectively and safely.

This is why the primary objective of DACHS is to detect the underlying patterns of hate speech against journalists and to develop strategies that help them counter it. We believe, however, that what we learn from this case can then more easily be applied to other vulnerable groups.

To do that, we will map and model hate speech against journalists across social platforms in order to develop deep learning-based hate speech detection models and an open-source hate speech database.
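As a rough illustration of the kind of model such a pipeline might train, here is a minimal sketch of a deep learning text classifier in PyTorch. The architecture, label scheme and toy inputs are assumptions chosen for illustration only, not the project’s actual design.

```python
# Minimal illustrative sketch of a deep learning hate speech classifier.
# The architecture, labels and toy data are assumptions for illustration,
# not the DACHS project's actual models.
import torch
import torch.nn as nn

class HateSpeechClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, 2)  # hateful vs. not hateful

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)       # (batch, seq, embed)
        _, (hidden, _) = self.lstm(embedded)       # hidden: (1, batch, hidden)
        return self.classifier(hidden.squeeze(0))  # (batch, 2) logits

# Toy usage: a batch of two already-tokenised posts (integer token IDs).
model = HateSpeechClassifier(vocab_size=10_000)
batch = torch.randint(1, 10_000, (2, 20))
logits = model(batch)
print(logits.shape)  # torch.Size([2, 2])
```

In practice, such a classifier would be trained on the annotated database described below and evaluated against posts directed at journalists across platforms.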

Photo: Paula Montañà Tor

But we can’t just develop technical solutions to deep socio-technical problems. I’ve talked in the past about journalists needing to build bridges between communities, playing the role of the trust architect.

So, in addition to the technology, we will be publishing guides, playbooks and frameworks to help journalists approach the problem from all angles. That will be boosted by training events, video training modules, and policy recommendations for governments and social media platforms. It will strongly complement the work of our €1.7m Engaged Journalism Accelerator.

Take action:

If you’re interested in the project, we’d love to hear from you. To stay up to date with DACHS and other projects we’re carrying out, sign up for our newsletters.

More about DataScouting

DataScouting is a software research and development company specialized in creating smart media monitoring solutions using machine learning and cloud computing technologies. With proven experience in the media intelligence industry, they will be responsible for the annotation and system architecture, semantic analysis, machine learning and predictive analytics, data maintenance, platform integration and dissemination of the database and technological tools to potential users.
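To give a concrete sense of the annotation and database work involved, here is a hypothetical sketch of what a single annotated record in an open-source hate speech database could look like. The field names and label values are assumptions for illustration, not the actual DACHS schema.

```python
# Hypothetical example of a single annotated record in an open-source
# hate speech database. Field names and label values are illustrative
# assumptions, not the actual DACHS schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class AnnotatedPost:
    post_id: str       # platform-specific identifier
    platform: str      # e.g. "twitter", "facebook"
    text: str          # the (anonymised) post content
    target_role: str   # e.g. "journalist"
    label: str         # e.g. "hate_speech", "offensive", "neutral"
    annotator_id: str  # who produced the label
    language: str      # ISO 639-1 code

record = AnnotatedPost(
    post_id="123456789",
    platform="twitter",
    text="[anonymised example post]",
    target_role="journalist",
    label="hate_speech",
    annotator_id="a-017",
    language="en",
)
print(json.dumps(asdict(record), indent=2))
```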
