Main takeaways from the News Impact Summit in Lyon on “Covering Politics in a Misinformation Age”
“If someone says it’s raining, and another person says it’s dry, it’s not your job to quote them both. Your job is to look out the ******* window and find out which is true,” Jonathan Foster, a lecturer at Sheffield University, once said.
But that is not enough anymore, according to Jenni Sargent, managing director of First Draft. At our latest News Impact Summit on “Covering Politics in a Misinformation Age”, she explained that in today’s media environment, journalists need to investigate if the rain could be fake, learn how to verify fake rain, find out who created it, if it’s part of a coordinated effort of fake rainmakers, investigate what their motives may be, and so on…
In other words, news organisations need to fight misinformation to (re-)gain trust from readers. To find out more, we turned to leading experts in the field for a full day of knowledge-sharing.
Here’s what we learned:
Journalists worldwide are targeted by people who deliberately make up false stories. Many newsrooms are not equipped to counter their manipulation tactics — especially at critical moments, like elections, social unrest, extreme weather events, and terrorist attacks.
Example: Are these really images of the Hong Kong demonstrations?
Misinformation is a powerful weapon. It can lead to apathy, disengagement, and distrust, according to Scott Hale, director of research at Meedan. "It can interfere in democracy by suppressing the vote or influencing public opinion, cause economic harm, and even put lives at risk."
Example: The fake cancer cure that circulated in private Facebook groups.
Visual content is particularly prone to being used to mislead, because it spreads easily on social media and tends to elicit an emotional response. That's why politicians have long used images this way and will continue to do so.
Example: Nigel Farage’s provocative use of an anti-immigrant poster.
New technology has driven the proliferation of dangerously deceptive deepfakes — videos that make it look as if a person said something they didn’t actually say. While this new phenomenon is seen as a huge political threat, it’s not the main problem.
Example: You won’t believe what Obama says in this video!
“A much bigger concern is the use of low tech unaltered images wrapped in a misleading context,” said Farida Vis, director of the Visual Social Media Lab, “especially in countries where media literacy is low and where these images circulate in closed messaging apps like WhatsApp, Signal or Telegram.”
Example: This man wearing an ISIS (Daesh) flag in Paris is not what it seems.
De-contextualised images and videos are very easy to produce and quick to spread, according to Guillaume Daudin, who leads AFP Fact Check. "It keeps doing a lot of damage, far more than the alleged deepfake threat, which is not having any consequence for the moment," Guillaume said.
Example: Canadian prime minister Justin Trudeau did not convert to Islam
“It’s not just politicians we should be worried about,” said Craig Silverman, the media editor at BuzzFeed News, explaining how the worlds of for-profit and political disinformation are intertwined. Here are some striking examples:
“Platforms need to create fair and clear policies and then actually enforce them in a consistent and transparent way,” said Craig.
Platforms can warn users about content that is known to be false, limit algorithmic amplification, take down content, ban offenders or — as Twitter recently announced — ban political ads altogether.
"These measures are useful, but we need to be aware that the platforms are incredibly connected," said Scott. "Taking content off one platform doesn't remove it from the conversation; it simply drives it to other places online."
It’s more difficult to research and deal with suspicious content when it circulates on encrypted messaging apps like WhatsApp, Signal, and Telegram, where even the platforms don’t know what is being discussed.
There are numerous tools available to make the verification process more efficient. Journalists can also go beyond fact-checking to investigate context, encourage offline debate and/or collaborate to promote accurate reporting:
There are also many opportunities for journalists to work together if and when appropriate to fight misinformation, for example:
Lastly, we are working hard on a new verification handbook, to be released in early 2020, with insights from Craig on reporting disinformation, new case studies from Farida on the context of images, and other updates from First Draft, Bellingcat, NBC News and many more. It will be available for free and in several languages. Sign up to our newsletter to be notified of its release and other EJC activities.
*Registration for our next News Impact Summit is still open. We’ll be tackling challenges around “Identity & Inclusion: Local News With Diverse Voices” in Munich on Friday 6 December.