SKeyes Center for Media and Cultural Freedom - Samir Kassir Foundation

The Simplest Way to Spot Coronavirus Misinformation on Social Media

Source: OneZero
Wednesday, 04 March 2020
If you were on Twitter Monday, there’s a solid chance you ran across the tweet below, imploring readers not to use hand sanitizer to guard against the coronavirus. In less than a day, it was retweeted nearly 100,000 times and racked up a quarter of a million likes. It was probably seen by millions, and its central message is one that others picked up on and began spreading themselves. It even metastasized to Facebook.



To many people, the tweet rang head-smackingly true. The person behind it self-identified as a scientist; her exasperation seemed genuine and relatable; her point about bacteria being different from viruses would be familiar to anyone who’s been told by a doctor that antibiotics won’t cure their cold or flu.


As you’ve probably guessed by now, the tweet was not in fact accurate. Yes, you should wash your hands to prevent the spread of coronavirus — but alcohol-based hand sanitizers can be effective as an alternative, provided they contain at least 60% alcohol. (Most leading brands do.) That’s per the CDC, as well as numerous articles in mainstream media.


The tweet, part of a plague of misinformation that has accompanied the spread of COVID-19 around the world, illustrates how social media could worsen the outbreak by encouraging counterproductive actions. If fewer people use hand sanitizer as a backup when hand-washing isn’t feasible, it’s possible that more people will be infected, and more will die. It wouldn’t be the first time: In a 2017 Smithsonian article, a historian argued that the 1918 Spanish flu epidemic was made deadlier by the U.S. government’s suppression of accurate information about it.


Facebook, Twitter, YouTube, and other platforms have taken steps against coronavirus misinformation, such as directing users to official sources when they search for coronavirus. By Tuesday morning, the original tweet had been deleted (presumably by its author), and screenshots of it that had been posted to Facebook had been flagged as false by the platform’s fact-checking partners. By then, however, the claim had already reached a vast audience, and the platforms lack mechanisms for ensuring that people who saw the false info also see the debunkings. These companies are fighting an uphill battle against the dynamics of their own algorithms, which are built to prioritize speed and engagement.


There is, however, another hope for limiting the virality of false and misleading claims about the coronavirus. We can inoculate ourselves, and the people we know, by learning some basic techniques for spotting misinformation in our feeds.


For four years, Washington State University digital literacy expert Mike Caulfield has been working on the most effective way to teach students to navigate online information and misinformation. With coronavirus heightening the urgency, he’s now trying to get the word out to the rest of us, launching an educational website and Twitter account.


His approach sounds simple — and it is. But familiar as it may be to journalists and fact-checkers, it runs counter to the average news reader’s natural instincts. And with social media companies like Facebook and Twitter ill-equipped to contain falsehoods on their platforms, helping individuals learn to recognize them might represent the best hope for containing them in the short term.


The key is, when you run across a new claim about a topic like coronavirus on social media, don’t try to evaluate it on its own terms. It’s both faster and more effective to evaluate it by cross-referencing — that is, looking elsewhere on the web for confirmation or debunkings. Crucially, it doesn’t require any subject-area expertise — it will help you avoid false coronavirus cures on WhatsApp, just as it would help your uncle avoid flat-Earth conspiracies on YouTube.


Caulfield’s approach builds on a 2017 paper by researchers at Stanford University who found the conventional approach to online literacy had left students unprepared for the world of social media. They proposed a different approach called lateral reading.


“People have been trained in schools for 12 years: Here’s a text, now read it, and use your critical thinking skills to figure out what you think about it,” Caulfield says. “Professional fact-checkers do the opposite: They get to a page, and immediately get off of it.” Instead, they start opening other tabs that can shed light on the article’s reliability and central claims.


Inspired by those fact-checkers’ efficient habits, Caulfield set out to tackle a single question: “What is the smallest set of skills that we can give people that prepares them to engage as active citizens on the web?” What he came up with is a technique he believes just about anyone can apply to a given social media post in about 30 seconds, once they’ve mastered it.


He sums it up with the acronym SIFT:

  1. Stop.
  2. Investigate the source.
  3. Find better coverage.
  4. Trace claims, quotes, and media to the original context.


Each of these steps comes with a couple of go-to “moves,” such as hovering over the bio of a Twitter user before retweeting them, or searching for a URL on Wikipedia before you actually visit it. You can get a quick tutorial on Caulfield’s site, “Sifting Through the Coronavirus.”


The hand sanitizer tweet makes a fine case study. “If you try to be deductive here and ‘critically think,’ you’ll fail,” Caulfield says. Unless you’re a doctor yourself, “You don’t know enough” to assess the claim with your intellect alone.


The first step is to stop: Don’t accept or share a claim about coronavirus until you’ve checked it out. Next, investigate the source: The author said she was a scientist, but she didn’t specify what kind of scientist. On complex topics, domain expertise matters. Hovering over her Twitter bio brought up no further relevant information — an anime avatar, no professional affiliation, no last name, and no blue checkmark to communicate that she’s a “verified” user. And she didn’t link to any evidence of her claim. That in itself isn’t damning, but it suggests a need for more digging.


Next comes “find better coverage.” In this case, that could mean a quick Google search of “hand sanitizer” and “coronavirus.” On Monday, that search brought up several results from public health organizations, including the CDC, recommending the use of hand sanitizer when hand-washing isn’t feasible. By Tuesday morning, the top hit was a PolitiFact post debunking the tweet in question. If the Google search doesn’t quickly clarify things, searching Google News might.


The last step, tracing the claims to their original context, is less relevant in this case: With no attachments or links, the tweet was its own original context. That said, the author did turn it into a thread, and her subsequent tweets walked back the initial claim and acknowledged the consensus view. So, if you arrived at the right moment, zooming out to see the rest of the thread would have helped — though the author has deleted even the clarification at this point.


Which step turns out to be the crucial one can vary depending on the type of misinformation, Caulfield said. When it comes from a fake news site pretending to be a real news site — such as the infamous (now defunct) abcnews.com.co, which was made to look like ABC News — you can often find that out by doing a Google search for its URL and adding the word “Wikipedia” to see if there’s any crowd-sourced information about the outlet, or to see if it’s been cited in any articles. In this case, searching for “abcnews.com.co Wikipedia” turns up a Wikipedia entry that exposes it as a fraud.
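
For readers who would rather script that lateral check than type it into a search box, here is a minimal sketch. It is only an illustration: it assumes Python with the requests library installed, queries Wikipedia's public search API, and uses the impostor domain above as the example search term.

```python
import requests

def wikipedia_lookup(term):
    """Search Wikipedia for a term (such as a news site's domain) and
    return the titles of any matching articles."""
    response = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "list": "search",
            "srsearch": term,
            "format": "json",
        },
        timeout=10,
    )
    response.raise_for_status()
    return [hit["title"] for hit in response.json()["query"]["search"]]

# Example: does Wikipedia have anything to say about the impostor site?
print(wikipedia_lookup("abcnews.com.co"))
```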

Other times, the misinformation lies not in the source itself, but in how it’s framed on social media. One viral post implied that a Harvard medical professor had been arrested for conspiring with the Chinese government to create the coronavirus. The post linked to a real CNN story about that Harvard professor’s arrest for allegedly lying about his connections to the Chinese government. But Caulfield’s “trace claims” step calls for opening the article and, before reading it in full, doing a quick command-F search for the key terms. In the CNN story, the word “coronavirus” is nowhere to be found. As a FactCheck.org post confirms, the Harvard professor’s arrest had nothing to do with COVID-19.
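
The same command-F check can be approximated in a few lines of code. The sketch below is purely illustrative: the URL is a placeholder rather than the actual CNN story, and it again assumes Python with the requests library.

```python
import requests

def page_mentions(url, keyword):
    """Fetch a page and report whether a keyword appears anywhere in its
    HTML, as a scripted stand-in for opening it and pressing command-F."""
    html = requests.get(url, timeout=10).text
    return keyword.lower() in html.lower()

# Placeholder URL for illustration only.
print(page_mentions("https://www.cnn.com/example-article", "coronavirus"))
```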


Again, none of these steps is rocket science — which is good, because most people aren’t rocket scientists. Even learning a method as simple as Caulfield’s SIFT acronym is probably too much to ask of everyone who uses social media. But he believes the coronavirus outbreak might give at least some fraction of users the impetus to take an hour or so to learn it.


Once they do, he said, they’ll be able to identify most instances of misinformation within about 30 seconds. And then, he hopes, some of them will help to defuse it, perhaps by flagging it for fact-checkers or posting replies with links to better information. They might also teach it to some friends or family members who have a penchant for falling for hoaxes.


While the SIFT technique can also work for political misinformation, Caulfield said, that arena presents special obstacles. People already entrenched in partisan viewpoints might not particularly care if they’re spreading falsehoods as long as they help the right side. Whereas in the case of coronavirus, “People going to the internet for information are not necessarily dug into a position,” and almost everyone has an incentive to get the most accurate information they can.


Caulfield’s approach isn’t foolproof. It probably won’t help people to recognize when a reputable organization or media outlet has simply botched its coverage of a technical topic unless the screw-up was so bad that it makes news elsewhere. (“Find better coverage” could still apply, though.) And even people who take the time to learn the techniques probably won’t apply them in every case. Sometimes people just want to skim Facebook or Twitter without putting on their amateur fact-checking hat.


In an ideal world, society’s primary channels of information about a critical topic such as coronavirus would take seriously the responsibility to make sure that information is accurate. Ordinary citizens wouldn’t have to become digital sleuths. But the world we live in is one in which the tech giants that disrupted the news business are deeply invested in a model that makes it impossible to vet information before it proliferates. So for now, we’re left to rely on their reactive half-measures — and our own wits.
