Once a news organization identifies something as false, the question becomes how to cover it, if at all.
The instinct of most news people when they encounter a falsehood is to correct it. The underlying principle here, ingrained in newsroom professionals over time, is to dispel inaccurate information before it spreads too far, whether that false information is a deliberate hoax about a prominent politician, a false assertion about vaccine safety innocently passed on, or a state-sponsored effort to sow discord and create doubt about democratic institutions and processes. In that sense, journalists have tended to operate like police officers at speed traps passing out tickets, hoping their vigilance will slow down others on the road.
But just as journalists have become better at spotting false information online, they are also gaining a better understanding of the care they must take in debunking it to avoid amplifying the thing they are trying to correct. And the task is far more complicated than it once seemed. It also requires situational judgment. Every case is different.
The first consideration is whether the misinformation should be covered at all. The issue isn’t always a matter of whether something is wrong. Nor is it necessarily whether it has already gotten some notice. That might have been the case when the press could consider itself a gatekeeper that determined to a large degree what the public knew and didn’t know. Today, the public has many sources of information. The question of whether to cover a falsehood can also be a matter of whether giving it more attention can spread it further and do greater harm. Not every falsehood that has been sent out into the world needs to be corrected. In today’s environment, after all, not covering something can be as much of a statement as covering it.
To help address the tension between these competing impulses, two prominent scholars in the misinformation field, Joan Donovan at the Harvard Kennedy School and Microsoft researcher danah boyd, have put forth the concept of “strategic amplification”: the idea that in a complex communication landscape, news organizations and the new platforms challenging them should develop and employ best practices for producing news content and designing the algorithmic systems that help spread it.
When the notion of becoming more thoughtful about not calling out lies was introduced by scholars, it was described as “strategic silence.” But today’s information landscape “has destabilized the notion that silence as a tool for editors may be used strategically,” Donovan and boyd wrote in a fall 2019 article, “Stop the Presses? Moving from Strategic Silence to Strategic Amplification in a Networked Media Ecosystem.” The media’s ability to actively balance public interest and public harm in their news decisions, they wrote, “has come undone.”
The decision-making for journalists in this case is more complicated than handing out “falsehood tickets.” Instead, journalists must ask themselves whether a falsehood has become so significant that it needs to be knocked down. What falsehoods would wither more by ignoring them? Does a serious news organization legitimize problematic information by giving it recognition? How can it amplify the right content?
The authors say that the news media and technology companies need to acknowledge how their roles are “isomorphically intertwined.” Both platforms and news organizations are engaged in disseminating and amplifying information, they write, and they both should understand that they have a moral obligation to act.
The tension between publishing and remaining silent about misinformation – and other kinds of problematic information, such as hate speech – is well-known to journalists. But irreversible changes in the way information spreads have altered the calculations news leaders must make in exercising that editorial judgment.
In a report for Data & Society, “The Oxygen of Amplification,” Whitney Phillips, an assistant professor in communications, culture and digital technologies at Syracuse University, noted that journalists are aware that their task has changed. In her conversations with them for the study, she wrote, she found that “…as soon as the reporter finished listing the dangers of amplification, they would then explain the dangers of doing nothing.”
How to handle that tension in everyday publishing decisions? Researchers such as Phillips and First Draft co-founder Claire Wardle talk about a “tipping point,” the moment at which journalists may feel that ignoring a false story is no longer an option. Identifying the tipping point, they say, depends on a number of factors, including the value to the public of knowing that the false information is circulating, the degree to which the information has been shared elsewhere, who created it, and whether it has already had a demonstrated effect.
The growth of misinformation has generated a growing body of work aimed at helping journalists understand and contend with these problems. First Draft published four guides in 2019 to help journalists navigate these questions. In one, “Responsible Reporting in an Age of Disorder,” it lays out some questions journalists can ask themselves when they’re deciding whether something has reached the tipping point.
The size, locale and makeup of a news organization’s audience can be a key factor. Different news organizations may identify their tipping point at different times. There is also likely to be disagreement, which can lead to arguments after the fact on social media and elsewhere. But in real time, journalists still need to decide. And whatever that decision is, they need to be able to defend it.
The Truth Sandwich
Once a decision is made to go with a story about misinformation, journalists must then frame it in a way that ensures amplification of the truth, and non-amplification of falsehoods.
Some media scholars emphasize that simply saying something is not true will not persuade people it is false. Doing so, in fact, could have the effect of planting the idea more firmly in people’s minds. Various studies have shown that repetition can actually persuade people to believe even things they already know are not true.
In a 2019 study into this illusory truth effect, “Repetition increases perceived truth equally for plausible and implausible statements,” Vanderbilt University’s Lisa K. Fazio, along with David G. Rand at the Massachusetts Institute of Technology and Gordon Pennycook at the University of Regina, wrote that people’s belief in all statements – including the most implausible ones – is increased by repetition. “Even implausible falsehoods may slowly become more plausible with repetition,” they wrote.
Such studies heighten the need for emphasizing – and repeating – truthful information.
In mid-2018, one strategy for amplifying the truth over falsehoods gained prominence when CNN’s Brian Stelter interviewed George Lakoff, a linguist who is professor emeritus at the University of California, Berkeley, for Stelter’s “Reliable Sources” podcast.
Lakoff had written a piece in The Guardian arguing that President Donald Trump was using “words as a weapon” to manipulate the media, which he said too often just repeated the president’s falsehoods. Digging into this, Stelter asked how journalists could avoid this practice, and Lakoff suggested a framing in which journalists put the truth first, then the falsehood, then repeat the truth. “It’s a truth sandwich,” Stelter said, using a term that would later become more widely used, including by Lakoff himself.
The term “truth sandwich” became a momentary buzzword for a tactic journalists could use for dealing with the falsehoods emanating from the White House. The Washington Post’s media columnist, Margaret Sullivan, wrote about it, as did Mark Memmott, NPR’s standards and practices editor. And some people still advocate for it.
It is also not an entirely new concept. Back in the earliest days of the fact-checking movement in the early 1990s, scholar Kathleen Hall Jamieson advised network news divisions that if they were going to note that something in a TV ad was false, they should put the disputed advertisement inside some kind of graphic, such as a television, to illustrate visually that the images were a TV ad that was being deconstructed. Otherwise, she argued, viewers would think the false images were news images produced by the network.
So far it’s not clear whether the truth sandwich as a story-writing technique has gained much traction among journalists. One reason may be that it’s somewhat counterintuitive, since the falsehood is often a reason for writing the story in the first place, so “sandwiching” it can feel like burying the news. In a breaking news case, the approach is often to say that “Trump falsely claimed…” or “Trump wrongly said…” But that may be a thin piece of bread for the sandwich.
Another issue is that sometimes getting to the truth involves proving a negative. Thus a truth sandwich would make some news stories feel inside out. In June 2019, for example, Trump asserted that Barack Obama during his presidency “was begging” North Korean leader Kim Jong Un for a meeting. Applying a truth sandwich to that – by putting the truth first – would have the story lead with something that didn’t happen, and didn’t happen years ago. Even in fact-checking this assertion, reporters had to rely on former Obama administration officials’ denials to disprove Trump’s claim. (Trump repeated a similar claim in a lengthy cabinet meeting in October, setting fact-checkers into motion again.)
But while the truth sandwich might not always work in a hard news situation, it could be applied to an analytical piece after the original news is reported. In the analytical take, the story would lead with the truth, note the falsehood, then close by restating the truth.
The point of such an approach is to double down on the truth, as Lakoff suggests, rather than to simply state it. He argues for the sandwich technique because simply rebutting the assertion can have the effect of reinforcing the falsehood in readers’ minds.
“It’s like when Nixon said, ‘I am not a crook,’ and everyone thought of him as a crook,” Lakoff told Stelter. “The point is that denying a frame activates the frame.”
The cover/no cover tension: Vaccine hesitancy
Nowhere is the tension between remaining silent and publishing stories about false information in an attempt to correct it more apparent than on the issue of vaccine hesitancy – the decision by parents to go against medical recommendations and not have their children vaccinated because they fear there will be side effects.
Anti-vaxxers, as they are known, spread their views on social media, constantly challenging the medical establishment’s consensus that vaccines are safe and necessary to prevent outbreaks of measles and other communicable diseases. Often they share emotionally charged stories of children with autism or other conditions the parents are convinced were caused by vaccines. There is also evidence that Russian bots have worked to sow confusion and further fuel that debate.
Here is a case where the story must be told, as a public health matter, in order to debunk false information and reinforce the truth. But it must be told with care. It’s not appropriate to report the views and fears of the parents who hold anti-vaccination views in isolation. It’s not even enough to report those fears while also reporting the truth about vaccine safety. Even a seemingly neutral or question headline can send the wrong message.
In one 2019 example, NBC’s Today Show was widely criticized for a tweet, later deleted, that raised the question of vaccine safety as part of a story aimed at debunking myths about vaccines.
We removed the tweet, seen below, which included an irresponsibly presented headline. The article headline has also been updated. https://t.co/HPlkuhlwCY pic.twitter.com/s9cYrfjxPo
— TODAY Health & Wellness (@TODAYshowHealth) June 13, 2019
The tweet linked to a story with the headline: “Doctors discuss 7 common vaccine myths,” with the subtitle: “When it comes to vaccines, it is easy to be confused.” Some critics said even that was problematic, arguing that information about vaccines is not confusing at all if you’re listening to medical professionals and proven science.
In some cases, such as at the local level, there might still be an argument for avoiding coverage altogether.
Say a community of anti-vaccination advocates is planning to hold a meeting where they hope to win converts and spread their views that the measles-mumps-rubella shot is dangerous. This is clearly a case in which misinformation is being spread, but it may be better to avoid publicizing the event. Editors decide to cover events based on a number of factors – how many people will it attract? Is anyone in danger? Is a confrontation expected? Is there new information that will come to light? In the end, if there is no compelling reason to cover it, the editor might legitimately choose to pass.
As noted in the section on strategic amplification, not covering something is a concept at odds with the instincts of journalists who see it as their job to deliver even the most difficult or complex news to their communities. The impulse to cover an event can also be triggered when it is getting attention elsewhere, like on Facebook or other social media platforms – again, triggering the “tipping point” decision.
Tackling misinformation head-on: The Pelosi video
Perhaps the best-known example of a news organization’s decision to tackle a story about a piece of disinformation was The Washington Post’s decision earlier this year to report on a video that was manipulated to make U.S. House Speaker Nancy Pelosi (D-Calif.) appear drunk or somehow otherwise impaired. Memes and smears about Pelosi have made the rounds on social media for years, some of them implying that she was drunk. But this one stood out for its audacity – and its reach.
There was nothing sophisticated about the manipulation of the video of a speech Pelosi delivered to the Center for American Progress. It was not what is known as a “deepfake,” which uses artificial intelligence to create the appearance of reality; it was merely slowed down. But it looked real and it got over a million views before The Post even did its story. (For a deeper exploration of video manipulations, see “Deepfakes and Cheap Fakes: The Manipulation of Audio and Visual Evidence,” by Joan Donovan and Britt Paris, an assistant professor of library and information science at Rutgers University.)
The Post’s editors put significant thought and discussion into running with the story, the paper’s business editor, David Cho, explained at API’s summit. He emphasized that every case is different, that there is no set formula for deciding on whether and when to report on misinformation.
In this case, a number of factors went into The Post’s “tipping point” – its decision to go with the story – including, but not limited to, the fact that it had already been viewed so widely on Facebook. The Post had been writing about the emergence of deepfakes and “shallow” fakes prior to the Pelosi video manipulation, so this was a continuation of that coverage. In addition, tensions between the speaker and the president were growing at that moment. And Facebook’s refusal to take down the video – in contrast to YouTube’s decision to remove it – was an important story about how the platforms treat misinformation.
In cases like this, writing about fake or manipulated content could add to the viral nature of the video. But, as The Post’s story noted, the video exemplified the kinds of misinformation people would be exposed to heading into the 2020 election. The Post’s report called it an example of how “low-tech, relatively simple editing can dupe viewers and trigger widespread disinformation.”
Moreover, it was aimed at one of the nation’s most important politicians. The speaker of the House is second in the line of succession for the presidency and the most powerful Democrat in America in 2019. The “impairment” may have been false, but, as with much misinformation, its aim was much greater – to damage faith and confidence in the leadership of the House and the Democratic Party.
In the end, we think The Post’s treatment of the Pelosi story followed many experts’ recommendations about how to frame the fake without amplifying it. The reporters and editors made it as clear as possible that this was a story about a fake video, not one about Pelosi. Importantly, the first word in the headline was “faked” and the first word in the story was “distorted.” This is meaningful for first-impression reasons, but also for search reasons, as some search engines will deliver only partial headlines on their results page. In that sense, The Post case is a model for how to make difficult decisions about false information. Even then, publishers can expect controversy.
The unique problem of Trump
The Trump presidency has raised to new levels the complexities involved in deciding whether to cover something or remain silent about it for fear of giving it undue attention. Is it news if the president says it? What if it’s false? What if he’s already announced it to his millions of followers on Twitter? What if it’s false but so audacious that the very fact that the president is saying it is news? What if it is a falsehood that he has repeated dozens of times and has already been thoroughly discredited? Which misleading tweets are newsworthy, and which are not?
The pressures to publish something Trump says, even if untrue, can be greater than the desire not to give it traction.
In the past, presidents have used the full array of channels to communicate with the public – interviews, daily press briefings, news conferences, public appearances, and, in recent years, social media. Trump relies most days (and at all hours) on the one channel by which he can communicate directly with the public: Twitter. When he became president, this posed a new challenge to White House reporters and editors because they suddenly had to be prepared to write about a midnight tweet and a 5 a.m. tweet – sometimes in the same 24-hour period. And it is social media – not the mainstream media – that frames his comments. There is no intermediating filter.
Trump’s use of Twitter – and the direct communication that all political figures will use going forward – illustrates a growing reality: For much of what the press now does, it is no longer a gatekeeper of what the public knows. It is often instead more an annotator and analyzer of what the public has already heard, noting what is false, out of context, or a flat-out lie, after it is already out there.
After the first year or so of his presidency, Trump’s tweets weren’t always automatic news. This is partly because the novelty of a president communicating through Twitter had worn off. Trump was often repeating himself. And the phenomenon of a president tweeting in all caps, often ungrammatically, sometimes hysterically, was no longer shocking.
Another challenge is that often what he says is simply not true.
Holding the president accountable for the veracity of his statements is, of course, an important function of the press, so those falsehoods need to be put on the record. That job in recent years has fallen to fact-checkers such as those at The Washington Post, which has chronicled and tallied the president’s falsehoods each day since he took office. The tally surpassed the 13,000 mark in September 2019.
In a way, argues David Lauter, Washington bureau chief of the Los Angeles Times, Trump has done the media a favor. No longer is there an assumption that what the president says is true – and thus no longer an assumption that it is always news that needs to be covered the same way as in the past. Lauter is quick to note that what presidents before Trump said wasn’t always true, either, but they commanded a greater level of credibility. Not covering his every word frees journalists to write about issues they consider more pressing for their readers. In a sense, reporters are now employing a form of “strategic silence” when it comes to the president of the United States.
But the “go/no go” story decision is often made under the pressure of a deadline, and not quoting the president at times can also be perilous – especially in cases where he seems to be announcing some new policy (which he may or may not end up pursuing).
Trump, by using social media so widely, is simply seeking to circumvent traditional media. He also uses live television to his advantage, an older medium than Twitter but one that raises even more troubling questions because he is so unpredictable. In cabinet meetings or White House press gaggles, he can perform long monologues that often contain a multitude of untruths, leaving producers or livestream operators to either let the falsehoods flow directly to viewers or make the political decision of cutting away from the president of the United States.
Some networks have taken to live fact-checking his remarks. CNN is an example. While the network often runs Trump’s comments live, it will include on its screen a “reality check” that calls out falsehoods as he makes them. The network in the summer of 2019 hired Daniel Dale, one of the most prolific Trump fact-checkers in media, to bolster this effort.
Using attribution as cover
In their rush to keep up with the accelerated news cycle, journalists will often default to the one true thing they can publish. The person did say this thing – that is a true statement. But repeating it without context is using attribution as cover for amplifying a falsehood.
In environments where misinformation is rampant, what seems like neutrality on deadline can look after the fact like a publication has been used to amplify an agenda or a falsehood. This is particularly true in cases where politicians are being quoted on something that might be considered news, e.g., “Politician Smith said XYZ.”
In a 2019 example, after Trump held a rally in North Carolina in which his supporters started chanting “Send her back!” in a taunt at Rep. Ilhan Omar, a Minnesota Democrat who was born in Somalia, the president tried the next day to disavow those chants.
Some of the resulting headlines missed the context by simply quoting the president as saying he didn’t like the chants. That context: The chants started as Trump recounted controversial remarks made by Omar, including one in which she perpetuated an anti-Semitic trope. His claim that he tried to stop the chants is clearly contradicted by video of the event showing him waiting quietly while the audience chanted. Also, the day before, Trump had tweeted that Omar and three other congresswomen who have been critical of the president and his policies could “go back” to where they came from if they didn’t like it.
Here is an example, on Twitter, of the importance of context.
Lacks context as story was breaking:
#BREAKING: Trump says he disagrees with “send her back” chant https://t.co/BWmkMgfXXc pic.twitter.com/gI5L2wp7G1
— The Hill (@thehill) July 18, 2019
Includes the context 22 minutes later:
JUST IN: Trump claims he tried to stop rally crowd’s “send her back” chants despite letting them run for over 10 seconds https://t.co/xGK2VkGjaa pic.twitter.com/sh7Y9UJNMG
— The Hill (@thehill) July 18, 2019
The headline for the story, “Trump says he disagrees with ‘send her back’ chant,” remained the same throughout. The story did note that Trump did not, in fact, seek to stop the chanting as he claimed.
A final challenge in covering Trump’s falsehoods involves cases where he repeats something that oversimplifies and leaves out important context, such as his claim that the United States is building a wall along the southern border.
Fact-checkers have repeatedly debunked this claim, which the president has made as many as 200 times, according to The Washington Post. “A barrier is being built on the southern border, but not the 30-foot-tall, concrete, 1,000-mile wall Trump promised in the 2016 campaign,” The Post’s fact-checker wrote in October 2019. “Much of this barrier is simply replacement fencing.”
As The Post’s video editor wrote, “Images and video can create a powerful effect online in convincing American voters of the wall’s progress.”
The challenge goes beyond simply debunking the claim. Trump’s repetition of the falsehood means editors have to make a decision every time about whether to debunk it again or ignore it, which would mean letting the falsehood stand unchallenged on social media.
Strategies for covering falsehoods without amplifying them
There are various strategies journalists can employ to ensure they’re amplifying the truth and avoiding amplification of falsehoods. Among the most helpful:
1. Identifying the tipping point: First Draft and other organizations have suggested questions for journalists to ask themselves before reporting on misinformation. Every situation will be different and every news organization has a different audience, so each publication should have its own guidelines.
First Draft has a checklist of “10 questions to ask” before publishing misinformation. A key question is why go with such a story – is reporting on the misinformation helping to clear up a widespread public misunderstanding? Is it important to hold a public official accountable?
A central consideration is whether the information has reached a “tipping point” — the point at which a story about a hoax, a falsehood making the rounds, or a conspiracy theory becomes too big to ignore. Each story, each newsroom and each scenario will have a different tipping point. But it is important to know there is a line and that you are going to decide when to cross it.
2. Truth Sandwich: If you report a falsehood, cushion it between two hearty slices of the truth. As noted above, this may work better in an analytical treatment than a breaking news situation.
The truth sandwich is more of a concept than a precise technique, but journalists could develop their own versions based on the observations of its creator, George Lakoff, from a podcast in 2018 or from this article on Vox.
3. Label misinformation clearly: If you report on a doctored photo, meme or video, mark it clearly as misinformation. BuzzFeed News does a good job of labeling, as in a 2018 story about explosive devices sent to prominent politicians, which shows screenshots of false information with big “fake” stickers on them. There is little chance people will think it is showing something real. Some fact-checkers use stickers or other visuals to indicate falsehoods.
First Draft provides some examples and techniques for this in the October 2019 report, Responsible Reporting in an Age of Information Disorder.
4. Avoid using attribution as cover for amplifying a falsehood: Don’t repeat the falsehood in the headline. Just because someone said something doesn’t mean you should repeat it without context.