This research was conducted by the Media Insight Project — an initiative of the American Press Institute and the Associated Press-NORC Center for Public Affairs Research.
Overview
As the news industry searches for new revenue models to finance journalism online, a new research study suggests that some kinds of digital advertising are demonstrably more effective with users than others.
Contrary to what some might expect, new styles of ads designed for mobile screens show signs of being more useful than older forms of digital advertising.
Ads that people see as they scroll through a story—known as scroll ads, a newer form of advertising developed for mobile screens—appear to be more effective on a range of metrics than older ad types such as pop-up ads and static banner ads, according to new research by the Media Insight Project, a collaboration between the American Press Institute and The Associated Press-NORC Center for Public Affairs Research.
People are more likely to say they trust the article they read when scroll ads are present, and they are more likely to notice scroll ads than static ads. And they are more likely to recall which product was advertised in a scroll ad than in a pop-up ad.
The findings are potentially important for publishers as audiences move further toward mobile technology as the primary means for interacting with digital content.
In the first decade of the web—largely a desktop and laptop environment—advertising was migrated from print onto digital screens. Banner and pop-up ads were the primary formats for digital advertising. But engagement with these types of advertising has proven problematic over time, and, as users have moved to mobile devices with smaller screens, advertising messages that block part of the screen have posed complications, from slower load times to the rise of ad blocking.
This new research suggests that advertising formats designed more recently, and with mobile in mind, can be less intrusive. Compared with ad formats designed for other web environments, they can also improve recall of the product being advertised and enhance trust in the article where the ad appears.
These are some of the results of the online survey experiment conducted with 1,489 participants between November 9 and December 6, 2016, using AmeriSpeak, NORC’s nationally representative survey panel. The online panel interface allows respondents to see and respond to content presented to them digitally, something that is not possible with a traditional phone survey.
Background
The new experiments are the second wave of research by the Media Insight Project on trust and media, this one with a particular look at advertising. In the spring of 2016, the project conducted focus groups and a national survey probing in detail what qualities in media are associated with trust.
That earlier work found that in digital news most consumers thought it vital that advertising not interfere with getting the news and information they wanted. Sixty-three percent of Americans who got news in a digital format said it was extremely or very important that ads not block their efforts to get to digital news content. Another 18 percent said it was somewhat important, and just 12 percent said waiting for ads to clear or having to close ads was not at all important.[ref Media Insight Project. 2016. “A New Understanding: What Makes People Trust and Rely on News.” http://mediainsight.org/Pages/a-new-understanding-what-makes-people-trust-and-rely-on-news.aspx]
This new research is designed to go further. Using an experimental design, groups of respondents were each given the same news content to look at but saw three different kinds of digital advertising surrounding that news content.[ref The article, “Bright spot: Antarctica’s ozone hole is starting to heal,” originally appeared on The Associated Press news wire. It originally ran under the byline of Seth Borenstein on June 30, 2016, and its headline, date, and byline were unchanged from its original publication. The article was presented to respondents under a TIME banner.]
One form of advertising was a classic pop-up ad that would fill the screen and then disappear entirely when the user clicked off of it. Another was a static ad, which would appear wrapped inside the article, in the way that classic advertising does in print. The third was a scroll ad, which would appear atop the article and which the user would scroll past.
It should be noted that the experiment tested a version of scroll ads that differs slightly from what people might see in real life. In the experiment, the scroll ad appeared before the text of the article began. In many current designs, particularly on mobile devices, scroll ads often appear after the article has begun, and in longer stories might appear more than once, which may increase recall.
Several advantages to scroll ads
The experiment finds that people are more likely to recall the product accurately if it appears in a scroll ad than if it appears in either a pop-up or a static ad. Indeed, 34 percent of users accurately recalled the product from the scroll ad, versus 26 percent in static ads and 25 percent in pop-up ads.
| | Static ad | Scroll ad | Pop-up ad |
| --- | --- | --- | --- |
| Correctly recalled product advertised | 26% | 34% | 25% |
| Correctly recalled brand advertised | 10% | 13% | 9% |

Data Source: What product was the advertisement promoting? What brand was the advertisement promoting?
Source: Media Insight Project poll conducted Nov. 9 – Dec. 6, 2016, with 1,489 adults nationwide.
The differences were even more pronounced if you take just those people who say they noticed the ad at all. The majority of all respondents, 57 percent, fell into this group.
Among this majority who noticed the ad, 55 percent accurately recall the product in a scroll ad. The static ad, the traditional display format that the article wraps around, also fares better among this group: 53 percent accurately recall the product in the ad. But just 41 percent of those who saw a pop-up version of the same ad could accurately recall the product. Noticing the ad at all, however, is less common for the static ad (48 percent) than for the scroll ad (62 percent) or the pop-up ad (61 percent).
For the most part, the type of ad people see does not materially impact what they think of the article, except for the overall indicator of whether they trust it. More people who saw the scroll version of the ad (37 percent) explicitly say they trust the article than those who saw the static version of the same ad (29 percent).
Significant pushback against pop-up ads
Pop-up advertising evolved in part because advertisers were worried that people online might miss the ads or ignore ads on digital screens. If they popped up and had to be closed, people would be more likely to at least pay attention to them, the thinking went.
This research suggests that pop-up ads have almost the reverse effect. People are no more likely to notice pop-up ads than scroll ads (61 percent versus 62 percent for scroll), and, among those who noticed the ad, they are less likely to recall the product in pop-up ads: just 41 percent, versus 55 percent in scroll ads and 53 percent in static ads.
Pop-up ads also have several negative effects. For instance, 61 percent of people say pop-up ads make the article more difficult to read (versus 37 percent for scroll ads and 19 percent for static ads).
More than three-quarters of people (78 percent) say pop-up ads annoy them, significantly higher than the number of people who say they are annoyed by having to scroll past the ad at the top of the article (55 percent), and the 40 percent who find the ad annoying when the article wraps around it.
And more people who saw the pop-up version of the ad say they stopped reading because of the ad. Indeed, 30 percent say they stopped reading the article after seeing the pop-up ad, while only 19 percent of those who saw the scroll ad say they stopped reading, a similar percentage (17 percent) to those who saw the static version of the same ad.
In short, while sometimes advertising of any type can be a distraction, the research suggests that there are numerous advantages to advertising that users can scroll past, perhaps even more than once, and numerous disadvantages to pop-up ads.
| | Static ad | Scroll ad | Pop-up ad |
| --- | --- | --- | --- |
| Made it difficult to read article | 19% | 37% | 61% |
| Annoyed you | 40% | 55% | 78% |
| Stopped reading because of ad | 17% | 19% | 30% |
| Interested in the ad | 10% | 6% | 9% |
| Wanted to click on ad to learn more | 6% | 6% | 8% |

Data Source: Did the advertisements have any of the following impacts or didn’t they?
Source: Media Insight Project poll conducted Nov. 9 – Dec. 6, 2016, with 1,489 adults nationwide.
In the earlier focus groups, some participants described what they did not like about pop-up ads. One participant explained, “If I see more ads than the article, there is a problem. If I cannot read an article because an ad is blocking it and I cannot close the ad, there is a problem.”
In the experiment, roughly equal numbers of those who saw each ad say that it was easy to find important information in the article, and also that the article got the facts right, provided diverse points of view, was entertaining, and had a professional appearance.
The type of ad has little impact on other kinds of engagement, either. For instance, there is no significant difference in the number who say they would share the article, sign up for news alerts from the source, follow the source on social media, recommend the source to friends, or visit the source again.
When readers are interested in the topic of the ad, they are more likely to express some positive evaluations of the article and engage with it
One major trend in advertising in the digital landscape is targeting. Matching the right consumer to the right ad might have all kinds of benefits for engagement with the ad, viewer recall, and much more.
This research finds that the relevance of the ad to the user does not impact evaluations of the ad itself. Prior to seeing the article or ad, respondents were asked how interested they were in a variety of topics. Those who say they are extremely or very interested in cooking, which was the subject of the ad, are no more or less likely to say positive things about the ad, including that they are interested in the ad or want to click on the ad to learn more. This is the case among all respondents in the aggregate and for each of the individual ad types. The personal relevance of the ad, as measured in the experiment, is not tied to any negative reactions either, such as whether the ad was considered intrusive or made it difficult to read the article.
Respondents who express interest in cooking are also no more likely to notice the ad, recall the product, or recall the brand than those who are not interested in cooking.
Those who say they are interested in the subject of the ad are more likely to evaluate the article positively on a couple of measures, though. Overall, people who are interested in the topic of the ad provide some more positive evaluations of the article, including that it provides diverse points of view (28 percent vs. 19 percent) and that it is entertaining (32 percent vs. 25 percent). But, those interested in the subject of the ad are no more likely to say the article got the facts right, had a professional appearance, that it was easy to find important information, or that the information was trustworthy. And, the differences that were observed overall were not observed within individual ad types.
Those who express interest in the ad’s content are more likely to say they would engage with the article in several ways, including share the article with friends, family, or co-workers (45 percent vs. 36 percent), recommend the source to friends, family, or co-workers (37 percent vs. 29 percent), and follow the source on social media (28 percent vs. 21 percent). They are no more or less likely to say they would sign up for news alerts from the source or visit the source again, however.
About the study
Experiment methodology
This survey experiment was conducted by the Media Insight Project, an initiative of the American Press Institute (API) and The Associated Press‑NORC Center for Public Affairs Research. The survey was conducted from November 9 through December 6, 2016. The survey was funded by API. Staff from API, NORC at the University of Chicago, and AP collaborated on all aspects of the study.
Data were collected using the AmeriSpeak Panel, which is NORC’s probability‑based panel designed to be representative of the U.S. household population. During the initial recruitment phase of the panel, randomly selected U.S. households were sampled with a known, nonzero probability of selection from the NORC National Sample Frame and then contacted by U.S. mail, email, telephone, and field interviewers (face‑to‑face).
The experiment randomly assigned respondents to one of three ad conditions—a static ad tucked to the right of the article’s text, a scroll ad that appears at the top of the article and that the reader scrolls past while moving down the page, or a pop-up ad that the reader has to click off of to see the article. Each respondent then saw an ad with an article originally from AP titled, “Bright spot: Antarctica’s ozone hole is starting to heal,” under a banner using the logo for TIME. The content and display of the article were identical across all three conditions aside from the type of ad displayed. The advertisement itself was for Williams-Sonoma cookware. Examples of how each of these ad types appeared on the screen can be seen in the Appendix.
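As a rough illustration of the random assignment step (a minimal sketch only; the panel's actual randomization procedure is not described here and may differ), assigning each respondent to one of the three conditions could look like this:

```python
import random

# Hypothetical sketch of assigning respondents to one of the three ad
# conditions described above; NORC's actual randomization may differ.
AD_CONDITIONS = ["static", "scroll", "pop-up"]

def assign_conditions(respondent_ids, seed=2016):
    """Randomly assign each respondent to one ad condition."""
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    return {rid: rng.choice(AD_CONDITIONS) for rid in respondent_ids}

assignments = assign_conditions(range(1489))
print({c: list(assignments.values()).count(c) for c in AD_CONDITIONS})
```

With simple random assignment of this kind, the three groups end up roughly, though not exactly, equal in size.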
Interviews for this survey were conducted with adults age 18 and over representing the 50 states and the District of Columbia. Panel members were randomly drawn from the AmeriSpeak Panel, and 1,489 completed the survey, all via the web. The final stage completion rate is 34.8 percent, the weighted household panel response rate is 32.4 percent, and the weighted household panel retention rate is 95.5 percent, for a cumulative response rate of 10.8 percent.
The overall margin of sampling error is +/‑ 3.5 percentage points at the 95 percent confidence level, including the design effect. The margin of sampling error may be higher for subgroups.
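To see how the response-rate and margin-of-error figures relate, here is a sketch of the arithmetic, assuming the conventional conservative p = 0.5; the implied design effect is an inference from the reported numbers, not a figure published by the study:

```latex
% Cumulative response rate: product of the three stage rates reported above
0.348 \times 0.324 \times 0.955 \approx 0.108 \quad (10.8\%)

% Margin of sampling error at 95 percent confidence for n = 1489,
% assuming the conservative p = 0.5
\mathrm{MOE} = 1.96\,\sqrt{\mathrm{deff}}\,\sqrt{\frac{0.5(1-0.5)}{1489}}
             \approx 2.54\,\sqrt{\mathrm{deff}}\ \text{percentage points}

% The reported +/- 3.5 points therefore implies
% \mathrm{deff} \approx (3.5 / 2.54)^2 \approx 1.9
```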
Respondents were offered a small monetary incentive for completing the survey ($2 or $4 depending on their initial panel recruitment). All interviews were conducted in English by professional interviewers who were carefully trained on the specific survey for this study.
Once the sample was selected and fielded, and all the study data had been collected and made final, a poststratification process was used to adjust for any survey nonresponse as well as any noncoverage or under‑ and over‑sampling resulting from the study‑specific sample design. Poststratification variables included age, gender, Census region, race/ethnicity, and education. The weighted data, which reflect the U.S. population of adults age 18 and over, were used for all analyses.
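As a minimal sketch of the idea behind cell-based poststratification (the cells and shares below are hypothetical, and NORC's actual weighting procedure is more elaborate), each respondent's weight is scaled so that the weighted sample matches known population shares:

```python
from collections import Counter

def poststratify(respondents, population_shares):
    """Scale base weights so weighted cell shares match population shares.

    respondents: list of (cell, base_weight) pairs, where `cell` identifies
    a demographic cell (for example, an age-by-gender-by-region combination).
    population_shares: dict mapping each cell to its known population share.
    Hypothetical illustration; the study's actual procedure may differ.
    """
    total = sum(w for _, w in respondents)
    sample_share = Counter()
    for cell, w in respondents:
        sample_share[cell] += w / total
    # Adjustment factor for each cell = population share / weighted sample share
    return [
        (cell, w * population_shares[cell] / sample_share[cell])
        for cell, w in respondents
    ]

# Hypothetical two-cell example: the sample over-represents cell "A".
resp = [("A", 1.0)] * 60 + [("B", 1.0)] * 40
weighted = poststratify(resp, {"A": 0.5, "B": 0.5})
print(round(sum(w for c, w in weighted if c == "A"), 2),
      round(sum(w for c, w in weighted if c == "B"), 2))  # both cells now ~50.0
```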
All analyses were conducted using STATA (version 14), which allows for adjustment of standard errors for complex sample designs. All differences reported between subgroups of the U.S. population are at the 95 percent level of statistical significance, meaning that there is only a 5 percent (or less) probability that the observed differences could be attributed to chance variation in sampling. Additionally, bivariate differences between subgroups are only reported when they also remain robust in a multivariate model controlling for other demographic, political, and socioeconomic covariates. A comprehensive listing of all study questions, complete with tabulations of top‑level results for each question, is available on the Media Insight Project’s website: www.mediainsight.org.
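The analyses themselves relied on Stata's survey-design adjustments, which account for the weights directly. Purely as an illustration of how a design effect inflates standard errors (the design effect of roughly 1.9 is inferred from the reported margin of error, and the group sizes are assumed to be about a third of the 1,489 respondents), a design-adjusted two-proportion comparison could be sketched as:

```python
from math import sqrt
from scipy.stats import norm

def design_adjusted_ztest(p1, n1, p2, n2, deff=1.9):
    """Two-proportion z-test with standard errors inflated by a design effect.

    Rough approximation for illustration only; it is not the study's
    actual estimation procedure.
    """
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2)) * sqrt(deff)
    z = (p1 - p2) / se
    return z, 2 * (1 - norm.cdf(abs(z)))

# Example: 34% product recall for scroll ads vs. 25% for pop-up ads,
# assuming roughly equal groups of about 1,489 / 3 respondents each.
z, p = design_adjusted_ztest(0.34, 496, 0.25, 496)
print(round(z, 2), round(p, 3))  # under these assumptions, p < 0.05
```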
Contributing researchers
From the American Press Institute: Tom Rosenstiel, Jeff Sonderman, Kevin Loker
From NORC at the University of Chicago: Jennifer Benz, David Sterrett, Dan Malato, Trevor Tompson, Liz Kantor
From the Associated Press: Emily Swanson
About the Media Insight Project
The Media Insight Project is a collaboration of the American Press Institute (API) and the AP‑NORC Center for Public Affairs Research with the objective of conducting high‑quality, innovative research meant to inform the news industry and the public about various important issues facing journalism and the news business. The Media Insight Project brings together the expertise of both organizations and their respective partners, and involves collaborations among key staff at API, NORC at the University of Chicago, and The Associated Press.
About the American Press Institute
The American Press Institute (API) conducts research and training, convenes thought leaders, and creates tools to help chart a path ahead for journalism in the 21st century. API is an educational, nonadvocacy 501(c)(3) nonprofit organization affiliated with the News Media Alliance. It aims to help the news media—especially local publishers and newspaper media—advance in the digital age.
About the Associated Press-NORC Center for Public Affairs Research
The AP‑NORC Center for Public Affairs Research taps into the power of social science research and the highest‑quality journalism to bring key information to people across the nation and throughout the world.
The Associated Press (AP) is the world’s essential news organization, bringing fast, unbiased news to all media platforms and formats.
NORC at the University of Chicago is one of the oldest and most respected, independent research institutions in the world.
The two organizations have established the AP‑NORC Center for Public Affairs Research to conduct, analyze, and distribute social science research in the public interest on newsworthy topics, and to use the power of journalism to tell the stories that research reveals.
The founding principles of the AP‑NORC Center include a mandate to preserve carefully and protect the scientific integrity and objectivity of NORC and the journalistic independence of AP. All work conducted by the Center conforms to the highest levels of scientific integrity to prevent any real or perceived bias in the research. All of the work of the Center is subject to review by its advisory committee to help ensure it meets these standards. The Center will publicize the results of all studies and make all datasets and study documentation available to scholars and the public.
__________________
A formatted PDF version of this report is available for download or printing here.