This study was conducted by the Media Insight Project, an initiative of the American Press Institute (API) and The Associated Press NORC Center for Public Affairs Research. The study was funded by API. Staff from API and The AP-NORC Center collaborated on all aspects of the study.
The study featured two surveys. Interviews for the first survey were conducted between November 22 and December 15, 2019, with 2,727 adults age 18 and older representing the 50 states and the District of Columbia. The first survey included both a probability-based sample and a nonprobability-based sample.
For the second survey, 2,124 probability-based respondents who completed the first survey and did not skip key survey questions were invited to complete it. The second survey was conducted between August 18 and August 24, 2020, with 1,155 adults age 18 and older.
The probability interviews were all conducted using AmeriSpeak®, NORC’s probability-based panel designed to be representative of the U.S. household population.
During the initial recruitment phase of the AmeriSpeak Panel, randomly selected U.S. households were sampled with a known, non-zero probability of selection from the NORC National Sample Frame and then contacted by U.S. mail, email, telephone, and field interviewers (face-to-face). The panel provides sample coverage of approximately 97% of the U.S. household population. Those excluded from the sample include people with P.O. Box-only addresses, some addresses not listed in the USPS Delivery Sequence File, and some newly constructed dwellings.
Panel members were randomly drawn from the AmeriSpeak panel, and interviews for both surveys were conducted online in English.
The final stage completion rate for the first survey was 26.8%, the weighted household panel response rate was 24.1%, and the weighted household panel retention rate was 85.6%, for a cumulative response rate of 5.5%.
The second survey had a final stage completion rate of 68%, a weighted household panel response rate of 24%, and a weighted household panel retention rate of 86%, for a cumulative response rate of 14%.
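These figures are consistent with treating the cumulative response rate as the product of the weighted household panel response rate, the weighted household panel retention rate, and the final stage completion rate. The short Python sketch below simply reproduces that arithmetic using the rates reported above; it is an illustration of the calculation, not code from the study.

```python
# Cumulative response rate as a product of stage-level rates, using the
# figures reported in the text above. Illustrative only.

def cumulative_response_rate(panel_response, panel_retention, completion):
    """Multiply the weighted panel response, retention, and final-stage
    completion rates to get the cumulative response rate."""
    return panel_response * panel_retention * completion

# First survey: 24.1% x 85.6% x 26.8% ~= 5.5%
print(round(cumulative_response_rate(0.241, 0.856, 0.268) * 100, 1))  # 5.5

# Second survey: 24% x 86% x 68% ~= 14%
print(round(cumulative_response_rate(0.24, 0.86, 0.68) * 100, 1))  # 14.0
```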
The first survey also included a nonprobability sample. Dynata provided 1,020 nonprobability interviews. The Dynata sample was derived based on quotas related to age, race and ethnicity, and gender. Interviews were conducted in English and via the web only. For panel recruitment, Dynata uses invitations of all types, including email invitations, phone alerts, banners, and messaging on panel community sites to include people with a diversity of motivations to take part in research. Because nonprobability panels do not start with a frame where there is a known probability of selection, standard measures of sampling error and response rates cannot be calculated.
To incorporate the nonprobability sample with the probability sample for the first survey, NORC used TrueNorth®, a calibration approach developed at NORC that features small domain estimation methods to account for potential bias associated with the nonprobability sample. The purpose of TrueNorth calibration is to adjust the weights for the nonprobability sample, so as to bring weighted distributions of the nonprobability sample in line with the population distribution for characteristics correlated with the survey variables. Such calibration adjustments help to reduce potential bias, yielding more accurate population estimates.
A small domain model was used with the combined samples to generate estimates at the domain level, where the domains were defined by race/ethnicity, age, and gender. The dependent variables for the models were key survey variables derived from a gradient boosted tree model, and the small domain model included covariates and domain-level random effects. The covariates were external data available from other national surveys such as health insurance, internet access, and housing type from the American Community Survey. The final combined AmeriSpeak and nonprobability sample weights were derived so the weighted estimates of the combined sample were consistent with the small domain model estimates derived for key survey variables.
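The sketch below illustrates the general idea of calibrating weights so that weighted domain shares match target shares. It is not NORC's TrueNorth procedure, which additionally relies on small domain models with covariates and random effects and on gradient boosted trees; the domains, starting weights, and target shares here are entirely hypothetical.

```python
# Generic weight calibration to domain-level targets (hypothetical data);
# a simplified stand-in for the TrueNorth-style calibration described above.
from collections import defaultdict

def calibrate_to_domain_targets(records, target_shares):
    """Rescale each record's weight so weighted domain shares match targets.

    records: list of dicts with 'domain' and 'weight' keys.
    target_shares: dict mapping domain -> desired share of total weight.
    """
    total = sum(r["weight"] for r in records)
    domain_totals = defaultdict(float)
    for r in records:
        domain_totals[r["domain"]] += r["weight"]
    for r in records:
        target_total = target_shares[r["domain"]] * total
        r["weight"] *= target_total / domain_totals[r["domain"]]
    return records

# Hypothetical example: two age-by-gender domains calibrated to 50/50 shares.
sample = [{"domain": "18-34_female", "weight": 1.0},
          {"domain": "18-34_female", "weight": 0.8},
          {"domain": "35plus_male", "weight": 1.2}]
calibrated = calibrate_to_domain_targets(
    sample, {"18-34_female": 0.5, "35plus_male": 0.5})
```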
Once the samples for the two surveys had been selected and fielded, and all the study data had been collected and made final, a raking process was used to adjust for any survey nonresponse in the probability sample as well as any noncoverage or under- and oversampling resulting from the study-specific sample design. Raking variables for the probability sample included age, gender, census division, race/ethnicity, and education. Population control totals for the raking variables were obtained from the 2019 Current Population Survey for the first survey and the 2020 Current Population Survey for the second survey. The weighted data reflect the U.S. population of adults age 18 or older.
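As a general illustration of how raking works, the sketch below adjusts weights one variable at a time until the weighted marginal distributions match the control totals. The variables and control totals are hypothetical stand-ins, not the study's actual raking targets.

```python
# Bare-bones raking (iterative proportional fitting) over hypothetical
# control totals; not the study's actual weighting code.
from collections import defaultdict

def rake(records, controls, n_iter=50):
    """records: dicts with a 'weight' key plus one key per raking variable.
    controls: {variable: {category: population share}}."""
    for _ in range(n_iter):
        for var, targets in controls.items():
            total = sum(r["weight"] for r in records)
            margins = defaultdict(float)
            for r in records:
                margins[r[var]] += r["weight"]
            for r in records:
                r["weight"] *= (targets[r[var]] * total) / margins[r[var]]
    return records

# Hypothetical two-variable example.
sample = [{"gender": "female", "educ": "college", "weight": 1.0},
          {"gender": "male", "educ": "no_college", "weight": 1.0},
          {"gender": "female", "educ": "no_college", "weight": 1.0}]
controls = {"gender": {"female": 0.52, "male": 0.48},
            "educ": {"college": 0.35, "no_college": 0.65}}
raked = rake(sample, controls)
```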
The overall margin of error for the first survey sample is +/- 2.3 percentage points at the 95% confidence level, including the design effect. The margin of sampling error may be higher for subgroups. Although there is no statistically agreed-upon approach for calculating margins of error for nonprobability samples, these margins of error were estimated using a calculation called the root mean squared error, along with other statistical adjustments. The root mean squared error is a measure of uncertainty that incorporates both the variability associated with the estimates and the bias associated with estimates derived from a nonprobability sample.
The overall margin of error for the second survey sample is +/- 4.1 percentage points at the 95% confidence level, including the design effect. The margin of sampling error may be higher for subgroups.
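As a rough illustration of the quantities described above, the sketch below computes a margin of error for a proportion inflated by a design effect, and a root mean squared error that combines sampling variance with squared bias. The inputs are hypothetical; the reported +/- 2.3 and +/- 4.1 point margins come from the actual weighted survey data.

```python
# Illustrative formulas only; inputs are hypothetical.
import math

def margin_of_error(p, n, deff=1.0, z=1.96):
    """95% margin of error for a proportion, inflated by the design effect."""
    return z * math.sqrt(deff * p * (1 - p) / n)

def root_mean_squared_error(variance, bias):
    """Uncertainty measure combining sampling variance and squared bias."""
    return math.sqrt(variance + bias ** 2)

# Example with hypothetical inputs: p = 0.5, n = 1,155, design effect of 1.8.
print(round(margin_of_error(0.5, 1155, deff=1.8) * 100, 1))  # ~3.9 points
```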
For more information, email info@norc.org.
Contributing Researchers
From the American Press Institute
Tom Rosenstiel
Kevin Loker
Jeff Sonderman
From NORC at the University of Chicago
David Sterrett
Mariana Meza Hernandez
Caroline Smith
Jennifer Benz
Dan Malato
Trevor Tompson
From The Associated Press
Emily Swanson
Hannah Fingerhut
API’s Stephanie Castellano contributed to the production of this study and its release. API’s Amy Kovac-Ashley and Susan Benkelman participated in the expert panel of journalists that informed the work (see Appendix II).
About The Media Insight Project
The Media Insight Project is a collaboration of the American Press Institute (API) and The AP‑NORC Center for Public Affairs Research with the objective of conducting high-quality, innovative research meant to inform the news industry and the public about various important issues facing journalism and the news business. The Media Insight Project brings together the expertise of both organizations and their respective partners, and involves collaborations among key staff at API, NORC at the University of Chicago, and The Associated Press.
About The American Press Institute
The American Press Institute (API) advances an innovative and sustainable local news industry by helping publishers understand and engage audiences, grow revenue, improve public-service journalism, and succeed at organizational change. API is a national 501(c)(3) nonprofit educational organization affiliated with the News Media Alliance. It works with and draws on the best ideas from technology, business, and publishing.
About The Associated Press‑NORC Center For Public Affairs Research
The AP-NORC Center for Public Affairs Research taps into the power of social science research and the highest-quality journalism to bring key information to people across the nation and throughout the world.
The Associated Press (AP) is the world’s essential news organization, bringing fast, unbiased news to all media platforms and formats.
NORC at the University of Chicago is one of the oldest and most respected independent research institutions in the world.
The two organizations have established The AP-NORC Center for Public Affairs Research to conduct, analyze, and distribute social science research in the public interest on newsworthy topics, and to use the power of journalism to tell the stories that research reveals.
The founding principles of The AP-NORC Center include a mandate to carefully preserve and protect the scientific integrity and objectivity of NORC and the journalistic independence of AP. All work conducted by the Center conforms to the highest levels of scientific integrity to prevent any real or perceived bias in the research. All of the work of the Center is subject to review by its advisory committee to help ensure it meets these standards. The Center will publicize the results of all studies and make all datasets and study documentation available to scholars and the public.