Dissertations, Theses, and Capstone Projects

Date of Degree

9-2024

Document Type

Dissertation

Degree Name

Ph.D.

Program

Psychology

Advisor

Charles Stone

Committee Members

Kelly McWilliams

Jan-Willem van Prooijen

Emily Thorson

Deryn Strange

Subject Categories

American Politics | Cognitive Psychology | Cognitive Science | Communication | Social Media

Keywords

misinformation, conspiracy theories, conspiratorial misinformation, social media, disinformation

Abstract

The threat of misinformation is widely acknowledged among researchers and laypeople alike (NORC, 2021). When misinformation coalesces with conspiracy theories, the repercussions can be especially dangerous, sometimes even fatal. Events such as the January 6th insurrection and the Buffalo Tops supermarket shooting were inspired, in part, by misleading information and conspiracy theories (Burke, 2022; Dawsey, 2023). Although misinformation involving conspiracy theories is recognized as potentially posing a greater challenge to correction efforts (Lewandowsky, 2021a), no study has yet experimentally manipulated the presence or absence of conspiratorial elements within misinformation. This dissertation examines whether conspiratorial misinformation is more difficult to correct than non-conspiratorial misinformation and whether partisan-biased topics further complicate the correction of conspiratorial misinformation. To that end, a pilot study and a main experimental study were conducted. The pilot study aimed to identify tweets that presented conspiratorial and non-conspiratorial descriptions of events, both partisan and non-partisan. It tested 13 pairs of conspiratorial/non-conspiratorial misinformation tweets about current events, categorized as liberal-leaning, conservative-leaning, neutral, vaccination-related, or environment-related. Pairs were selected when participants rated the conspiratorial and non-conspiratorial versions as significantly different and when the liberal and conservative tweets displayed a partisan bias.

Based on the pilot study results, the main study used five pairs of conspiratorial/non-conspiratorial tweets, one from each of the five topics. The main study included 162 participants in a mixed experimental design with two between-subjects factors, misinformation type (conspiratorial vs. non-conspiratorial) and correction condition (correction present vs. correction absent), and two within-subjects factors, time (belief in the tweet before and after the correction phase) and tweet type (liberal, conservative, neutral, vaccination, environmental). Participants reported lower belief in all misinformation tweets when a correction was issued than when no correction was given. While corrections significantly reduced belief in non-conspiratorial misinformation, they did not significantly reduce belief in the conspiratorial counterparts of that misinformation. Participants were also more certain of their beliefs and more willing to share the misinformation at Time 2 for conspiratorial misinformation than for non-conspiratorial misinformation, especially when a correction was issued. When Democrats were shown a belief-congruent (liberal) tweet, their belief was lower at Time 2 than at Time 1, regardless of whether a correction was issued. For Republicans, however, belief in ideologically congruent (conservative) tweets did not decrease at Time 2, with or without a correction. These results partially support the hypothesis that ideologically congruent tweets would be more difficult to correct: the pattern held for Republicans viewing conservative tweets but was not observed for Democrats. This study supports previous findings that corrections can be effective at immediately lowering belief in misinformation; however, it also finds that correcting conspiratorial misinformation is less effective than correcting non-conspiratorial misinformation.
