Findings

Who Knows What

Kevin Lewis

April 30, 2024

Metaknowledge of Experts Versus Nonexperts: Do Experts Know Better What They Do and Do Not Know?
Yuyan Han & David Dunning
Journal of Behavioral Decision Making, April 2024

Abstract:
Experts are usually valued for their knowledge. However, do they possess metaknowledge, that is, knowing how much they know as well as the limits of that knowledge? The current research examined expert metaknowledge by comparing experts' and nonexperts' confidence when they made correct versus incorrect choices, as well as the difference between the two (e.g., Murphy's resolution and Yates' separation). Across three fields of expertise (climate science, psychological statistics, and investment), we found that experts tended to display better metaknowledge than nonexperts but still showed systematic and important imperfections. They were less overconfident than nonexperts in general and expressed more confidence in their correct answers. However, they tended to exhibit low Murphy's resolution, similar to nonexperts, and endorsed wrong answers with equal or higher confidence than did their nonexpert peers. Thus, it appears that expertise is associated with knowing with more certainty what one knows, but not with awareness of what one does not know.


A Practical Significance Bias in Laypeople's Evaluation of Scientific Findings
Audrey Michal & Priti Shah
Psychological Science, April 2024, Pages 315-327

Abstract:
People often rely on scientific findings to help them make decisions -- however, failing to report effect magnitudes can bias readers toward assuming findings are practically significant. Across two online studies (Prolific; N = 800), we measured U.S. adults' endorsements of expensive interventions described in media reports that led to effects that were small, large, or of unreported magnitude between groups. Participants who viewed interventions with unreported effect magnitudes were more likely to endorse interventions compared with those who viewed interventions with small effects and were just as likely to endorse interventions as those who viewed interventions with large effects, suggesting a practical significance bias. When effect magnitudes were reported, participants on average adjusted their evaluations accordingly. However, some individuals, such as those with low numeracy skills, were more likely than others to act on small effects, even when explicitly prompted to first consider the meaningfulness of the effect.


The distorting effects of producer strategies: Why engagement does not reveal consumer preferences for misinformation
Alexander Stewart et al.
Proceedings of the National Academy of Sciences, 5 March 2024

Abstract:
A great deal of empirical research has examined who falls for misinformation and why. Here, we introduce a formal game-theoretic model of engagement with news stories that captures the strategic interplay between (mis)information consumers and producers. A key insight from the model is that observed patterns of engagement do not necessarily reflect the preferences of consumers. This is because producers seeking to promote misinformation can use strategies that lead moderately inattentive readers to engage more with false stories than true ones -- even when readers prefer more accurate over less accurate information. We then empirically test people's preferences for accuracy in the news. In three studies, we find that people strongly prefer to click and share news they perceive as more accurate -- both in a general population sample, and in a sample of users recruited through Twitter who had actually shared links to misinformation sites online. Despite this preference for accurate news -- and consistent with the predictions of our model -- we find markedly different engagement patterns for articles from misinformation versus mainstream news sites. Using 1,000 headlines from 20 misinformation and 20 mainstream news sites, we compare Facebook engagement data with 20,000 accuracy ratings collected in a survey experiment. Engagement with a headline is negatively correlated with perceived accuracy for misinformation sites, but positively correlated with perceived accuracy for mainstream sites. Taken together, these theoretical and empirical results suggest that consumer preferences cannot be straightforwardly inferred from empirical patterns of engagement.


Can Confirmation Bias Improve Group Learning?
Nathan Gabriel & Cailin O'Connor
Philosophy of Science, April 2024, Pages 329-350

Abstract:
Confirmation bias has been widely studied for its role in failures of reasoning. Individuals exhibiting confirmation bias fail to engage with information that contradicts their current beliefs, and, as a result, can fail to abandon inaccurate beliefs. But although most investigations of confirmation bias focus on individual learning, human knowledge is typically developed within a social structure. We use network models to show that moderate confirmation bias often improves group learning. However, a downside is that a stronger form of confirmation bias can hurt the knowledge-producing capacity of the community.


Bayesianism and wishful thinking are compatible
David Melnikoff & Nina Strohminger
Nature Human Behaviour, April 2024, Pages 692-701

Abstract:
Bayesian principles show up across many domains of human cognition, but wishful thinking -- where beliefs are updated in the direction of desired outcomes rather than what the evidence implies -- seems to threaten the universality of Bayesian approaches to the mind. In this Article, we show that Bayesian optimality and wishful thinking are, despite first appearances, compatible. The setting of opposing goals can cause two groups of people with identical prior beliefs to reach opposite conclusions about the same evidence through fully Bayesian calculations. We show that this is possible because, when people set goals, they receive privileged information in the form of affective experiences, and this information systematically supports goal-consistent conclusions. We ground this idea in a formal, Bayesian model in which affective prediction errors drive wishful thinking. We obtain empirical support for our model across five studies.


How people decide who is correct when groups of scientists disagree
Branden Johnson, Marcus Mayorga & Nathan Dieckmann
Risk Analysis, April 2024, Pages 918-938

Abstract:
Uncertainty that arises from disputes among scientists seems to foster public skepticism or noncompliance. Communication of potential cues to the relative performance of contending scientists might affect judgments of which position is likely more valid. We used actual scientific disputes -- the nature of dark matter, sea level rise under climate change, and benefits and risks of marijuana -- to assess Americans' responses (n = 3150). Seven cues -- replication, information quality, the majority position, degree source, experience, reference group support, and employer -- were presented three at a time in a planned-missingness design. The most influential cues were majority vote, replication, information quality, and experience. Several potential moderators -- topical engagement, prior attitudes, knowledge of science, and attitudes toward science -- lacked even small effects on choice, but cues had the strongest effects for dark matter and the weakest effects for marijuana, and general mistrust of scientists moderately attenuated the top cues' effects. Risk communicators can take these influential cues into account in understanding how laypeople respond to scientific disputes and in improving communication about such disputes.


Can Invalid Information Be Ignored When It Is Detected?
Adam Ramsey, Yanjun Liu & Jennifer Trueblood
Psychological Science, April 2024, Pages 328-344

Abstract:
With the rapid spread of information via social media, individuals are prone to misinformation exposure that they may draw on when forming beliefs. Over five experiments (total N = 815 adults, recruited through Amazon Mechanical Turk in the United States), we investigated whether people could ignore quantitative information when they judged for themselves that it was misreported. Participants recruited online viewed sets of values sampled from Gaussian distributions to estimate the underlying means. They attempted to ignore invalid information, namely outlier values inserted into the value sequences. Results indicated that participants were able to detect outliers. Nevertheless, participants' estimates were still biased in the direction of the outlier, even when they were most certain that they had detected invalid information. The addition of visual warning cues and different task scenarios did not fully eliminate systematic over- and underestimation. These findings suggest that individuals may incorporate invalid information they meant to ignore when forming beliefs.


Durably reducing conspiracy beliefs through dialogues with AI
Thomas Costello, Gordon Pennycook & David Rand
MIT Working Paper, April 2024

Abstract:
Conspiracy theories are a paradigmatic example of beliefs that, once adopted, are extremely difficult to dispel. Influential psychological theories propose that conspiracy beliefs are uniquely resistant to counterevidence because they satisfy important needs and motivations. Here, we raise the possibility that previous attempts to correct conspiracy beliefs have been unsuccessful merely because they failed to deliver counterevidence that was sufficiently compelling and tailored to each believer's specific conspiracy theory (which varies dramatically from believer to believer). To evaluate this possibility, we leverage recent developments in generative artificial intelligence (AI) to deliver well-argued, person-specific debunks to a total of N = 2,190 conspiracy theory believers. Participants in our experiments provided detailed, open-ended explanations of a conspiracy theory they believed, and then engaged in a 3-round dialogue with a frontier generative AI model (GPT-4 Turbo) that was instructed to reduce each participant's belief in their conspiracy theory (or discuss a banal topic in a control condition). Across two experiments, we find robust evidence that the debunking conversation with the AI reduced belief in conspiracy theories by roughly 20%. This effect did not decay over 2 months' time, was consistently observed across a wide range of different conspiracy theories, and occurred even for participants whose conspiracy beliefs were deeply entrenched and of great importance to their identities. Furthermore, although the dialogues were focused on a single conspiracy theory, the intervention spilled over to reduce beliefs in unrelated conspiracies, indicating a general decrease in conspiratorial worldview, as well as an increase in intentions to challenge others who espouse their chosen conspiracy.
These findings highlight that even many people who strongly believe in seemingly fact-resistant conspiratorial beliefs can change their minds in the face of sufficient evidence.


The Status Foundations of Conspiracy Beliefs
Saverio Roscigno
Socius: Sociological Research for a Dynamic World, April 2024

Abstract:
Prior survey research has mostly centered on the psychological dispositions and political leanings associated with conspiracy beliefs rather than underlying and potentially consequential status dynamics. Drawing on prior scholarship and recent national survey data, I analyze the social patterning of conspiracy beliefs and their variations by several status attributes. Notably, and rather than the typical assumption that such beliefs are mostly held by those of lower education, my findings point clearly to a bimodal (U-shaped) distribution by socioeconomic status. Specifically, and unique to my results, there exists a cluster of graduate-degree-holding white men who display a penchant for conspiracy beliefs. Further analyses highlight important variation between specific beliefs, with distinctly taboo beliefs being exceptionally popular among those in this highly educated group -- a pattern corroborated with additional data sources. I conclude by discussing potential mechanisms and avenues that future sociological work on conspiracy beliefs might consider.


Disagreement Gets Mistaken for Bad Listening
Zhiying (Bella) Ren & Rebecca Schaumberg
Psychological Science, forthcoming

Abstract:
It is important for people to feel listened to in professional and personal communications, and yet they can feel unheard even when others have listened well. We propose that this feeling may arise because speakers conflate agreement with listening quality. In 11 studies (N = 3,396 adults), we held constant or manipulated a listener's objective listening behaviors, manipulating only after the conversation whether the listener agreed with the speaker. Across various topics, mediums (e.g., video, chat), and cues of objective listening quality, speakers consistently perceived disagreeing listeners as worse listeners. This effect persisted after controlling for other positive impressions of the listener (e.g., likability). This effect seemed to emerge because speakers believe their views are correct, leading them to infer that a disagreeing listener must not have been listening very well. Indeed, it may be prohibitively difficult for someone to simultaneously convey that they disagree and that they were listening.

