Findings

Think Carefully

Kevin Lewis

June 26, 2025

Learning More Than You Can Know: Introductory Education Produces Overly Expansive Self-Assessments of Knowledge
Stav Atir & David Dunning
Management Science, forthcoming

Abstract:
Education is a primary engine for gaining knowledge, yet it is unclear if introductory education helps learners gain meta-knowledge, that is, an accurate awareness of the scope and limits of their knowledge. We found that after taking an introductory finance class, relative to a control class, students overclaimed more knowledge they did not have, that is, endorsed more familiarity with bogus finance terms and expressed more confidence under incentives in their ability to answer questions about these terms. This finding was replicated in a Psychology and Law class, compared with a control class, with overclaiming still elevated two years later. In two follow-up experiments, participants in a hypothetical consulting context were randomly assigned to introductory training on GPS or a control topic. Participants in the GPS condition overclaimed more knowledge of bogus GPS terms and were more confident in their knowledge of real material never covered in the training, controlling for test performance. These effects were explained by introductory education both increasing self-perceived expertise in the education domain and creating basic schematic understanding that accommodates plausible but incorrect interpretations of unknown content. Introductory education, then, does not necessarily improve learners’ skills at identifying lack of knowledge. Rather, it can lead to an illusion of knowledge for unknown material, causing learners to overestimate their “circle of competence.”


Differences in psychologists’ cognitive traits are associated with scientific divides
Justin Sulik et al.
Nature Human Behaviour, June 2025, Pages 1147-1161

Abstract:
Scientific research is often characterized by schools of thought. We investigate whether these divisions are associated with differences in researchers’ cognitive traits such as tolerance for ambiguity. These differences may guide researchers to prefer different problems, tackle identical problems in different ways, and even reach different conclusions when studying the same problems in the same way. We surveyed 7,973 researchers in psychological sciences and investigated links between what they research, their stances on open questions in the field, and their cognitive traits and dispositions. Our results show that researchers’ stances on scientific questions are associated with what they research and with their cognitive traits. Further, these associations are detectable in their publication histories. These findings support the idea that divisions in scientific fields reflect differences in the researchers themselves, hinting that some divisions may be more difficult to bridge than suggested by a traditional view of data-driven scientific consensus.


Capability Inversion: The Turing Test Meets Information Design
Joshua Gans
NBER Working Paper, June 2025

Abstract:
This paper analyzes the design of tests to distinguish human from artificial intelligence through the lens of information design. We identify a fundamental asymmetry: while AI systems can strategically underperform to mimic human limitations, they cannot overperform beyond their capabilities. This leads to our main contribution -- the concept of capability inversion domains, where AIs fail detection not through inferior performance, but by performing “suspiciously well” when they overestimate human capabilities. We show that if an AI significantly overestimates human ability in even one domain, it cannot reliably pass an optimally designed test. This insight reverses conventional intuition: effective tests should target not what humans do well, but the specific patterns of human imperfection that AIs systematically misunderstand. We identify structural sources of persistent misperception -- including the difficulty of learning about failure from successful examples and fundamental differences in embodied experience -- that make certain capability inversions exploitable for detection even as AI systems improve.


Chance Neglect in Performance Judgements
Ze Hong & Joseph Henrich
Harvard Working Paper, May 2025

Abstract:
Humans often struggle to incorporate chance information into performance evaluations. Across diverse samples in China and the United States (total N = 1,387), we show that people systematically misperceive or ignore chance-level success rates when judging the efficacy of technological practices. Using scenarios where chance performance is objectively known (e.g., ~50% success rate for fetal sex prediction), we find that (1) many participants underestimate the success achievable by random guessing, (2) even when they accurately recognize chance-level performance, they often fail to use it as a baseline for evaluating expert predictions, and (3) this "chance neglect" is especially pronounced in performance-related judgments. These findings highlight a cognitive bias that may contribute to the persistence of ineffective technologies across societies.


Using the Nested Structure of Knowledge to Infer What Others Know
Edgar Dubourg et al.
Psychological Science, June 2025, Pages 443-450

Abstract:
Humans rely on more knowledgeable individuals to acquire information. But when we are ignorant, how are we to tell who is knowledgeable? We propose that human knowledge is nested: People who know only a few things tend to know very common pieces of information, whereas rare pieces of information are known only by people who know many things, including common things. This leads to the possibility of reliably inferring knowledgeability from minimal cues. In this study (N = 848 U.S. adults recruited online), we show that individuals can accurately gauge others’ knowledgeability on the basis of very limited information, relying on their ability to estimate the rarity of different pieces of knowledge and on the fact that knowing a rare piece of information indicates a high likelihood of knowing more information in the same theme. Even participants who are largely ignorant of a theme can infer how knowledgeable other individuals are on the basis of the possession of a single piece of knowledge.


How laypeople evaluate scientific explanations containing jargon
Francisco Cruz & Tania Lombrozo
Nature Human Behaviour, forthcoming

Abstract:
Individuals rely on others’ expertise to achieve a basic understanding of the world. But how can non-experts achieve understanding from explanations that, by definition, they are ill-equipped to assess? Across 9 experiments with 6,698 participants (Study 1A = 737; 1B = 734; 1C = 733; 2A = 1,014; 2B = 509; 2C = 1,012; 3A = 1,026; 3B = 512; 4 = 421), we address this puzzle by focusing on scientific explanations with jargon. We identify ‘when’ and ‘why’ the inclusion of jargon makes explanations more satisfying, despite decreasing their comprehensibility. We find that jargon increases satisfaction because laypeople assume the jargon fills gaps in explanations that are otherwise incomplete. We also identify strategies for debiasing these judgements: when people attempt to generate their own explanations, inflated judgements of poor explanations with jargon are reduced, and people become better calibrated in their assessments of their own ability to explain.


Improving Human Deception Detection Using Algorithmic Feedback
Marta Serra-Garcia & Uri Gneezy
Management Science, forthcoming

Abstract:
Can algorithms help people detect deception in high-stakes strategic interactions? Participants watching the preplay communication of contestants in the TV show Golden Balls display a limited ability to predict contestants’ behavior, whereas algorithms do significantly better. To increase participants’ accuracy, we provide them with algorithmic advice by flagging videos for which an algorithm predicts a high likelihood of cooperation or defection. We test how the effectiveness of flags depends on their timing. We show that participants rely significantly more on flags shown before they watch the videos than flags shown after they watch them. These findings show that the timing of algorithmic feedback is key for its adoption.


On the Robustness and Provenance of the Gambler’s Fallacy
Yang Xiang, Kevin Dorst & Samuel Gershman
Psychological Science, June 2025, Pages 451-464

Abstract:
The gambler’s fallacy is typically defined as the false belief that a random event is less likely to occur if it has occurred recently. Although forms of this fallacy have been documented numerous times, past work either has not actually measured probabilistic predictions but rather point predictions or used sequences that were not independent. To address these problems, we conducted a series of high-powered, preregistered studies in which we asked 750 adult Amazon Mechanical Turk workers from the United States to report probabilistic predictions for truly independent sequences. In contrast to point predictions, which generated a significant gambler’s fallacy, probabilistic predictions were not found to lead to a gambler’s fallacy. Moreover, the point predictions could not be reconstructed by sampling from the probability judgments. This suggests that the gambler’s fallacy originates at the decision stage rather than in probabilistic reasoning, as posited by several leading theories. New theories of the gambler’s fallacy may be needed to explain these findings.

