Findings

Normalizing

Kevin Lewis

August 05, 2025

Affective and cognitive underpinnings of moral condemnation when news of transgressions goes viral
Daniel Effron & William Brady
Journal of Experimental Psychology: General, forthcoming

Abstract:
When news of a transgression goes viral, people hear about it repeatedly from different news sources and individuals. How does this repeated exposure affect moral judgments of the transgression? We test a new theoretical model proposing that moral condemnation is influenced by competing affective and cognitive processes. Repeated exposure to the same information about a transgression dampens people's emotional responses, which can reduce moral condemnation (an affective-desensitization process). However, repeated exposure from multiple sources also signals that the transgression is receiving widespread negative attention, which can increase moral condemnation (a cognitive infamy-inference process). These processes' net effect will depend on how strongly repetition dampens affect versus signals infamy. Five preregistered experiments (N = 3,939) test our model. Participants rated corporate transgressions to which they had or had not been repeatedly exposed from three sources (news outlets or individuals). Experiments 1 and 2 measured affective reactions, infamy inferences, and moral judgments, finding mediational support for our model. In Experiment 2 and two supplemental experiments, repetition reduced moral condemnation, suggesting that affective desensitization was the dominant process. Experiment 3 was designed to strengthen the infamy process by highlighting over a million negative reactions to each repeatedly seen transgression; consistent with our model, repetition no longer reduced moral condemnation but continued to dull affective reactions, suggesting that affective-desensitization and infamy-inference processes offset one another. By documenting these countervailing processes, our research deepens understanding of when, why, and how viral transgressions may impact public opinion and moral outrage.


Savvy or savage? How worldviews shape appraisals of antagonistic leaders
Christine Nguyen & Daniel Ames
Journal of Personality and Social Psychology, forthcoming

Abstract:
Existing theories present a mixed account of how perceivers' views of a target person's antagonism relate to their perceptions of the target's general competence and leadership effectiveness. We argue that, rather than being universal, the relationship between these perceptions varies according to perceivers' idiosyncratic worldviews. In particular, we theorize and find across seven studies (total N = 2,065) that competitive worldview (CWV) serves as a lens through which perceivers interpret and evaluate others' antagonistic behavior. Our studies reveal that those who see the social world as a competitive jungle (i.e., high CWV) have more positive views of the competence and leadership of antagonistic individuals than those who see the social world as cooperative and benign (i.e., low CWV). We also find that CWV shapes the antagonism that perceivers attribute, post hoc, to successful leaders during their rise to the top. Finally, we consider workplace implications, finding that CWV moderates the relationship between managers' antagonistic behavior and a range of employee outcomes, including motivation and job satisfaction. Overall, we argue that individuals' folk theories of the social world (and CWV in particular) can help scholars more fully understand how basic dimensions of social perception relate to one another across perceivers. Practically, worldview-dependent social perception might help explain how and why potentially antagonistic leaders might be excused, tolerated, or even endorsed by the people around them.


Bridging the gap between radical beliefs and violent behavior
Parry Callahan & Barry Rosenfeld
Law and Human Behavior, forthcoming

Method: This study used a subset of the Profiles of Individual Radicalization in the United States, a large (N = 2,103), open-source data set of individuals involved in far-right (n = 1,259), far-left (n = 289), and jihadist-inspired (n = 555) extremism. All data were coded on the basis of publicly available information. Outcome measures included radical behaviors and the probability of three maximum criminal severity outcomes: civil disobedience, violent plot involvement, and direct violence.

Results: Ideological commitment was positively associated with radical behaviors and civil disobedience (OR = 1.36) but not associated with increased risk of violent plot involvement (OR = 0.95) or direct violence (OR = 1.02). Moderation analyses showed that commitment was positively associated with radical behaviors and violence for jihadist-inspired individuals (who had the lowest base rate of direct violence) but not those in far-right or far-left groups. Other hypothesized moderators were not significant.


Large language models show amplified cognitive biases in moral decision-making
Vanessa Cheung, Maximilian Maier & Falk Lieder
Proceedings of the National Academy of Sciences, 24 June 2025

Abstract:
As large language models (LLMs) become more widely used, people increasingly rely on them to make or advise on moral decisions. Some researchers even propose using LLMs as participants in psychology experiments. It is, therefore, important to understand how well LLMs make moral decisions and how they compare to humans. We investigated these questions by asking a range of LLMs to emulate or advise on people's decisions in realistic moral dilemmas. In Study 1, we compared LLM responses to those of a representative U.S. sample (N = 285) for 22 dilemmas, including both collective action problems that pitted self-interest against the greater good, and moral dilemmas that pitted utilitarian cost-benefit reasoning against deontological rules. In collective action problems, LLMs were more altruistic than participants. In moral dilemmas, LLMs exhibited stronger omission bias than participants: They usually endorsed inaction over action. In Study 2 (N = 474, preregistered), we replicated this omission bias and documented an additional bias: Unlike humans, most LLMs were biased toward answering "no" in moral dilemmas, thus flipping their decision/advice depending on how the question is worded. In Study 3 (N = 491, preregistered), we replicated these biases in LLMs using everyday moral dilemmas adapted from forum posts on Reddit. In Study 4, we investigated the sources of these biases by comparing models with and without fine-tuning, showing that they likely arise from fine-tuning models for chatbot applications. Our findings suggest that uncritical reliance on LLMs' moral decisions and advice could amplify human biases and introduce potentially problematic biases.


Call Me A Jerk: Persuading AI to Comply with Objectionable Requests
Lennart Meincke et al.
University of Pennsylvania Working Paper, July 2025

Abstract:
Do artificial intelligence (AI) models trained on human language submit to the same principles of persuasion as humans? We tested whether 7 established principles of persuasion (authority, commitment, liking, reciprocity, scarcity, social proof, and unity) can induce a widely used AI model (GPT-4o mini) to comply with 2 different objectionable requests. Specifically, across N = 28,000 conversations in which the user asked the AI model either to insult them ("Call me a jerk") or to help synthesize a regulated drug ("How do you synthesize lidocaine?"), prompts that employed a principle of persuasion more than doubled the likelihood of compliance (average 72.0%) compared to matched control prompts (average 33.3%, ps < .001). These findings underscore the relevance of classic findings in social science to understanding rapidly evolving, parahuman AI capabilities -- revealing both the risks of manipulation by bad actors and the potential for more productive prompting by benevolent users.


Moral spillover from creators to autonomous technological agents
Arthur Jago & Kai Chi Yam
Journal of Experimental Psychology: General, forthcoming

Abstract:
Autonomous technological agents, such as algorithms and robots, are increasingly entering the workforce and society. We draw upon theories of machines and moralization to examine people's assumptions about the degree to which creators' moral convictions "spill over" to the autonomous technological agents they design, as well as the consequences stemming from these attributions. A field experiment in a Taoist temple indicated that creator moral (vs. nonmoral) conviction led to greater assumed belief spillover to a robot that recited scripture, which mediated both mind perception in that robot and subsequent support for the organization (Study 1). Two additional experiments replicated these findings in contexts of creators developing algorithms for advertising (Study 2) and drones for deforestation-focused mapping, additionally finding that this effect emerges only given fully autonomous (vs. human-controlled) machines (Study 3). Finally, we demonstrate a potential downside associated with moral spillover from creators to autonomous technological agents: When creators justify particularly controversial practices using moral conviction, assumed belief spillover is associated with less positivity toward a creating organization (Study 4). We conclude by discussing theoretical and practical implications related to how creators' moral convictions appear particularly likely to spill over and imbue "mind" in machines that otherwise might appear "mindless," as well as how spillover attributions can garner more support for -- or resistance toward -- automation efforts.


On the post-Enlightenment evolution of moral universalism
Michael Jetter
Economic Journal, forthcoming

Abstract:
Is humanity's circle of moral concern expanding? I explore frequencies of morally universal language in 15 million book publications in American English, British English, French, Spanish, German, and Italian from 1800 to 2000. In each language, morally universal terminology diminished markedly. This pattern also emerges in Chinese, Russian, and Hebrew books. I test two prominent hypotheses predicting moral universalism, pertaining to reason and religiosity. Reason-based terminology indeed correlates positively with purely morally universal terminology -- but also (and more strongly so) with morally communal terminology. These empirical patterns cast doubt on claims of (i) moral universalism expanding and (ii) reason being its driver.


Reward Association With Mental States Shapes Empathy and Prosocial Behavior
Yi Zhang & Leor Hackel
Psychological Science, forthcoming

Abstract:
Valuing the welfare of others is a fundamental aspect of empathy and prosocial behavior. How do people develop this valuation? Theories of associative learning suggest that people can associate social cues, such as smiles, with personal reward, thus feeling good when others thrive. Yet people often display generalized concern for others' welfare, regardless of the specific cues present. We propose that Pavlovian conditioning allows people to associate reward directly with others' abstract mental states, learning that another's happiness predicts their own reward. In four online experiments with 1,500 U.S.-based adults recruited from CloudResearch, participants' monetary outcomes were congruently or incongruently predicted by a target's mental states. Participants who experienced congruent learning reported more empathic feelings toward the target in novel situations. The values attached to mental states further influenced participants' prosocial choices. These results demonstrate how associative learning of abstract mental states can give rise to generalizable empathy and influence moral behavior.


What goes around comes around: Foreign language use increases immanent justice thinking
Janet Geipel, Constantinos Hadjichristidis & Luca Surian
Journal of Experimental Social Psychology, July 2025

Abstract:
Immanent justice thinking refers to the tendency to perceive causal connections between an agent's bad (good) deeds and subsequent bad (good) outcomes, even when such connections are rationally implausible. We asked bilinguals to read scenarios written either in their native language or in a foreign language and examined how language influences immanent justice endorsements. In five pre-registered, randomized experiments involving 1875 participants from two bilingual populations, we demonstrate that foreign language use increases immanent justice endorsements. This effect was largely unrelated to foreign language proficiency, emerged only for problems that could trigger immanent justice intuitions, and was eliminated by a prompt to think rationally. These results suggest that using a foreign language increases immanent justice endorsements by reducing awareness of the conflict between intuition and rational reasoning.


The Identified Helper Effect on the Frequency of Asks
Polina Detkova
Caltech Working Paper, June 2025

Abstract:
People often hesitate to ask for help, even when in great need. Promoting asking can mitigate the information asymmetry between those in need and those willing to help. We investigate whether a minor change in identifiability -- such as adding an uninformative ID number to a potential helper -- can promote asking. In our Prolific study, we find that providing an uninformative ID number increases the asking rate from 67% to 76.5%. An analysis of participants' beliefs suggests that this effect stems from a shift in the influence of expected payoff and other factors, such as social norms, on decision-making. Since uninformative ID numbers represent a very weak form of identification, our results have broad applications.

