Kevin Lewis

February 20, 2018

Sudden-Death Aversion: Avoiding Superior Options Because They Feel Riskier
Jesse Walker et al.
Journal of Personality and Social Psychology, forthcoming


We present evidence of Sudden-Death Aversion (SDA) – the tendency to avoid “fast” strategies that provide a greater chance of success, but include the possibility of immediate defeat, in favor of “slow” strategies that reduce the possibility of losing quickly, but have lower odds of ultimate success. Using a combination of archival analyses and controlled experiments, we explore the psychology behind SDA. First, we provide evidence for SDA and its cost to decision makers by tabulating how often NFL teams send games into overtime by kicking an extra point rather than going for the 2-point conversion (Study 1) and how often NBA teams attempt potentially game-tying 2-point shots rather than potentially game-winning 3-pointers (Study 2). To confirm that SDA is not limited to sports, we demonstrate SDA in a military scenario (Study 3). We then explore two mechanisms that contribute to SDA: myopic loss aversion and concerns about “tempting fate.” Studies 4 and 5 show that SDA is due, in part, to myopic loss aversion, such that decision makers narrow the decision frame, paying attention to the prospect of immediate loss with the “fast” strategy, but not the downstream consequences of the “slow” strategy. Study 6 finds people are more pessimistic about a risky strategy that needn’t be pursued (opting for sudden death) than the same strategy that must be pursued. We end by discussing how these twin mechanisms lead to differential expectations of blame from the self and others, and how SDA influences decisions in several different walks of life.

Direct replications of Ottati et al. (2015): The earned dogmatism effect occurs only with some manipulations of expertise
Robert Calin-Jageman
Journal of Experimental Social Psychology, forthcoming


The Earned Dogmatism Hypothesis is that social norms entitle experts to behave in a close-minded fashion, and therefore that manipulations which increase perceived expertise reduce open-minded cognition. This manuscript reports direct replications of three key experiments reported by Ottati et al. (2015) in this journal in support of the Earned Dogmatism Hypothesis. Consistent with the original findings, it was found that dogmatic behavior is considered substantially more appropriate for experts relative to novices (d = 0.45 [0.35, 0.55] over all replications and original study). In addition, it was confirmed that when participants envision themselves as experts they predict they will be more close-minded (d = −0.54 [−0.64, −0.44]). Unfortunately, replications involving manipulation of expertise through task difficulty showed little to no effect on open-minded cognition (d = 0.00 [−0.21, 0.21] for easy/difficult recall task; d = −0.02 [−0.21, 0.17] for easy/difficult trivia task). The balance of evidence suggests that the Earned Dogmatism Hypothesis is currently well-supported only for prospective manipulations of expertise that require participants to predict their social behaviors.

How seemingly innocuous words can bias judgment: Semantic prosody and impression formation
David Hauser & Norbert Schwarz
Journal of Experimental Social Psychology, March 2018, Pages 11-18


Would we think more negatively of a person who caused rather than produced an outcome or who is described as utterly rather than totally unconventional? While these word choices may appear to be trivial, cause and utterly occur more frequently in a negative context in natural language use than produced or totally, even though these words do not have an explicit valenced meaning. Words that are primarily used in a valenced context are said to have semantic prosody. Five studies show that semantically-prosodic descriptors affect the impressions formed of others. These effects occur even in situations where perceivers are likely to be skeptical of messages, and they impact behavioral intentions toward targets. An utterly changed person was perceived as less warm and competent than a totally changed person (Study 1), and people held more negative impressions of an utterly rather than totally unconventional boss (Study 2). People had stronger intentions to vote for a political candidate who produced budget changes over one who caused them (Study 3) and preferred a bank that lends money (a word with positive semantic prosody) over a bank that loans money (Study 4). Finally, participants had more (less) romantic interest in potential dating partners with Tinder profiles that utilized words with positive (negative) semantic prosody (Study 5). We conclude that semantically prosodic descriptors that lack a clear positive or negative meaning still lead people to infer the valence of what is to come, which colors the impressions they form of others.

Overrepresentation of extreme events in decision making reflects rational use of cognitive resources
Falk Lieder, Thomas Griffiths & Ming Hsu
Psychological Review, January 2018, Pages 1-32


People’s decisions and judgments are disproportionately swayed by improbable but extreme eventualities, such as terrorism, that come to mind easily. This article explores whether such availability biases can be reconciled with rational information processing by taking into account the fact that decision makers value their time and have limited cognitive resources. Our analysis suggests that to make optimal use of their finite time, decision makers should overrepresent the most important potential consequences relative to less important, but potentially more probable, outcomes. To evaluate this account, we derive and test a model we call utility-weighted sampling. Utility-weighted sampling estimates the expected utility of potential actions by simulating their outcomes. Critically, outcomes with more extreme utilities have a higher probability of being simulated. We demonstrate that this model can explain not only people’s availability bias in judging the frequency of extreme events but also a wide range of cognitive biases in decisions from experience, decisions from description, and memory recall.
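The core idea of the abstract can be sketched as a toy importance sampler. Everything here is illustrative rather than the published model: the outcome list is invented, and using probability × |utility| as the sampling weight (with an importance-weight correction) is a simplification of the paper's utility-weighted scheme.

```python
import random

def utility_weighted_estimate(outcomes, n_samples=1000):
    """Estimate expected utility by simulating outcomes, with extreme
    outcomes oversampled.  `outcomes` is a list of (probability, utility)
    pairs; the weighting rule here is a simplified stand-in for the model."""
    weights = [p * abs(u) for p, u in outcomes]
    total = sum(weights)
    q = [w / total for w in weights]  # biased sampling distribution
    draws = random.choices(range(len(outcomes)), weights=q, k=n_samples)
    # Importance weights p/q keep the estimate unbiased despite the
    # deliberately biased sampling of extreme outcomes.
    return sum(outcomes[i][1] * outcomes[i][0] / q[i]
               for i in draws) / n_samples
```

Even though extreme outcomes dominate the samples, the correction term recovers the true expected utility on average; the bias the paper explains shows up in the variance and in judgments that drop the correction.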

The wisdom of the inner crowd in three large natural experiments
Dennie van Dolder & Martijn van den Assem
Nature Human Behaviour, January 2018, Pages 21–26


The quality of decisions depends on the accuracy of estimates of relevant quantities. According to the wisdom of crowds principle, accurate estimates can be obtained by combining the judgements of different individuals. This principle has been successfully applied to improve, for example, economic forecasts, medical judgements and meteorological predictions. Unfortunately, there are many situations in which it is infeasible to collect judgements of others. Recent research proposes that a similar principle applies to repeated judgements from the same person. This paper tests this promising approach on a large scale in a real-world context. Using proprietary data comprising 1.2 million observations from three incentivized guessing competitions, we find that within-person aggregation indeed improves accuracy and that the method works better when there is a time delay between subsequent judgements. However, the benefit pales against that of between-person aggregation: the average of a large number of judgements from the same person is barely better than the average of two judgements from different people.
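The within-person versus between-person gap the authors report is what a simple error model predicts. The model below is my illustration, not the paper's: each estimate is truth plus a stable personal bias plus independent noise, so averaging the same person twice cancels only the noise, while averaging two people also dilutes the bias.

```python
import random

TRUTH = 100.0  # the quantity being estimated (illustrative)

def judgment(person_bias, noise_sd=10.0):
    # One noisy estimate: shared personal bias + independent noise.
    return TRUTH + person_bias + random.gauss(0, noise_sd)

def mean_squared_errors(n_trials=20000, bias_sd=10.0):
    single = within = between = 0.0
    for _ in range(n_trials):
        b1 = random.gauss(0, bias_sd)  # person 1's stable bias
        b2 = random.gauss(0, bias_sd)  # person 2's stable bias
        single += (judgment(b1) - TRUTH) ** 2
        within += ((judgment(b1) + judgment(b1)) / 2 - TRUTH) ** 2
        between += ((judgment(b1) + judgment(b2)) / 2 - TRUTH) ** 2
    return single / n_trials, within / n_trials, between / n_trials

random.seed(1)
s, w, b = mean_squared_errors()
print(f"single: {s:.0f}  within-person: {w:.0f}  between-person: {b:.0f}")
```

With these (illustrative) parameters the expected mean squared errors are 200, 150, and 100: the inner crowd helps, but two people beat one person asked twice.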

Smaller crowds outperform larger crowds and individuals in realistic task conditions
Mirta Galesic, Daniel Barkoczi & Konstantinos Katsikopoulos
Decision, January 2018, Pages 1-15


Decisions about political, economic, legal, and health issues are often made by simple majority voting in groups that rarely exceed 30–40 members and are typically much smaller. Given that wisdom is usually attributed to large crowds, shouldn’t committees be larger? In many real-life situations, expert groups encounter a number of different tasks. Most are easy, with average individual accuracy being above chance, but some are surprisingly difficult, with most group members being wrong. Examples are elections with surprising outcomes, sudden turns in financial trends, or tricky knowledge questions. Most of the time, groups cannot predict in advance whether the next task will be easy or difficult. We show that under these circumstances moderately sized groups, whose members are selected randomly from a larger crowd, can achieve higher average accuracy across all tasks than either larger groups or individuals. This happens because an increase in group size can lead to a decrease in group accuracy for difficult tasks that is larger than the corresponding increase in accuracy for easy tasks. We derive this nonmonotonic relationship between group size and accuracy from the Condorcet jury theorem and use simulations and further analyses to show that it holds under a variety of assumptions. We further show that situations favoring moderately sized groups occur in a variety of real-life situations including political, medical, and financial decisions and general knowledge tests. These results have implications for the design of decision-making bodies at all levels of policy.
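The nonmonotonic group-size effect can be reproduced directly from the Condorcet jury theorem. A minimal sketch, with illustrative accuracy parameters (not taken from the paper): majority accuracy rises with group size on easy tasks but collapses on hard ones, so a mixed task environment can favor a moderate group.

```python
from math import comb

def majority_accuracy(n, p):
    """Probability that a simple majority of n independent voters
    (n odd) is correct when each is correct with probability p --
    the Condorcet jury theorem setup."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

def average_accuracy(n, p_easy=0.9, p_hard=0.45, share_easy=0.7):
    """Expected accuracy over a mix of tasks: most are easy (individual
    accuracy above chance), some are hard (below chance).  Parameter
    values are illustrative, not drawn from the paper."""
    return (share_easy * majority_accuracy(n, p_easy)
            + (1 - share_easy) * majority_accuracy(n, p_hard))

for n in (1, 5, 25, 101):
    print(n, round(average_accuracy(n), 3))
```

Under these assumptions a small committee outperforms both a lone judge and a very large crowd, because the crowd's losses on below-chance tasks outpace its gains on easy ones.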

Anxious attachment and belief in conspiracy theories
Ricky Green & Karen Douglas
Personality and Individual Differences, 15 April 2018, Pages 30-37


This research examined the link between attachment styles and belief in conspiracy theories. It was hypothesized that, due to the tendency to exaggerate the intensity of threats, individuals higher in anxious attachment would be more likely to hold conspiracy beliefs, even when accounting for other variables such as right-wing authoritarianism, interpersonal trust, and demographic factors that have been found to predict conspiracy belief in previous research. In Study 1 (N = 246 Amazon Mechanical Turk workers), participants higher in anxious attachment style showed a greater tendency to believe in conspiracy theories. Further, this relationship remained significant when accounting for other known predictors of conspiracy belief. Study 2 (N = 230 Prolific Academic workers) revealed that anxious attachment again predicted the general tendency to believe conspiracy theories, but also belief in specific conspiracy theories and conspiracy theories about groups. These relationships held when controlling for demographic factors. The current studies add to the body of research investigating individual-difference predictors of conspiracy belief, demonstrating that conspiracy belief may, to some degree, have roots in early childhood experiences.

When Action-Inaction Framing Leads to Higher Escalation of Commitment: A New Inaction-Effect Perspective on the Sunk-Cost Fallacy
Gilad Feldman & Kin Fai Ellick Wong
Psychological Science, forthcoming


Escalation of commitment to a failing course of action occurs in the presence of (a) sunk costs, (b) negative feedback that things are deviating from expectations, and (c) a decision between escalation and de-escalation. Most of the literature to date has focused on sunk costs, yet we offer a new perspective on the classic escalation-of-commitment phenomenon by focusing on the impact of negative feedback. On the basis of the inaction-effect bias, we theorized that negative feedback results in the tendency to take action, regardless of what that action may be. In four experiments, we demonstrated that people facing escalation-decision situations were indeed action oriented and that framing escalation as action and de-escalation as inaction resulted in a stronger tendency to escalate than framing de-escalation as action and escalation as inaction (mini-meta-analysis effect d = 0.37, 95% confidence interval = [0.21, 0.53]).

When Reality Is Out of Focus: Can People Tell Whether Their Beliefs and Judgments Are Correct or Wrong?
Asher Koriat
Journal of Experimental Psychology: General, forthcoming


Can we tell whether our beliefs and judgments are correct or wrong? Results across many domains indicate that people are skilled at discriminating between correct and wrong answers, endorsing the former with greater confidence than the latter. However, it has not been realized that because of people’s adaptation to reality, representative samples of items tend to favor the correct answer, yielding object-level accuracy (OLA) that is considerably better than chance. Across 16 experiments that used 2-alternative forced-choice items from several domains, the confidence/accuracy (C/A) relationship was positive for items with OLA >50%, but consistently negative across items with OLA <50%. A systematic sampling of items that covered the full range of OLA (0–100%) yielded a U-function relating confidence to OLA. The results imply that the positive C/A relationship that has been reported in many studies is an artifact of OLA being better than chance rather than representing a general ability to discriminate between correct and wrong responses. However, the results also support the ecological approach, suggesting that confidence is based on a frugal, “bounded” heuristic that has been specifically tailored to the ecological structure of the natural environment. This heuristic is used despite the fact that for items with OLA <50%, it yields confidence judgments that are counterdiagnostic of accuracy. Our ability to distinguish correct from wrong judgments is confined to the probability structure of the world we live in. The results are discussed in terms of the contrast between systematic design and representative design.

The effect of ad hominem attacks on the evaluation of claims promoted by scientists
Ralph Barnes et al.
PLoS ONE, January 2018


Two experiments were conducted to determine the relative impact of direct and indirect (ad hominem) attacks on science claims. Four hundred and thirty-nine college students (Experiment 1) and 199 adults (Experiment 2) read a series of science claims and indicated their attitudes towards those claims. Each claim was paired with one of the following: A) a direct attack upon the empirical basis of the science claim, B) an ad hominem attack on the scientist who made the claim, or C) both. Results indicate that ad hominem attacks may have the same degree of impact as attacks on the empirical basis of the science claims, and that allegations of conflict of interest may be just as influential as allegations of outright fraud.

The Role of Experimenter Belief in Social Priming
Thandiwe Gilder & Erin Heerey
Psychological Science, forthcoming


Research suggests that stimuli that prime social concepts can fundamentally alter people’s behavior. However, most researchers who conduct priming studies fail to explicitly report double-blind procedures. Because experimenter expectations may influence participant behavior, we asked whether a short pre-experiment interaction between participants and experimenters would contribute to priming effects when experimenters were not blind to participant condition. An initial double-blind experiment failed to demonstrate the expected effects of a social prime on executive cognition. To determine whether double-blind procedures caused this result, we independently manipulated participants’ exposure to a prime and experimenters’ belief about which prime participants received. Across four experiments, we found that experimenter belief, rather than prime condition, altered participant behavior. Experimenter belief also altered participants’ perceptions of their experimenter, suggesting that differences in experimenter behavior across conditions caused the effect. Findings reinforce double-blind designs as experimental best practice and suggest that people’s prior beliefs have important consequences for shaping behavior with an interaction partner.

The Listener Sets the Tone: High-Quality Listening Increases Attitude Clarity and Behavior-Intention Consequences
Guy Itzchakov et al.
Personality and Social Psychology Bulletin, forthcoming


We examined how merely sharing attitudes with a good listener shapes speakers’ attitudes. We predicted that high-quality (i.e., empathic, attentive, and nonjudgmental) listening reduces speakers’ social anxiety and leads them to delve deeper into their attitude-relevant knowledge (greater self-awareness). This, subsequently, differentially affects two components of speakers’ attitude certainty by increasing attitude clarity, but not attitude correctness. In addition, we predicted that this increased clarity is followed by increased attitude-expression intentions, but not attitude-persuasion intentions. We obtained consistent support for our hypotheses across five experiments (including one preregistered study), manipulating listening behavior in a variety of ways. This is the first evidence that an interpersonal variable, unrelated to the attitude itself, can affect attitude clarity and its consequences.

Don’t Want to Look Dumb? The Role of Theories of Intelligence and Humanlike Features in Online Help Seeking
Sara Kim, Ke Zhang & Daeun Park
Psychological Science, February 2018, Pages 171-180


Numerous studies have shown that individuals’ help-seeking behavior increases when a computerized helper is endowed with humanlike features in nonachievement contexts. In contrast, the current research suggests that anthropomorphic helpers are not universally conducive to help-seeking behavior in contexts of achievement, particularly among individuals who construe help seeking as a display of incompetence (i.e., entity theorists). Study 1 demonstrated that when entity theorists received help from an anthropomorphized (vs. a nonanthropomorphized) helper, they were more concerned about negative judgments from other people, whereas incremental theorists were not affected by anthropomorphic features. Study 2 showed that when help was provided by an anthropomorphized (vs. a nonanthropomorphized) helper, entity theorists were less likely to seek help, even at the cost of lower performance. In contrast, incremental theorists’ help-seeking behavior and task performance were not affected by anthropomorphism. This research deepens the current understanding of the role of anthropomorphic computerized helpers in online learning contexts.

What are my chances? An imagery versus discursive processing approach to understanding ratio-bias effects
Ann Schlosser
Organizational Behavior and Human Decision Processes, January 2018, Pages 112–124


Ratios are often used to communicate risk. Thus, it is important to understand when and why certain ratios communicate greater risk. Prior research on the ratio-bias effect suggests that people often assume greater risk when ratios use larger rather than smaller numbers. Yet, support for this effect has been mixed. The present research contributes to this literature by applying a dual-process theory that distinguishes between discursive and imagery-based processing of ratios, thereby offering new insights into the ratio-bias effect and when it occurs. Specifically, when processing discursively (as numbers), the ratio-bias effect should emerge. However, because imagery processing is more holistic, the ratio-bias effect should reverse when imagery processing is encouraged (via graphics or instructions to imagine). The results of six studies support these predictions. In addition to shedding light on how different ways of processing numerical information influence risk judgments and willingness to act, this research has important implications for crafting messages designed to communicate risk.

