Expertise
Backup Plans: The Impact of Disclosure on Perceptions of Expert Competence
Mauricio Palmeira & Evan Polman
Management Science, forthcoming
Abstract:
People often consult experts to provide solutions to their problems. Oftentimes, an initial attempt to solve a problem fails, and these experts (e.g., advisors, consultants, specialists, and service providers) will resort to a backup plan. A backup plan can be presented when it is needed (i.e., after an initial failure) or in advance (i.e., before a potential failure). We examine how the timing of a backup plan’s disclosure affects judgments of an expert’s competence. We show that although people expect experts to have a backup plan, they react more negatively if it is presented after (versus before) it is needed, all else equal. This “competence penalty” disappears when people are prompted to think about what an expert would do in case of failure, since an alternative plan is indeed expected. Importantly, our findings indicate that there is no competence penalty for early disclosure of a backup plan, regardless of the initial plan’s outcome: whether it has failed, has succeeded, or is still ongoing. However, we do find one exception: early disclosure has a negative impact if multiple backup plans are disclosed while the initial plan is in progress.
Does artificial intelligence cause artificial confidence? Generative artificial intelligence as an emerging social referent
Taly Reich & Jacob Teeny
Journal of Personality and Social Psychology, forthcoming
Abstract:
As generative artificial intelligence (gen-AI) becomes more prevalent, it becomes increasingly important to understand how people psychologically respond to the content it creates. In this research, we demonstrate that exposure to gen-AI-produced content can affect people’s self-confidence on the same task through a social comparison process. Anchoring this research in the domain of creativity, we find that exposing people to creative content believed to have been created by gen-AI (vs. a human peer) increases people’s self-confidence in their own relevant creative abilities. This effect emerges for jokes, stories, poetry, and visual art, and it can consequently increase people’s willingness to attempt the activity, even though the greater confidence underlying their actions may be unwarranted. We further show that these effects emerge because gen-AI is perceived as a lower social referent for creative endeavors, bolstering people’s own self-perceptions. As a result, for domains in which gen-AI is perceived as an equal or greater social referent (i.e., fact-based domains), the effects are attenuated. These findings have significant implications for understanding human–AI interactions, antecedents of creative self-confidence, and the referents that people use for social comparison.
Doubling-Back Aversion: A Reluctance to Make Progress by Undoing It
Kristine Cho & Clayton Critcher
Psychological Science, forthcoming
Abstract:
Four studies (N = 2,524 U.S.-based adults recruited from the University of California, Berkeley, or Amazon Mechanical Turk) provide support for doubling-back aversion, a reluctance to pursue more efficient means to a goal when they entail undoing progress already made. These effects emerged in diverse contexts, both as participants physically navigated a virtual-reality world and as they completed different performance tasks. Doubling back was decomposed into two components: the deletion of progress already made and the addition to the proportion of a task that was left to complete. Each contributed independently to doubling-back aversion. These effects were robustly explained by shifts in subjective construals of both one’s past and future efforts that would result from doubling back, not by changes in perceptions of the relative length of different routes to an end state. Participants’ aversion to feeling their past efforts were a waste encouraged them to pursue less efficient means. We end by discussing how doubling-back aversion is distinct from established phenomena (e.g., the sunk-cost fallacy).
Till Tech Do Us Part: Betrayal Aversion and Its Role in Algorithm Use
Cameron Kormylo et al.
Management Science, forthcoming
Abstract:
Failing to follow expert advice can have real and dangerous consequences. While any number of factors may lead a decision maker to refuse expert advice, the proliferation of algorithmic experts has further complicated the issue. One potential mechanism that restricts the acceptance of expert advice is betrayal aversion, or the strong dislike for the violation of trust norms. This study explores whether the introduction of expert algorithms in place of human experts can attenuate betrayal aversion and lead to higher overall rates of seeking expert advice. In other words, we ask: are decision makers averse to algorithmic betrayal? The answer is uncertain ex ante. We answer this question through an experimental financial market in which there is an identical risk of betrayal from either a human or an algorithmic financial advisor. We find that willingness to delegate to human experts is significantly reduced by betrayal aversion, while no betrayal aversion is exhibited toward algorithmic experts. The impact of betrayal aversion toward human financial advisors is considerable: the resulting unwillingness to take the human expert’s advice leads to a 20% decrease in subsequent earnings, while no loss in earnings is observed in the algorithmic-expert condition. This study has significant implications for firms, policymakers, and consumers, specifically in the financial services industry.
Learning to Reason for Long-Form Story Generation
Alexander Gurung & Mirella Lapata
University of Edinburgh Working Paper, March 2025
Abstract:
Generating high-quality stories spanning thousands of tokens requires competency across a variety of skills, from tracking plot and character arcs to keeping a consistent and engaging style. Because labeled datasets and precise quality measurements are difficult to source, most work using large language models (LLMs) for long-form story generation relies on combinations of hand-designed prompting techniques to elicit author-like behavior, a manual process that is highly dependent on the specific story-generation task. Motivated by the recent success of applying reinforcement learning (RL) with verifiable rewards to domains like math and coding, we propose a general story-generation task (Next-Chapter Prediction) and a reward formulation (Verified Rewards via Completion Likelihood Improvement) that allow us to use an unlabeled book dataset as a learning signal for reasoning. We learn to reason over a story’s condensed information and generate a detailed plan for the next chapter. Our reasoning is evaluated via the chapters it helps a story generator create, and compared against non-trained and supervised fine-tuning (SFT) baselines. Pairwise human judgments reveal that the chapters our learned reasoning produces are preferred across almost all metrics, and the effect is more pronounced in the science fiction and fantasy genres.
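To make the reward formulation concrete, here is a minimal sketch of a completion-likelihood-improvement-style reward: score a reasoning-generated plan by how much it raises a language model’s log-likelihood of the book’s actual next chapter. The model choice, prompt format, and absence of length normalization are illustrative assumptions, not details taken from the paper.

# Sketch of a completion-likelihood-improvement reward, per the abstract.
# The scoring model and prompt formatting here are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # stand-in scorer; real chapters need a long-context model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def chapter_log_likelihood(context: str, chapter: str) -> float:
    """Sum of token log-probs of `chapter` given `context` under the model."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    ch_ids = tokenizer(chapter, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, ch_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Next-token log-probs: position i predicts token i+1.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = input_ids[:, 1:]
    token_lp = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    # Keep only the chapter tokens' log-probs (after the one-token shift).
    return token_lp[:, ctx_ids.shape[1] - 1:].sum().item()

def plan_reward(story_summary: str, plan: str, gold_chapter: str) -> float:
    """Positive when conditioning on the plan makes the true chapter likelier."""
    base = chapter_log_likelihood(story_summary, gold_chapter)
    with_plan = chapter_log_likelihood(
        story_summary + "\n\nPlan for next chapter:\n" + plan, gold_chapter
    )
    return with_plan - base

Because the reward is checked against an actual published chapter, an unlabeled corpus of books suffices as the RL training signal, with no human quality labels required.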
Source Memory Is More Accurate for Opinions than for Facts
Daniel Mirny & Stephen Spiller
Journal of Consumer Research, forthcoming
Abstract:
Effective communication relies on consumers remembering, sharing, and applying relevant information. Source memory, the ability to link a claim to its original source, is an essential aspect of accurate recall, attitude formation, and decision making. We propose that claim objectivity, whether a claim is a fact or an opinion, affects memory for the claim’s source. This proposal rests on a two-step process: (i) opinions provide more information about sources than facts do; (ii) claims that provide more information about sources during information encoding are more likely to be accurately attributed to original sources during recall. Across thirteen pre-registered experiments (N = 7,510) and a variety of consumer domains, we investigate the effect of claim objectivity on source memory. We find that source memory is more accurate for opinions than for facts, with no consistent effect on claim recognition memory. We find support for the proposed process by manipulating facts to be more informative about sources and opinions to be less informative about sources. When forming inferences and seeking advice from sources, participants rely more on previously shared opinions than on previously shared facts. Our results indicate that opinions are more likely to be accurately attributed to their original sources than are facts.
Designing Human-AI Collaboration: A Sufficient-Statistic Approach
Nikhil Agarwal, Alex Moehring & Alexander Wolitzky
MIT Working Paper, April 2025
Abstract:
We develop a sufficient-statistic approach to designing collaborative human-AI decision-making policies in classification problems, where AI predictions can be used either to automate decisions or to selectively assist humans. The approach allows for endogenous and biased beliefs, and for effort crowd-out, without imposing a structural model of human decision-making. We deploy and validate our approach in an online fact-checking experiment. We find that humans under-respond to AI predictions and reduce effort when presented with confident AI predictions. This under-response stems more from humans’ overconfidence in the precision of their own signal than from under-confidence in the AI. The optimal policy automates cases where the AI is confident and delegates uncertain cases to humans while fully disclosing the AI prediction. Although automation is valuable, the additional benefit from assisting humans with AI predictions is negligible.
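The optimal policy described here has a simple threshold structure. Below is a hypothetical sketch of such a routing rule; the cutoff value, field names, and interface are our own illustrative assumptions, not quantities estimated in the paper.

# Hypothetical confidence-threshold routing: automate when the AI is
# confident, otherwise delegate to a human with the AI prediction disclosed.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    label: str       # final classification, e.g., "true" / "false" in fact-checking
    decided_by: str  # "ai" (automated) or "human" (delegated)

AUTOMATION_THRESHOLD = 0.95  # assumed cutoff; the paper estimates the rule from data

def route_case(ai_prob_true: float, human_judge: Callable[[float], str]) -> Decision:
    confidence = max(ai_prob_true, 1.0 - ai_prob_true)
    if confidence >= AUTOMATION_THRESHOLD:
        # Confident AI: automate without involving the human.
        return Decision("true" if ai_prob_true >= 0.5 else "false", "ai")
    # Uncertain AI: the human decides, with the AI prediction fully disclosed.
    return Decision(human_judge(ai_prob_true), "human")

Per the paper’s findings, the delegated branch is where human effort is worth spending; showing AI predictions to humans on those cases adds little beyond the automation of confident cases itself.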
Behind the screens: A replication and extension of Coasian bargaining experiments in the digital age
Jesse Backstrom et al.
European Economic Review, June 2025
Abstract:
This paper replicates Hoffman and Spitzer’s seminal Coasian bargaining experiments from the early 1980s and extends them to examine the impact of digital communication. We find that, while the face-to-face replication results mostly align with the original findings, transitioning to a digital environment induces a 23.3 percent decrease in efficient decision-making and a more than fourfold increase in self-regarding behavior. These effects are amplified in one-shot bargaining scenarios and when property rights are strengthened, and they persist as bargainers gain experience. Our findings point to several implications of digital communication for efficiency and welfare distributions in negotiation settings.
An Illusion of Time Caused by Repeated Experience
Brynn Sherman & Sami Yousif
Psychological Science, April 2025, Pages 278-295
Abstract:
How do people remember when something occurred? One obvious possibility is that, in the absence of explicit cues, people remember on the basis of memory strength. If a memory is fuzzy, it likely occurred longer ago than a memory that is vivid. Here, we demonstrate a robust illusion of time that stands in stark contrast with this prediction. In six experiments testing adults via an online research platform, we show that experiences that are repeated (and, consequently, better remembered) are counterintuitively remembered as having initially occurred further back in time. This illusion is robust (amounting to as much as a 25% distortion in perceived time), consistent (exhibited by the vast majority of participants tested), and applicable at the scale of ordinary day-to-day experience (occurring even when tested over one full week). We argue that this may be one of the key mechanisms underlying why people’s sense of time often deviates from reality.