Decision Points

Kevin Lewis

November 21, 2010

Intuitive Biases in Choice vs. Estimation: Implications for the Wisdom of Crowds

Joseph Simmons, Leif Nelson, Jeff Galak & Shane Frederick
Journal of Consumer Research, forthcoming

Although researchers have documented many instances of crowd wisdom, it is important to know whether some kinds of judgments may lead the crowd astray, whether crowds' judgments improve with feedback over time, and whether crowds' judgments can be improved by changing the way judgments are elicited. We investigated these questions in a sports gambling context (predictions against point spreads) believed to elicit crowd wisdom. In a season-long experiment, fans wagered over $20,000 on NFL football predictions. Contrary to the wisdom-of-crowds hypothesis, faulty intuitions led the crowd to predict "favorites" more than "underdogs" against point spreads that disadvantaged favorites, even when bettors knew that the spreads disadvantaged favorites. Moreover, the bias increased over time, a result consistent with attributions for success and failure that rewarded intuitive choosing. However, when the crowd predicted game outcomes by estimating point differentials rather than by predicting against point spreads, its predictions were unbiased and wiser.


Decision making and the avoidance of cognitive demand

Wouter Kool, Joseph McGuire, Zev Rosen & Matthew Botvinick
Journal of Experimental Psychology: General, November 2010, Pages 665-682

Behavioral and economic theories have long maintained that actions are chosen so as to minimize demands for exertion or work, a principle sometimes referred to as the law of less work. The data supporting this idea pertain almost entirely to demands for physical effort. However, the same minimization principle has often been assumed also to apply to cognitive demand. The authors set out to evaluate the validity of this assumption. In 6 behavioral experiments, participants chose freely between courses of action associated with different levels of demand for controlled information processing. Together, the results of these experiments revealed a bias in favor of the less demanding course of action. The bias was obtained across a range of choice settings and demand manipulations and was not wholly attributable to strategic avoidance of errors, minimization of time on task, or maximization of the rate of goal achievement. It is remarkable that the effect also did not depend on awareness of the demand manipulation. Consistent with a motivational account, avoidance of demand displayed sensitivity to task incentives and covaried with individual differences in the efficacy of executive control. The findings reported, together with convergent neuroscientific evidence, lend support to the idea that anticipated cognitive demand plays a significant role in behavioral decision making.


Keeping the illusion of control under control: Ceilings, floors, and imperfect calibration

Francesca Gino, Zachariah Sharek & Don Moore
Organizational Behavior and Human Decision Processes, forthcoming

Prior research has claimed that people exaggerate probabilities of success by overestimating personal control in situations that are heavily or completely chance-determined. We examine whether such overestimation of control persists in situations where people do have control. Our results suggest a simple model that accounts for prior findings on illusory control as well as for situations where actual control is high: People make imperfect estimates of their level of control. By focusing on situations marked by low control, prior research has created the illusion that people generally overestimate their level of control. Across three studies, we show that when they have a great deal of control, people underestimate it. Implications for research on perceived control and covariation assessment are discussed.


Key Steps in Developing a Cognitive Vaccine against Traumatic Flashbacks: Visuospatial Tetris versus Verbal Pub Quiz

Emily Holmes, Ella James, Emma Kilford & Catherine Deeprose
PLoS ONE, November 2010, e13706

Background: Flashbacks (intrusive memories of a traumatic event) are the hallmark feature of Post Traumatic Stress Disorder; however, preventative interventions are lacking. Tetris may offer a 'cognitive vaccine' [1] against flashback development after trauma exposure. We previously reported that playing the computer game Tetris soon after viewing traumatic material reduced flashbacks compared to no-task [1]. However, two criticisms need to be addressed for clinical translation: (1) Would all games have this effect via distraction/enjoyment, or might some games even be harmful? (2) Would effects be found if administered several hours post-trauma? Accordingly, we tested Tetris versus an alternative computer game - Pub Quiz - which we hypothesized not to be helpful (Experiments 1 and 2), and extended the intervention interval to 4 hours (Experiment 2).

Methodology/Principal Findings: The trauma film paradigm was used as an experimental analog for flashback development in healthy volunteers. In both experiments, participants viewed traumatic film footage of death and injury before completing one of the following: (1) a no-task control condition, (2) Tetris, or (3) Pub Quiz. Flashbacks were monitored for 1 week. Experiment 1: 30 min after the traumatic film, playing Tetris led to a significant reduction in flashbacks compared to no-task control, whereas Pub Quiz led to a significant increase in flashbacks. Experiment 2: 4 hours post-film, playing Tetris led to a significant reduction in flashbacks compared to no-task control, whereas Pub Quiz did not.

Conclusions/Significance: First, computer games can have differential effects post-trauma, as predicted by a cognitive science formulation of trauma memory. In both Experiments, playing Tetris post-trauma film reduced flashbacks. Pub Quiz did not have this effect, even increasing flashbacks in Experiment 1. Thus not all computer games are beneficial or merely distracting post-trauma - some may be harmful. Second, the beneficial effects of Tetris are retained at 4 hours post-trauma. Clinically, this delivers a feasible time-window to administer a post-trauma "cognitive vaccine".


Don't Spread Yourself Too Thin: The Impact of Task Juggling on Workers' Speed of Job Completion

Decio Coviello, Andrea Ichino & Nicola Persico
NBER Working Paper, October 2010

We show that task juggling, i.e., the spreading of effort across too many active projects, decreases the performance of workers, raising the chances of low throughput, long duration of projects, and exploding backlogs. Individual speed of job completion cannot be explained only in terms of effort, ability, and experience: work scheduling is a crucial "input" that cannot be omitted from the production function of individual workers. We provide a simple theoretical model to study the effects of increased task juggling on the duration of projects. Using a sample of Italian judges, we show that those who are induced for exogenous reasons to work in a more parallel fashion on many trials at the same time take longer to complete similar portfolios of cases. The exogenous variation that identifies this causal effect is constructed by exploiting the lottery that assigns cases to judges, together with the procedural prescription requiring judges to hold the first hearing of a case no later than 60 days from filing.


The robust beauty of ordinary information

Konstantinos Katsikopoulos, Lael Schooler & Ralph Hertwig
Psychological Review, October 2010, Pages 1259-1266

Heuristics embodying limited information search and noncompensatory processing of information can yield robust performance relative to computationally more complex models. One criticism raised against heuristics is the argument that complexity is hidden in the calculation of the cue order used to make predictions. We discuss ways to order cues that do not entail individual learning. Then we propose and test the thesis that when orders are learned individually, people's necessarily limited knowledge will curtail computational complexity while also achieving robustness. Using computer simulations, we compare the performance of the take-the-best heuristic - with dichotomized or undichotomized cues - to benchmarks such as the naïve Bayes algorithm across 19 environments. Even with minute sizes of training sets, take-the-best using undichotomized cues excels. For 10 environments, we probe people's intuitions about the direction of the correlation between cues and criterion. On the basis of these intuitions, in most of the environments take-the-best achieves the level of performance that would be expected from learning cue orders from 50% of the objects in the environments. Thus, ordinary information about cues - either gleaned from small training sets or intuited - can support robust performance without requiring Herculean computations.
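As an illustration (not the authors' simulation code), the take-the-best heuristic can be sketched in a few lines; the cue names and values below are invented for the example:

```python
def take_the_best(obj_a, obj_b, cue_order):
    """Predict which of two objects scores higher on the criterion.

    obj_a, obj_b: dicts mapping cue name -> 0/1 cue value
    cue_order: cue names sorted from highest to lowest validity
    Returns the predicted object, or None if no cue discriminates (guess).
    """
    for cue in cue_order:
        a, b = obj_a[cue], obj_b[cue]
        if a != b:                 # first discriminating cue decides;
            return obj_a if a > b else obj_b   # remaining cues are ignored
    return None                    # noncompensatory: no weighing, no summing

# Hypothetical example: which city is larger?
cues = ["has_airport", "is_capital", "has_university"]
city_x = {"has_airport": 1, "is_capital": 0, "has_university": 1}
city_y = {"has_airport": 1, "is_capital": 1, "has_university": 0}
print(take_the_best(city_x, city_y, cues) is city_y)  # True: "is_capital" decides
```

The search stops at the first cue on which the objects differ, which is what makes the heuristic noncompensatory: later cues can never overturn an earlier one, however many of them point the other way.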


Measuring Beliefs and Rewards: A Neuroeconomic Approach

Andrew Caplin, Mark Dean, Paul Glimcher & Robb Rutledge
Quarterly Journal of Economics, August 2010, Pages 923-960

The neurotransmitter dopamine is central to the emerging discipline of neuroeconomics; it is hypothesized to encode the difference between expected and realized rewards and thereby to mediate belief formation and choice. We develop the first formal tests of this theory of dopaminergic function, based on a recent axiomatization by Caplin and Dean (Quarterly Journal of Economics, 123 (2008), 663-702). These tests are satisfied by neural activity in the nucleus accumbens, an area rich in dopamine receptors. We find evidence for separate positive and negative reward prediction error signals, suggesting that behavioral asymmetries in responses to losses and gains may parallel asymmetries in nucleus accumbens activity.


A brighter side to memory illusions: False memories prime children's and adults' insight-based problem solving

Mark Howe, Sarah Garner, Monica Charlesworth & Lauren Knott
Journal of Experimental Child Psychology, February 2011, Pages 383-393

Can false memories have a positive consequence on human cognition? In two experiments, we investigated whether false memories could prime insight problem-solving tasks. Children and adults were asked to solve compound remote associate task (CRAT) problems, half of which had been primed by the presentation of Deese/Roediger-McDermott (DRM) lists whose critical lures were also the solutions to the problems. In Experiment 1, the results showed that regardless of age, when the critical lure was falsely recalled, CRAT problems were solved more often and significantly faster than problems that were not primed by a DRM list. When the critical lure was not falsely recalled, CRAT problem solution rates and times were no different from when there was no DRM priming. In Experiment 2, without an intervening recall test, children and adults still exhibited higher solution rates and faster solution times to CRAT problems that were primed than to those that were not primed. This latter result shows that priming occurred as a result of false memory generation at encoding and not at retrieval during the recall test. Together, these findings demonstrate that when false memories are generated at encoding, they can prime solutions to insight-based problems in both children and adults.


Neural signatures of strategic types in a two-person bargaining game

Meghana Bhatt, Terry Lohrenz, Colin Camerer & Read Montague
Proceedings of the National Academy of Sciences, 16 November 2010, Pages 19720-19725

The management and manipulation of our own social image in the minds of others requires difficult and poorly understood computations. One computation useful in social image management is strategic deception: our ability and willingness to manipulate other people's beliefs about ourselves for gain. We used an interpersonal bargaining game to probe the capacity of players to manage their partner's beliefs about them. This probe parsed the group of subjects into three behavioral types according to their revealed level of strategic deception; these types were also distinguished by neural data measured during the game. The most deceptive subjects emitted behavioral signals that mimicked a more benign behavioral type, and their brains showed differential activation in right dorsolateral prefrontal cortex and left Brodmann area 10 at the time of this deception. In addition, strategic types showed a significant correlation between activation in the right temporoparietal junction and expected payoff that was absent in the other groups. The neurobehavioral types identified by the game raise the possibility of identifying quantitative biomarkers for the capacity to manipulate and maintain a social image in another person's mind.


Glucose promotes controlled processing: Matching, maximizing, and root beer

Anthony McMahon & Matthew Scheel
Judgment and Decision Making, October 2010, Pages 450-457

Participants drank either regular root beer or sugar-free diet root beer before working on a probability-learning task in which they tried to predict which of two events would occur on each of 200 trials. One event (E1) randomly occurred on 140 trials, the other (E2) on 60. In each of the last two blocks of 50 trials, the regular group matched prediction and event frequencies. In contrast, the diet group predicted E1 more often in each of these blocks. After the task, participants were asked to write down rules they used for responding. Blind ratings of rule complexity were inversely related to E1 predictions in the final 50 trials. Participants also took longer to advance after incorrect predictions and before predicting E2, reflecting time for revising and consulting rules. These results support the hypothesis that an effortful controlled process of normative rule-generation produces matching in probability-learning experiments, and that this process is a function of glucose availability.
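The gap between matching and maximizing can be made concrete with the abstract's own event frequencies (E1 on 140 of 200 trials); this arithmetic sketch is illustrative, not part of the study:

```python
# Expected accuracy of the two strategies in a probability-learning task
# where event E1 occurs on 140/200 trials and E2 on the remaining 60.
p_e1 = 140 / 200                      # P(E1) = 0.7

maximizing = p_e1                     # always predict E1: correct on 70% of trials
matching = p_e1**2 + (1 - p_e1)**2    # predict each event at its own base rate

print(maximizing)            # 0.7
print(round(matching, 2))    # 0.58
```

Maximizing is the normatively better strategy here (70% vs. 58% expected accuracy), which is why the diet group's shift toward predicting E1 more often, and away from matching, is read as a move toward less effortful responding.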


IQ Variations Across Time, Race, And Nationality: An Artifact Of Differences In Literacy Skills

David Marks
Psychological Reports, June 2010, Pages 643-664

A body of data on IQ collected over 50 years has revealed that average population IQ varies across time, race, and nationality. An explanation for these differences may be that intelligence test performance requires literacy skills not present in all people to the same extent. In eight analyses, population mean full scale IQ and literacy scores yielded correlations ranging from .79 to .99. In cohort studies, significantly larger improvements in IQ occurred in the lower half of the IQ distribution, affecting the distribution variance and skewness in the predicted manner. In addition, three Verbal subscales on the WAIS show the largest Flynn effect sizes and all four Verbal subscales are among those showing the highest racial IQ differences. This pattern of findings supports the hypothesis that both secular and racial differences in intelligence test scores have an environmental explanation: secular and racial differences in IQ are an artifact of variation in literacy skills. These findings suggest that racial IQ distributions will converge if opportunities are equalized for different population groups to achieve the same high level of literacy skills. Social justice requires more effective implementation of policies and programs designed to eliminate inequities in IQ and literacy.


The phylogenetic roots of cognitive dissonance

Samantha West, Stephanie Jett, Tamra Beckman & Jennifer Vonk
Journal of Comparative Psychology, forthcoming

We presented 7 Old World monkeys (Japanese macaques [Macaca fuscata], gray-cheeked mangabey [Lophocebus albigena], rhesus macaques [Macaca mulatta], bonnet macaque [Macaca radiata], and olive baboon [Papio anubis]), 3 chimpanzees (Pan troglodytes), 6 members of the parrot (Psittacinae) family, and 4 American black bears (Ursus americanus) with a cognitive dissonance paradigm modeled after Egan, Santos, and Bloom (2007). In experimental trials, subjects were given choices between 2 equally preferred food items and then presented with the unchosen option and a novel, equally preferred food item. In control trials, subjects were presented with 1 accessible and 1 inaccessible option from another triad of equally preferred food items. They were then presented with the previously inaccessible item and a novel member of that triad. Subjects, as a whole, did not prefer the novel item in experimental or control trials. However, there was a tendency toward a subject by condition interaction. When analyzed by primate versus nonprimate categories, only primates preferred the novel item in experimental but not control trials, indicating that they resolved cognitive dissonance by devaluing the unchosen option only when an option was derogated by their own free choice. This finding suggests that this phenomenon might exist within but not outside of the primate order.


Development of hot and cool executive function during the transition to adolescence

Angela Prencipe et al.
Journal of Experimental Child Psychology, forthcoming

This study examined the development of executive function (EF) in a typically developing sample from middle childhood to adolescence using a range of tasks varying in affective significance. A total of 102 participants between 8 and 15 years of age completed the Iowa Gambling Task, the Color Word Stroop, a Delay Discounting task, and a Digit Span task. Age-related improvements were found on all tasks, but improvements on relatively cool tasks (Color Word Stroop and Backward Digit Span) occurred earlier in this age range, whereas improvements on relatively hot tasks (Iowa Gambling Task and Delay Discounting) were more gradual and occurred later. Exploratory factor analysis indicated that performance on all tasks could be accounted for by a single-factor model. Together, these findings indicate that although similar abilities may underlie both hot and cool EF, hot EF develops relatively slowly, which may have implications for the risky behavior often observed during adolescence. Future work should include additional measures to characterize more intensively the development of both hot and cool EF during the transition to adolescence.


Stability and change in intelligence from age 11 to ages 70, 79, and 87: The Lothian birth cohorts of 1921 and 1936

Alan Gow et al.
Psychology and Aging, forthcoming

Investigating the predictors of age-related cognitive change is a research priority. However, it is first necessary to discover the long-term stability of measures of cognitive ability because prior cognitive ability level might contribute to the amount of cognitive change experienced within old age. These two issues were examined in the Lothian Birth Cohorts of 1921 and 1936. Cognitive ability data were available from age 11 years, when the participants completed the Moray House Test No. 12 (MHT). The Lothian Birth Cohort 1936 (LBC1936) completed the MHT a second time at age 70. The Lothian Birth Cohort 1921 (LBC1921) completed the MHT at ages 79 and 87. We examined cognitive stability and change from childhood to old age in both cohorts, and within old age in the LBC1921. Raw stability coefficients for the MHT from ages 11-70, 11-79, and 11-87 were .67, .66, and .51, respectively, and were larger when corrected for range restriction in the samples. Therefore, minimum estimates of the variance in later-life MHT accounted for by childhood performance on the same test ranged from 26% to 44%. This study also examined, in the LBC1921, whether MHT score at age 11 influenced the amount of change in MHT between ages 79 and 87. It did not. Higher intelligence from early life was apparently protective of intelligence in old age due to the stability of cognitive function across the lifespan, rather than because it slowed the decline experienced in later life.


The Effects of Critical Thinking Instruction on Training Complex Decision Making

Anne Helsdingen et al.
Human Factors: The Journal of the Human Factors and Ergonomics Society, Summer 2010, Pages 537-545

Objective: Two field studies assessed the effects of critical thinking instruction on training and transfer of a complex decision-making skill.

Background: Critical thinking instruction is based on studies of how experienced decision makers approach complex problems.

Method: Participants conducted scenario-based exercises in both simplified (Study 1) and high-fidelity (Study 2) training environments. In both studies, half of the participants received instruction in critical thinking. The other half conducted the same exercises but without critical thinking instruction. After the training, test scenarios were administered to both groups.

Results: The first study showed that critical thinking instruction enhanced decision outcomes during both training and the test. In the second study, critical thinking instruction benefited both decision outcomes and processes, specifically on the transfer to untrained problems.

Conclusion: The results suggest that critical thinking instruction improves decision strategy and enhances understanding of the general principles of the domain.

Application: The results of this study warrant the implementation of critical thinking instruction in training programs for professional decision makers that have to operate in complex and highly interactive, dynamic environments.
