Findings

Neurons

Kevin Lewis

September 29, 2018

The Impact of Beliefs Concerning Deception on Perceptions of Nonverbal Behavior: Implications for Neuro-Linguistic Programming-Based Lie Detection
Flavia Spiroiu
Journal of Police and Criminal Psychology, September 2018, Pages 244-256

Abstract:

Regularly employed in a forensic context, the Neuro-Linguistic Programming (NLP) model posits that the behavioral distinction between somebody who is remembering information and somebody who is constructing information lies in the direction of their eye movements. This strategy reflects numerous current approaches to lie detection, which presume that nonverbal behavior influences perceptions and judgments about deception. The present study emphasized a reverse order by investigating whether beliefs that an individual is deceptive influence perceptions of the respective individual's nonverbal behavior as indicated by observed eye movement patterns. Sixty participants were randomly assigned to either a group informed that right eye movements indicate constructed and thus deceptive information or a group informed that left eye movements indicate constructed and thus deceptive information. Each participant viewed six investigative interviews depicting the eye movement patterns of mock suspects labeled as deceptive or truthful. The interviews were structured according to different right/left eye movement ratios. Results revealed that participants reportedly observed the deceptive suspects displaying significantly more eye movements in the direction allegedly indicative of deception than did the truthful suspects. This result occurred despite the fact that the actual eye movement ratios in both deceptive/truthful sets of interviews were identical and the eye movements were predominantly in the opposite direction of that allegedly indicative of deception. The results are discussed in the context of encoding-based cognitive-processing theories. Limitations on the generality of the results are emphasized, and the applicability (or lack thereof) of NLP-based lie detection in forensic contexts is discussed.


The social buffering of pain by affective touch: A laser-evoked potentials study in romantic couples
Mariana Mohr et al.
Social Cognitive and Affective Neuroscience, forthcoming

Abstract:

Pain is modulated by social context. Recent neuroimaging studies have shown that romantic partners can provide a potent form of social support during pain. However, such studies have only focused on passive support, finding a relatively late-onset modulation of pain-related neural processing. In this study, we examine for the first time dynamic touch by one's romantic partner as an active form of social support. Specifically, 32 partners provided social, active, affective (versus active but neutral) touch according to the properties of a specific C tactile afferent pathway to their romantic partners, who then received laser-induced pain. We measured subjective pain ratings and early N1 and later N2-P2 laser-evoked potentials (LEPs) to noxious stimulation, as well as individual differences in adult attachment style. We found that affective touch from one's partner reduces subjective pain ratings and similarly attenuates LEPs both at earlier (N1) and later (N2-P2) stages of cortical processing. Adult attachment style did not affect laser-evoked potentials, but attachment anxiety had a moderating role on pain ratings. This is the first study to show early neural modulation of pain by active, partner touch and we discuss these findings in relation to the affective and social modulation of sensory salience.


Acetaminophen Enhances the Reflective Learning Process
Rahel Pearson et al.
Social Cognitive and Affective Neuroscience, forthcoming

Background: Acetaminophen has been shown to influence cognitive and affective behavior possibly via alterations in serotonin function. This study builds upon this previous work by examining the relationship between acetaminophen and dual learning systems, comprising reflective (rule-based) and reflexive (information-integration) processing.

Methods: In a double-blind, placebo-controlled study, a sample of community-recruited adults (N=87) were randomly administered acetaminophen (1000mg) or placebo and then completed reflective-optimal and reflexive-optimal category learning tasks.

Outcomes: For the reflective-optimal category learning task, acetaminophen compared to placebo was associated with enhanced accuracy prior to the first rule switch (but not overall accuracy), with fewer trials needed to reach criterion, and with a faster learning rate. Acetaminophen modestly attenuated performance on the reflexive-optimal category learning task compared to placebo.


New insights on real-world human face recognition
Christel Devue, Annabelle Wride & Gina Grimshaw
Journal of Experimental Psychology: General, forthcoming

Abstract:

Humans are supposedly expert in face recognition. Because of limitations in existing research paradigms, little is known about how faces become familiar in the real world, or the mechanisms that distinguish good from poor recognizers. Here, we capitalized on several unique features of the TV series Game of Thrones to develop a highly challenging test of face recognition that is ecologically grounded yet controls for important factors that affect familiarity. We show that familiarization with faces and reliable person identification require much more exposure than previously suggested. Recognition is impaired by the mere passage of time and simple changes in appearance, even for faces we have seen frequently. Good recognizers are distinguished not by the number of faces they recognize, but by their ability to reject novel faces as unfamiliar. Importantly, individuals with superior recognition abilities also forget faces and are not immune to identification errors.


Efficiency of mitochondrial functioning as the fundamental biological mechanism of general intelligence (g)
David Geary
Psychological Review, forthcoming

Abstract:

General intelligence or g is one of the most thoroughly studied concepts in the behavioral sciences. Measures of intelligence are predictive of a wide range of educational, occupational, and life outcomes, including creative productivity and are systematically related to physical health and successful aging. The nexus of relations suggests 1 or several fundamental biological mechanisms underlie g, health, and aging, among other outcomes. Cell-damaging oxidative stress has been proposed as 1 of many potential mechanisms, but the proposal is underdeveloped and does not capture other important mitochondrial functions. I flesh out this proposal and argue that the overall efficiency of mitochondrial functioning is a core component of g; the most fundamental biological mechanism common to all brain and cognitive processes and that contributes to the relations among intelligence, health, and aging. The proposal integrates research on intelligence with models of the centrality of mitochondria to brain development and functioning, neurological diseases, and health more generally. Moreover, the combination of the maternal inheritance of mitochondrial DNA (mtDNA), the evolution of compensatory nuclear DNA, and the inability of evolutionary processes to purge deleterious mtDNA in males may contribute to the sex difference in variability in intelligence and in other cognitive domains. The proposal unifies many now disparate literatures and generates testable predictions for future studies.


Epigenetic variance in dopamine D2 receptor: A marker of IQ malleability?
Jakob Kaminski et al.
Translational Psychiatry, August 2018

Abstract:

Genetic and environmental factors both contribute to cognitive test performance. A substantial increase in average intelligence test results in the second half of the previous century within one generation is unlikely to be explained by genetic changes. One possible explanation for the strong malleability of cognitive performance measure is that environmental factors modify gene expression via epigenetic mechanisms. Epigenetic factors may help to understand the recent observations of an association between dopamine-dependent encoding of reward prediction errors and cognitive capacity, which was modulated by adverse life events. The possible manifestation of malleable biomarkers contributing to variance in cognitive test performance, and thus possibly contributing to the "missing heritability" between estimates from twin studies and variance explained by genetic markers, is still unclear. Here we show in 1475 healthy adolescents from the IMaging and GENetics (IMAGEN) sample that general IQ (gIQ) is associated with (1) polygenic scores for intelligence, (2) epigenetic modification of DRD2 gene, (3) gray matter density in striatum, and (4) functional striatal activation elicited by temporarily surprising reward-predicting cues. Comparing the relative importance for the prediction of gIQ in an overlapping subsample, our results demonstrate neurobiological correlates of the malleability of gIQ and point to equal importance of genetic variance, epigenetic modification of DRD2 receptor gene, as well as functional striatal activation, known to influence dopamine neurotransmission. Peripheral epigenetic markers are in need of confirmation in the central nervous system and should be tested in longitudinal settings specifically assessing individual and environmental factors that modify epigenetic structure.

