
Is visual information use during facial emotion recognition related to eating disorder symptoms in college-aged men and women? An experimental study

Abstract

Background

Previous studies of emotion recognition abilities of people with eating disorders used accuracy to identify performance deficits for these individuals. The current study examined eating disorder symptom severity as a function of emotion categorization abilities, using a visual cognition paradigm that offers insights into how emotional faces may be categorized, as opposed to just how well these faces are categorized.

Methods

Undergraduate students (N = 87; 47 women, 37 men, 3 non-binary) completed the Bubbles task and a standard emotion categorization task, as well as a set of questionnaires assessing their eating disorder symptomology and comorbid disorders. We examined the relationship between visual information use (assessed via Bubbles) and eating disorder symptomology (EDDS) while controlling for anxiety (STAI), depression (BDI-II), alexithymia (TAS), and emotion regulation difficulties (DERS-sf).

Results

Overall visual information use (i.e. how well participants used facial features important for accurate emotion categorization) was not significantly related to eating disorder symptoms, despite producing interpretable patterns for each emotion category. Emotion categorization accuracy was also not related to eating disorder symptoms.

Conclusions

Results from this study must be interpreted with caution, given the non-clinical sample. Future research may benefit from comparing visual information use in patients with an eating disorder and healthy controls, as well as employing designs focused on specific emotion categories, such as anger.

Plain English summary

Men and women with severe eating disorder symptoms may find it harder to identify and describe emotions than people with less severe eating disorder symptoms. However, previous work makes it difficult to determine why emotion recognition deficits exist, and what underlying abilities or strategies are actually different due to a deficit. In addition to a typical emotion recognition task (emotion categorization), this study used the Bubbles task, which allowed us to determine which parts of an image are important for emotion recognition, and whether participants used these parts during the task. In 87 undergraduate students (47 female; 49 with clinically-significant eating disorder symptoms), there was no significant relationship between task performance and eating disorder symptom severity, before and after controlling for the relationship with other comorbid disorders. Our results imply that emotion recognition deficits are unlikely to be an important mechanism underlying eating disorder pathology in participants with a range of eating disorders symptoms.

Background

Eating disorders (EDs) are serious psychiatric illnesses characterized by dysregulated eating behaviors and associated cognitions (American Psychiatric Association, 2013). These disorders are prevalent, with lifetime rates of up to 3% in men and 6% in women [20]. Recovery from EDs involves not only the re-initiation of healthy dietary behaviors, but also improvements on dimensions of psychological and social well-being [4, 12].

Problematic social cognition, including deficits in one’s ability to accurately discern the emotional responses of others and the ability to regulate one’s own emotions, has been theorized to be an important transdiagnostic psychosocial mechanism contributing to the maintenance of EDs [29]. For example, emotion recognition deficits may impede adaptive emotion regulation strategy use, which may contribute to the use of ED symptoms (e.g., binge eating) as a means of trying to regulate one’s emotional state [27]. Further, emotion recognition deficits may exacerbate symptoms of comorbid disorders (e.g., depression), ultimately hampering treatment response for individuals with EDs [42]. Importantly, research on social cognition deficits among ED populations has yielded mixed results [29], highlighting the need for continued work to clarify the role of distinct aspects of social cognition (e.g., emotion recognition and emotion processing) in relation to ED symptoms.

Emotion-specific deficits in eating disorders have been outlined in two prior reviews of ED antecedents and models [29, 31]. Emotional components of ED pathology include emotion dysregulation [29, 31], self-esteem deficits [31], and incorrectly perceived body acceptance by others [31]. Converging evidence from separate literatures indicates that emotion recognition abilities are a factor in these components: emotion regulation [1, 19] and self-esteem [23, 45]. Recognition of body acceptance falls under the general umbrella of accurate social perception, for which emotion recognition abilities are necessary [11, 43]. Overall, emotion recognition abilities are a potentially important component of ED pathology via indirect associations with previously identified emotion-related deficits. The aim of this study, and of previous work in the same literature, is therefore to isolate and assess individuals' emotion recognition abilities via targeted behavioral tasks, and to determine whether these abilities have an independent relationship with ED pathology beyond their indirect contribution.

Emotion recognition, as one element of social cognition, can be reliably assessed using brief cognitive-affective tasks administered in home, laboratory, clinic, or community settings. These tasks often display emotions to participants (primarily in the form of facial expressions), and ask the participants to name the presented emotion. For example, a standard emotion recognition task is six-alternative-forced-choice categorization, where participants are shown facial images from one of six classical categories (e.g., happy, sad), and asked to respond with one of those labels [14]. Accuracy (i.e., whether participants correctly identified the emotion displayed) is then commonly investigated as the primary index of emotion recognition.

Early research using these types of visual tasks established a connection between reduced emotion recognition accuracy and anorexia nervosa (AN, a disorder characterized by severe dietary restriction; Kucharska-Pietura et al. [26]). Since then, a number of studies have identified emotion recognition deficits across a range of EDs and samples [9, 10, 15, 21, 33, 34, 47]. However, results have been somewhat mixed with regard to the exact nature and specificity of these deficits. For example, Pollatos et al. [33] found accuracy deficits for categorizing neutral, sad, and disgusted faces, while Pringle et al. [34] found deficits for neutral and angry faces. Yet another study [21] found deficits for angry, fearful, and disgusted faces. While it is challenging to reconcile these mixed findings, some common themes are evident. Impaired recognition of positive emotions (e.g., happiness) is not typically observed in patients with EDs, while impaired recognition of disgust has been observed in multiple studies [10, 15, 47].

In addition, some studies have found no significant relationship between ED pathology and emotion recognition ability [8, 32, 38], even when using highly comparable methodologies, albeit in small samples (Ns typically ≤ 30; but see Wyssen et al. [47]).

Importantly, the existing literature base is limited in a number of ways. First, the emotion categorization tasks used in existing studies do not offer insight into possible mechanisms underlying emotion recognition deficits associated with ED pathology. That is, simple measures of emotion categorization accuracy may not reveal subtle differences in how emotion recognition is carried out. For example, individuals with disordered eating may rely on different or atypical facial features for emotion recognition (e.g., overreliance on specific parts of the face). Such differences could explain observed deficits in emotion recognition, or contribute to aberrations in social cognition that simple accuracy tasks cannot detect, potentially obscuring important associations between social cognition and ED symptoms.

A reverse correlation method referred to as the Bubbles task [17] makes it possible to characterize which elements of a visual stimulus (e.g., face) are used in categorization tasks. This technique involves randomly obscuring parts of an image while participants categorize the image using a predetermined set of labels, and can be used to identify which facial features lead to successful recognition of emotions. Besides comparing participants to one another, participants’ spatial information use can also be compared to a computational model, often referred to as the ideal observer [16], which optimally utilizes image information to support efficient and accurate categorization (see Fig. 1). By clarifying the specific facial regions utilized for emotion recognition among individuals with ED symptoms, as well as the amount of available information needed to support emotion categorization in this population, the Bubbles task allows for a more nuanced examination of emotion recognition deficits in this group, which could help to inform theories of social cognition in EDs.

Fig. 1
figure 1

Visualizations of visual information by category. The visualization is derived from the Bubbles task [17], where random parts of the image are obscured, and the importance of individual pixels is determined based on the participant’s task performance when the pixel is obscured or revealed. Brighter color reflects greater importance of the pixels for emotion categorization. Note that the ideal observer images share the same face shape, in order to force the algorithm to focus on the features within the face rather than facial contours

Second, previous studies have not consistently evaluated the impact of multiple co-occurring psychiatric symptoms (e.g., depression, anxiety) on the relationship between ED pathology and emotion recognition ability. However, emotion recognition deficits appear to be associated with a range of clinical characteristics (e.g., alexithymia and emotion regulation deficits; Fujiwara et al. [15]; [28, 29, 31, 36]) and with psychiatric conditions that frequently co-occur with EDs [13, 25]. Given this, it is important to clarify whether aberrations in emotion recognition can be attributed to ED symptoms specifically, or whether they may be better accounted for by other psychological factors.

Third, many of the existing studies in this domain suffer from small sample sizes, and were powered only to detect moderate-to-large effects. Additionally, previous work has primarily focused on female participants, in part due to their prevalence in ED patient samples. However, EDs and ED symptoms are increasingly recognized among males, suggesting the need for additional research in this population.

Finally, while the literature has primarily focused on clinically-significant eating disorder symptoms, the same deficits have also been examined in sub-clinical samples [35, 44]. There is a need to establish at which level of ED severity emotion recognition deficits emerge, if they exist at all. However, clinical samples generally contain only very severe cases and control cases without any symptoms; a wider range of severity is necessary to further elucidate the relationship between eating disorders and emotion recognition. Shifting the focus from diagnosis to symptoms also allows existing findings to be applied, potentially, to the general population of individuals with sub-clinical ED symptoms.

In order to address the gaps identified above, the current study delivered a standard emotion recognition accuracy task and the Bubbles task to a large sample of male and female college students who reported varying levels of ED pathology. The study aimed to evaluate (1) the association between ED symptoms and emotion recognition accuracy, (2) the association between ED symptoms and performance on the Bubbles task, and (3) the contribution of other comorbid psychological symptoms.

Methods

Participants

Participants (N = 110; 54 male, 53 female, 3 non-binary) were recruited from the undergraduate psychology research pool at North Dakota State University (NDSU), located in the midwestern United States. Data for eight participants could not be analyzed due to incomplete visual tasks or survey questionnaires. A further ten participants were excluded due to technical difficulties during the task. Finally, five more individuals were excluded due to responding too quickly (responses faster than 250 ms on more than 5% of task trials). No other inclusion or exclusion criteria were applied. The final sample for this study consisted of 87 individuals (37 male, 47 female, 3 non-binary). An attrition analysis was conducted to confirm that excluded participants did not differ on questionnaire variables. This analysis did not reveal significant differences between the groups on symptoms of ED (p = .755), depression (p = .441), anxiety (p = .264), alexithymia (p = .976), or emotion regulation (p = .187). Participants were primarily Caucasian (84%), with 5% of East Asian descent, 3% of African descent, 2% of Hispanic descent, and 2% of Native American descent. They reported a mean age of 19.04 years (SD = 1.99).

Materials

Questionnaires

All questionnaires have been scored so that higher values imply more severe symptoms.

Eating disorder diagnostic scale

The Eating Disorder Diagnostic Scale [40, 41] is a brief 22-item self-report questionnaire measuring symptoms of three key eating disorders (AN, bulimia nervosa, and binge-eating disorder) based on diagnostic criteria from the 5th edition of the Diagnostic and Statistical Manual. The scale provides a continuous symptom composite score, which represents an individual’s overall level of ED pathology and was utilized in all analyses. A score greater than 16.5 points implies clinically significant symptoms [24]; 49 participants (56%) scored above this threshold. In the current sample, the Cronbach’s alpha value for the EDDS symptom composite score was 0.83, 95% CI: [0.77, 0.87].
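Internal consistency coefficients like the Cronbach's alpha values reported for each scale can be computed directly from an item-score matrix. The sketch below is a minimal, numpy-only illustration using simulated toy data (not the study's responses):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of respondents' total scores
    return k / (k - 1) * (1 - item_var_sum / total_var)

# toy data: 87 respondents x 22 items, loosely driven by one latent trait
rng = np.random.default_rng(0)
trait = rng.normal(size=(87, 1))
items = np.clip(np.round(3 + trait + rng.normal(size=(87, 22))), 0, 6)
alpha = cronbach_alpha(items)
```

Alpha approaches 1 as items become more strongly intercorrelated; confidence intervals such as those reported here typically require additional distributional assumptions or bootstrapping.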

Beck depression inventory

The Beck Depression Inventory, Second Edition (BDI-II; Beck et al., [5]) is a 21-item self-report instrument intended to assess the severity of depression symptoms. The scale was chosen due to its prevalence in the field, and the consistently high reliability and validity scores across studies. A total score of 14–19 implies mild, 20–28 moderate, and 29–63 severe depression; 13 participants (15%) scored as mild, 16 participants (18%) scored as moderate, and 8 participants (9%) scored as severe. In the current sample, the Cronbach’s alpha value of BDI-II was 0.93, 95% CI: [0.91, 0.95].

State-trait anxiety inventory

The State-Trait Anxiety Inventory (STAI; Spielberger et al., [39]) is a 40-item self-report scale assessing current and typical levels of anxiety. The scale was chosen due to its prevalence in the field, as well as the option of evaluating both trait and state anxiety. In the current study, only the trait component of the scale was used in order to maintain consistency with the other disorder scales. A score of greater than 40 points implies clinically significant symptoms; 11 participants (13%) scored above this threshold. In the current sample, the Cronbach’s alpha value of STAI was 0.93, 95% CI: [0.91, 0.95].

Toronto alexithymia scale

The Toronto Alexithymia Scale-20 [2] is a 20-item scale assessing the severity of alexithymia (i.e., difficulty identifying and describing one’s own emotions). The TAS was chosen as the most validated and prevalent measure of alexithymia in the literature, and because it offers measures of separate alexithymia constructs (difficulty identifying feelings, difficulty describing feelings, and externally oriented thinking). The current study used the summed score across the three components in all analyses. A score between 51 and 60 implies possible alexithymia, and scores above 60 imply definite alexithymia; 27 participants (31%) scored as “possible”, and 20 participants (23%) scored as “definite”. Note that Bagby and colleagues have since recommended using the TAS as a continuous dimensional construct [3], as was done in this study. In the current sample, the Cronbach’s alpha value of TAS was 0.77, 95% CI: [0.70, 0.84].

Difficulties in emotion regulation - short form

The Difficulties in Emotion Regulation - Short Form (DERS-sf; Kaufman et al., [22]) is a shortened version of the Difficulties in Emotion Regulation Scale [18], which involves half the items of the original (from 36 to 18), while retaining over 80% of the variance relative to the full measure. The DERS-sf was chosen to reduce participant burden without substantially sacrificing reliability. The DERS-sf consists of six subscales, which were combined into a general index of emotion regulation difficulties in the current study. As this is considered a continuous dimensional measure, no threshold cut-offs were provided. In the current sample, the Cronbach’s alpha value of DERS-sf was 0.91, 95% CI: [0.88, 0.93].

Demographic information

Demographic information including age, gender, ethnicity, and education was collected. To assess gender, participants responded to the question “What gender do you identify as?”. Response options included male, female, non-binary, and other. Participants also reported whether they had ever received a previous diagnosis of an ED, anxiety disorder, or major depressive disorder.

Visual recognition tasks

Bubbles task

Participants sat 80 cm away from the computer screen. On each trial, the participant saw an emotional face image, partially obscured by the bubbles mask (see Figure S1 for a depiction of these images). Participants were required to make a choice between six emotion categories (anger, disgust, fear, happiness, sadness, surprise) within 2500 ms. We employed the QUEST staircase procedure [46] in order to maintain task difficulty at 75% accuracy: the number of bubbles was decreased after each correct response and increased after each incorrect response, so that just enough information was revealed to support 75% accuracy. A participant saw 150 trials for each image of an emotion category, for a total of 900 trials, with breaks evenly spaced in-between. A single image for each category was used, with the specific images chosen based on emotion ratings from previous work within the lab. Visual information use was computed as the average Spearman rank correlation between a participant’s Bubbles visualization image and the ideal observer image for each category. For details of generating Bubbles visualization images for participants and the ideal observer, see Supplementary Materials.
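The visual information use metric can be sketched as a rank correlation between per-pixel importance maps. The code below is a simplified, numpy-only illustration with hypothetical 8×8 "classification images" (real Bubbles images are full-resolution); it assumes no tied pixel values, so it omits the tie handling of a full Spearman implementation:

```python
import numpy as np

def spearman_rho(a, b):
    # rank correlation via Pearson correlation of ranks (no tie handling)
    ra = a.argsort().argsort().astype(float)
    rb = b.argsort().argsort().astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra * rb).sum() / np.sqrt((ra**2).sum() * (rb**2).sum()))

def visual_information_use(participant_maps, ideal_maps):
    """Average Spearman correlation between a participant's per-category
    importance maps and the corresponding ideal-observer maps."""
    return float(np.mean([
        spearman_rho(p.ravel(), i.ravel())
        for p, i in zip(participant_maps, ideal_maps)
    ]))

# toy example: six categories of hypothetical 8x8 importance maps
rng = np.random.default_rng(0)
ideal = [rng.random((8, 8)) for _ in range(6)]               # ideal observer maps
participant = [m + 0.1 * rng.random((8, 8)) for m in ideal]  # noisy copies
score = visual_information_use(participant, ideal)
```

A participant whose maps perfectly match the ideal observer scores 1.0; uncorrelated maps score near 0.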

Facial emotion recognition task

Participants sat 80 cm away from the computer screen. Participants were required to make a choice between six emotion categories (anger, disgust, fear, happiness, sadness, surprise), without a time limit. On each trial, the participant saw a single face depicting an emotion from one of the six categories. There were three identities per category, for a total of 18 unique images. A participant saw 10 repetitions of each identity, with 30 repetitions per emotion category, for a total of 180 trials. This task was always completed after the Bubbles tasks in order to avoid familiarizing participants with the stimuli that were used in the Bubbles task. Facial emotion categorization accuracy was computed as the average categorization accuracy on this task for each participant.

All questionnaires and behavioral tasks, stimulus sets, and other supplementary information are included on the OSF page for this study: https://doi.org/10.17605/OSF.IO/FH9VD.

Procedure

The authors assert that all procedures contributing to this work comply with ethical standards on human experimentation and the Declaration of Helsinki. All research procedures were approved by the NDSU Institutional Review Board, with the study being granted exempt status. Participants were recruited online via NDSU’s Psychology study registration system. The study description outlined that participants would answer questionnaires about disorder symptoms and perform multiple facial expression recognition tasks. All participants were compensated with course credit. All participants provided informed consent at the beginning of the study. All study tasks and questionnaires took place in a computer testing room, with dividers between individual computers. After consent, participants performed the Bubbles task, followed by the facial emotion recognition task. Self-report questionnaires and demographics were administered at the end of the study with their order randomized, but with the demographic questions always occurring last.

Data analytic approach

Multiple linear regression was used to examine the relationship between visual task performance and ED symptom severity. This analysis was conducted in structured steps, with the two nested models fit to the same N. In Model 1, only variables representing overall task performance averaged across emotion conditions were entered into the model. In Model 2, variables representing symptoms of comorbid mental disorders were added. An additional regression analysis was conducted with the six category-specific variables replacing the average task performance. We did not add comorbid disorder symptoms to category-specific models, as power for these analyses generally fell below 60%. All predictors, including questionnaires, were mean-centered. EDDS (the outcome variable) was not centered. RStudio (RStudio Team [37]) was used for all analyses.
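The nested-model comparison described above can be illustrated with a numpy-only OLS sketch. All variable names and data below are hypothetical stand-ins (simulated, not the study's data); the analyses themselves were run in R:

```python
import numpy as np

def ols_r2(X, y):
    """R^2 for an ordinary least squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / tss

def nested_f(r2_full, r2_reduced, n, k_full, k_reduced):
    """F statistic for the R^2 increment between nested models."""
    num = (r2_full - r2_reduced) / (k_full - k_reduced)
    den = (1.0 - r2_full) / (n - k_full - 1)
    return num / den

# hypothetical stand-in data: 87 participants
rng = np.random.default_rng(1)
n = 87
task = rng.normal(size=n)                        # mean-centered overall task performance
covs = rng.normal(size=(n, 4))                   # centered BDI-II, STAI, TAS, DERS-sf scores
edds = 16 + 3 * covs[:, 0] + rng.normal(size=n)  # EDDS composite (outcome, not centered)

r2_model1 = ols_r2(task[:, None], edds)                           # Model 1: task only
r2_model2 = ols_r2(np.column_stack([task[:, None], covs]), edds)  # Model 2: + comorbid symptoms
f_change = nested_f(r2_model2, r2_model1, n, k_full=5, k_reduced=1)
```

Because Model 1's predictors are a subset of Model 2's, R² can only increase from Model 1 to Model 2, and the F statistic tests whether that increment exceeds chance.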

Power analysis calculations assuming a linear regression framework indicated that a total sample size of 55 participants would provide power of 0.90 to detect a medium effect (f2 = 0.15) at an alpha of 0.05, when examining the contribution of a single predictor (task performance) while controlling for the four covariates (i.e., the comorbid disorder questionnaires). Cohen’s (1992) original effect size guidelines indicate that an f2 of 0.15 represents a medium effect and an f2 of 0.02 a small effect. A post-hoc sensitivity analysis revealed that the smallest detectable statistically significant effect size was f2 = 0.09 with 80% power, and f2 = 0.12 with 90% power. Thus, the study was powered to detect small-to-medium effects.

Due to the lack of a significant difference in study variables by gender (visual information use, categorization accuracy, and EDDS; all ps > 0.05), and to preserve statistical power for the main analytic variables of task performance, the main analyses are reported for the full sample. See Tables S1-2 for bivariate correlations of all study variables separately for men and women. See Table S3 for all regression study results with gender as a covariate.

We performed two robustness checks. The first evaluated regression assumptions and multivariate outliers. We computed values for leverage, Cook’s distance, and studentized residuals in order to test for multivariate outliers. Participants were excluded if they exceeded the cut-off on at least two of these metrics, and the models were re-run in a follow-up analysis. Additionally, linearity was examined with loess plots. Homoscedasticity was assessed with a fitted value vs. standardized residual plot, as well as the non-constant variance score test and the Breusch-Pagan test. Normality of the residuals was assessed with a residual Q-Q plot and the Shapiro-Wilk test. Beyond the potential multivariate outliers described above, no further problems with the regression assumptions were identified during this process.
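The leverage, Cook's distance, and studentized-residual screen described above can be sketched as follows. This is a numpy-only illustration on simulated data; the cut-offs shown (2p/n for leverage, 4/n for Cook's distance, |t| > 3 for studentized residuals) are common conventions, not necessarily the exact thresholds used in the study:

```python
import numpy as np

def regression_diagnostics(X, y):
    """Leverage, Cook's distance, and internally studentized residuals for OLS."""
    n, k = X.shape
    X1 = np.column_stack([np.ones(n), X])      # add intercept
    H = X1 @ np.linalg.inv(X1.T @ X1) @ X1.T   # hat matrix
    h = np.diag(H)                             # leverage values
    resid = y - H @ y                          # OLS residuals
    p = k + 1                                  # number of estimated coefficients
    mse = resid @ resid / (n - p)
    stud = resid / np.sqrt(mse * (1 - h))      # studentized residuals
    cooks = stud**2 * h / (p * (1 - h))        # Cook's distance
    return h, cooks, stud

# hypothetical stand-in data: 87 participants, 5 predictors
rng = np.random.default_rng(2)
X = rng.normal(size=(87, 5))
y = X @ rng.normal(size=5) + rng.normal(size=87)
h, cooks, stud = regression_diagnostics(X, y)

# flag cases exceeding conventional cut-offs on at least two metrics
flags = ((h > 2 * 6 / 87).astype(int) + (cooks > 4 / 87) + (np.abs(stud) > 3)) >= 2
```

Flagged cases would then be excluded and the model re-fit, mirroring the follow-up analysis described above.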

The second robustness check examined whether our results were sensitive to the assessment of ED symptoms. Instead of using the EDDS composite symptom score, we used diagnosis coding, where 0 indicates no ED pathology (44%) and 1 indicates some ED pathology (56%), based on the cut-off outlined in Krabbenborg et al. [24], where a score of at least 16.5 points implies clinically-significant ED symptoms. We then performed a logistic regression to compare these categorical groups. This analysis did not reveal a significant effect of EDDS diagnosis (see Table S4).
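The dichotomized-EDDS logistic regression can be sketched as follows. This is a numpy-only Newton-Raphson fit on simulated scores (not study data), with a small ridge term added to the Hessian for numerical stability:

```python
import numpy as np

def fit_logistic(X, y, iters=25, ridge=1e-6):
    """Logistic regression fit by Newton-Raphson (numpy-only sketch)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X1 @ beta))   # predicted probabilities
        W = p * (1 - p)                        # IRLS weights
        grad = X1.T @ (y - p)
        hess = X1.T @ (X1 * W[:, None]) + ridge * np.eye(X1.shape[1])
        beta = beta + np.linalg.solve(hess, grad)
    return beta

# hypothetical stand-in data: 87 participants
rng = np.random.default_rng(3)
info_use = rng.normal(size=87)                 # visual information use (centered)
edds = rng.normal(15.0, 6.0, size=87)          # simulated EDDS composite scores
group = (edds >= 16.5).astype(float)           # 1 = clinically-significant symptoms
beta = fit_logistic(info_use[:, None], group)  # [intercept, slope]
```

The fitted slope estimates the log-odds change in group membership per unit of visual information use; a Wald or likelihood-ratio test on that coefficient corresponds to the significance test reported in Table S4.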

Results

Task performance

Visual inspection of the average classification images obtained for each emotion category is consistent with prior work (Fig. 1). First, there are clear patterns in the data indicating that observers use distinct facial regions to recognize different emotions, consistent with prior eye tracking and reverse correlation studies [6, 7]. Besides simply differing from one another, these highlighted areas of importance approximately correspond to the facial action units derived from common expressions of emotion (Ekman & Friesen [14]; Mehu & Scherer [30]).

Bivariate correlations

See Table 1 for descriptives and zero-order correlations in the full sample; see Tables S1 and S2 for descriptives and correlations separately for men and women. Participant psychopathology symptoms adequately covered the ranges of the questionnaires used in the current study. There was no significant association between visual information use and emotion categorization accuracy, suggesting that visual information use and emotion recognition accuracy represent different perceptual abilities, and may provide unique insights into emotion recognition. ED symptoms were not associated with visual information use or emotion recognition accuracy. Associations between visual information use or emotion recognition accuracy and comorbid psychopathology scales were not statistically significant.

Table 1 Zero-order correlations between study variables and descriptive statistics in the full sample

Visual information use and ED symptoms

The results of the hierarchical nested regressions are depicted in Table 2. Model 1 demonstrates that visual information use alone is not significantly related to ED symptom severity. The addition of comorbid disorder symptoms in Model 2 resulted in a statistically significant model, but visual information use remained a non-significant predictor of ED symptoms. Results were robust to regression assumption diagnostic checks. The results of these models suggest that there is no statistically significant association between visual information use and ED symptom severity in the current sample.

Table 2 Regression model parameters for visual information use predicting EDDS symptoms

Results for category-specific visual information use can be found in Table S5. The model consisting of visual information use for individual categories predicting ED symptom severity did not reach statistical significance. However, the outlier robustness check in this case did result in a significant model, after the exclusion of two participants. The updated model demonstrated that visual information use for the anger category significantly contributed to ED symptom severity (see Table S6). In a subsequent exploratory analysis, we added the contribution of other disorder symptoms to the anger-only model. The partial contribution of anger remained significant, with the model once again showing a substantial contribution of the BDI-II (see Table S6). A comparison of model fit between the symptom + anger model and a nested symptom-only model suggested that the symptom + anger model provided significantly better fit to predicting ED symptom severity. This implies that strategy use when perceiving anger may differ across ED symptom severity, even after accounting for the contribution of other comorbid conditions.

Facial emotion recognition

Details of regression models based on categorization accuracy can be found in the Supplementary Materials (see Tables S7 & S8). Overall and category-specific facial emotion recognition accuracy was not related to ED symptoms. Sensitivity analyses were conducted on these results with little to no effect on model fit.

Discussion

The overarching goal of the current study was to examine the relationship between visual emotion recognition and ED symptom severity in a large sample of male and female college students, and to evaluate the impact of commonly co-occurring symptoms on this relationship. With regard to the Bubbles task, overall visual information use was not a significant predictor of ED symptoms. However, it is possible that visual information use for the anger category is a significant predictor of ED symptoms, even after accounting for the contribution of comorbid disorders.

In the facial emotion categorization task, neither overall nor category-specific accuracy was associated with ED symptom severity. Performance on the Bubbles task and the categorization task were not correlated, suggesting that these tasks assessed unique perceptual and representational abilities. Nonetheless, overall performance on both metrics was not a significant predictor of ED symptomology. Given the adequate power to detect small-to-medium emotion recognition effects in the current study, the most conservative interpretation is that overall visual emotion recognition abilities are not significantly related to ED symptom severity in this sample of college-aged students.

Implications

What might explain our null findings, despite previous literature uncovering significant differences? In two previous studies with sub-clinical samples that focused on eating disorder severity, only very modest differences in some categories [35], or no differences [44], were found. The bulk of the remaining literature has focused on patients with eating disorders, primarily women with AN compared to healthy age-matched controls. While many of these studies did find differences in facial emotion recognition between patients and controls, a substantial number of studies [8, 32, 38], including large-scale recent work [47], also found null results. Given this, it is possible that emotion recognition deficits most reliably characterize individuals who exhibit more severe eating pathology (including those with AN), who are more commonly represented within clinical samples. However, based on the present study, it does not appear that emotion recognition deficits are reliably associated with the more moderate levels of ED pathology represented in our study. Alternatively, the current laboratory-based approach may not capture the specific deficits experienced by those with EDs. For example, Wyssen et al. [47] suggest that patients with EDs may particularly struggle with interpreting what emotions mean within a social context, rather than with the simple recognition of the emotion itself. Thus, more naturalistic approaches or adapted laboratory-based tasks may be needed to test this hypothesis.

The current study builds on previous literature to provide a more extensive test of facial emotion recognition than detectable from purely accuracy-based designs. The Bubbles task provides unique insights into a person’s emotion recognition abilities. Category-specific visual information use could be used as a metric of what information people consider important on a face when making an emotion recognition decision. The current study suggests that emotion categorization, and visual information use during this process, are unlikely to differ across lower levels of ED symptom severity.

Limitations

One important limitation when considering these results within the framework of perceptual difficulties in individuals with EDs is that the current study relied on a convenience sample of college students. Consequently, the presence of severe eating pathology in the sample was relatively low (N = 49 with at least some ED symptoms, according to the EDDS cut-off) compared with an ED patient sample, even though the full range of symptom severity (mild, moderate, and severe) is more ideal in terms of statistical power for detecting linear associations. While our results suggest a non-significant linear association between visual information use and ED symptoms, it is possible that a stronger relationship exists for people with severe ED pathology. Because we used a non-clinical sample covering only a none-to-moderate range of ED symptoms, our ability to draw conclusions about emotion recognition processes in people with severe ED symptoms was limited. Therefore, the results of this study should only be considered to apply to the general population of neurotypical healthy young adults, and may not directly apply to the patient populations commonly seen in the ED literature.

Additionally, it is possible that university student participants interpret the EDDS questions somewhat more liberally than ED patients do, which may have led our participants to over-report ED symptoms. It is also possible that the association is truly linear but too weak to reach statistical significance in our sample. Future research in this area may benefit from including participants with the full range of ED symptoms, investigating possible non-linear associations between emotion recognition and ED pathology, and using cognitive interviewing techniques to confirm that participants with different symptom levels interpret questionnaire items in a similar fashion.

Our sample contained only a single person who met criteria for subthreshold AN as outlined in Stice et al. [40]. This means that we would not be able to detect associations present in studies whose ED samples consist primarily of patients with AN. Additionally, our sample did not contain enough ethnic diversity to examine differences between ethnic groups, as only the Caucasian group contained enough participants for adequately powered analyses. Future work would benefit from more diverse samples in order to examine emotion recognition deficits in eating disorder pathology more extensively.

Despite these limitations, our results imply two major take-home messages. First, research on the perceptual abilities of individuals with ED pathology should distinguish between associations in severe ED patient samples and in samples with a wide range of ED symptoms. Second, emotion recognition difficulties are unlikely to be uncovered at the aggregate level of category averages. Instead, future work should be adequately powered and designed to test category-specific predictions of emotion recognition deficits in patients with EDs and other disorders. Limiting the total number of emotion categories tested requires fewer trials and shorter testing times, allowing for larger sample sizes. Naturally, this type of work would need to build on existing accounts of the utility of specific emotions, instead of focusing on an overall visual deficit.

Overall, our study offers a new look at the relationship between ED symptomology and the perceptual strategies underlying emotion recognition. It is important to conduct well-powered research linking cognitive neuroscience phenomena, such as visual strategy use, with eating disorder psychopathology. Work of this type has the potential to uncover transdiagnostic mechanisms responsible for ED onset or maintenance, with more specific information about those mechanisms than previous approaches have offered.

Conclusions

Neither visual information use during emotion categorization nor categorization accuracy was significantly related to ED symptom severity. A focus on other social-emotional processing deficits, and work with clinically severe ED samples, may be more fruitful for future studies.

Data availability

All questionnaires, behavioral tasks, stimulus sets, and other supplementary information are included on the OSF page for this study: https://doi.org/10.17605/OSF.IO/FH9VD.

Abbreviations

AN:

Anorexia Nervosa

AU:

Action Unit

BDI-II:

Beck Depression Inventory 2

BED:

Binge Eating Disorder

BN:

Bulimia Nervosa

CI:

Confidence Interval

DERS-sf:

Difficulties in Emotion Regulation Scale, short form

ED:

Eating Disorder

EDDS:

Eating Disorder Diagnostic Scale

FER:

Facial Emotion Recognition

N:

Number of participants

NDSU:

North Dakota State University

OSF:

Open Science Framework

SD:

standard deviation

STAI:

State-Trait Anxiety Inventory

TAS:

Toronto Alexithymia Scale

References

  1. Aldinger M, Stopsack M, Barnow S, Rambau S, Spitzer C, Schnell K, Ulrich I. The association between depressive symptoms and emotion recognition is moderated by emotion regulation. Psychiatry Res. 2013;205(1–2):59–66.

  2. Bagby RM, Parker JD, Taylor GJ. The twenty-item Toronto Alexithymia Scale—I. item selection and cross-validation of the factor structure. J Psychosom Res. 1994;38(1):23–32.

  3. Bagby RM, Parker JD, Taylor GJ. Twenty-five years with the 20-item Toronto Alexithymia Scale. J Psychosom Res. 2020;131:109940.

  4. Bardone-Cone AM, Harney MB, Maldonado CR, Lawson MA, Robinson DP, Smith R, Tosh A. Defining recovery from an eating disorder: conceptualization, validation, and examination of psychosocial functioning and psychiatric comorbidity. Behav Res Ther. 2010;48(3):194–202.

  5. Beck AT, Steer RA, Ball R, Ranieri WF. Comparison of Beck Depression Inventories-IA and-II in psychiatric outpatients. J Pers Assess. 1996;67(3):588–97.

  6. Blais C, Fiset D, Roy C, Saumure Régimbald C, Gosselin F. Eye fixation patterns for categorizing static and dynamic facial expressions. Emotion. 2017;17(7):1107.

  7. Blais C, Roy C, Fiset D, Arguin M, Gosselin F. The eyes are not the window to basic emotions. Neuropsychologia. 2012;50(12):2830–8.

  8. Brewer R, Cook R, Cardi V, Treasure J, Bird G. Emotion recognition deficits in eating disorders are explained by co-occurring alexithymia. Royal Soc open Sci. 2015;2(1):140382.

  9. Castro L, Davies H, Hale L, Surguladze S, Tchanturia K. Facial affect recognition in anorexia nervosa: is obsessionality a missing piece of the puzzle? Australian New Z J Psychiatry. 2010;44(12):1118–25.

  10. Dapelo MM, Surguladze S, Morris R, Tchanturia K. Emotion recognition in blended facial expressions in women with anorexia nervosa. Eur Eat Disorders Rev. 2016;24(1):34–42.

  11. De Melo CM, Carnevale PJ, Read SJ, Gratch J. Reading people’s minds from emotion expressions in interdependent decision making. J Personal Soc Psychol. 2014;106(1):73.

  12. de Vos JA, LaMarre A, Radstaak M, Bijkerk CA, Bohlmeijer ET, Westerhof GJ. Identifying fundamental criteria for eating disorder recovery: a systematic review and qualitative meta-analysis. J Eat Disorders. 2017;5(1):1–14.

  13. Demenescu LR, Kortekaas R, den Boer JA, Aleman A. Impaired attribution of emotion to facial expressions in anxiety and major depression. PLoS ONE. 2010;5(12):e15058.

  14. Ekman P, Friesen W. Pictures of facial affect. Palo Alto, CA: Consulting Psychologists; 1976.

  15. Fujiwara E, Kube VL, Rochman D, Macrae-Korobkov AK, Peynenburg V, University of Alberta Hospital Eating Disorder Program. Visual attention to ambiguous emotional faces in eating disorders: role of alexithymia. Eur Eat Disorders Rev. 2017;25(6):451–60.

  16. Geisler WS. Ideal observer analysis. Visual Neurosciences. 2003;10(7):12–12.

  17. Gosselin F, Schyns PG. Bubbles: a technique to reveal the use of information in recognition tasks. Vision Res. 2001;41(17):2261–71.

  18. Gratz KL, Roemer L. Multidimensional assessment of emotion regulation and dysregulation: development, factor structure, and initial validation of the difficulties in emotion regulation scale. J Psychopathol Behav Assess. 2004;26(1):41–54.

  19. Harrison A, Sullivan S, Tchanturia K, Treasure J. Emotion recognition and regulation in anorexia nervosa. Clin Psychol Psychotherapy: Int J Theory Pract. 2009;16(4):348–56.

  20. Hudson JI, Hiripi E, Pope Jr HG, Kessler RC. The prevalence and correlates of eating disorders in the National Comorbidity Survey Replication. Biol Psychiatry. 2007;61(3):348–58.

  21. Jänsch C, Harmer C, Cooper MJ. Emotional processing in women with anorexia nervosa and in healthy volunteers. Eat Behav. 2009;10(3):184–91.

  22. Kaufman EA, Xia M, Fosco G, Yaptangco M, Skidmore CR, Crowell SE. The difficulties in emotion regulation Scale Short Form (DERS-SF): validation and replication in adolescent and adult samples. J Psychopathol Behav Assess. 2016;38(3):443–55.

  23. Kong DT. Ostracism perception as a multiplicative function of trait self-esteem, mindfulness, and facial emotion recognition ability. Pers Indiv Differ. 2016;93:68–73.

  24. Krabbenborg MA, Danner UN, Larsen JK, van der Veer N, van Elburg AA, de Ridder DT, Engels RC. The Eating Disorder Diagnostic Scale: psychometric features within a clinical population and a cut-off point to differentiate clinical patients from healthy controls. Eur Eat Disorders Rev. 2012;20(4):315–20.

  25. Krause FC, Linardatos E, Fresco DM, Moore MT. Facial emotion recognition in major depressive disorder: a meta-analytic review. J Affect Disord. 2021;293:320–8.

  26. Kucharska-Pietura K, Nikolaou V, Masiak M, Treasure J. The recognition of emotion in the faces and voice of anorexia nervosa. Int J Eat Disord. 2004;35(1):42–7.

  27. Lavender JM, Wonderlich SA, Engel SG, Gordon KH, Kaye WH, Mitchell JE. Dimensions of emotion dysregulation in anorexia nervosa and bulimia nervosa: a conceptual review of the empirical literature. Clin Psychol Rev. 2015;40:111–22.

  28. Lulé D, Schulze UM, Bauer K, Schöll F, Müller S, Fladung AK, Uttner I. Anorexia nervosa and its relation to depression, anxiety, alexithymia and emotional processing deficits. Eat Weight Disord. 2014;19(2):209–16.

  29. Mason TB, Lesser EL, Dolgon-Krutolow AR, Wonderlich SA, Smith KE. An updated transdiagnostic review of social cognition and eating disorder psychopathology. J Psychiatr Res. 2021;143:602–27.

  30. Mehu M, Scherer KR. Emotion categories and dimensions in the facial communication of affect: an integrated approach. Emotion. 2015;15(6):798.

  31. Pennesi JL, Wade TD. A systematic review of the existing models of disordered eating: do they inform the development of effective interventions? Clin Psychol Rev. 2016;43:175–92.

  32. Phillipou A, Abel LA, Castle DJ, Hughes ME, Gurvich C, Nibbs RG, Rossell SL. Self perception and facial emotion perception of others in anorexia nervosa. Front Psychol. 2015;6:1181.

  33. Pollatos O, Herbert BM, Schandry R, Gramann K. Impaired central processing of emotional faces in anorexia nervosa. Psychosom Med. 2008;70(6):701–8.

  34. Pringle A, Harmer CJ, Cooper MJ. Investigating vulnerability to eating disorders: biases in emotional processing. Psychol Med. 2010;40(4):645.

  35. Ridout N, Wallis DJ, Autwal Y, Sellis J. The influence of emotional intensity on facial emotion recognition in disordered eating. Appetite. 2012;59(1):181–6.

  36. Robinson AL, Dolhanty J, Greenberg L. Emotion-focused family therapy for eating disorders in children and adolescents. Clin Psychol Psychother. 2015;22(1):75–82.

  37. RStudio Team. RStudio: integrated development environment for R. Boston, MA: RStudio; 2022. http://www.rstudio.com/

  38. Sfärlea A, Greimel E, Platt B, Bartling J, Schulte-Körne G, Dieler AC. Alterations in neural processing of emotional faces in adolescent anorexia nervosa patients–an event-related potential study. Biol Psychol. 2016;119:141–55.

  39. Spielberger CD, Gorsuch RL, Lushene R, Vagg PR, Jacobs GA. Manual for the state-trait anxiety scale. Palo Alto, CA: Consulting Psychologists; 1983.

  40. Stice E, Fisher M, Martinez E. Eating disorder diagnostic scale: additional evidence of reliability and validity. Psychol Assess. 2004;16(1):60.

  41. Stice E, Telch CF, Rizvi SL. Development and validation of the Eating Disorder Diagnostic Scale: a brief self-report measure of anorexia, bulimia, and binge-eating disorder. Psychol Assess. 2000;12(2):123.

  42. Treasure J, Corfield F, Cardi V. A three-phase model of the social emotional functioning in eating disorders. Eur Eat Disorders Rev. 2012;20(6):431–8.

  43. Van Kleef GA, Côté S. The social effects of emotions. Ann Rev Psychol. 2022;73:629–58.

  44. Vander Wal JS, Kauffman AA, Soulliard ZA. Differences in alexithymia, emotional awareness, and facial emotion recognition under conditions of self-focused attention among women with high and low eating disorder symptoms: a 2 x 2 experimental study. J Eat Disorders. 2020;8(1):1–9.

  45. Wang J, Chen Y, Dai B. Self-esteem predicts sensitivity to dynamic changes in emotional expression. Pers Indiv Differ. 2021;173:110636.

  46. Watson AB, Pelli DG. QUEST: a bayesian adaptive psychometric method. Percept Psychophys. 1983;33(2):113–20.

  47. Wyssen A, Lao J, Rodger H, Humbel N, Lennertz J, Schuck K, Munsch S. Facial emotion recognition abilities in women experiencing eating disorders. Psychosom Med. 2019;81(2):155–64.

Acknowledgements

The authors would like to thank Stephen A. Wonderlich for his support.

Funding

This research was supported in part by funding from the National Institute of General Medical Sciences (grants P20 GM134969 and P30 GM114748) and the National Institute on Aging (R01 AG075117). The funders had no role in the conceptualization, design, data collection, analysis, decision to publish, or preparation of the manuscript.

Author information

Authors and Affiliations

Authors

Contributions

IN wrote the study proposal, completed the IRB application, completed the literature review, performed the data collection, performed the statistical analyses, and wrote and updated the manuscript. LS contributed to the literature review. KD and BB contributed to the statistical analyses. KD, LS, and BB contributed to editing the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Ilya Nudnou.

Ethics declarations

Ethics approval and consent to participate

The authors assert that all procedures contributing to this work comply with the ethical standards on human experimentation and with the Declaration of Helsinki. All research procedures were approved by the North Dakota State University (NDSU) Institutional Review Board, with the study being granted exempt status.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

Reprints and permissions

About this article

Cite this article

Nudnou, I., Duggan, K.A., Schaefer, L. et al. Is visual information use during facial emotion recognition related to eating disorder symptoms in college-aged men and women? An experimental study. J Eat Disord 12, 152 (2024). https://doi.org/10.1186/s40337-024-01102-z

Keywords