Dynamic faces

  • Article Type: Journal Article
    Holistic processing is a fundamental element of face recognition. Some behavioral studies have investigated the impact of rigid facial motion on holistic face processing, yet it remains unclear how rigid motion affects the time course of holistic face processing for faces of different races. The current study investigated this issue using the composite face effect (CFE) as a direct measure of holistic processing. During the test stage, participants were asked to match the identity of the top half of a static composite face with a study face that was either static or rigidly moving. ERP results showed that, when recognizing own-race faces, rigidly moving study faces elicited a larger CFE than static study faces in the N170 component. The amplitudes of the P1, N170, and P2 components indicated that rigid motion facilitated holistic face processing, with differences between hemispheres observed over time. Specifically, the CFE was observed only after exposure to rigidly moving faces in the P1 and P2 components of the right hemisphere. Additionally, a greater CFE was observed following exposure to rigidly moving faces compared with static faces, particularly in the N170 component of the left hemisphere. This study suggests that holistic processing is a fundamental aspect of face perception that applies to both static and moving faces, not just static ones. Furthermore, rigid facial motion improves the holistic processing of own-race faces during the structural encoding stage. These findings provide evidence of distinct neural mechanisms underlying the holistic processing of static and moving faces.

  • Article Type: Journal Article
    Video recordings accurately capture facial expression movements; however, they are difficult for face perception researchers to standardise and manipulate. For this reason, dynamic morphs of photographs are often used, despite their lack of naturalistic facial motion. This study aimed to investigate how humans perceive emotions from faces using real videos and two approaches to artificially generating dynamic expressions: dynamic morphs and AI-synthesised deepfakes. Our participants perceived dynamic morphed expressions as less intense than videos (all emotions) and deepfakes (fearful, happy, sad). Videos and deepfakes were perceived similarly. Additionally, participants perceived morphed happiness and sadness, but not morphed anger or fear, as less genuine than the other formats. Our findings support previous research indicating that social responses to morphed emotions are not representative of those to video recordings. The findings also suggest that deepfakes may offer a more suitable standardised stimulus type than morphs. Additionally, qualitative data were collected from participants and analysed using ChatGPT, a large language model. ChatGPT successfully identified themes in the data consistent with those identified by an independent human researcher. According to this analysis, our participants perceived dynamic morphs as less natural than videos and deepfakes. That participants perceived deepfakes and videos similarly suggests that deepfakes effectively replicate natural facial movements, making them a promising alternative for face perception research. The study contributes to the growing body of research exploring the usefulness of generative artificial intelligence for advancing the study of human perception.

  • Article Type: Journal Article
    BACKGROUND: Emotion processing deficits are known to accompany depressive symptoms and are often seen in stroke patients. Little is known about the influence of post-stroke depressive (PSD) symptoms and specific brain lesions on altered emotion processing abilities and how these phenomena develop over time. This potential relationship may impact post-stroke rehabilitation of neurological and psychosocial function. To address this scientific gap, we investigated the relationship between PSD symptoms and emotion processing abilities in a longitudinal study design from the first days post-stroke into the early chronic phase.
    METHODS: Twenty-six ischemic stroke patients performed an emotion processing task on videos with emotional faces ('happy,' 'sad,' 'anger,' 'fear,' and 'neutral') at different intensity levels (20%, 40%, 60%, 80%, 100%). Recognition accuracies and response times were measured, as well as scores of depressive symptoms (Montgomery-Åsberg Depression Rating Scale). Twenty-eight healthy participants matched in age and sex were included as a control group. Whole-brain support-vector regression lesion-symptom mapping (SVR-LSM) analyses were performed to investigate whether specific lesion locations were associated with the recognition accuracy of specific emotion categories.
    RESULTS: Stroke patients performed worse in overall recognition accuracy compared to controls, specifically in the recognition of happy, sad, and fearful faces. Notably, more depressed stroke patients showed increased processing of specific negative emotions, as they responded significantly faster to angry faces and recognized low-intensity sad faces significantly more accurately. These effects, obtained in the first days after stroke, partly persisted to the follow-up assessment several months later. SVR-LSM analyses revealed that inferior and middle frontal regions (IFG/MFG) and the insula and putamen were associated with emotion-recognition deficits in stroke. Specifically, recognizing happy facial expressions was influenced by lesions affecting the anterior insula, putamen, IFG, MFG, orbitofrontal cortex, and rolandic operculum. Lesions in the posterior insula, rolandic operculum, and MFG were also related to reduced recognition accuracy of fearful facial expressions, whereas recognition deficits of sad faces were associated with frontal pole, IFG, and MFG damage.
    CONCLUSIONS: PSD symptoms facilitate the processing of negative emotional stimuli, specifically angry and sad facial expressions. The recognition accuracy of different emotion categories was linked to brain lesions in emotion-related processing circuits, including the insula, basal ganglia, IFG, and MFG. In summary, our study provides support for psychosocial and neural factors underlying emotional processing after stroke, contributing to the pathophysiology of PSD.
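The lesion-symptom mapping described in this abstract can be sketched as a multivariate regression of a behavioural score on binary voxel-wise lesion maps. The sketch below is a dependency-light illustration only: the data are synthetic, the voxel counts and effect sizes are invented, and a ridge penalty stands in for the support-vector regression actually used in SVR-LSM.

```python
import numpy as np

# Synthetic stand-in data: 26 patients (as in the study) x 500 voxels,
# binary lesion maps. All values here are invented for illustration.
rng = np.random.default_rng(1)
lesions = (rng.random((26, 500)) < 0.1).astype(float)

# Behavioural score (e.g., recognition accuracy) degraded by lesions
# in a small "critical" voxel cluster (voxels 40-49).
true_w = np.zeros(500)
true_w[40:50] = -1.0
accuracy = 0.9 + 0.05 * (lesions @ true_w) + rng.normal(scale=0.01, size=26)

# Multivariate regression of behaviour on all voxels jointly; a ridge
# penalty stands in for the support-vector regression used in SVR-LSM.
lam = 1.0
X = lesions - lesions.mean(axis=0)
y = accuracy - accuracy.mean()
beta = np.linalg.solve(X.T @ X + lam * np.eye(500), X.T @ y)

# Voxels with strongly negative weights are candidate lesion sites
# associated with reduced recognition accuracy.
candidate_voxels = np.argsort(beta)[:10]
```

Unlike voxel-by-voxel mass-univariate mapping, this multivariate form estimates all voxel weights jointly, which is the main motivation for SVR-LSM.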

  • Article Type: Journal Article
    Maladaptive functioning of the amygdala has been associated with impaired emotion regulation in affective disorders. Recent advances in real-time fMRI neurofeedback have successfully demonstrated the modulation of amygdala activity in healthy and psychiatric populations. In contrast to the abstract feedback representations applied in standard neurofeedback designs, we proposed a novel neurofeedback paradigm using naturalistic stimuli such as human emotional faces as the feedback display, in which change in facial expression intensity (from neutral to happy or from fearful to neutral) was coupled with the participant's ongoing bilateral amygdala activity.
    The feasibility of this experimental approach was tested on 64 healthy participants who completed a single training session with four neurofeedback runs. Participants were assigned to one of four experimental groups (n = 16 per group): happy-up, happy-down, fear-up, and fear-down. Depending on the group assignment, they were instructed either to "try to make the face happier" by upregulating (happy-up) or downregulating (happy-down) the amygdala, or to "try to make the face less fearful" by upregulating (fear-up) or downregulating (fear-down) the amygdala feedback signal.
    Linear mixed-effects analyses revealed significant amygdala activity changes in the fear condition, specifically in the fear-down group, with significant amygdala downregulation in the last two neurofeedback runs compared with the first run. The happy-up and happy-down groups did not show significant amygdala activity changes over the four runs. We did not observe significant improvement in questionnaire scores or subsequent behavior. Furthermore, task-dependent effective connectivity changes between the amygdala, fusiform face area (FFA), and medial orbitofrontal cortex (mOFC) were examined using dynamic causal modeling. The effective connectivity between the FFA and the amygdala was significantly increased in the happy-up group (facilitatory effect) and decreased in the fear-down group. Notably, the amygdala was downregulated through an inhibitory mechanism mediated by the mOFC during the first training run.
    In this feasibility study, we intended to address key neurofeedback processes such as naturalistic facial stimuli, participant engagement in the task, bidirectional regulation, and task congruence, and their influence on learning success. It demonstrated that such a versatile emotional face feedback paradigm can be tailored to target biased emotion processing in affective disorders.
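The coupling between the amygdala signal and the displayed expression intensity can be illustrated with a simple transfer function. This is only a sketch under assumed bounds: the mapping, the percent-signal-change range, and the function name `bold_to_morph` are hypothetical, not taken from the paper.

```python
import numpy as np

def bold_to_morph(psc, psc_min=-1.0, psc_max=1.0):
    """Map ongoing amygdala percent signal change to a morph level in
    [0, 1], where 0 = neutral face and 1 = full expression. The bounds
    are hypothetical calibration values, not taken from the paper."""
    level = (psc - psc_min) / (psc_max - psc_min)
    return float(np.clip(level, 0.0, 1.0))

# In a fear-down run the display moves toward neutral (level -> 0) as the
# participant successfully downregulates the amygdala signal.
baseline_level = bold_to_morph(0.0)
```

A real-time implementation would apply such a mapping to each incoming fMRI volume, updating the face stimulus once per repetition time.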

  • Article Type: Journal Article
    Face perception provides an excellent example of how the brain processes nuanced visual differences and transforms them into behaviourally useful representations of identities and emotional expressions. While a body of literature has examined the spatial and temporal neural processing of facial expressions, few studies have used a dimensionally varying set of stimuli containing subtle perceptual changes. In the current study, we used 48 short videos varying dimensionally in the intensity and category (happy, angry, surprised) of their expression. We measured both fMRI and EEG responses to these video clips and compared the neural response patterns to the predictions of models based on image features and models derived from behavioural ratings of the stimuli. In fMRI, the inferior frontal gyrus face area (IFG-FA) carried information related only to the intensity of the expression, independent of image-based models. The superior temporal sulcus (STS), inferior temporal (IT), and lateral occipital (LO) areas contained information about both expression category and intensity. In the EEG, the coding of expression category and low-level image features was most pronounced at around 400 ms. The expression intensity model did not, however, correlate significantly at any EEG timepoint. Our results show a specific role for the IFG-FA in the coding of expressions and suggest that it contains image- and category-invariant representations of expression intensity.
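Comparing neural response patterns with model predictions, as in this study, is commonly done with representational similarity analysis: correlating the upper triangles of a model dissimilarity matrix and a neural one. The sketch below uses synthetic data; the ratings, noise level, and helper names are assumptions for illustration, not details from the study.

```python
import numpy as np

def upper_triangle(rdm):
    """Flatten the upper triangle (excluding the diagonal) of an RDM."""
    i, j = np.triu_indices_from(rdm, k=1)
    return rdm[i, j]

def spearman(a, b):
    """Spearman correlation via rank transform (ties ignored)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return (ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb))

rng = np.random.default_rng(3)
n_stim = 48                        # 48 expression videos, as in the study
intensity = rng.random(n_stim)     # hypothetical behavioural intensity ratings

# Model RDM: stimuli with similar rated intensity are predicted to evoke
# similar neural response patterns.
model_rdm = np.abs(intensity[:, None] - intensity[None, :])

# Synthetic "neural" RDM: the model structure plus measurement noise.
neural_rdm = model_rdm + rng.normal(scale=0.1, size=model_rdm.shape)
neural_rdm = (neural_rdm + neural_rdm.T) / 2
np.fill_diagonal(neural_rdm, 0.0)

rho = spearman(upper_triangle(model_rdm), upper_triangle(neural_rdm))
```

Repeating this correlation per brain region (fMRI) or per timepoint (EEG) yields the kind of region- and time-resolved model comparisons reported in the abstract.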

  • Article Type: Journal Article
    Faces carry key personal information about individuals, including cues to their identity, social traits, and emotional state. Much research to date has employed static images of faces taken under tightly controlled conditions yet faces in the real world are dynamic and experienced under ambient conditions. A common approach to studying key dimensions of facial variation is the use of facial caricatures. However, such techniques have again typically relied on static images, and the few examples of dynamic caricatures have relied on animating graphical head models. Here, we present a principal component analysis (PCA)-based active appearance model for capturing patterns of spatiotemporal variation in videos of natural dynamic facial behaviours. We demonstrate how this technique can be applied to generate dynamic anti-caricatures of biological motion patterns in facial behaviours. This technique could be extended to caricaturing other facial dimensions, or to more general analyses of spatiotemporal variations in dynamic faces.
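The PCA step underlying this approach can be sketched as follows. This is a minimal illustration on synthetic data: a real active appearance model combines aligned shape (landmark) and appearance (texture) vectors per frame, and the frame count, dimensionality, and shrink factor here are arbitrary assumptions.

```python
import numpy as np

# Synthetic stand-in data: 100 video frames, each a flattened
# shape-plus-appearance vector of length 60 (sizes are arbitrary).
rng = np.random.default_rng(0)
frames = rng.normal(size=(100, 60))

# PCA via SVD on the mean-centred frames.
mean_face = frames.mean(axis=0)
centred = frames - mean_face
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
scores = centred @ Vt.T            # per-frame component scores

# Anti-caricature: shrink each frame's scores toward the mean face,
# reducing the distinctiveness of the spatiotemporal pattern.
shrink = 0.5                        # 0 < shrink < 1 -> anti-caricature
anti_caricature = mean_face + shrink * (scores @ Vt)
```

A shrink factor above 1 would instead exaggerate the deviation from the mean, producing a caricature rather than an anti-caricature.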

  • Article Type: Journal Article
    Individuals with high social anxiety (HSA) show abnormal processing of emotional faces, which may increase their social anxiety. A growing number of event-related potential (ERP) studies have explored the neural mechanisms underlying static emotional face processing in HSA individuals. Given the greater ecological validity of dynamic faces, this study further explored the time course of dynamic emotional face processing in individuals with HSA. To this end, 30 high and 30 low social anxiety (LSA) participants were asked to perform an identification task on dynamic emotional faces while their brain responses were recorded using the ERP technique. The behavioral results showed that recognition accuracy was higher for dynamic than for static faces when the faces were happy. For the P100 component, HSA participants showed higher P100 mean amplitudes for dynamic than for static faces in the left hemisphere when they viewed happy, but not angry, faces. In addition, increased N170 mean amplitudes for dynamic happy faces were observed. Furthermore, the LPP mean amplitudes of dynamic faces were smaller than those of static faces. In sum, this study provides a better understanding of the time course of dynamic emotional face processing in HSA individuals.
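The component measures reported in ERP studies like this one (e.g., the P100 mean amplitude) are typically computed as the average voltage within a fixed post-stimulus window. A minimal sketch on a simulated single-channel epoch; the sampling rate, window, and waveform are assumptions for illustration.

```python
import numpy as np

# Simulated single-channel epoch: 500 Hz sampling, -100 to 400 ms.
fs = 500
times = np.arange(-0.1, 0.4, 1 / fs)
rng = np.random.default_rng(2)

# Hypothetical ERP: a positive deflection peaking near 100 ms, plus noise.
erp = 3.0 * np.exp(-((times - 0.10) ** 2) / (2 * 0.015 ** 2))
epoch = erp + rng.normal(scale=0.2, size=times.size)

# Mean amplitude in an assumed P100 window (80-120 ms) -- the kind of
# measure compared across dynamic vs. static conditions.
window = (times >= 0.08) & (times <= 0.12)
p100_mean = epoch[window].mean()
```

In practice this mean amplitude would be computed per participant, electrode cluster, and condition, then entered into the statistical comparison.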

  • Article Type: Journal Article
    It was recently proposed that both experience-driven and object-based perceptual grouping contribute to holistic face processing. We investigated whether motion, as a common-fate perceptual grouping cue, could enhance the holistic processing of misaligned faces. We manipulated the alignment and motion (dynamic vs. static) of study and test faces in a modified complete composite task, in which the congruency effect was regarded as an indicator of holistic processing. Participants made same-different judgments about the top halves of two sequentially presented composite faces. We observed that when the study faces were dynamic, misaligned-misaligned face pairs were processed holistically regardless of whether the test faces were dynamic or static. When the study faces were static, misaligned-misaligned face pairs showed no holistic processing, and neither did inverted faces. These results indicate that motion can promote the holistic processing of misaligned faces. Our findings provide important insights into different types of holistic face processing, and we discuss these types, as well as their relationships with each other, in depth.
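In the complete composite design, the congruency effect is usually quantified as the difference in sensitivity (d') between congruent and incongruent trials. A minimal sketch using the standard signal-detection formula; the accuracy values are made up for illustration.

```python
from statistics import NormalDist

def dprime(hit_rate, fa_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Made-up rates for one participant: performance is better when the
# irrelevant bottom half is congruent with the top half.
congruent_d = dprime(0.90, 0.10)
incongruent_d = dprime(0.75, 0.30)

# A positive congruency effect indexes holistic processing: the
# task-irrelevant bottom half interferes with judging the top half.
congruency_effect = congruent_d - incongruent_d
```

Extreme rates (0 or 1) would need a correction (e.g., adding 0.5 to hit and false-alarm counts) before the z-transform.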

  • Article Type: Journal Article
    Dynamic facial expressions are crucial for communication in primates. Because it is difficult to control the shape and dynamics of facial expressions across species, it is unknown how species-specific facial expressions are perceptually encoded and how they interact with the representation of facial shape. While popular neural network models predict a joint encoding of facial shape and dynamics, the neuromuscular control of faces evolved more slowly than facial shape, suggesting separate encoding. To investigate these alternative hypotheses, we developed photo-realistic human and monkey heads that were animated with motion-capture data from monkeys and humans. Exact control of expression dynamics was accomplished by a Bayesian machine-learning technique. Consistent with our hypothesis, we found that human observers learned cross-species expressions very quickly, and that facial dynamics were represented largely independently of facial shape. This result supports the co-evolution of the visual processing and motor control of facial expressions, while challenging appearance-based neural network theories of dynamic expression recognition.
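Transferring expression dynamics between head models can be illustrated, in a much-simplified form, as a frame-by-frame blend of two motion-capture trajectories. The Bayesian technique in the paper is far more sophisticated; everything below (array sizes, the linear blend, the name `motion_morph`) is an assumed stand-in.

```python
import numpy as np

# Hypothetical motion-capture data: 60 frames x 30 marker coordinates.
rng = np.random.default_rng(4)
human_traj = rng.normal(size=(60, 30))
monkey_traj = rng.normal(size=(60, 30))

def motion_morph(a, b, w):
    """Frame-by-frame linear blend of two expression trajectories:
    w = 0 reproduces a, w = 1 reproduces b."""
    return (1.0 - w) * a + w * b

# An intermediate trajectory between the two species' expressions.
halfway = motion_morph(human_traj, monkey_traj, 0.5)
```

Varying `w` in small steps yields a graded stimulus continuum of the kind used to probe how expression dynamics are perceptually encoded.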

  • Article Type: Journal Article
    It has been well documented that static face processing is holistic. Faces contain variant (e.g., motion, viewpoint) and invariant (race, sex) features. However, little research has focused on whether holistic face representations are tolerant of within-person variations. The present study thus investigated whether holistic face representations of faces are tolerant of within-person motion and viewpoint variations by manipulating study-test consistency using a complete composite paradigm. Participants were shown two faces sequentially and were asked to judge whether the faces' top halves were identical or different. The first face was a static face or a dynamic face rotated in depth at 30°, 60°, and 90°. The second face was either a different front-view static face (Experiment 1a, study-test inconsistent) or identical to the first face (Experiment 1b, study-test consistent). In Experiment 2, study-test consistency was manipulated within subjects, and inverted faces were included. Our results show that study-test consistency significantly enhanced the holistic processing of upright and inverted faces; this study-test consistency effect and holistic processing were not modulated by motion and viewpoint changes via depth rotation. Interestingly, we found holistic processing for moving study-test consistent inverted faces, but not for static inverted faces. What these results tell us about the nature of holistic face representation is discussed in depth with respect to earlier and current theories on face processing.
