Facial expression analysis

  • Article type: Journal Article
    Facial Expression Analysis (FEA) plays a vital role in diagnosing and treating early-stage neurological disorders (NDs) such as Alzheimer's and Parkinson's disease. Manual FEA is hindered by expertise, time, and training requirements, while automatic methods face difficulties with the unavailability of real patient data, high computational cost, and irrelevant feature extraction. To address these challenges, this paper proposes a novel approach: an efficient, lightweight deep learning network (DLN) based on the convolutional block attention module (CBAM) to aid doctors in diagnosing ND patients. The method comprises two stages: data collection from real ND patients, and pre-processing involving face detection and an attention-enhanced DLN for feature extraction and refinement. Extensive experiments validated on real patient data show compelling performance, achieving an accuracy of up to 73.2%. Despite its efficacy, the proposed model is lightweight, occupying only 3 MB, making it suitable for deployment on resource-constrained mobile healthcare devices. Moreover, the method exhibits significant advancements over existing FEA approaches, holding tremendous promise for effectively diagnosing and treating ND patients. By accurately recognizing emotions and extracting relevant features, this approach empowers medical professionals in early ND detection and management, overcoming the challenges of manual analysis and heavy models. In conclusion, this research presents a significant leap in FEA, promising to enhance ND diagnosis and care. The code and data used in this work are available at: https://github.com/munsif200/Neurological-Health-Care.

  • Article type: Journal Article
    Facial emotion expressions play a central role in interpersonal interactions; these displays are used to predict and influence the behavior of others. Despite their importance, quantifying and analyzing the dynamics of brief facial emotion expressions remains an understudied methodological challenge. Here, we present a method that leverages machine learning and network modeling to assess the dynamics of facial expressions. Using video recordings of clinical interviews, we demonstrate the utility of this approach in a sample of 96 people diagnosed with psychotic disorders and 116 never-psychotic adults. Participants diagnosed with schizophrenia tended to move from neutral expressions to uncommon expressions (e.g., fear, surprise), whereas participants diagnosed with other psychoses (e.g., mood disorders with psychosis) moved toward expressions of sadness. This method has broad applications to the study of normal and altered expressions of emotion and can be integrated with telemedicine to improve psychiatric assessment and treatment.
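The expression dynamics described above can be approximated with a first-order Markov model over frame-level expression labels. The sketch below is illustrative only: the label sequence is invented, and the paper's actual network-modeling pipeline is more elaborate.

```python
from collections import Counter

# Hypothetical frame-by-frame expression labels from a recorded interview
# (the names and the sequence are made up for illustration).
sequence = ["neutral", "neutral", "fear", "neutral", "surprise",
            "neutral", "sadness", "sadness", "neutral", "fear"]

def transition_probabilities(labels):
    """Estimate a first-order Markov transition matrix from a label sequence."""
    pair_counts = Counter(zip(labels, labels[1:]))
    totals = Counter(labels[:-1])
    return {(a, b): n / totals[a] for (a, b), n in pair_counts.items()}

probs = transition_probabilities(sequence)
# Estimated probability of moving from a neutral frame to a fear frame:
print(probs[("neutral", "fear")])
```

The resulting transition probabilities can then be treated as edge weights of a directed network, which is the kind of structure the group-level comparisons (e.g., neutral-to-fear versus neutral-to-sadness shifts) would operate on.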

  • Article type: Journal Article
    BACKGROUND: Anxiety in university students can lead to poor academic performance and even dropout. The Adult Manifest Anxiety Scale (AMAS-C) is a validated measure designed to assess the level and nature of anxiety in college students.
    OBJECTIVE: The aim of this study is to provide internet-based alternatives to the AMAS-C for the automated identification and prediction of anxiety in young university students. Two anxiety prediction methods, one based on facial emotion recognition and the other on text emotion recognition, are described and validated using the AMAS-C Test Anxiety, Lie, and Total Anxiety scales as ground-truth data.
    METHODS: The first method analyses facial expressions, identifying the six basic emotions (anger, disgust, fear, happiness, sadness, surprise) and the neutral expression, while the students complete a technical skills test. The second method examines emotions in posts classified as positive, negative, and neutral in the students' profiles on the social network Facebook. Both approaches aim to predict the presence of anxiety.
    RESULTS: Both methods achieved a high level of precision in predicting anxiety and proved to be effective in identifying anxiety disorders in relation to the AMAS-C validation tool. Text analysis-based prediction showed a slight advantage in precision (86.84%) over face analysis-based prediction (84.21%).
    CONCLUSIONS: The applications developed can help educators, psychologists, or relevant institutions to identify at an early stage those students who are likely to fail academically at university due to an anxiety disorder.
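The precision figures reported above can, in principle, be reproduced from binary predictions scored against the AMAS-C ground truth. A minimal sketch with made-up labels (not the study's data):

```python
# Illustrative binary anxiety labels: 1 = anxious according to AMAS-C
# (ground truth); predictions come from a hypothetical classifier.
truth = [1, 0, 1, 1, 0, 1, 0, 1]
pred  = [1, 0, 1, 0, 0, 1, 1, 1]

# Precision = true positives / (true positives + false positives).
tp = sum(t == 1 and p == 1 for t, p in zip(truth, pred))
fp = sum(t == 0 and p == 1 for t, p in zip(truth, pred))
precision = tp / (tp + fp)
print(f"precision = {precision:.2%}")
```

With real data one would also report recall, since precision alone says nothing about anxious students the classifier missed.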

  • Article type: Journal Article
    This study aims to explore AI-assisted emotion assessment in infants aged 6-11 months during complementary feeding, using OpenFace to analyze the Action Units (AUs) within the Facial Action Coding System. Infants (n = 98) were exposed to a diverse range of food groups (meat, cow's milk, vegetable, grain, and dessert products, as well as favorite and disliked foods), and video recordings were then analyzed for emotional responses to these food groups, including surprise, sadness, happiness, fear, anger, and disgust. Time-averaged filtering was applied to the intensities of the AUs. Facial expressions in response to different food groups were compared with neutral states using the Wilcoxon signed-rank test. The majority of the food groups did not differ significantly from the neutral emotional state. Infants exhibited high disgust responses to meat and anger reactions to yogurt compared to neutral. Emotional responses also varied between breastfed and non-breastfed infants. Breastfed infants showed heightened negative emotions, including fear, anger, and disgust, when exposed to certain food groups, while non-breastfed infants displayed lower surprise and sadness reactions to their favorite foods and desserts. Further longitudinal research is needed to gain a comprehensive understanding of infants' emotional experiences and their associations with feeding behaviors and food acceptance.
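The paired comparison against a neutral baseline described above could be run with `scipy.stats.wilcoxon`. The AU intensities below are synthetic (generated for illustration); only the choice of test follows the abstract.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
# Hypothetical time-averaged AU09 ("nose wrinkler", associated with disgust)
# intensities for 20 infants: neutral baseline vs. the meat exposure.
neutral = rng.uniform(0.0, 0.5, size=20)
meat = neutral + rng.uniform(0.0, 1.0, size=20)  # shifted upward for illustration

# Paired signed-rank test on the per-infant differences.
stat, p = wilcoxon(meat, neutral)
print(f"W = {stat:.1f}, p = {p:.2g}")
```

Because the synthetic "meat" values are uniformly higher than baseline, the test is significant here by construction; with real AU data the differences would of course be noisier.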

  • Article type: Observational Study
    OBJECTIVE: The present study was conducted to evaluate the use of computer-aided facial expression analysis to assess postoperative pain in children.
    METHODS: This was a methodological observational study. The study population consisted of patients aged 7-18 years who underwent surgery in the pediatric surgery clinic of a university hospital. The study sample consisted of 83 children who agreed to participate and met the sample selection criteria. Data were collected by the researcher using the Wong-Baker FACES Pain Rating Scale and the Visual Analog Scale. Data were collected from the child, mother, nurse, and one external observer. Facial action units associated with pain were used for machine estimation. OpenFace was used to analyze the child's facial action units, and Python was used for the machine learning algorithms. The intraclass correlation coefficient was used for statistical analysis of the data.
    RESULTS: The pain score predicted by the machine and the pain score assessments of the child, mother, nurse, and observer were compared. The pain assessment closest to the self-reported pain score by the child was in the order of machine prediction, mother, and nurse.
    CONCLUSIONS: The machine learning method used in pain assessment in children performed well in estimating pain severity. It can code facial expressions of children's pain and reliably measure pain-related facial action units from video recordings.
    CONCLUSIONS: The machine learning method for facial expression analysis assessed in this study can potentially be used as a scalable, standard, and valid pain assessment method for nurses in clinical practice.
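The intraclass correlation coefficient used for rater agreement can be computed directly. This sketch implements the one-way random-effects form ICC(1,1) on hypothetical pain scores; the abstract does not state which ICC form the study used, so that choice is an assumption here.

```python
import numpy as np

def icc_1_1(scores):
    """One-way random-effects ICC(1,1); scores is (n_subjects, k_raters)."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    subject_means = scores.mean(axis=1)
    # Between-subjects and within-subjects mean squares.
    ms_between = k * ((subject_means - grand) ** 2).sum() / (n - 1)
    ms_within = ((scores - subject_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical 0-10 pain scores for five children from four raters:
# machine prediction, mother, nurse, external observer (values illustrative).
ratings = [[6, 6, 5, 6],
           [2, 3, 2, 2],
           [8, 7, 8, 9],
           [4, 4, 5, 4],
           [1, 1, 2, 1]]
icc = icc_1_1(ratings)
print(f"ICC(1,1) = {icc:.3f}")
```

Values near 1 indicate that most of the score variance lies between children rather than between raters, i.e., high agreement.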

  • Article type: Journal Article
    Emotion recognition is a significant issue in many sectors that use human emotional reactions as communication for marketing, technological equipment, or human-robot interaction. Realistic facial behavior in social robots and artificial agents is still a challenge, limiting their emotional credibility in dyadic face-to-face situations with humans. One obstacle is the lack of appropriate training data on how humans typically interact in such settings. This article focuses on collecting the facial behavior of 60 participants to create a new type of dyadic emotion reaction database. For this purpose, we propose a methodology that automatically captures the facial expressions of participants via webcam while they are engaged with other people (facial videos) in emotionally primed contexts. The data were then analyzed using three different Facial Expression Analysis (FEA) tools: iMotions, the Mini-Xception model, and the Py-Feat FEA toolkit. Although the emotion reactions were reported as genuine, the comparative analysis of the aforementioned models did not converge on a single emotion reaction prediction. Based on this result, a more robust and effective model for emotion reaction prediction is needed. The relevance of this work for human-computer interaction studies lies in its novel approach to developing adaptive behaviors for synthetic human-like beings (virtual or robotic), allowing them to simulate human facial interaction behavior in contextually varying dyadic situations with humans. This article should be useful for researchers using human emotion analysis when deciding on a suitable methodology to collect facial expression reactions in a dyadic setting.
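Disagreement between FEA tools, as reported above, can be quantified as the fraction of frames on which all tools emit the same label. The tool names come from the article, but the per-frame labels below are invented for illustration.

```python
# Hypothetical per-frame emotion labels from three FEA tools for one clip.
predictions = {
    "iMotions":      ["happiness", "surprise", "neutral", "happiness"],
    "Mini-Xception": ["happiness", "fear",     "neutral", "surprise"],
    "Py-Feat":       ["surprise",  "surprise", "neutral", "happiness"],
}

def frame_agreement(preds):
    """Fraction of frames on which every tool emits the same label."""
    frames = list(zip(*preds.values()))          # one tuple of labels per frame
    unanimous = sum(len(set(f)) == 1 for f in frames)
    return unanimous / len(frames)

print(f"unanimous agreement: {frame_agreement(predictions):.0%}")
```

A chance-corrected statistic such as Fleiss' kappa would be the usual next step, since raw agreement inflates with few emotion categories.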

  • Article type: Journal Article
    Artificial intelligence (AI) has grown tremendously in the past decade. The application of AI in teledentistry can reform the way dental care, dental education, research, and subsequent innovations can happen remotely. Machine learning, including deep learning-based algorithms, can be developed to create predictive models of risk assessment for oral health-related conditions, consequent complications, and patient stratification. Patients can be empowered to self-diagnose and apply preventive measures or self-manage some early stages of dental diseases. Applications of AI in teledentistry can be beneficial for both the dental surgeon and the patient. AI enables better remote screening, diagnosis, record keeping, triaging, and monitoring of dental patients based on smart devices. This will take away rudimentary cases requiring run-of-the-mill treatments from dentists and enable them to concentrate on highly complex cases. This would also enable dentists to serve a larger and deprived population in inaccessible areas. Its usage in teledentistry can bring a paradigm shift from a curative to a preventive, personalised approach in dentistry. A strong asset to teledentistry could be a robust and comprehensive feedback mechanism routed through the various channels proposed in this paper. This paper discusses the application of AI in teledentistry and proposes a feedback mechanism to enhance performance in teledentistry.

  • Article type: Journal Article
    This article presents an analysis of the main Spanish political candidates for the elections to be held in April 2019. The analysis focuses on Facial Expression Analysis (FEA), a technique widely used in neuromarketing research. It allows the identification of micro-expressions, which are very brief and involuntary; they are signals of hidden emotions that cannot be controlled voluntarily. The video with the final interventions of every candidate was post-processed using the classification algorithms provided by iMotions' AFFDEX platform. We then analyzed these data. First, we identified and compared the basic emotions shown by each politician. Second, we associated the basic emotions with specific moments of the candidate's speech, identifying the topics they address and relating them directly to the expressed emotion. Third, we analyzed whether the differences shown by each candidate in every emotion are statistically significant. To this end, we applied the non-parametric chi-squared goodness-of-fit test. We also used ANOVA to test whether, on average, there are differences between the candidates. Finally, we checked whether the results provided by different surveys from the main media in Spain regarding the evaluation of the debate are consistent with those obtained in our empirical analysis. A predominance of negative emotions was observed. Some inconsistencies were found between the emotion expressed in the facial expression and the verbal content of the message. The statistical evidence also confirms that the differences observed between the various candidates with respect to the basic emotions are, on average, statistically significant. In this sense, this article provides a methodological contribution to the analysis of public figures' communication, which could help politicians improve the effectiveness of their messages by identifying and evaluating the intensity of the expressed emotions.
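The two tests mentioned (chi-squared goodness-of-fit and ANOVA) can be sketched with `scipy.stats`; the emotion counts and intensities below are hypothetical, not the article's data.

```python
from scipy.stats import chisquare, f_oneway

# Hypothetical counts of video frames classified into each basic emotion for
# one candidate (anger, disgust, fear, joy, sadness, surprise, contempt).
observed = [120, 80, 40, 30, 150, 25, 55]

# Goodness-of-fit against a uniform distribution over the seven emotions
# (chisquare defaults to equal expected frequencies).
chi2, p_chi2 = chisquare(observed)
print(f"chi2 = {chi2:.1f}, p = {p_chi2:.2g}")

# One-way ANOVA on per-segment negative-emotion intensity for three
# candidates (intensities are illustrative).
a = [0.61, 0.55, 0.70, 0.66]
b = [0.32, 0.41, 0.38, 0.35]
c = [0.58, 0.52, 0.49, 0.63]
f_stat, p_anova = f_oneway(a, b, c)
print(f"F = {f_stat:.1f}, p = {p_anova:.2g}")
```

A small chi-squared p-value rejects the hypothesis that a candidate shows all emotions equally often; a small ANOVA p-value indicates the candidates differ in mean intensity.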

  • Article type: Journal Article
    The increase in digitalization, software applications, and computing power has widened the variety of tools with which to collect and analyze sensory data. As these changes continue to take place, the new skills required of sensory professionals need to be examined. The aim of this study was to answer the following questions: (a) How did sensory professionals perceive the opportunities to utilize facial expression analysis in sensory evaluation work? (b) What skills did the sensory professionals describe they needed when utilizing facial expression analysis? Twenty-two sensory professionals from various food companies and universities were interviewed using semi-structured thematic interviews to map development intentions from facial expression recognition data and to describe the established skills that were needed. Participants' facial expressions were first elicited by an odor sample during a sensory evaluation task. The evaluation was video-recorded to characterize the facial expression software response (FaceReader™). The participants were interviewed regarding their opinions of the data analysis the software produced. The study findings demonstrate how using facial expression analysis involves personal and field-specific perspectives. Recognizability, associativity, reflectivity, reliability, and suitability were perceived as personal perspectives. From the field-specific perspective, professionals considered the received data valuable only if they had the skills to interpret and utilize it. There is a need for more training not only in IT, mathematics, statistics, and problem-solving, but also in skills related to self-management and ethical responsibility.

  • Article type: Journal Article
    No abstract available.