Natural Language Processing (NLP)

  • Article Type: Journal Article
    Are members of marginalized communities silenced on social media when they share personal experiences of racism? Here, we investigate the role of algorithms, humans, and platform guidelines in suppressing disclosures of racial discrimination. In a field study of actual posts from a neighborhood-based social media platform, we find that when users talk about their experiences as targets of racism, their posts are disproportionately flagged for removal as toxic by five widely used moderation algorithms from major online platforms, including the most recent large language models. We show that human users disproportionately flag these disclosures for removal as well. Next, in a follow-up experiment, we demonstrate that merely witnessing such suppression negatively influences how Black Americans view the community and their place in it. Finally, to address these challenges to equity and inclusion in online spaces, we introduce a mitigation strategy: a guideline-reframing intervention that is effective at reducing silencing behavior across the political spectrum.

  • Article Type: Journal Article
    OBJECTIVE: To compare the performance of large language model (LLM) based Gemini and Generative Pre-trained Transformers (GPTs) in data mining and generating structured reports based on free-text PET/CT reports for breast cancer after user-defined tasks.
    METHODS: Breast cancer patients (mean age, 50 years ± 11 [SD]; all female) who underwent consecutive 18F-FDG PET/CT for follow-up between July 2005 and October 2023 were retrospectively included in the study. A total of twenty reports from 10 patients were used to train user-defined text prompts for Gemini and GPTs, by which structured PET/CT reports were generated. The natural language processing (NLP) generated structured reports and the structured reports annotated by nuclear medicine physicians were compared in terms of data extraction accuracy and capacity of progress decision-making. Statistical methods, including chi-square test, McNemar test and paired samples t-test, were employed in the study.
    RESULTS: The structured PET/CT reports for 131 patients were generated using the two NLP techniques, Gemini and GPTs. Overall, GPTs exhibited superiority over Gemini in data mining in terms of primary lesion size (89.6% vs. 53.8%, p < 0.001) and metastatic lesions (96.3% vs. 89.6%, p < 0.001). Moreover, GPTs outperformed Gemini in progression decision-making (p < 0.001) and in semantic similarity (F1 score 0.930 vs. 0.907, p < 0.001) for reports.
    CONCLUSIONS: GPTs outperformed Gemini in generating structured reports from free-text PET/CT reports, which could potentially be applied in clinical practice.
    DATA AVAILABILITY: The data used and/or analyzed during the current study are available from the corresponding author on reasonable request.
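The paired design this abstract describes (two extractors evaluated on the same cases, compared with a McNemar test) can be sketched in pure Python. The correctness flags below are illustrative stand-ins, not the study's data, and the continuity-corrected statistic is one common variant of the test.

```python
# Hedged sketch: McNemar test on paired per-case correctness of two
# extraction models, as in the study's paired comparison design.

def mcnemar_statistic(model_a_correct, model_b_correct):
    """Continuity-corrected McNemar chi-square from paired correctness flags."""
    b = sum(a and not bb for a, bb in zip(model_a_correct, model_b_correct))
    c = sum(bb and not a for a, bb in zip(model_a_correct, model_b_correct))
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)

def accuracy(correct_flags):
    """Fraction of cases the model extracted correctly."""
    return sum(correct_flags) / len(correct_flags)
```

Only the discordant pairs (cases where exactly one model is correct) contribute to the statistic, which is what makes the test appropriate for two models scored on the same reports.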

  • Article Type: Journal Article
    Extended reality (XR) simulations are becoming increasingly common in educational settings, particularly in medical education. Advancing XR devices to enhance these simulations is a booming field of research. This study seeks to understand the value of a novel, non-wearable mixed reality (MR) display during interactions with a simulated holographic patient, specifically in taking a medical history. Twenty-one first-year medical students at the University of North Carolina at Chapel Hill participated in the virtual patient (VP) simulations. On a five-point Likert scale, students overwhelmingly agreed with the statement that the simulations helped ensure they were progressing along learning objectives related to taking a patient history. However, they found that, at present, the simulations can only partially correct mistakes or provide clear feedback. This finding demonstrates that the novel hardware solution can help students engage in the activity, but the underlying software may need adjustment to attain sufficient pedagogical validity.

  • Article Type: Journal Article
    Electronic Health Records (EHRs) contain a wealth of unstructured patient data, making it challenging for physicians to make informed decisions. In this paper, we introduce a Natural Language Processing (NLP) approach for the extraction of therapies, diagnoses, and symptoms from ambulatory EHRs of patients with chronic Lupus disease. We aim to demonstrate a comprehensive pipeline in which a rule-based system is combined with text segmentation, transformer-based topic analysis, and a clinical ontology to enhance text preprocessing and automate rule identification. Our approach is applied to a sub-cohort of 56 patients, with a total of 750 EHRs written in Italian, achieving an accuracy above 97% and an F-score above 90% across the three extracted domains. This work has the potential to be integrated with EHR systems to automate information extraction, minimizing human intervention and providing personalized digital solutions in the chronic Lupus disease domain.
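A rule-based extraction pass of the kind this pipeline combines with segmentation and ontologies can be sketched with simple patterns. The patterns, domain names, and English example note below are illustrative assumptions; the study worked on Italian EHRs with richer, ontology-backed rules.

```python
import re

# Hedged sketch: pattern rules pull therapy, diagnosis, and symptom
# mentions from free-text notes, one rule set per extracted domain.
RULES = {
    "therapy": re.compile(
        r"\b(?:treated with|therapy with|started on)\s+([a-z]+)", re.I),
    "diagnosis": re.compile(
        r"\b(?:diagnosed with|diagnosis of)\s+([a-z ]+?)(?:[.,;]|$)", re.I),
    "symptom": re.compile(
        r"\b(?:complains of|reports|presents with)\s+([a-z ]+?)(?:[.,;]|$)", re.I),
}

def extract(note):
    """Return {domain: [mentions]} for each rule set applied to the note."""
    return {domain: pattern.findall(note) for domain, pattern in RULES.items()}
```

In a full system each matched mention would then be normalized against a clinical ontology rather than kept as raw text.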

  • Article Type: Journal Article
    Neurolinguistic assessments play a vital role in neurological examinations, revealing a wide range of language and communication impairments associated with developmental disorders and acquired neurological conditions. Yet, a thorough neurolinguistic assessment is time-consuming and laborious and takes valuable resources from other tasks. To empower clinicians, healthcare providers, and researchers, we have developed Open Brain AI (OBAI). The aim of this computational platform is twofold. First, it aims to provide advanced AI tools to facilitate spoken and written language analysis, automate the analysis process, and reduce the workload associated with time-consuming tasks. The platform currently incorporates multilingual tools for English, Danish, Dutch, Finnish, French, German, Greek, Italian, Norwegian, Polish, Portuguese, Romanian, Russian, Spanish, and Swedish. The tools involve models for (i) audio transcription, (ii) automatic translation, (iii) grammar error correction, (iv) transcription to the International Phonetic Alphabet, (v) readability scoring, (vi) phonology, morphology, syntax, semantic measures (e.g., counts and proportions), and lexical measures. Second, it aims to support clinicians in conducting their research and automating everyday tasks with "OBAI Companion," an AI language assistant that facilitates language processing, such as structuring, summarizing, and editing texts. OBAI also provides tools for automating spelling and phonology scoring. This paper reviews OBAI's underlying architectures and applications and shows how OBAI can help professionals focus on higher-value activities, such as therapeutic interventions.

  • Article Type: Journal Article
    BACKGROUND: Medical decision-making is crucial for effective treatment, especially in psychiatry where diagnosis often relies on subjective patient reports and a lack of high-specificity symptoms. Artificial intelligence (AI), particularly Large Language Models (LLMs) like GPT, has emerged as a promising tool to enhance diagnostic accuracy in psychiatry. This comparative study explores the diagnostic capabilities of several AI models, including Aya, GPT-3.5, GPT-4, GPT-3.5 clinical assistant (CA), Nemotron, and Nemotron CA, using clinical cases from the DSM-5.
    METHODS: We curated 20 clinical cases from the DSM-5 Clinical Cases book, covering a wide range of psychiatric diagnoses. Four advanced AI models (GPT-3.5 Turbo, GPT-4, Aya, Nemotron) were tested using prompts to elicit detailed diagnoses and reasoning. The models' performances were evaluated based on accuracy and quality of reasoning, with additional analysis using the Retrieval Augmented Generation (RAG) methodology for models accessing the DSM-5 text.
    RESULTS: The AI models showed varied diagnostic accuracy, with GPT-3.5 and GPT-4 performing notably better than Aya and Nemotron in terms of both accuracy and reasoning quality. While models struggled with specific disorders such as cyclothymic and disruptive mood dysregulation disorders, others excelled, particularly in diagnosing psychotic and bipolar disorders. Statistical analysis highlighted significant differences in accuracy and reasoning, emphasizing the superiority of the GPT models.
    DISCUSSION: The application of AI in psychiatry offers potential improvements in diagnostic accuracy. The superior performance of the GPT models can be attributed to their advanced natural language processing capabilities and extensive training on diverse text data, enabling more effective interpretation of psychiatric language. However, models like Aya and Nemotron showed limitations in reasoning, indicating a need for further refinement in their training and application.
    CONCLUSIONS: AI holds significant promise for enhancing psychiatric diagnostics, with certain models demonstrating high potential in interpreting complex clinical descriptions accurately. Future research should focus on expanding the dataset and integrating multimodal data to further enhance the diagnostic capabilities of AI in psychiatry.

  • Article Type: Journal Article
    The Languages of the Indian subcontinent are less represented in current NLP literature. To mitigate this gap, we present the IndicDialogue dataset, which contains subtitles and dialogues in 10 major Indic languages: Hindi, Bengali, Marathi, Telugu, Tamil, Urdu, Odia, Sindhi, Nepali, and Assamese. This dataset is sourced from OpenSubtitles.org, with subtitles pre-processed to remove irrelevant tags, timestamps, square brackets, and links, ensuring the retention of relevant dialogues in JSONL files. The IndicDialogue dataset comprises 7750 raw subtitle files (SRT), 11 JSONL files, 6,853,518 dialogues, and 42,188,569 words. It is designed to serve as a foundation for language model pre-training for low-resource languages, enabling a wide range of downstream tasks including word embeddings, topic modeling, conversation synthesis, neural machine translation, and text summarization.
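The preprocessing the dataset describes (dropping SRT cue numbers, timestamps, tags, bracketed annotations, and links, then keeping dialogue lines as JSONL) can be sketched as below. The exact cleaning rules the IndicDialogue authors applied are assumptions; this is a minimal stand-in.

```python
import json
import re

# Hedged sketch of SRT-to-JSONL cleaning: keep only dialogue text.
TIMESTAMP = re.compile(r"\d{2}:\d{2}:\d{2},\d{3}\s*-->\s*\d{2}:\d{2}:\d{2},\d{3}")
TAG = re.compile(r"<[^>]+>")           # markup such as <i>...</i>
BRACKETS = re.compile(r"\[[^\]]*\]")   # annotations such as [music]
LINK = re.compile(r"https?://\S+|www\.\S+")

def srt_to_dialogues(srt_text):
    """Drop cue numbers, timestamps, tags, brackets, links; keep dialogue."""
    dialogues = []
    for line in srt_text.splitlines():
        line = line.strip()
        if not line or line.isdigit() or TIMESTAMP.search(line):
            continue
        line = LINK.sub("", BRACKETS.sub("", TAG.sub("", line)))
        line = re.sub(r"\s+", " ", line).strip()
        if line:
            dialogues.append(line)
    return dialogues

def to_jsonl(dialogues):
    """One JSON object per dialogue line; ensure_ascii=False keeps Indic scripts."""
    return "\n".join(
        json.dumps({"dialogue": d}, ensure_ascii=False) for d in dialogues)
```

`ensure_ascii=False` matters for a corpus like this one, since it keeps Devanagari, Bengali, and other Indic scripts readable in the JSONL output instead of escaping them.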

  • Article Type: Journal Article
    BACKGROUND: Despite the significance and prevalence of acute respiratory distress syndrome (ARDS), its detection remains highly variable and inconsistent. In this work, we aim to develop an algorithm (ARDSFlag) to automate the diagnosis of ARDS based on the Berlin definition. We also aim to develop a visualization tool that helps clinicians efficiently assess ARDS criteria.
    METHODS: ARDSFlag applies machine learning (ML) and natural language processing (NLP) techniques to evaluate Berlin criteria by incorporating structured and unstructured data in an electronic health record (EHR) system. The study cohort includes 19,534 ICU admissions in the Medical Information Mart for Intensive Care III (MIMIC-III) database. The output is the ARDS diagnosis, onset time, and severity.
    RESULTS: ARDSFlag includes separate text classifiers trained using large training sets to find evidence of bilateral infiltrates in radiology reports (accuracy of 91.9%±0.5%) and heart failure/fluid overload in radiology reports (accuracy 86.1%±0.5%) and echocardiogram notes (accuracy 98.4%±0.3%). A test set of 300 cases, which was blindly and independently labeled for ARDS by two groups of clinicians, shows that ARDSFlag generates an overall accuracy of 89.0% (specificity = 91.7%, recall = 80.3%, and precision = 75.0%) in detecting ARDS cases.
    CONCLUSIONS: To the best of our knowledge, this is the first study to focus on developing a method to automate the detection of ARDS. Some studies have developed and used other methods to answer other research questions. As expected, ARDSFlag achieves significantly higher performance in all accuracy measures compared with those methods.
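The evaluation metrics reported for ARDSFlag (accuracy, specificity, recall, precision) follow directly from the confusion matrix of gold versus predicted labels. The sketch below uses illustrative labels, not the study's 300-case test set.

```python
# Hedged sketch: confusion-matrix metrics for a binary detector such as
# ARDSFlag, computed from paired gold/predicted labels.

def confusion(gold, pred):
    """True/false positives and negatives from paired boolean labels."""
    tp = sum(g and p for g, p in zip(gold, pred))
    tn = sum(not g and not p for g, p in zip(gold, pred))
    fp = sum(not g and p for g, p in zip(gold, pred))
    fn = sum(g and not p for g, p in zip(gold, pred))
    return tp, tn, fp, fn

def metrics(gold, pred):
    tp, tn, fp, fn = confusion(gold, pred)
    return {
        "accuracy": (tp + tn) / len(gold),
        "specificity": tn / (tn + fp),
        "recall": tp / (tp + fn),
        "precision": tp / (tp + fp),
    }
```

Reporting all four together, as the study does, is useful for a condition like ARDS where positives are a minority of admissions and accuracy alone can mask poor recall.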

  • Article Type: Journal Article
    Background Clinical trial matching, essential for advancing medical research, involves detailed screening of potential participants to ensure alignment with specific trial requirements. Research staff face challenges due to the high volume of eligible patients and the complexity of varying eligibility criteria. The traditional manual process, both time-consuming and error-prone, often leads to missed opportunities. Recently, large language models (LLMs), specifically generative pre-trained transformers (GPTs), have become impressive and impactful tools. Utilizing such tools from artificial intelligence (AI) and natural language processing (NLP) may enhance the accuracy and efficiency of this process through automated patient screening against established criteria. Methods Utilizing data from the National NLP Clinical Challenges (n2c2) 2018 Challenge, we used 202 longitudinal patient records. These records were annotated by medical professionals and evaluated against 13 selection criteria encompassing various health assessments. Our approach involved embedding medical documents into a vector database to determine relevant document sections and then using an LLM (OpenAI's GPT-3.5 Turbo and GPT-4) in tandem with structured and chain-of-thought prompting techniques for systematic document assessment against the criteria. Misclassified criteria were also examined to identify classification challenges. Results This study achieved an accuracy of 0.81, sensitivity of 0.80, specificity of 0.82, and a micro F1 score of 0.79 using GPT-3.5 Turbo, and an accuracy of 0.87, sensitivity of 0.85, specificity of 0.89, and a micro F1 score of 0.86 using GPT-4. Notably, some criteria in the ground truth appeared mislabeled, an issue we couldn't explore further due to insufficient label-generation guidelines on the website. Conclusion Our findings underscore the potential of AI and NLP technologies, including LLMs, in the clinical trial matching process. The study demonstrated strong capabilities in identifying eligible patients and minimizing false inclusions. Such automated systems promise to alleviate the workload of research staff and improve clinical trial enrollment, thus accelerating the process and enhancing the overall feasibility of clinical research. Further work is needed to determine the potential of this approach when implemented on real clinical data.
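The retrieval step this study describes (scoring record sections against an eligibility criterion to find the relevant passage before asking the LLM) can be sketched as follows. The paper used dense embeddings in a vector database; a bag-of-words cosine similarity stands in here so the sketch stays self-contained, and the example texts are invented.

```python
import math
from collections import Counter

# Hedged sketch: pick the record section most relevant to a criterion.
def bow(text):
    """Bag-of-words term counts for a text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def most_relevant_section(criterion, sections):
    """Section with the highest similarity to the eligibility criterion."""
    cvec = bow(criterion)
    return max(sections, key=lambda s: cosine(cvec, bow(s)))
```

In the full pipeline, the retrieved section and the criterion would then be placed into a structured, chain-of-thought prompt for the LLM to judge eligibility.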

  • Article Type: Journal Article
    BACKGROUND: Individuals at clinical high risk (CHR) for psychosis experience subtle emotional disturbances that are traditionally difficult to assess, but natural language processing (NLP) methods may provide novel insight into these symptoms. We predicted that CHR individuals would express more negative emotionality and less emotional language compared with controls. We also examined associations with symptomatology.
    METHODS: Participants included 49 CHR individuals and 42 healthy controls who completed a semi-structured narrative interview. Interview transcripts were analyzed using Linguistic Inquiry and Word Count (LIWC) to assess the emotional tone of the language (tone: the ratio of negative to positive language) and to count the positive/negative words used. Participants also completed clinical symptom assessments to determine CHR status and characterize symptoms (i.e., positive and negative symptom domains).
    RESULTS: The CHR group had a more negative emotional tone compared with healthy controls (t = 2.676, p = .009), which was related to more severe positive symptoms (r2 = .323, p = .013). The percentages of positive and negative words did not differ between groups (ps > .05).
    CONCLUSIONS: Language analyses provided accessible, ecologically valid insight into affective dysfunction and psychosis-risk symptoms. Natural language processing analyses unmasked differences in CHR language, capturing tendencies more nuanced than word choice alone.
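The tone measure this study computes (proportions of positive and negative words and their ratio) can be sketched with dictionary lookups. The tiny word lists below are illustrative stand-ins for the proprietary LIWC dictionaries, and the example transcript is invented.

```python
# Hedged sketch: LIWC-style word counting for emotional tone.
# POSITIVE/NEGATIVE are toy lists, not the actual LIWC dictionaries.
POSITIVE = {"happy", "hope", "calm", "good", "love"}
NEGATIVE = {"sad", "afraid", "angry", "bad", "alone"}

def tone(transcript):
    """Positive/negative word proportions and the negative-to-positive ratio."""
    words = transcript.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return {
        "pos_pct": pos / len(words),
        "neg_pct": neg / len(words),
        "neg_to_pos": neg / pos if pos else float("inf"),
    }
```

Real LIWC also handles word stems and many more categories; the point here is only the shape of the computation behind the reported tone scores.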
