automated detection

  • Article type: Journal Article
    BACKGROUND: Neonatal seizures are diagnostically challenging and predominantly electrographic-only. Multichannel video continuous electroencephalography (cEEG) is the gold-standard investigation; however, out-of-hours access to neurophysiology support can be limited. Automated seizure detection algorithms (SDAs) are designed to detect changes in EEG data and translate them into user-friendly seizure probability trends. The aim of this study was to evaluate the diagnostic accuracy of the Persyst neonatal SDA in an intensive care setting.
    METHODS: Single-centre retrospective service evaluation study of neonates undergoing cEEG during intensive care admission to Great Ormond Street Hospital (GOSH) between May 2019 and December 2022. Neonates with a corrected gestational age of <44 weeks who had a cEEG recording lasting >60 minutes whilst inpatients in intensive care were included in the study. One-hour cEEG clips were created for all cases (seizures detected) and controls (seizure-free) and analysed by the Persyst neonatal SDA. Expert neurophysiology reports of the cEEG recordings were used as the gold standard for diagnostic comparison. A receiver operating characteristic (ROC) curve was created using the highest seizure probability in each recording. Optimal seizure probability thresholds for sensitivity and specificity were identified.
    RESULTS: Eligibility screening produced 49 cases and 49 seizure-free controls. Seizure prevalence among patients eligible for the study was approximately 19%, with 35% mortality. The most common case seizure aetiology was hypoxic ischaemic injury (35%), followed by inborn errors of metabolism (18%). The ROC area under the curve was 0.94, with optimal probability thresholds of 0.4 and 0.6. Applying a threshold of 0.6 produced 80% sensitivity and 98% specificity.
    CONCLUSIONS: The Persyst neonatal SDA demonstrates high diagnostic accuracy in identifying neonatal seizures, comparable to the accuracy of the standard Persyst SDA in adult populations, other neonatal SDAs, and amplitude-integrated EEG (aEEG). Overdiagnosis of seizures is a risk, particularly from cEEG recording artefact. To fully examine its clinical utility, further investigation of the Persyst neonatal SDA's accuracy is required, as well as confirmation of the optimal seizure probability thresholds in a larger patient cohort.
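    The threshold analysis described in this abstract (an ROC curve built from per-recording maximum seizure probabilities, then an optimal operating point) can be sketched in plain Python. The probability values below and the use of Youden's J as the selection criterion are illustrative assumptions, not the study's data or stated method:

```python
# Hedged sketch: ROC AUC and an "optimal" probability threshold from
# per-recording maximum SDA seizure probabilities. All values below are
# invented for illustration; Youden's J is an assumed selection rule.

def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen case scores higher than a randomly chosen control
    (ties count as 0.5)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

def best_threshold(scores_pos, scores_neg):
    """Threshold maximizing Youden's J = sensitivity + specificity - 1."""
    best_t, best_j = None, -1.0
    for t in sorted(set(scores_pos + scores_neg)):
        sens = sum(p >= t for p in scores_pos) / len(scores_pos)
        spec = sum(n < t for n in scores_neg) / len(scores_neg)
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Illustrative per-recording maximum seizure probabilities.
cases = [0.9, 0.8, 0.7, 0.65, 0.3]       # seizures detected
controls = [0.2, 0.1, 0.55, 0.15, 0.05]  # seizure-free

auc = roc_auc(cases, controls)
threshold, youden_j = best_threshold(cases, controls)
```

    A hypothetical case and control would then be classified by comparing its maximum probability against the chosen threshold, mirroring how the 0.6 cut-off is applied in the study.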

  • Article type: Journal Article
    The pretreatment steps of current rapid detection methods for mycotoxins in edible oils not only restrict detection efficiency but also produce organic waste liquid that pollutes the environment. In this work, a pretreatment-free and eco-friendly rapid detection method for edible oil is established. The proposed method does not require a pretreatment operation, and automated quantitative detection can be achieved by directly adding oil samples. According to the polarity of the target molecules, the surfactant content in the reaction solutions can be adjusted to achieve quantitative detection of AFB1 in peanut oil and ZEN in corn oil. Recoveries are between 96.5% and 110.7% with standard deviations <10.4%, and the limit of detection is 0.17 μg/kg for AFB1 and 4.91 μg/kg for ZEN. The method realizes full automation of whole-chain detection, i.e. sample in, result out, and is suitable for the on-site detection of batches of edible oil samples.

  • Article type: Journal Article
    BACKGROUND: Learning to perform strabismus surgery is an essential aspect of ophthalmologists' surgical training. An automated classification strategy for surgical steps could improve the effectiveness of training curricula and the efficient evaluation of residents' performance. To this end, we aimed to develop and validate a deep learning (DL) model for automated detection of strabismus surgery steps in videos.
    METHODS: In this study, we gathered 479 strabismus surgery videos from Shanghai Children's Hospital, affiliated to Shanghai Jiao Tong University School of Medicine, spanning July 2017 to October 2021. The videos were manually cut into 3345 clips of the eight strabismus surgical steps based on the International Council of Ophthalmology's Ophthalmology Surgical Competency Assessment Rubrics (ICO-OSCAR: strabismus). The video dataset was randomly split at the eye level into training (60%), validation (20%) and testing (20%) datasets. We evaluated two hybrid DL algorithms: a recurrent neural network (RNN)-based model and a Transformer-based model. The evaluation metrics included accuracy, area under the receiver operating characteristic curve (AUC), precision, recall and F1-score.
    RESULTS: The DL models identified the steps in video clips of strabismus surgery, achieving a macro-average AUC of 1.00 (95% CI 1.00-1.00) with the Transformer-based model and 0.98 (95% CI 0.97-1.00) with the RNN-based model. The Transformer-based model yielded higher accuracy than the RNN-based model (0.96 vs. 0.83, p < 0.001). In detecting the different steps of strabismus surgery, the predictive ability of the Transformer-based model was better than that of the RNN. Precision ranged between 0.90 and 1 for the Transformer-based model and 0.75 to 0.94 for the RNN-based model. The F1-score ranged between 0.93 and 1 for the Transformer-based model and 0.78 to 0.92 for the RNN-based model.
    CONCLUSIONS: DL models can automatically identify the video steps of strabismus surgery with high accuracy, and Transformer-based algorithms show excellent performance when modeling the spatiotemporal features of video frames.
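    As a hedged illustration of how the per-step precision, recall and F1-scores reported above are conventionally computed from predicted versus reference step labels (the step names and label sequences below are invented examples, not the study's data):

```python
# Illustrative sketch: per-step precision/recall/F1 and the macro-average
# F1, computed from true vs. predicted surgical-step labels per clip.
# The step names and label sequences are invented for illustration.

def per_class_metrics(y_true, y_pred):
    """Return {step: (precision, recall, f1)} over all observed steps."""
    classes = sorted(set(y_true) | set(y_pred))
    scores = {}
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        scores[c] = (precision, recall, f1)
    return scores

# Hypothetical reference vs. predicted step labels for six clips.
y_true = ["incision", "incision", "suture", "suture", "recession", "recession"]
y_pred = ["incision", "suture", "suture", "suture", "recession", "recession"]

scores = per_class_metrics(y_true, y_pred)
macro_f1 = sum(f1 for _, _, f1 in scores.values()) / len(scores)
```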

  • Article type: Journal Article
    Cervical auscultation is a simple, noninvasive method for diagnosing dysphagia, although the reliability of the method largely depends on the subjectivity and experience of the evaluator. Recently developed methods for the automatic detection of swallowing sounds facilitate a rough automatic diagnosis of dysphagia, but a reliable detection method specialized for the peculiar feature patterns of swallowing sounds in actual clinical conditions has not been established. We investigated a novel approach for automatically detecting swallowing sounds in which basic statistics and dynamic features were extracted from acoustic features (Mel Frequency Cepstral Coefficients and Mel Frequency Magnitude Coefficients), and an ensemble learning model combining a Support Vector Machine and a Multi-Layer Perceptron was applied. Evaluated on a swallowing-sounds database synchronized with a videofluorographic swallowing study compiled from 74 advanced-age patients with dysphagia, the proposed method demonstrated outstanding performance, achieving an F1-micro average of approximately 0.92 and an accuracy of 95.20%. The method, proven effective in the current clinical recording database, represents a significant advance in the objectivity of cervical auscultation. However, validating its efficacy on other databases is crucial for confirming its broad applicability and potential impact.

  • Article type: Journal Article
    Autonomous sensors provide opportunities to observe organisms across spatial and temporal scales that humans cannot directly observe. By processing large data streams from autonomous sensors with deep learning methods, researchers can make novel and important natural history discoveries. In this study, we combine automated acoustic monitoring with deep learning models to observe breeding-associated activity in the endangered Sierra Nevada yellow-legged frog (Rana sierrae), a behavior that current surveys do not measure. By deploying inexpensive hydrophones and developing a deep learning model to recognize breeding-associated vocalizations, we discover three undocumented R. sierrae vocalization types and find an unexpected temporal pattern of nocturnal breeding-associated vocal activity. This study exemplifies how the combination of autonomous sensor data and deep learning can shed new light on species' natural history, especially during times or in locations where human observation is limited or impossible.

  • Article type: Journal Article
    OBJECTIVE: Focal cortical dysplasia (FCD) is a common etiology of drug-resistant focal epilepsy. Visual identification of FCD is usually time-consuming and depends on personal experience. Herein, we propose an automated type II FCD detection approach utilizing multi-modal data and a 3D convolutional neural network (CNN).
    METHODS: MRI and positron emission tomography (PET) data of 82 patients with FCD were collected, including 55 (67.1%) histopathologically and 27 (32.9%) radiologically diagnosed patients. Three types of morphometric feature maps and three types of tissue maps were extracted from the T1-weighted images. These maps, together with the T1 and PET images, formed the inputs to the CNN. Five-fold cross-validation was carried out on the training set of 62 patients, and the best-performing model was chosen to detect FCD on the test set of 20 patients. Furthermore, ablation experiments were performed to estimate the value of the PET data and the CNN.
    RESULTS: On the validation set, FCD was detected in 90.3% of the cases, with an average of 1.7 possible lesions per patient. The sensitivity on the test set was 90.0%, with 1.85 possible lesions per patient. Without the PET data, the sensitivity decreased to 80.0%, and the average lesion number increased to 2.05 on the test set. If an artificial neural network replaced the CNN, the sensitivity decreased to 85.0%, and the average lesion number increased to 4.65.
    CONCLUSIONS: Automated detection of FCD with high sensitivity and few false-positive findings is feasible based on multi-modal data. PET data and CNN could improve the performance of automated detection.

  • Article type: Journal Article
    Diabetic retinopathy (DR), retinal vein occlusion (RVO), and age-related macular degeneration (AMD) pose significant global health challenges, often resulting in vision impairment and blindness. Automatic detection of these conditions is crucial, particularly in underserved rural areas with limited access to ophthalmic services. Despite remarkable advancements in artificial intelligence, especially convolutional neural networks (CNNs), their complexity can make interpretation difficult. In this study, we curated a dataset consisting of 15,089 color fundus photographs (CFPs) obtained from 8110 patients who underwent fundus fluorescein angiography (FFA) examination. The primary objective was to construct integrated models that merge CNNs with an attention mechanism. These models were designed for a hierarchical multilabel classification task, focusing on the detection of DR, RVO, AMD, and other fundus conditions. Furthermore, our approach extended to the detailed classification of DR, RVO, and AMD according to their respective subclasses. We employed a methodology that entails the translation of diagnostic information obtained from FFA results into CFPs. Our investigation focused on evaluating the models' ability to achieve precise diagnoses solely based on CFPs. Remarkably, our models showcased improvements across diverse fundus conditions, with the ConvNeXt-base + attention model standing out for its exceptional performance. The ConvNeXt-base + attention model achieved remarkable metrics, including an area under the receiver operating characteristic curve (AUC) of 0.943, a referable F1 score of 0.870, and a Cohen's kappa of 0.778 for DR detection. For RVO, it attained an AUC of 0.960, a referable F1 score of 0.854, and a Cohen's kappa of 0.819. Furthermore, in AMD detection, the model achieved an AUC of 0.959, an F1 score of 0.727, and a Cohen's kappa of 0.686. Impressively, the model demonstrated proficiency in subclassifying RVO and AMD, showcasing commendable sensitivity and specificity. Moreover, our models enhanced interpretability by visualizing attention weights on fundus images, aiding in the identification of disease findings. These outcomes underscore the substantial impact of our models in advancing the detection of DR, RVO, and AMD, offering the potential for improved patient outcomes and positively influencing the healthcare landscape.

  • Article type: Journal Article
    Intracranial hemorrhages require an immediate diagnosis to optimize patient management and outcomes, and CT is the modality of choice in the emergency setting. We aimed to evaluate the performance of the first scanner-integrated artificial intelligence algorithm to detect brain hemorrhages in a routine clinical setting. This retrospective study includes 435 consecutive non-contrast head CT scans. Automatic brain hemorrhage detection was calculated as a separate reconstruction job in all cases. The radiological report (RR) was always written by a radiology resident and finalized by a senior radiologist. Additionally, a team of two radiologists reviewed the datasets retrospectively, taking additional information such as the clinical record, course, and final diagnosis into account. This consensus reading served as the reference. Statistics were calculated for diagnostic accuracy. Brain hemorrhage detection was executed successfully in 432/435 (99%) of patient cases. The AI algorithm and the reference standard were consistent in 392 (90.7%) cases. One false-negative case was identified among the 52 positive cases. However, 39 positive detections turned out to be false positives. The diagnostic performance was calculated as a sensitivity of 98.1%, specificity of 89.7%, positive predictive value of 56.7%, and negative predictive value (NPV) of 99.7%. Scanner-integrated AI detection of brain hemorrhages is feasible and robust. The diagnostic accuracy has a high specificity and a very high negative predictive value and sensitivity. However, many false-positive findings resulted in a relatively moderate positive predictive value.
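    The reported performance figures follow directly from the counts stated in the abstract (432 analysed scans, 52 reference-positive cases, 1 false negative, 39 false positives); a minimal sketch of that arithmetic:

```python
# Worked check of the reported diagnostic accuracy, using only counts
# given in the abstract: 432 analysed scans, 52 hemorrhage-positive
# cases per the reference standard, 1 false negative, 39 false positives.

def diagnostic_accuracy(tp, fp, fn, tn):
    """Standard 2x2 confusion-matrix metrics."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),   # positive predictive value
        "npv": tn / (tn + fn),   # negative predictive value
    }

total, positives, false_neg, false_pos = 432, 52, 1, 39
tp = positives - false_neg           # 51 true positives
tn = total - positives - false_pos   # 341 true negatives
metrics = diagnostic_accuracy(tp, false_pos, false_neg, tn)
for name, value in metrics.items():
    print(f"{name}: {value:.1%}")
```

    The computed values match the abstract's 98.1% sensitivity, 89.7% specificity, 56.7% PPV and 99.7% NPV, which also illustrates why a low-prevalence setting drags the PPV down despite high sensitivity.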

  • Article type: Journal Article
    BACKGROUND: Assessing patients' suicide risk is challenging, especially among those who deny suicidal ideation. Primary care providers show poor agreement in screening for suicide risk. Patients' speech may provide more objective, language-based clues about their underlying suicidal ideation. Text analysis to detect suicide risk in depression is lacking in the literature.
    OBJECTIVE: This study aimed to determine whether suicidal ideation can be detected via language features in clinical interviews for depression using natural language processing (NLP) and machine learning (ML).
    METHODS: This cross-sectional study recruited 305 participants between October 2020 and May 2022 (mean age 53.0, SD 11.77 years; female: n=176, 57%), of whom 197 had lifetime depression and 108 were healthy. This study was part of ongoing research on characterizing depression with a case-control design. In this study, 236 participants were nonsuicidal, while 56 and 13 had low and high suicide risk, respectively. The structured interview guide for the Hamilton Depression Rating Scale (HAMD) was adopted to assess suicide risk and depression severity. Suicide risk was clinician-rated based on a suicide-related question (H11). The interviews were transcribed, and the words in participants' verbal responses were translated into psychologically meaningful categories using Linguistic Inquiry and Word Count (LIWC).
    RESULTS: Ordinal logistic regression revealed significant suicide-related language features in participants' responses to the HAMD questions. Increased use of anger words when talking about work and activities posed the highest suicide risk (odds ratio [OR] 2.91, 95% CI 1.22-8.55; P=.02). Random forest models demonstrated that text analysis of the direct responses to H11 was effective in identifying individuals with high suicide risk (AUC 0.76-0.89; P<.001) and in detecting suicide risk in general, including both low and high suicide risk (AUC 0.83-0.92; P<.001). More importantly, suicide risk could be detected with satisfactory performance even without patients' disclosure of suicidal ideation: based on the response to the question on hypochondriasis, ML models were trained to identify individuals with high suicide risk (AUC 0.76; P<.001).
    CONCLUSIONS: This study examined the use of NLP and ML to analyze texts from clinical interviews for suicidality detection, which has the potential to provide more accurate and specific markers for suicidal ideation detection. The findings may pave the way for developing high-performance automated assessment of suicide risk, including online chatbot-based interviews for universal screening.

  • Article type: Journal Article
    BACKGROUND: With the increasing use of robotic surgical adjuncts, artificial intelligence and augmented reality in neurosurgery, the automated analysis of digital images and videos acquired during various procedures has become a subject of increasing interest. While several computer vision (CV) methods have been developed and implemented for analyzing surgical scenes, few studies have been dedicated to neurosurgery.
    OBJECTIVE: In this work, we present a systematic literature review focusing on CV methodologies specifically applied to the analysis of neurosurgical procedures based on intra-operative images and videos. Additionally, we provide recommendations for the future development of CV models in neurosurgery.
    METHODS: We conducted a systematic literature search in multiple databases up to January 17, 2023, including Web of Science, PubMed, IEEE Xplore, Embase, and SpringerLink.
    RESULTS: We identified 17 studies employing CV algorithms on neurosurgical videos/images. The most common applications of CV were tool and neuroanatomical structure detection or characterization and, to a lesser extent, surgical workflow analysis. Convolutional neural networks (CNNs) were the most frequently utilized architecture for CV models (65%), demonstrating superior performance in tool detection and segmentation. In particular, mask recurrent-CNN showed the most robust performance across different modalities.
    CONCLUSIONS: Our systematic review demonstrates that CV models have been reported that can effectively detect and differentiate tools, surgical phases, neuroanatomical structures, and critical events in complex neurosurgical scenes with accuracies above 95%. Automated tool recognition contributes to objective characterization and assessment of surgical performance, with potential applications in neurosurgical training and intra-operative safety management.
