computer vision system

  • Article Type: Journal Article
    Color characteristics are a crucial indicator of green tea quality, particularly in needle-shaped green tea, and are predominantly evaluated through subjective sensory analysis. Thus, the necessity arises for an objective, precise, and efficient assessment methodology. In this study, 885 images from 157 samples, obtained through computer vision technology, were used to predict sensory evaluation results based on the color features of the images. Three machine learning methods, Random Forest (RF), Support Vector Machine (SVM), and Decision Tree-based AdaBoost (DT-AdaBoost), were used to construct the color quality evaluation model. Notably, the DT-AdaBoost model shows significant potential for application in evaluating tea quality, with a correct discrimination rate (CDR) of 98.50% and a relative percent deviation (RPD) of 14.827 in the 266 samples used to verify the accuracy of the model. This result indicates that the integration of computer vision with machine learning models presents an effective approach for assessing the color quality of needle-shaped green tea.
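    No code accompanies the abstract; the following is a minimal, purely illustrative sketch of a DT-AdaBoost color-quality classifier of the kind described, assuming each image has already been reduced to a vector of color features (for example mean RGB/HSV/Lab values) paired with a sensory grade label. The feature matrix and labels below are random placeholders, not the study's data.

        # Illustrative DT-AdaBoost sketch; features and labels are placeholders.
        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score

        rng = np.random.default_rng(0)
        X = rng.random((885, 9))           # 885 images x 9 color features (e.g. R, G, B, H, S, V, L*, a*, b*)
        y = rng.integers(0, 4, size=885)   # placeholder sensory color grades

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

        # The decision tree is passed positionally so the call works across scikit-learn
        # versions (older releases name the argument base_estimator, newer ones estimator).
        model = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
                                   n_estimators=200, learning_rate=0.5, random_state=0)
        model.fit(X_train, y_train)
        print("hold-out accuracy:", accuracy_score(y_test, model.predict(X_test)))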

  • Article Type: Journal Article
    Diabetic retinopathy (DR), a sight-threatening ocular complication of diabetes mellitus, is one of the main causes of blindness in the working-age population. Dyslipidemia is a potential risk factor for the development or worsening of DR, with conflicting evidence in epidemiological studies. Fenofibrate, an antihyperlipidemic agent, has lipid-modifying and pleiotropic (non-lipid) effects that may lessen the incidence of microvascular events.
    Relevant studies were identified through a PubMed/MEDLINE search spanning the last 20 years, using the broad term "diabetic retinopathy" and specific terms "fenofibrate" and "dyslipidemia". References cited in these studies were further examined to compile this mini-review. These pivotal investigations underwent meticulous scrutiny and synthesis, focusing on methodological approaches and clinical outcomes. Furthermore, we provided the main findings of the seminal studies in a table to enhance comprehension and comparison.
    Growing evidence indicates that fenofibrate treatment slows DR advancement owing to its possible protective effects on the blood-retinal barrier. The protective attributes of fenofibrate against DR progression and development can be broadly classified into two categories: lipid-modifying effects and non-lipid-related (pleiotropic) effects. The lipid-modifying effect is mediated through peroxisome proliferator-activated receptor-α activation, while the pleiotropic effects involve the reduction in serum levels of C-reactive protein, fibrinogen, and pro-inflammatory markers, and improvement in flow-mediated dilatation. In patients with DR, the lipid-modifying effects of fenofibrate primarily involve a reduction in lipoprotein-associated phospholipase A2 levels and the upregulation of apolipoprotein A1 levels. These changes contribute to the anti-inflammatory and anti-angiogenic effects of fenofibrate. Fenofibrate elicits a diverse array of pleiotropic effects, including anti-apoptotic, antioxidant, anti-inflammatory, and anti-angiogenic properties, along with the indirect consequences of these effects. Two randomized controlled trials (the Fenofibrate Intervention and Event Lowering in Diabetes study and the Action to Control Cardiovascular Risk in Diabetes study) noted that fenofibrate treatment protected against DR progression, independent of serum lipid levels.
    Fenofibrate, an oral antihyperlipidemic agent that is effective in decreasing DR progression, may reduce the number of patients who develop vision-threatening complications and require invasive treatment. Despite its proven protection against DR progression, fenofibrate treatment has not yet gained wide clinical acceptance in DR management. Ongoing and future clinical trials may clarify the role of fenofibrate treatment in DR management.

  • Article Type: Journal Article
    Vascular endothelial growth factor (VEGF) is the primary substance involved in retinal barrier breach. VEGF overexpression may cause diabetic macular edema (DME). Laser photocoagulation of the macula is the standard treatment for DME; however, recently, intravitreal anti-VEGF injections have surpassed laser treatment. Our aim was to evaluate the efficacy of intravitreal injections of aflibercept or ranibizumab for managing treatment-naive DME.
    This single-center, retrospective, interventional, comparative study included eyes with visual impairment due to treatment-naive DME that underwent intravitreal injection of either aflibercept 2 mg/0.05 mL or ranibizumab 0.5 mg/0.05 mL at Al-Azhar University Hospitals, Egypt between March 2023 and January 2024. Demographic data and full ophthalmological examination results at baseline and 1, 3, and 6 months post-injection were collected, including the best-corrected distance visual acuity (BCDVA) in logarithm of the minimum angle of resolution (logMAR) notation, slit-lamp biomicroscopy, dilated fundoscopy, and central subfield thickness (CST) measured using spectral-domain optical coherence tomography.
    Overall, the 96 eyes of 96 patients with a median (interquartile range [IQR]) age of 57 (10) (range: 20-74) years and a male-to-female ratio of 1:2.7 were allocated to one of two groups with comparable age, sex, diabetes mellitus duration, and presence of other comorbidities (all P >0.05). There was no statistically significant difference in baseline diabetic retinopathy status or DME type between groups (both P >0.05). In both groups, the median (IQR) BCDVA significantly improved from 0.7 (0.8) logMAR at baseline to 0.4 (0.1) logMAR at 6 months post-injection (both P = 0.001), with no statistically significant difference between groups at all follow-up visits (all P >0.05). The median (IQR) CST significantly decreased in the aflibercept group from 347 (166) µm at baseline to 180 (233) µm at 6 months post-injection, and it decreased in the ranibizumab group from 360 (180) µm at baseline to 190 (224) µm at 6 months post-injection (both P = 0.001), with no statistically significant differences between groups at all follow-up visits (all P >0.05). No serious adverse effects were documented in either group.
    Ranibizumab and aflibercept were equally effective in achieving the desired anatomical and functional results in patients with treatment-naïve DME in short-term follow-up without significant differences in injection counts between both drugs. Larger prospective, randomized, double-blinded trials with longer follow-up periods are needed to confirm our preliminary results.

  • Article Type: Journal Article
    The American Optometric Association defines computer vision syndrome (CVS), also known as digital eye strain, as "a group of eye- and vision-related problems that result from prolonged computer, tablet, e-reader and cell phone use". We aimed to create a well-structured, valid, and reliable questionnaire to determine the prevalence of CVS, and to analyze the visual, ocular surface, and extraocular sequelae of CVS using a novel and smart self-assessment questionnaire.
    This multicenter, observational, cross-sectional, descriptive, survey-based, online study included 6853 complete online responses of medical students from 15 universities. All participants responded to the updated, online, fourth version of the CVS questionnaire (CVS-F4), which has high validity and reliability. CVS was diagnosed according to five basic diagnostic criteria (5DC) derived from the CVS-F4. Respondents who fulfilled the 5DC were considered CVS cases. The 5DC were then converted into a novel five-question self-assessment questionnaire designated as the CVS-Smart.
    Of 10,000 invited medical students, 8006 responded to the CVS-F4 survey (80% response rate), while 6853 of the 8006 respondents provided complete online responses (85.6% completion rate). The overall CVS prevalence was 58.78% (n = 4028) among the study respondents; CVS prevalence was higher among women (65.87%) than among men (48.06%). Within the CVS group, the most common visual, ocular surface, and extraocular complaints were eye strain, dry eye, and neck/shoulder/back pain in 74.50% (n = 3001), 58.27% (n = 2347), and 80.52% (n = 3244) of CVS cases, respectively. Notably, 75.92% (3058/4028) of CVS cases were involved in the Mandated Computer System Use Program. Multivariate logistic regression analysis revealed that the two most statistically significant diagnostic criteria of the 5DC were ≥2 symptoms/attacks per month over the last 12 months (odds ratio [OR] = 204177.2; P <0.0001) and symptoms/attacks associated with screen use (OR = 16047.34; P <0.0001). The CVS-Smart demonstrated a Cronbach's alpha reliability coefficient of 0.860 and a Guttman split-half coefficient of 0.805, with perfect content and construct validity. A CVS-Smart score of 7-10 points indicated the presence of CVS.
    The visual, ocular surface, and extraocular diagnostic criteria for CVS constituted the basic components of CVS-Smart. CVS-Smart is a novel, valid, reliable, subjective instrument for determining CVS diagnosis and prevalence and may provide a tool for rapid periodic assessment and prognostication. Individuals with positive CVS-Smart results should consider modifying their lifestyles and screen styles and seeking the help of ophthalmologists and/or optometrists. Higher institutional authorities should consider revising the Mandated Computer System Use Program to avoid the long-term consequences of CVS among university students. Further research must compare CVS-Smart with other available metrics for CVS, such as the CVS questionnaire, to determine its test-retest reliability and to justify its widespread use.
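    As a purely illustrative aside (not taken from the study), the Cronbach's alpha reliability coefficient reported for the five-item CVS-Smart can be computed from a respondents-by-items score matrix as shown below; the response matrix here is random placeholder data, so the resulting value is meaningless.

        # Cronbach's alpha for k items: alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
        import numpy as np

        def cronbach_alpha(items: np.ndarray) -> float:
            """items: (n_respondents, n_items) matrix of item scores."""
            k = items.shape[1]
            item_var_sum = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_var_sum / total_var)

        rng = np.random.default_rng(1)
        responses = rng.integers(0, 3, size=(200, 5))  # 200 hypothetical respondents, 5 items
        print(f"Cronbach's alpha: {cronbach_alpha(responses):.3f}")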

  • Article Type: Journal Article
    The research project focuses on the creation and assessment of an innovative computer vision system designed to identify dental irregularities in individuals undergoing orthodontic treatment.
    To establish the computer vision system, a comprehensive dataset of dental images was collected, encompassing various orthodontic cases. The system's algorithm was trained to recognize patterns indicative of common dental anomalies, such as malocclusions, spacing issues, and misalignments. Rigorous testing and refinement of the algorithm were conducted to enhance its accuracy and reliability.
    The validation of the system was carried out using the dental records and images of 40 patients. The computer vision system's performance was evaluated against assessments made by experienced orthodontists. The results demonstrated a commendable level of concurrence between the system's automated detections and the orthodontists' evaluations, suggesting its potential as a valuable diagnostic tool.
    In conclusion, the development and validation of this novel computer vision system exhibit promising outcomes in its ability to automatically detect dental anomalies in orthodontic patients.
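    The abstract does not describe the model architecture; as a loosely hedged sketch of one plausible approach, a convolutional network could be fine-tuned to classify dental images into the anomaly categories mentioned above. Everything below (class names, input size, hyperparameters) is an assumption for illustration only, not the paper's method.

        # Hypothetical dental-anomaly classifier sketch (PyTorch / torchvision >= 0.13).
        import torch
        import torch.nn as nn
        from torchvision import models

        classes = ["normal", "malocclusion", "spacing_issue", "misalignment"]  # assumed labels

        model = models.resnet18(weights=None)  # pass models.ResNet18_Weights.DEFAULT to start from ImageNet weights
        model.fc = nn.Linear(model.fc.in_features, len(classes))  # replace the classification head

        criterion = nn.CrossEntropyLoss()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

        # One training step on a dummy batch of 8 RGB images standing in for real dental photographs.
        images = torch.randn(8, 3, 224, 224)
        labels = torch.randint(0, len(classes), (8,))
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        print("dummy-batch loss:", loss.item())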

  • Article Type: Journal Article
    The widespread use of minimally invasive surgery generates vast amounts of potentially useful data in the form of surgical video. However, raw video footage is often unstructured and unlabeled, thereby limiting its use. We developed a novel computer-vision algorithm for automated identification and labeling of surgical steps during robotic-assisted radical prostatectomy (RARP).
    Surgical videos from RARP were manually annotated by a team of image annotators under the supervision of 2 urologic oncologists. Full-length surgical videos were labeled to identify all steps of surgery. These manually annotated videos were then utilized to train a computer vision algorithm to perform automated video annotation of RARP surgical video. Accuracy of automated video annotation was determined by comparing to manual human annotations as the reference standard.
    A total of 474 full-length RARP videos (median 149 minutes; IQR 81 minutes) were manually annotated with surgical steps. Of these, 292 cases served as a training dataset for algorithm development, 69 cases were used for internal validation, and 113 were used as a separate testing cohort for evaluating algorithm accuracy. Concordance between artificial intelligence-enabled automated video analysis and manual human video annotation was 92.8%. Algorithm accuracy was highest for the vesicourethral anastomosis step (97.3%) and lowest for the final inspection and extraction step (76.8%).
    We developed a fully automated artificial intelligence tool for annotation of RARP surgical video. Automated surgical video analysis has immediate practical applications in surgeon video review, surgical training and education, quality and safety benchmarking, medical billing and documentation, and operating room logistics.
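    The classifier itself is not described in enough detail to reproduce, but the annotation output can be illustrated: once any model emits one step prediction per sampled frame, collapsing those predictions into labeled time segments yields the automated annotation. The sketch below is illustrative only; the step names and the one-prediction-per-second rate are assumptions.

        # Collapse per-frame step predictions into (step, start, end) segments.
        from itertools import groupby

        PREDICTIONS_PER_SECOND = 1  # assumed sampling rate

        def frames_to_segments(step_per_frame):
            segments, t = [], 0
            for step, run in groupby(step_per_frame):
                n = len(list(run))
                segments.append((step, t / PREDICTIONS_PER_SECOND, (t + n) / PREDICTIONS_PER_SECOND))
                t += n
            return segments

        preds = (["port placement"] * 90
                 + ["bladder takedown"] * 200
                 + ["vesicourethral anastomosis"] * 300)  # hypothetical model output
        for step, start, end in frames_to_segments(preds):
            print(f"{start:7.1f}s - {end:7.1f}s  {step}")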

  • Article Type: Journal Article
    Unlocking the vast potential of deep learning-based computer vision classification systems necessitates large datasets for model training. Natural Language Processing (NLP), which automates dataset labelling, represents a potential avenue to achieve this. However, many aspects of NLP for dataset labelling remain unvalidated. Expert radiologists manually labelled over 5,000 MRI head reports in order to develop a deep learning-based neuroradiology NLP report classifier. Our results demonstrate that binary labels (normal vs. abnormal) showed high rates of accuracy, even when only two MRI sequences (T2-weighted and those based on diffusion weighted imaging) were employed as opposed to all sequences in an examination. Meanwhile, the accuracy of more specific labelling for multiple disease categories was variable and dependent on the category. Finally, resultant model performance was shown to be dependent on the expertise of the original labeller, with worse performance seen with non-expert vs. expert labellers.
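    The study's deep learning classifier is not reproduced here; as a minimal illustrative stand-in, the sketch below shows the general shape of a binary normal/abnormal report classifier trained on labelled report text. The example reports and labels are invented placeholders.

        # Toy normal-vs-abnormal radiology report classifier (TF-IDF + logistic regression).
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        reports = [
            "No acute intracranial abnormality. Normal study.",
            "Restricted diffusion in the left MCA territory consistent with acute infarct.",
            "Unremarkable MRI head.",
            "T2 hyperintense lesion with surrounding oedema, suspicious for metastasis.",
        ]
        labels = [0, 1, 0, 1]  # 0 = normal, 1 = abnormal

        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
        clf.fit(reports, labels)
        print(clf.predict(["Diffusion weighted imaging shows an acute infarct."]))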

  • Article Type: Multicenter Study
    BACKGROUND: Computed tomography (CT) imaging and artificial intelligence (AI)-based analyses have aided in the diagnosis and prediction of the severity of COVID-19. However, the potential of AI-based CT quantification of pneumonia in assessing patients with COVID-19 has not yet been fully explored. This study aimed to investigate the potential of AI-based CT quantification of COVID-19 pneumonia to predict the critical outcomes and clinical characteristics of patients with residual lung lesions.
    METHODS: This retrospective cohort study included 1,200 hospitalized patients with COVID-19 from four hospitals. The incidence of critical outcomes (requiring the support of high-flow oxygen or invasive mechanical ventilation or death) and complications during hospitalization (bacterial infection, renal failure, heart failure, thromboembolism, and liver dysfunction) was compared between the groups of pneumonia with high/low-percentage lung lesions, based on AI-based CT quantification. Additionally, 198 patients underwent CT scans 3 months after admission to analyze prognostic factors for residual lung lesions.
    RESULTS: The pneumonia group with a high percentage of lung lesions (N = 400) had a higher incidence of critical outcomes and complications during hospitalization than the low percentage group (N = 800). Multivariable analysis demonstrated that AI-based CT quantification of pneumonia was independently associated with critical outcomes (adjusted odds ratio [aOR] 10.5, 95% confidence interval [CI] 5.59-19.7), as well as with oxygen requirement (aOR 6.35, 95% CI 4.60-8.76), IMV requirement (aOR 7.73, 95% CI 2.52-23.7), and mortality rate (aOR 6.46, 95% CI 1.87-22.3). Among patients with follow-up CT scans (N = 198), the multivariable analysis revealed that the pneumonia group with a high percentage of lung lesions on admission (aOR 4.74, 95% CI 2.36-9.52), older age (aOR 2.53, 95% CI 1.16-5.51), female sex (aOR 2.41, 95% CI 1.13-5.11), and medical history of hypertension (aOR 2.22, 95% CI 1.09-4.50) independently predicted persistent residual lung lesions.
    CONCLUSIONS: AI-based CT quantification of pneumonia provides valuable information beyond qualitative evaluation by physicians, enabling the prediction of critical outcomes and residual lung lesions in patients with COVID-19.
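    For readers unfamiliar with the reported statistics, an adjusted odds ratio (aOR) and its 95% CI come from exponentiating the coefficients of a multivariable logistic regression. The sketch below is illustrative only and uses synthetic placeholder data, not the study cohort; the variable names are assumptions.

        # Adjusted odds ratios via multivariable logistic regression (statsmodels).
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(42)
        n = 1200
        df = pd.DataFrame({
            "critical_outcome": rng.integers(0, 2, n),
            "high_lesion_pct": rng.integers(0, 2, n),   # AI-based CT quantification: high vs low lesion percentage
            "age_over_65": rng.integers(0, 2, n),
            "male_sex": rng.integers(0, 2, n),
        })

        fit = smf.logit("critical_outcome ~ high_lesion_pct + age_over_65 + male_sex", data=df).fit(disp=0)
        aor = np.exp(fit.params).rename("aOR")
        ci = np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
        print(pd.concat([aor, ci], axis=1))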

  • Article Type: Journal Article
    The quality of beef products relies on the presence of a cherry red color, as any deviation toward brownish tones indicates a loss in quality. Existing studies typically analyze individual color channels separately, establishing acceptable ranges. In contrast, our proposed approach involves conducting a multivariate analysis of beef color changes using white-box machine learning techniques. Our proposal encompasses three phases. (1) We employed a Computer Vision System (CVS) to capture the color of beef pieces, implementing a color correction pre-processing step within a specially designed cabin. (2) We examined the differences among three color spaces (RGB, HSV, and CIELab*). (3) We evaluated the performance of three white-box classifiers (decision tree, logistic regression, and multivariate normal distributions) for predicting color in both fresh and non-fresh beef. These models demonstrated high accuracy and enabled a comprehensive understanding of the prediction process. Our results affirm that conducting a multivariate analysis yields superior beef color prediction outcomes compared to the conventional practice of analyzing each channel independently.
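    A minimal sketch of the multivariate, white-box idea (not the authors' code): convert a measured RGB color into HSV and CIELab, stack all nine channels as features, and fit an interpretable decision tree. The two sample colors and their fresh/non-fresh labels are invented for illustration.

        # Multivariate color features (RGB + HSV + CIELab) feeding a white-box decision tree.
        import numpy as np
        from skimage import color
        from sklearn.tree import DecisionTreeClassifier, export_text

        rgb = np.array([[[0.55, 0.10, 0.12]],    # cherry-red sample (RGB in 0-1)
                        [[0.35, 0.22, 0.18]]])   # brownish sample
        hsv = color.rgb2hsv(rgb)
        lab = color.rgb2lab(rgb)

        X = np.hstack([rgb.reshape(2, 3), hsv.reshape(2, 3), lab.reshape(2, 3)])  # 9 features per sample
        y = np.array([1, 0])  # 1 = fresh, 0 = non-fresh (illustrative labels)

        tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
        print(export_text(tree, feature_names=["R", "G", "B", "H", "S", "V", "L*", "a*", "b*"]))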

  • Article Type: Journal Article
    Water holding capacity (WHC) plays an important role when obtaining a high-quality pork meat. This attribute is usually estimated by pressing the meat and measuring the amount of water expelled by the sample and absorbed by a filter paper. In this work, we used the Deep Learning (DL) architecture named U-Net to estimate water holding capacity (WHC) from filter paper images of pork samples obtained using the press method. We evaluated the ability of the U-Net to segment the different regions of the WHC images and, since the images are much larger than the traditional input size of the U-Net, we also evaluated its performance when we change the input size. Results show that U-Net can be used to segment the external and internal areas of the WHC images with great precision, even though the difference in the appearance of these areas is subtle.
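    The paper's network is the standard U-Net; the sketch below is a deliberately tiny stand-in (not the authors' implementation) that shows the encoder-decoder-with-skip-connections pattern used to label each pixel of a filter-paper image, assuming three output classes (background, external area, internal area) and a 256x256 input.

        # Miniature U-Net-style segmenter (PyTorch); sizes and class count are assumptions.
        import torch
        import torch.nn as nn

        def block(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True),
            )

        class TinyUNet(nn.Module):
            def __init__(self, n_classes=3):
                super().__init__()
                self.enc1, self.enc2 = block(3, 16), block(16, 32)
                self.pool = nn.MaxPool2d(2)
                self.bottleneck = block(32, 64)
                self.up2, self.dec2 = nn.ConvTranspose2d(64, 32, 2, stride=2), block(64, 32)
                self.up1, self.dec1 = nn.ConvTranspose2d(32, 16, 2, stride=2), block(32, 16)
                self.head = nn.Conv2d(16, n_classes, 1)

            def forward(self, x):
                e1 = self.enc1(x)
                e2 = self.enc2(self.pool(e1))
                b = self.bottleneck(self.pool(e2))
                d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection from enc2
                d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection from enc1
                return self.head(d1)

        logits = TinyUNet()(torch.randn(1, 3, 256, 256))
        print(logits.shape)  # torch.Size([1, 3, 256, 256]) -> per-pixel class scores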