automatic extraction

  • Article type: Journal Article
    To quickly obtain rice plant phenotypic traits, this study put forward computational procedures for six rice phenotype features (crown diameter, perimeter of stem, plant height, surface area, volume, and projected leaf area) using terrestrial laser scanning (TLS) data, and proposed an extraction method for the tiller number of rice plants. Specifically, for the first time, we designed and developed an automated phenotype extraction tool for rice plants with a three-layer architecture based on the PyQt5 framework and the Open3D library. The results show that the linear coefficients of determination (R2) between the measured and extracted values indicate good reliability for the four selected verification features. The root mean square errors (RMSE) of crown diameter, perimeter of stem, and plant height are stable at the centimeter level, and that of the tiller number is as low as 1.63. The relative root mean square errors (RRMSE) of crown diameter, plant height, and tiller number stay within 10%, and that of perimeter of stem is 18.29%. In addition, the user-friendly automatic extraction tool can efficiently extract the phenotypic features of rice plants, providing a convenient tool for quickly obtaining phenotypic traits from rice plant point clouds. However, the comparison and verification of phenotype feature extraction results with more rice plant sample data, as well as the improvement of the accuracy of the algorithms, remain the focus of our future research. The study can offer a reference for crop phenotype extraction using 3D point clouds.
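    As a quick illustration of the error metrics reported above, the sketch below computes RMSE and RRMSE for a toy set of plant-height readings. Normalizing RRMSE by the mean of the measured values is an assumption (the abstract does not state which normalizer was used), and the sample numbers are invented.

```python
import math

def rmse(measured, extracted):
    """Root mean square error between field-measured and extracted values."""
    n = len(measured)
    return math.sqrt(sum((m - e) ** 2 for m, e in zip(measured, extracted)) / n)

def rrmse(measured, extracted):
    """Relative RMSE: RMSE normalized by the mean of the measured values
    (assumed normalizer; other conventions divide by the value range)."""
    mean_measured = sum(measured) / len(measured)
    return rmse(measured, extracted) / mean_measured

# Hypothetical plant-height readings (cm): field-measured vs. TLS-extracted.
measured = [92.0, 88.5, 95.2, 90.1]
extracted = [90.5, 89.0, 96.0, 89.0]
print(f"RMSE:  {rmse(measured, extracted):.2f} cm")
print(f"RRMSE: {rrmse(measured, extracted) * 100:.2f} %")
```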

  • Article type: Journal Article
    The identification of human remains is of utmost importance in a variety of scenarios. One of the primary identification methods is DNA. DNA extraction from human remains can be difficult, particularly when the remains have been exposed to environmental conditions and other insults. Several studies have tried to improve extraction by applying different approaches. ForensicGEM Universal (MicroGem) is a single-tube, temperature-driven approach to DNA extraction that could have some advantages over previous techniques, among them a reduced risk of contamination and no need for specialized equipment or multiple processing steps. The aim of this study was to assess, for the first time, the efficiency of DNA extraction and the quality of STR profiles obtained by applying the MicroGem protocol, and modifications of it, to tooth samples in comparison with automatic extraction (AE). Our results indicated that AE and MicroGem performed similarly, though with variability depending on the MicroGem modifications; concentrating the DNA with Microcon increased the DNA yield and STR profile quality. These findings demonstrate the efficiency of this methodology for DNA extraction from human remains while also providing a simple and quick technique suitable for a variety of forensic scenarios.

  • Article type: Journal Article
    Widely used agricultural greenhouses are critical to the development of facility agriculture, not only because of their huge capacity for food and vegetable supply, but also because of their environmental and climatic effects. Therefore, obtaining the spatial distribution of agricultural greenhouses is important for agricultural production, policy making, and even environmental protection. Remote sensing technologies have been widely used for greenhouse extraction, mainly in small or local regions, while large-scale, high-resolution (~1-m) greenhouse extraction is still lacking. In this study, agricultural greenhouses in an important agricultural province (Shandong, China) are extracted with high accuracy (94.04% mean intersection over union on the test set) by combining high-resolution remote sensing images from Google Earth with a deep learning algorithm. The results demonstrate that agricultural greenhouses cover an area of 1755.3 km2, accounting for 1.11% of the province and 2.31% of its cultivated land. The spatial density map of agricultural greenhouses also suggests that facility agriculture in Shandong has obvious regional aggregation characteristics and is vulnerable both environmentally and economically. The results of this study are useful and meaningful for future agricultural planning and environmental management.

  • Article type: Journal Article
    Greenhouses are an important part of modern facility-based agriculture. While creating well-being for human society, greenhouses also bring negative impacts such as air pollution, soil pollution, and water pollution. Therefore, it is of great significance to obtain information such as the area and quantity of greenhouses. It is still a challenging task to find a low-cost, high-efficiency, and easy-to-use method for the dual extraction of greenhouse area and quantity on a large scale. In this study, relatively easy-to-obtain high-resolution Google Earth remote sensing images are used as the experimental data source, and an area and quantity simultaneous extraction framework (AQSEF) is constructed to extract both the area and quantity of greenhouses. AQSEF uses UNet and YOLO v5 series networks as core operators for model training and prediction, and components such as SWP, OSW&NMS, and GCA complete the data postprocessing. To evaluate the feasibility of our method, we take Beijing, China, as the research area and select multiple accuracy evaluation indicators in the two branches for accuracy verification. The results show that the mIoU, OA, Kappa, Recall, and Precision of the best-performing model in the area extraction branch reach 0.931, 0.987, 0.867, 0.91, and 0.914, respectively. Additionally, the Recall, Precision, AP@0.5, and mAP@0.5:0.95 values of the best-performing model in greenhouse quantity extraction are 0.781, 0.891, 0.812, and 0.509, respectively. Finally, in Beijing, the area covered by greenhouses is approximately 85.443 km2, and the quantity of greenhouses is approximately 155,464. With the proposed method, the time consumed for area extraction and quantity extraction is 6.73 h and 12.97 h, respectively. The experimental results show that AQSEF helps to overcome the spatiotemporal diversity of greenhouses and quickly and accurately map a high-spatial-resolution greenhouse distribution product within the research area.

  • Article type: Journal Article
    The extraction of the foveal avascular zone (FAZ) from optical coherence tomography angiography (OCTA) images has been used in many studies in recent years due to its association with various ophthalmic diseases. In this study, we investigated the utility of a dataset for deep learning created using Kanno Saitama Macro (KSM), a program that automatically extracts the FAZ using swept-source OCTA. The test data included 40 eyes of 20 healthy volunteers. For training and validation, we used 257 eyes from 257 patients. The FAZ of the retinal surface image was extracted using KSM, and a dataset for FAZ extraction was created. Based on that dataset, we conducted a training test using a typical U-Net. Two examiners manually extracted the FAZ of the test data, and the results were used as gold standards to compare the Jaccard coefficients between examiners, and between each examiner and the U-Net. The Jaccard coefficient was 0.931 between examiner 1 and examiner 2, 0.951 between examiner 1 and the U-Net, and 0.933 between examiner 2 and the U-Net. The Jaccard coefficients were significantly better between examiner 1 and the U-Net than between examiner 1 and examiner 2 (p < 0.001). These data indicated that the dataset generated by KSM was as good as, if not better than, the agreement between examiners using the manual method. KSM may contribute to reducing the burden of annotation in deep learning.
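    The agreement measure used above can be sketched as a minimal Jaccard index over two boolean segmentation masks; the toy 4x4 arrays stand in for FAZ extractions, and both the masks and the empty-union convention are assumptions, not the study's data.

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Jaccard index |A ∩ B| / |A ∪ B| between two boolean segmentation masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement (convention)
    return float(np.logical_and(a, b).sum() / union)

# Toy 4x4 masks standing in for FAZ segmentations from two examiners.
m1 = np.array([[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]])
m2 = np.array([[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]])
print(jaccard(m1, m2))  # 3 shared pixels out of 4 in the union -> 0.75
```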

  • Article type: Journal Article
    BACKGROUND: Reliable and interpretable automatic extraction of clinical phenotypes from large electronic medical record databases remains a challenge, especially in a language other than English.
    OBJECTIVE: We aimed to provide an automated end-to-end extraction of cohorts of similar patients from electronic health records for systemic diseases.
    METHODS: Our multistep algorithm includes a named-entity recognition step, a multilabel classification using the Medical Subject Headings ontology, and the computation of patient similarity. Cohorts of similar patients were selected on a priori annotated phenotypes. Six phenotypes were selected for their clinical significance: P1, osteoporosis; P2, nephritis in systemic lupus erythematosus; P3, interstitial lung disease in systemic sclerosis; P4, lung infection; P5, obstetric antiphospholipid syndrome; and P6, Takayasu arteritis. We used a training set of 151 clinical notes and an independent validation set of 256 clinical notes, both with annotated phenotypes, extracted from the Assistance Publique-Hôpitaux de Paris data warehouse. For each phenotype, we evaluated the 3 patients closest to the index patient using precision-at-3, recall, and average precision.
    RESULTS: For P1-P4, the precision-at-3 ranged from 0.85 (95% CI 0.75-0.95) to 0.99 (95% CI 0.98-1), the recall ranged from 0.53 (95% CI 0.50-0.55) to 0.83 (95% CI 0.81-0.84), and the average precision ranged from 0.58 (95% CI 0.54-0.62) to 0.88 (95% CI 0.85-0.90). P5-P6 phenotypes could not be analyzed due to the limited number of phenotypes.
    CONCLUSIONS: Using a method close to clinical reasoning, we built a scalable and interpretable end-to-end algorithm for extracting cohorts of similar patients.

  • Article type: Journal Article
    Diagnostic statements for pituitary adenomas (PAs) are complex and unstandardized. We aimed to determine the most commonly used elements contained in the statements and their combination patterns and variations in real-world clinical practice, with the ultimate goal of promoting standardized diagnostic recording and establishing an efficient element extraction process.
    Patient medical records from 2012 to 2020 that included PA among the first three diagnoses were included. After manually labeling the elements in the diagnostic texts, we obtained element types and training sets, from which an information extraction model was constructed based on the word segmentation model "Jieba" to extract the information contained in the remaining diagnostic texts.
    A total of 576 different diagnostic statements from 4010 texts of 3770 medical records were included in the analysis. The ten most common diagnostic elements related to PA were histopathology, tumor location, endocrine status, tumor size, invasiveness, recurrence, diagnostic confirmation, Knosp grade, residual tumor, and refractoriness. The automated extraction model achieved F1-scores of 100% for all ten elements in the second round and 97.3-100.0% on a test set consisting of an additional 532 diagnostic texts. Tumor location, endocrine status, histopathology, and tumor size were the most commonly used elements, and diagnoses composed of these elements were the most frequent. Endocrine status had the greatest expression variability, followed by Knosp grade. Among all the elements, tumor size had the highest percentage of missing values (21%). Among statements where the principal diagnosis was a PA, 18.6% had no information on tumor size, while for those with other principal diagnoses, this percentage rose to 48% (P < 0.001).
    Standardization of the diagnostic statement for PAs is unsatisfactory in real-world clinical practice. This study could help standardize a structured pattern for PA diagnosis and establish a foundation for research-friendly, high-quality clinical information extraction.

  • Article type: Journal Article
    Remote sensing technology has the advantages of fast information acquisition, a short cycle, and a wide detection range. It is frequently used in surface resource monitoring tasks. However, traditional remote sensing image segmentation technology cannot make full use of the rich spatial information of the image, the workload is too large, and the accuracy is not high enough. To address these problems, this study carried out atmospheric calibration, band combination, image fusion, and other data enhancement methods on Landsat 8 satellite remote sensing data to improve the data quality. In addition, deep learning is applied to remote sensing image block segmentation. An asymmetric convolution-CBAM (AC-CBAM) module based on the convolutional block attention module is proposed. This optimization module, integrating attention with a sliding-window prediction method, is adopted to effectively improve segmentation accuracy. In the experiment on test data, the mIoU, mAcc, and aAcc in this study reached 97.34%, 98.66%, and 98.67%, respectively; the mIoU is 1.44% higher than that of DNLNet (95.9%). The AC-CBAM module of this research provides a reference for deep learning to realize the automation of remote sensing land information extraction. The experimental code of our AC-CBAM module can be found at https://github.com/LinB203/remotesense.
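    For reference, the segmentation scores quoted above (mIoU and aAcc) can be computed from a pair of label maps as in this minimal sketch; the 2-class toy maps are invented, and skipping classes absent from both maps is an assumed convention.

```python
import numpy as np

def miou_and_aacc(pred, gt, num_classes):
    """Mean IoU over classes and overall pixel accuracy (aAcc) for label maps."""
    ious = []
    for c in range(num_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        if union:  # skip classes absent from both maps (assumed convention)
            ious.append(np.logical_and(p, g).sum() / union)
    aacc = (pred == gt).mean()
    return float(np.mean(ious)), float(aacc)

# Toy 2-class label maps standing in for land-cover segmentation output.
gt   = np.array([[0, 0, 1, 1], [0, 0, 1, 1]])
pred = np.array([[0, 0, 1, 1], [0, 1, 1, 1]])
print(miou_and_aacc(pred, gt, num_classes=2))  # (0.775, 0.875)
```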

  • Article type: Journal Article
    This study aimed to determine the optimal image reconstruction method for preoperative computed tomography (CT) angiography for pulmonary segmentectomy. This study enrolled 20 patients who underwent contrast-enhanced CT examination for pulmonary segmentectomy. The optimal image reconstruction algorithm among four different reconstruction algorithms (filtered back projection, hybrid iterative reconstruction, model-based iterative reconstruction, and deep learning reconstruction [DLR]) was investigated by assessing the CT numbers, vessel extraction ratios, and misclassification ratios. The vessel extraction ratios for main and subsegment branches reconstructed using DLR were significantly higher than those using the other reconstruction algorithms (96.7% and 90.8% for pulmonary artery and vein, respectively). The misclassification ratios at the right upper lobe pulmonary vessels (V1 and V2) were especially high because they were close to the superior vena cava, and their CT numbers were similar in all four reconstructions. In conclusion, DLR allows a high extraction rate of pulmonary blood vessels and a low misclassification rate in automatic extraction.

  • Article type: Journal Article
    OBJECTIVE: Current conventional algorithms used for 3-dimensional simulation in virtual hepatectomy still have difficulties distinguishing the portal vein (PV) and hepatic vein (HV). The accuracy of these algorithms was compared with a new deep-learning based algorithm (DLA) using artificial intelligence.
    METHODS: A total of 110 living liver donor candidates until 2017, and 46 donor candidates until 2019, were allocated to the training and validation groups for the DLA, respectively. All PV or HV branches were labeled based on Couinaud's segment classification and the Brisbane 2000 Terminology by hepato-biliary surgeons. Misclassified and missing branches were compared between a conventional tracking-based algorithm (TA) and the DLA in the validation group.
    RESULTS: The sensitivity, specificity, and Dice coefficient for the PV were 0.58, 0.98, and 0.69 using the TA, and 0.84, 0.97, and 0.90 using the DLA (P < .001, excluding specificity); for the HV, they were 0.81, 0.87, and 0.83 using the TA, and 0.93, 0.94, and 0.94 using the DLA (P < .001 to P = .001). The DLA exhibited greater accuracy than the TA.
    CONCLUSIONS: Compared with the TA, artificial intelligence enhanced the accuracy of extraction of the PV and HVs in computed tomography.
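    The three accuracy measures compared above (sensitivity, specificity, and the Dice coefficient) can be sketched for binary masks as below; the toy arrays are invented and are not the study's CT data.

```python
import numpy as np

def seg_scores(pred, gt):
    """Sensitivity, specificity, and Dice coefficient for binary vessel masks."""
    p, g = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(p, g).sum()    # correctly extracted voxels
    tn = np.logical_and(~p, ~g).sum()  # correctly rejected voxels
    fp = np.logical_and(p, ~g).sum()   # false extractions
    fn = np.logical_and(~p, g).sum()   # missed voxels
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    dice = 2 * tp / (2 * tp + fp + fn)
    return float(sensitivity), float(specificity), float(dice)

# Toy binary masks standing in for labeled PV voxels.
gt   = np.array([1, 1, 1, 1, 0, 0, 0, 0])
pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
print(seg_scores(pred, gt))  # (0.75, 0.75, 0.75)
```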
