Pre-processing

  • Article type: Journal Article
    BACKGROUND: Spectral data from multiple sources can be integrated into multi-block fusion chemometric models, such as sequentially orthogonalized partial-least squares (SO-PLS), to improve the prediction of sample quality features. Pre-processing techniques are often applied to mitigate extraneous variability, unrelated to the response variables. However, the selection of suitable pre-processing methods and identification of informative data blocks becomes increasingly complex and time-consuming when dealing with a large number of blocks. The problem addressed in this work is the efficient pre-processing, selection, and ordering of data blocks for targeted applications in SO-PLS.
    RESULTS: We introduce the PROSAC-SO-PLS methodology, which employs pre-processing ensembles with response-oriented sequential alternation calibration (PROSAC). This approach identifies the best pre-processed data blocks and their sequential order for specific SO-PLS applications. The method uses a stepwise forward selection strategy, facilitated by the rapid Gram-Schmidt process, to prioritize blocks based on their effectiveness in minimizing prediction error, as indicated by the lowest prediction residuals. To validate the efficacy of our approach, we showcase the outcomes of three empirical near-infrared (NIR) datasets. Comparative analyses were performed against partial-least-squares (PLS) regressions on single-block pre-processed datasets and a methodology relying solely on PROSAC. The PROSAC-SO-PLS approach consistently outperformed these methods, yielding significantly lower prediction errors. This has been evidenced by a reduction in the root-mean-squared error of prediction (RMSEP) ranging from 5 to 25 % across seven out of the eight response variables analyzed.
    CONCLUSIONS: The PROSAC-SO-PLS methodology offers a versatile and efficient technique for ensemble pre-processing in NIR data modeling. It enables the use of SO-PLS minimizing concerns about pre-processing sequence or block order and effectively manages a large number of data blocks. This innovation significantly streamlines the data pre-processing and model-building processes, enhancing the accuracy and efficiency of chemometric models.
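The core of SO-PLS is the sequential orthogonalization of each newly added data block against the blocks already in the model. As an illustration only (toy data and a plain least-squares projection, not the authors' PROSAC implementation), the deflation step can be sketched as:

```python
import numpy as np

def orthogonalize_block(X_new, X_prev):
    """Remove from X_new the part explained by the column space of X_prev --
    the Gram-Schmidt-style deflation underlying sequential orthogonalization."""
    # Least-squares projection of X_new onto the column space of X_prev
    coeffs, *_ = np.linalg.lstsq(X_prev, X_new, rcond=None)
    return X_new - X_prev @ coeffs

rng = np.random.default_rng(0)
X1 = rng.normal(size=(20, 5))                                  # already-selected block
X2 = rng.normal(size=(20, 8)) + X1 @ rng.normal(size=(5, 8))   # overlaps with X1
X2_orth = orthogonalize_block(X2, X1)

# After deflation, X2_orth shares no information with X1
print(np.abs(X1.T @ X2_orth).max())  # ~0 up to numerical precision
```

In SO-PLS the deflation is done against the scores of the previously fitted blocks; the projection above shows the orthogonalization idea itself.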

  • Article type: Journal Article
    Lung cancer (LC) remains a leading cause of death in China, primarily due to late diagnosis. This study aimed to evaluate the effectiveness of using plasma-based near-infrared spectroscopy (NIRS) for the early diagnosis of LC. A total of 171 plasma samples were collected, including 73 healthy controls (HC), 73 LC, and 25 benign lung tumors (B). NIRS was used to measure the spectra of the samples. Pre-processing methods, including centering and scaling, standard normal variate, multiplicative scatter correction, Savitzky-Golay smoothing, Savitzky-Golay first derivative, and baseline correction, were applied. Subsequently, 4 machine learning (ML) algorithms, including partial least squares (PLS), support vector machines (SVM), gradient boosting machine, and random forest, were used to develop diagnostic models on the training-set data. The predictive performance of each model was then evaluated on the test-set samples. The study comprised 5 comparisons: LC vs. HC, LC vs. B, B vs. HC, the diseased group (D) vs. HC, and LC vs. B vs. HC. Across the 5 comparisons, SVM consistently gave the best performance with a suitable pre-processing method, achieving an overall accuracy of 1.0 (kappa: 1.0) in the comparisons of LC vs. HC, B vs. HC, and D vs. HC. Pre-processing was identified as a crucial step in developing ML models. Interestingly, PLS demonstrated remarkable stability and relatively high predictive performance across the 5 comparisons, even though it did not reach the top results that SVM achieved. However, none of these algorithms could effectively distinguish B from LC. These findings indicate that the combination of plasma-based NIRS with ML algorithms is a rapid, non-invasive, effective, and economical method for the early diagnosis of LC.
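Two of the pre-processing methods listed above, standard normal variate and the Savitzky-Golay first derivative, can be sketched with numpy/scipy (synthetic spectra and illustrative window/order parameters, not the study's settings):

```python
import numpy as np
from scipy.signal import savgol_filter

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum individually."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

rng = np.random.default_rng(1)
# Toy spectra (4 samples x 101 wavelengths) with an additive baseline drift
spectra = rng.normal(size=(4, 101)) + np.linspace(0, 5, 101)

corrected = snv(spectra)
# Savitzky-Golay first derivative along the wavelength axis
deriv = savgol_filter(corrected, window_length=11, polyorder=2, deriv=1, axis=1)

print(corrected.mean(axis=1))  # each spectrum now has zero mean
```

SNV removes per-sample offset and scale effects, while the derivative suppresses slowly varying baselines; which combination works best is dataset-dependent, as the study's comparisons show.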

  • Article type: Journal Article
    Neuromarketing is an emerging research field that aims to understand consumers' decision-making processes when choosing which product to buy. This information is highly sought after by businesses looking to improve their marketing strategies by understanding what leaves a positive or negative impression on consumers. It has the potential to revolutionize the marketing industry by enabling companies to offer engaging experiences, create more effective advertisements, avoid the wrong marketing strategies, and ultimately save millions of dollars for businesses. Therefore, good documentation is necessary to capture the current research situation in this vital sector. In this article, we present a systematic review of EEG-based Neuromarketing. We aim to shed light on the research trends, technical scopes, and potential opportunities in this field. We reviewed recent publications from valid databases and divided the popular research topics in Neuromarketing into five clusters to present the current research trend in this field. We also discuss the brain regions that are activated when making purchase decisions and their relevance to Neuromarketing applications. The article provides appropriate illustrations of marketing stimuli that can elicit authentic impressions from consumers' minds, the techniques used to process and analyze recorded brain data, and the current strategies employed to interpret the data. Finally, we offer recommendations to upcoming researchers to help them investigate the possibilities in this area more efficiently in the future.

  • Article type: Journal Article
    Studies of intracranial EEG networks have been used to reveal seizure generators in patients with drug-resistant epilepsy. Intracranial EEG is implanted to capture the epileptic network, the collection of brain tissue that forms a substrate for seizures to start and spread. Interictal intracranial EEG measures brain activity at baseline, and networks computed during this state can reveal aberrant brain tissue without requiring seizure recordings. Intracranial EEG network analyses require choosing a reference and applying statistical measures of functional connectivity. Approaches to these technical choices vary widely across studies, and the impact of these technical choices on downstream analyses is poorly understood. Our objective was to examine the effects of different re-referencing and connectivity approaches on connectivity results and on the ability to lateralize the seizure onset zone in patients with drug-resistant epilepsy. We applied 48 pre-processing pipelines to a cohort of 125 patients with drug-resistant epilepsy recorded with interictal intracranial EEG across two epilepsy centres to generate intracranial EEG functional connectivity networks. Twenty-four functional connectivity measures across time and frequency domains were applied in combination with common average re-referencing or bipolar re-referencing. We applied an unsupervised clustering algorithm to identify groups of pre-processing pipelines. We subjected each pre-processing approach to three quality tests: (i) the introduction of spurious correlations; (ii) robustness to incomplete spatial sampling; and (iii) the ability to lateralize the clinician-defined seizure onset zone. Three groups of similar pre-processing pipelines emerged: common average re-referencing pipelines, bipolar re-referencing pipelines and relative entropy-based connectivity pipelines. 
Relative entropy and common average re-referencing networks were more robust to incomplete electrode sampling than bipolar re-referencing and other connectivity methods (Friedman test, Dunn-Šidák test P < 0.0001). Bipolar re-referencing reduced spurious correlations at non-adjacent channels better than common average re-referencing (Δ mean from machine ref = -0.36 versus -0.22) and worse in adjacent channels (Δ mean from machine ref = -0.14 versus -0.40). Relative entropy-based network measures lateralized the seizure onset hemisphere better than other measures in patients with temporal lobe epilepsy (Benjamini-Hochberg-corrected P < 0.05, Cohen's d: 0.60-0.76). Finally, we present an interface where users can rapidly evaluate intracranial EEG pre-processing choices to select the optimal pre-processing methods tailored to specific research questions. The choice of pre-processing methods affects downstream network analyses. Choosing a single method among highly correlated approaches can reduce redundancy in processing. Relative entropy outperforms other connectivity methods in multiple quality tests. We present a method and interface for researchers to optimize their pre-processing methods for deriving intracranial EEG brain networks.
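The two re-referencing schemes compared in the study can be illustrated in a few lines (toy data; the bipolar scheme here simply differences adjacent rows, whereas clinical pipelines pair adjacent contacts on the same electrode):

```python
import numpy as np

def common_average(eeg):
    """Common average re-reference: subtract the across-channel mean at each sample."""
    return eeg - eeg.mean(axis=0, keepdims=True)

def bipolar(eeg):
    """Bipolar re-reference: difference between consecutive channels
    (assumes rows are ordered by physical adjacency)."""
    return np.diff(eeg, axis=0)

rng = np.random.default_rng(2)
eeg = rng.normal(size=(8, 1000))   # channels x samples
car = common_average(eeg)
bip = bipolar(eeg)

print(car.shape, bip.shape)  # (8, 1000) (7, 1000)
```

Common average referencing preserves the channel count but injects a shared signal into every channel (a source of the spurious correlations measured above), while bipolar referencing loses one channel per contact chain but cancels signals common to neighbours.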

  • Article type: Journal Article
    OBJECTIVE: To determine the significance of complex-valued inputs and complex-valued convolutions compared to real-valued inputs and real-valued convolutions in convolutional neural networks (CNNs) for frequency and phase correction (FPC) of GABA-edited magnetic resonance spectroscopy (MRS) data.
    METHODS: An ablation study using simulated data was performed to determine the most effective input (real or complex) and convolution type (real or complex) to predict frequency and phase shifts in GABA-edited MEGA-PRESS data using CNNs. The best CNN model was subsequently compared using both simulated and in vivo data to two recently proposed deep learning (DL) methods for FPC of GABA-edited MRS. All methods were trained using the same experimental setup and evaluated using the signal-to-noise ratio (SNR) and linewidth of the GABA peak, choline artifact, and by visually assessing the reconstructed final difference spectrum. Statistical significance was assessed using the Wilcoxon signed rank test.
    RESULTS: The ablation study showed that using complex-valued inputs, represented as real and imaginary channels in the model's input tensor, together with complex convolutions was most effective for FPC. Overall, in the comparative study using simulated data, our CC-CNN model (which received complex-valued inputs and used complex convolutions) outperformed the other models, as evaluated by the mean absolute error.
    CONCLUSIONS: Our results indicate that the optimal CNN configuration for GABA-edited MRS FPC uses a complex-valued input and complex convolutions. Overall, this model outperformed existing DL models.
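The input representation and the complex convolution compared in the ablation study can be sketched in 1D numpy (an illustration, not the authors' CNN): a complex signal becomes a two-channel real array, and a complex convolution expands into four real convolutions.

```python
import numpy as np

def complex_to_channels(fid):
    """Represent a complex signal as a 2-channel real array (real, imaginary)."""
    return np.stack([fid.real, fid.imag], axis=0)

def complex_conv1d(x_re, x_im, k_re, k_im):
    """A complex convolution written as four real convolutions:
    (x_re + i*x_im) * (k_re + i*k_im)."""
    conv = lambda a, b: np.convolve(a, b, mode="same")
    return (conv(x_re, k_re) - conv(x_im, k_im),
            conv(x_re, k_im) + conv(x_im, k_re))

fid = np.exp(1j * np.linspace(0, 4 * np.pi, 64))          # toy complex FID
kernel = np.array([0.25 + 0.1j, 0.5 - 0.2j, 0.25 + 0.1j])  # toy complex kernel
x = complex_to_channels(fid)
y_re, y_im = complex_conv1d(x[0], x[1], kernel.real, kernel.imag)

# The four real convolutions reproduce the complex convolution exactly
print(np.allclose(y_re + 1j * y_im, np.convolve(fid, kernel, mode="same")))  # True
```

This is why "complex convolution" in a CNN is more than stacking real and imaginary channels: the cross-terms couple the two channels in a way a purely real convolution cannot.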

  • Article type: Journal Article
    As the number of electronic gadgets in our daily lives is increasing and most of them require some kind of human interaction, this demands innovative, convenient input methods. There are limitations to state-of-the-art (SotA) ultrasound-based hand gesture recognition (HGR) systems in terms of robustness and accuracy. This research presents a novel machine learning (ML)-based end-to-end solution for hand gesture recognition with low-cost micro-electromechanical (MEMS) system ultrasonic transducers. In contrast to prior methods, our ML model processes the raw echo samples directly instead of using pre-processed data. Consequently, the processing flow presented in this work leaves it to the ML model to extract the important information from the echo data. The success of this approach is demonstrated as follows. Four MEMS ultrasonic transducers are placed in three different geometrical arrangements. For each arrangement, different types of ML models are optimized and benchmarked on datasets acquired with the presented custom hardware (HW): convolutional neural networks (CNNs), gated recurrent units (GRUs), long short-term memory (LSTM), vision transformer (ViT), and cross-attention multi-scale vision transformer (CrossViT). The three last-mentioned ML models reached more than 88% accuracy. The most important innovation described in this research paper is that we were able to demonstrate that little pre-processing is necessary to obtain high accuracy in ultrasonic HGR for several arrangements of cost-effective and low-power MEMS ultrasonic transducer arrays. Even the computationally intensive Fourier transform can be omitted. The presented approach is further compared to HGR systems using other sensor types such as vision, WiFi, radar, and state-of-the-art ultrasound-based HGR systems. Direct processing of the sensor signals by a compact model makes ultrasonic hand gesture recognition a true low-cost and power-efficient input method.

  • Article type: Journal Article
    Melanoma is the most aggressive and prevalent form of skin cancer globally, with a higher incidence in men and individuals with fair skin. Early detection of melanoma is essential for the successful treatment and prevention of metastasis. In this context, deep learning methods, distinguished by their ability to perform automated and detailed analysis, extracting melanoma-specific features, have emerged. These approaches excel in performing large-scale analysis, optimizing time, and providing accurate diagnoses, contributing to timely treatments compared to conventional diagnostic methods. The present study offers a methodology to assess the effectiveness of an AlexNet-based convolutional neural network (CNN) in identifying early-stage melanomas. The model is trained on a balanced dataset of 10,605 dermoscopic images, and on modified datasets where hair, a potential obstructive factor, was detected and removed, allowing for an assessment of how hair removal affects the model's overall performance. To perform hair removal, we propose a morphological algorithm combined with different filtering techniques for comparison: Fourier, wavelet, average blur, and low-pass filters. The model is evaluated through 10-fold cross-validation and the metrics of accuracy, recall, precision, and the F1 score. The results demonstrate that the proposed model performs the best for the dataset where we implemented both a wavelet filter and the hair removal algorithm. It has an accuracy of 91.30%, a recall of 87%, a precision of 95.19%, and an F1 score of 90.91%.
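A generic morphological black-hat step of the kind commonly used for hair detection can be sketched with scipy (the structuring-element size and threshold below are illustrative choices, not the paper's algorithm):

```python
import numpy as np
from scipy import ndimage

def blackhat_hair_mask(gray, size=7, thresh=10):
    """Black-hat transform: grey closing minus the image highlights thin dark
    structures (such as hairs) narrower than the structuring element;
    thresholding the response yields a candidate removal mask."""
    closed = ndimage.grey_closing(gray, size=(size, size))
    return (closed - gray) > thresh

# Toy "skin" image: bright background with one thin dark hair-like stroke
img = np.full((32, 32), 200, dtype=np.int32)
img[:, 16] = 50
mask = blackhat_hair_mask(img)

print(mask[:, 16].all(), mask[:, 0].any())  # True False
```

In a full pipeline the masked pixels would then be inpainted (e.g. from neighbouring skin) before the image is passed to the CNN, with the chosen filter (wavelet, Fourier, etc.) applied as the comparison step.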

  • Article type: Journal Article
    In this study, in order to realize the sharing of the near-infrared analysis model of holocellulose between three spectral instruments of the same type, 84 pulp samples and their content of holocellulose were taken as the research objects. The effects of 10 pre-processing methods, such as 1st derivative (D1st), 2nd derivative (D2nd), multiplicative scatter correction (MSC), standard normal variable transformation (SNV), autoscaling, normalization, mean centering and pairwise combination, on the transfer effect of the stable wavelength selected by screening wavelengths with consistent and stable signals (SWCSS) were discussed. The results showed that the model established by the wavelength selected by the SWCSS algorithm after the autoscaling pre-processing method had the best analysis effect on the two target samples. Root mean square error of prediction (RMSEP) decreased from 2.4769 and 2.3119 before the model transfer to 1.2563 and 1.2384, respectively. Compared with the full-spectrum model, the value of AIC decreased from 3209.83 to 942.82. Therefore, the autoscaling pre-processing method combined with SWCSS algorithm can significantly improve the accuracy and efficiency of model transfer and provide help for the application of SWCSS algorithm in the rapid determination of pulp properties by near-infrared spectroscopy (NIRS).
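Autoscaling and the RMSEP criterion used to judge the transfer can be sketched as follows (toy data; in a transfer setting the calibration mean and standard deviation would be reused when transforming spectra from the target instruments):

```python
import numpy as np

def autoscale(X, mean=None, std=None):
    """Autoscaling: mean-centre each variable and scale it to unit variance.
    Pass stored calibration statistics to transform new (transferred) spectra."""
    if mean is None:
        mean, std = X.mean(axis=0), X.std(axis=0, ddof=1)
    return (X - mean) / std, mean, std

def rmsep(y_true, y_pred):
    """Root mean square error of prediction."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

rng = np.random.default_rng(3)
X_cal = rng.normal(loc=5.0, scale=2.0, size=(84, 50))  # 84 toy "pulp spectra"
X_scaled, mu, sigma = autoscale(X_cal)

print(rmsep([1.0, 2.0, 3.0], [1.5, 2.0, 2.5]))  # sqrt(1/6) ≈ 0.408
```

Reusing `mu` and `sigma` on target-instrument spectra is what makes the pre-processing part of the transfer rather than a per-instrument step.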

  • Article type: Journal Article
    This study investigated the impact of ComBat harmonization on the reproducibility of radiomic features extracted from magnetic resonance images (MRI) acquired on different scanners, using various data acquisition parameters and multiple image pre-processing techniques using a dedicated MRI phantom. Four scanners were used to acquire an MRI of a nonanatomic phantom as part of the TCIA RIDER database. In fast spin-echo inversion recovery (IR) sequences, several inversion durations were employed, including 50, 100, 250, 500, 750, 1000, 1500, 2000, 2500, and 3000 ms. In addition, a 3D fast spoiled gradient recalled echo (FSPGR) sequence was used to investigate several flip angles (FA): 2, 5, 10, 15, 20, 25, and 30 degrees. Nineteen phantom compartments were manually segmented. Different approaches were used to pre-process each image: Bin discretization, Wavelet filter, Laplacian of Gaussian, logarithm, square, square root, and gradient. Overall, 92 first-, second-, and higher-order statistical radiomic features were extracted. ComBat harmonization was also applied to the extracted radiomic features. Finally, the Intraclass Correlation Coefficient (ICC) and Kruskal-Wallis (KW) tests were implemented to assess the robustness of radiomic features. The number of non-significant features in the KW test ranged between 0-5 and 29-74 for various scanners, 31-91 and 37-92 for three repeated tests, 0-33 to 34-90 for FAs, and 3-68 to 65-89 for IRs before and after ComBat harmonization, with different image pre-processing techniques, respectively. The number of features with ICC over 90% ranged between 0-8 and 6-60 for various scanners, 11-75 and 17-80 for three repeated tests, 3-83 to 9-84 for FAs, and 3-49 to 3-63 for IRs before and after ComBat harmonization, with different image pre-processing techniques, respectively. The use of various scanners, IRs, and FAs has a great impact on radiomic features. However, the majority of scanner-robust features are also robust to IR and FA. Among the parameters examined, repeated tests on a single scanner have a negligible impact on radiomic features. Different scanners and acquisition parameters using various image pre-processing might affect radiomic features to a large extent. ComBat harmonization might significantly impact the reproducibility of MRI radiomic features.
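An ICC can be computed in several ways; one simple one-way random-effects form, ICC(1,1), is sketched below (the study's exact ICC variant is not specified in the abstract, so treat this as a representative formula):

```python
import numpy as np

def icc_1_1(data):
    """One-way random-effects ICC(1,1) for an (targets x measurements) array:
    (MSB - MSW) / (MSB + (k - 1) * MSW), from one-way ANOVA mean squares."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)
    msb = k * np.sum((row_means - grand) ** 2) / (n - 1)       # between-target
    msw = np.sum((data - row_means[:, None]) ** 2) / (n * (k - 1))  # within-target
    return (msb - msw) / (msb + (k - 1) * msw)

# A feature measured identically on two scanners is perfectly reproducible
perfect = np.column_stack([np.arange(10.0), np.arange(10.0)])
noisy = perfect + np.random.default_rng(4).normal(scale=0.5, size=perfect.shape)

print(round(icc_1_1(perfect), 3))  # 1.0
```

A feature is then called "robust" when its ICC exceeds a cutoff such as the 0.90 used in the study.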

  • Article type: Journal Article
    BACKGROUND: Diabetic retinopathy (DR) is a serious eye complication that results in permanent vision damage. As the number of patients suffering from DR increases, so do delays in DR diagnosis and treatment. To bridge this gap, an efficient DR screening system that assists clinicians is required. Although many artificial intelligence (AI) screening systems have been deployed in recent years, accuracy remains a metric that can be improved.
    METHODS: An enumerative pre-processing approach is implemented in the deep learning model to attain better accuracy for DR severity grading. The proposed approach is compared with various pre-trained models, and the necessary performance metrics are tabulated. This paper also presents a comparative analysis of the various optimization algorithms used in the deep network model and outlines the results.
    RESULTS: The experimental results are carried out on the MESSIDOR dataset to assess the performance. The experimental results show that an enumerative pipeline combination K1-K2-K3-DFNN-LOA shows better results when compared with other combinations. When compared with various optimization algorithms and pre-trained models, the proposed model has better performance with maximum accuracy, precision, recall, F1 score, and macro-averaged metric of 97.60%, 94.60%, 98.40%, 94.60%, and 0.97, respectively.
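The macro-averaged metric reported above weights every class equally, regardless of class size. A macro-averaged F1, for example, can be computed as follows (toy labels, not the MESSIDOR results):

```python
def macro_f1(y_true, y_pred, classes):
    """Macro-averaged F1: per-class precision/recall/F1, averaged with equal
    class weight (undefined ratios are counted as 0)."""
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 2, 2, 2]
print(round(macro_f1(y_true, y_pred, classes=[0, 1, 2]), 3))  # 0.822
```

For imbalanced DR severity grades, macro averaging prevents a model that only predicts the majority grade from scoring well.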
    CONCLUSIONS: This study focussed on developing and implementing a DR screening system on color fundus photographs. This artificial intelligence-based system offers the possibility to enhance the efficacy and approachability of DR diagnosis.
