Pre-processing

  • Article type: Journal Article
    Lung cancer (LC) continues to be a leading cause of death in China, primarily due to late diagnosis. This study aimed to evaluate the effectiveness of plasma-based near-infrared spectroscopy (NIRS) for the early diagnosis of LC. A total of 171 plasma samples were collected, including 73 healthy controls (HC), 73 LC, and 25 benign lung tumors (B). NIRS was used to measure the spectra of the samples. Pre-processing methods, including centering and scaling, standard normal variate, multiplicative scatter correction, Savitzky-Golay smoothing, Savitzky-Golay first derivative, and baseline correction, were applied. Subsequently, 4 machine learning (ML) algorithms, namely partial least squares (PLS), support vector machines (SVM), gradient boosting machine, and random forest, were used to develop diagnostic models on the training set. The predictive performance of each model was then evaluated on the test set. The study comprised 5 comparisons: LC vs. HC, LC vs. B, B vs. HC, the diseased group (D) vs. HC, and LC vs. B vs. HC. Across the 5 comparisons, SVM paired with a suitable pre-processing method consistently delivered the best performance, achieving an overall accuracy of 1.0 (kappa: 1.0) for LC vs. HC, B vs. HC, and D vs. HC. Pre-processing was identified as a crucial step in developing the ML models. Interestingly, PLS showed remarkable stability and relatively high predictive performance across the 5 comparisons, even though it did not reach the top results achieved by SVM. However, none of these algorithms could effectively distinguish B from LC. These findings indicate that plasma-based NIRS combined with ML algorithms is a rapid, non-invasive, effective, and economical method for the early diagnosis of LC.
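    As a rough illustration of the kind of pipeline this abstract describes, the sketch below pairs one of the listed pre-processing steps (standard normal variate) with an SVM classifier; the data matrix `X`, labels `y`, and all parameter choices are assumptions, not the authors' settings.

```python
# Rough sketch (not the authors' code): SNV pre-processing + SVM classifier.
# Assumes X is an (n_samples, n_wavelengths) NIR matrix and y holds labels
# such as "LC" / "HC"; kernel and split settings are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, cohen_kappa_score

def snv(X):
    """Standard normal variate: centre and scale each spectrum individually."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

def evaluate_svm(X, y):
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)
    model = SVC(kernel="rbf", C=1.0, gamma="scale")
    model.fit(snv(X_train), y_train)
    y_pred = model.predict(snv(X_test))
    return accuracy_score(y_test, y_pred), cohen_kappa_score(y_test, y_pred)
```

    Other pairings from the abstract, for example Savitzky-Golay smoothing followed by PLS, would slot into the same skeleton by swapping the pre-processing function and the estimator.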

  • Article type: Journal Article
    In this study, to enable sharing of a near-infrared holocellulose analysis model among three spectral instruments of the same type, 84 pulp samples and their holocellulose contents were taken as the research objects. The effects of 10 pre-processing methods, such as 1st derivative (D1st), 2nd derivative (D2nd), multiplicative scatter correction (MSC), standard normal variate transformation (SNV), autoscaling, normalization, mean centering, and their pairwise combinations, on the transfer performance of the stable wavelengths selected by screening wavelengths with consistent and stable signals (SWCSS) were discussed. The results showed that the model built on the wavelengths selected by the SWCSS algorithm after autoscaling pre-processing gave the best analytical results for the two target instruments. The root mean square error of prediction (RMSEP) decreased from 2.4769 and 2.3119 before model transfer to 1.2563 and 1.2384, respectively. Compared with the full-spectrum model, the AIC value decreased from 3209.83 to 942.82. Therefore, autoscaling pre-processing combined with the SWCSS algorithm can significantly improve the accuracy and efficiency of model transfer and supports the application of the SWCSS algorithm to the rapid determination of pulp properties by near-infrared spectroscopy (NIRS).
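    A minimal sketch of the autoscaling step and a wavelength-restricted calibration model follows, assuming the SWCSS-selected wavelength indices are already available as `stable_idx`; SWCSS itself and the instrument-transfer protocol are not reproduced here.

```python
# Sketch only: autoscaling (column mean-centring + unit variance) and a PLS
# model restricted to pre-selected "stable" wavelengths; stable_idx stands in
# for the output of SWCSS, which is not implemented here.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

def autoscale(X, mean=None, std=None):
    X = np.asarray(X, dtype=float)
    mean = X.mean(axis=0) if mean is None else mean
    std = X.std(axis=0) if std is None else std
    return (X - mean) / std, mean, std

def transfer_model(X_master, y_master, X_target, y_target, stable_idx, n_comp=8):
    Xm, mu, sd = autoscale(X_master[:, stable_idx])
    pls = PLSRegression(n_components=n_comp).fit(Xm, y_master)
    Xt, _, _ = autoscale(X_target[:, stable_idx], mu, sd)  # reuse master statistics
    rmsep = mean_squared_error(y_target, pls.predict(Xt).ravel()) ** 0.5
    return pls, rmsep
```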

  • Article type: Journal Article
    BACKGROUND: Variations in plasma properties among spectra and samples lead to significant signal uncertainty and matrix effects in laser-induced breakdown spectroscopy (LIBS). To address this issue, direct compensation for plasma property variations is considered highly desirable. However, reliably compensating for the total number density variation is challenging due to inaccurate spectroscopic parameters. For reliable compensation, a total number density compensation (TNDC) method was presented in our recent work, but its applicability is limited to simple samples because of its strict assumptions. In this study, we propose a new pre-processing method, namely extended TNDC (ETNDC), to reduce signal uncertainty and matrix effects in the more complex analytical task of uranium determination.
    RESULTS: ETNDC reflects the total number density variation with a weighted combination of spectral lines from all major elements and incorporates temperature and electron density compensation into the weighting coefficients. The method is evaluated on yellow cake samples and combined with regression models for uranium determination. Using the typical validation set and line combination, the mean relative standard deviation (RSD) of U II 417.159 nm in validation samples decreases from 4.92% to 2.27%, and the root mean square error of prediction (RMSEP) and the mean RSD of prediction results decrease from 4.81% to 1.93% and from 1.92% to 1.56%, respectively. Furthermore, the results of 10 validation sets and 216 line combinations show that ETNDC outperforms baseline methods in terms of average performance and robustness.
    CONCLUSIONS: For the first time, ETNDC explicitly addresses the temperature and electron density variations while compensating for the total number density variation, where reliance on inaccurate spectroscopic parameters is avoided by fitting the related quantities using concentration information. The method demonstrates effective and robust improvement in signal repeatability and analytical performance in uranium determination, facilitating accurate quantification with the LIBS technique.
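    The core compensation idea can be sketched as below: the analyte line is divided by a weighted combination of major-element lines. The weights `w` are assumed to be already fitted; in the paper they additionally encode the temperature and electron-density compensation, which is not reproduced here.

```python
# Sketch of the compensation idea only (not the published ETNDC code): divide
# the analyte line by a weighted combination of major-element lines so that
# shot-to-shot changes in total number density cancel. The weights w are
# assumed to be already fitted.
import numpy as np

def etndc_like_normalize(analyte_line, major_lines, w):
    """analyte_line: (n_shots,) intensities of the analyte line.
    major_lines: (n_shots, n_lines) intensities of major-element lines.
    w: (n_lines,) weighting coefficients."""
    reference = np.asarray(major_lines, dtype=float) @ np.asarray(w, dtype=float)
    return np.asarray(analyte_line, dtype=float) / reference

def rsd_percent(signal):
    """Relative standard deviation in %, used to judge repeatability."""
    signal = np.asarray(signal, dtype=float)
    return 100.0 * signal.std() / signal.mean()
```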

  • Article type: Journal Article
    Prior surface-enhanced Raman spectroscopy (SERS) research has shown that pre-processing is necessary before analysis. Pre-processing typically serves the dual purposes of removing the auto-fluorescence background and minimizing data volatility, allowing a more accurate comparison of spectral traits and relative SERS peak intensities. However, because there are so many different kinds of samples, pre-processing can take a long time, and there is no assurance that the chosen approach will work well with a particular kind of sample. Therefore, this study employed a deep learning technique, the multi-layer perceptron (MLP), to simplify the pre-processing of blood plasma SERS samples from patients with prostate cancer (PC), as well as to enhance the sensitivity and specificity of diagnosis using SERS technology. First, significant variations in peak intensity can be observed in the difference spectra, facilitating differentiation between the PC and normal groups. Second, the data analysis was carried out at three different stages (raw data, defluorescenced data, and normalized data) using principal component analysis with linear discriminant analysis (PCA-LDA), as well as PCA with a multi-layer perceptron (PCA-MLP). Finally, when the SERS data were analyzed using PCA-LDA, there were significant differences in classification accuracy across the stages (76.90%, 85.60%, and 95.20% for the three stages, respectively). However, when PCA-MLP was used, the classification accuracy remained consistently high and stable (92.00%, 92.40%, and 96.70%, respectively). These results indicate that classifying raw SERS data directly with PCA-MLP can simplify the experimental process and enhance the efficacy of SERS analysis.
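    A minimal sketch of the two pipelines being compared, PCA-LDA and PCA-MLP, applied to a SERS spectral matrix `X` with labels `y`; the component count and MLP settings are placeholders rather than the study's configuration.

```python
# Sketch only: the two pipelines compared in the abstract, applied to a SERS
# matrix X (n_samples, n_channels) and labels y; the number of principal
# components and the MLP architecture are placeholders.
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def compare_pipelines(X, y, n_components=20):
    pca_lda = make_pipeline(PCA(n_components=n_components),
                            LinearDiscriminantAnalysis())
    pca_mlp = make_pipeline(PCA(n_components=n_components),
                            MLPClassifier(hidden_layer_sizes=(64,),
                                          max_iter=2000, random_state=0))
    return {name: cross_val_score(model, X, y, cv=5).mean()
            for name, model in [("PCA-LDA", pca_lda), ("PCA-MLP", pca_mlp)]}
```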

  • Article type: Journal Article
    BACKGROUND: Glaucoma can cause irreversible loss of eyesight. Since there are no symptoms in its early stage, accurately segmenting the optic disc (OD) and optic cup (OC) from fundus medical images is particularly important for the screening and prevention of glaucoma. In recent years, the mainstream approach to OD and OC segmentation has been the convolutional neural network (CNN). However, most existing CNN methods segment OD and OC separately and ignore the a priori information that the OC is always contained inside the OD region, which limits their segmentation accuracy.
    METHODS: This paper proposes a new encoder-decoder segmentation structure, called RSAP-Net, for joint segmentation of OD and OC. We first designed an efficient U-shaped segmentation network as the backbone. Considering the spatial overlap between OD and OC, a new residual spatial attention path is proposed to connect the encoder and decoder and retain more feature information. To further improve segmentation performance, a pre-processing method called MSRCR-PT (Multi-Scale Retinex Colour Recovery and Polar Transformation) has been devised. It combines a multi-scale Retinex colour recovery algorithm with a polar coordinate transformation, which helps RSAP-Net produce more refined boundaries of the optic disc and optic cup.
    RESULTS: The experimental results show that our method achieves excellent segmentation performance on the Drishti-GS1 standard dataset. For OD and OC segmentation, the F1 scores are 0.9752 and 0.9012, and the BLE values are 6.33 pixels and 11.97 pixels, respectively.
    CONCLUSIONS: This paper presents RSAP-Net, a new framework for the joint segmentation of optic discs and optic cups. The framework mainly consists of a U-shaped segmentation backbone and a residual spatial attention path module. A pre-processing method called MSRCR-PT, designed for the OD/OC segmentation task, further improves segmentation performance. The method was evaluated on the publicly available Drishti-GS1 standard dataset and proved to be effective.
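    A minimal sketch of the polar-transformation half of MSRCR-PT follows (the multi-scale Retinex colour-recovery step is omitted), using OpenCV's warpPolar; the disc centre, radius, and output size are assumed inputs rather than values from the paper.

```python
# Sketch of the polar-transformation step only (the MSRCR colour-recovery part
# is omitted); centre, radius, and output size are assumed inputs.
import cv2

def to_polar(image, center, radius, out_size=(512, 512)):
    """Unroll a fundus crop around the optic-disc centre so the roughly
    circular OD/OC boundaries become near-straight lines."""
    return cv2.warpPolar(image, out_size, center, radius, cv2.WARP_POLAR_LINEAR)

def to_cartesian(polar_image, center, radius, out_size):
    """Map a predicted mask back to the original fundus frame."""
    return cv2.warpPolar(polar_image, out_size, center, radius,
                         cv2.WARP_POLAR_LINEAR | cv2.WARP_INVERSE_MAP)
```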

  • Article type: Journal Article
    The Metaverse, which is anticipated to be the future of the internet, is a 3D virtual world in which users interact via highly customizable computer avatars. It is considerably promising for several industries, including gaming, education, and business. However, it still has drawbacks, particularly in the areas of privacy and identity. When a person joins the Metaverse via virtual reality (VR) human-robot equipment, their avatar, digital assets, and private information may be compromised by cybercriminals. This paper introduces a finger vein recognition approach for the VR human-robot equipment of the Metaverse to prevent others from misappropriating it. The finger vein is a biometric feature hidden beneath the skin. Since it is difficult to imitate, it is considerably more secure for person verification than other hand-based biometric characteristics such as fingerprints and palm prints. Most conventional finger vein recognition systems that use hand-crafted features are ineffective, especially for images with low quality, low contrast, scale variation, translation, and rotation. Deep learning methods have been demonstrated to be more successful than traditional methods in computer vision. This paper develops a finger vein recognition system based on a convolutional neural network and an anti-aliasing technique. We employ a contrast image enhancement algorithm in the pre-processing step to improve the performance of the system. The proposed approach is evaluated on three publicly available finger vein datasets. Experimental results show that our proposed method outperforms the current state-of-the-art methods, achieving 97.66% accuracy on the FVUSM dataset, 99.94% accuracy on the SDUMLA dataset, and 88.19% accuracy on the THUFV2 dataset.
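    A minimal sketch of the pre-processing stage only is given below; CLAHE stands in for the unspecified contrast-enhancement algorithm, and a Gaussian low-pass before downsampling illustrates the anti-aliasing idea. It is not the paper's network or exact pipeline.

```python
# Sketch of the pre-processing stage only; CLAHE stands in for the contrast
# enhancement step and a Gaussian low-pass before resizing illustrates
# anti-aliased downsampling. File path and sizes are placeholders.
import cv2

def preprocess_vein_image(path, clip_limit=2.0, tile=(8, 8), out_size=(128, 128)):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)       # 8-bit grayscale input
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    enhanced = clahe.apply(img)
    blurred = cv2.GaussianBlur(enhanced, (5, 5), 1.0)  # low-pass before resize
    return cv2.resize(blurred, out_size, interpolation=cv2.INTER_AREA)
```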

  • Article type: Journal Article
    High signal uncertainty has been regarded as a critical obstacle for the quantitative analysis of laser-induced breakdown spectroscopy (LIBS). One of the most effective ways to reduce uncertainty is to directly compensate for the variation of plasma properties, especially the total number density. However, reliable compensation for the variation of total number density is hard to implement. In this work, we propose a data pre-processing method, called total number density compensation (TNDC), to reduce signal uncertainty. It is built on an assumption extended from the internal standard method and uses a weighted sum of emission lines from all major elements to reflect the variation of total number density. The TNDC method is tested on 29 brass samples and outperforms common normalization methods based on the spectral area in terms of signal repeatability and analytical performance. For Cu, the mean pulse-to-pulse relative standard deviation (RSD) of the signal decreases from 5.10% to 1.03%, which is close to the best signal repeatability LIBS can achieve and is comparable to that of ICP-OES. The root mean square error of prediction (RMSEP) and the mean RSD of prediction decrease from 6.56% to 0.60% and from 12.00% to 1.03%, respectively. For Zn, the mean RSD of the signal improves from 6.43% to 4.12%, and the RMSEP is reduced from 1.57% to 0.59%, with the RSD of prediction improving from 5.41% to 4.18%. The results demonstrate that TNDC can be an effective pre-processing method for LIBS analysis, especially for improving repeatability.
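    A minimal sketch of the repeatability comparison reported here: pulse-to-pulse RSD of an analyte channel after plain spectral-area normalization versus after a TNDC-style weighted sum of major-element lines. The weights are assumed given, since fitting them is the method's actual contribution and is not shown.

```python
# Sketch of the repeatability comparison: spectral-area normalization versus a
# TNDC-style weighted-sum reference built from major-element lines (weights
# assumed already fitted).
import numpy as np

def rsd_percent(x):
    x = np.asarray(x, dtype=float)
    return 100.0 * x.std() / x.mean()

def compare_normalizations(spectra, analyte_idx, major_idx, weights):
    """spectra: (n_pulses, n_channels) array of single-shot LIBS spectra."""
    spectra = np.asarray(spectra, dtype=float)
    analyte = spectra[:, analyte_idx]
    area_norm = analyte / spectra.sum(axis=1)   # spectral-area reference
    tndc_norm = analyte / (spectra[:, major_idx] @ np.asarray(weights, dtype=float))
    return {"area RSD %": rsd_percent(area_norm),
            "TNDC RSD %": rsd_percent(tndc_norm)}
```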

  • Article type: Journal Article
    Magnetoencephalography (MEG) allows for quantifying modulations of human neuronal activity on a millisecond time scale while also making it possible to estimate the location of the underlying neuronal sources. The technique relies heavily on signal processing and source modelling, and several open-source toolboxes have been developed by the community for this purpose. While these toolboxes are powerful, as they provide a wealth of options for analyses, the many options also pose a challenge for reproducible research as well as for researchers new to the field. The FLUX pipeline aims to make the analysis steps and settings explicit for standard analyses in cognitive neuroscience. It focuses on the quantification and source localization of oscillatory brain activity, but it can also be used for event-related fields and multivariate pattern analysis. The pipeline is derived from the Cogitate consortium, which addresses a set of concrete cognitive neuroscience questions. Specifically, the pipeline, including documented code, is defined for MNE Python (a Python toolbox) and FieldTrip (a Matlab toolbox), and a data set on visuospatial attention is used to illustrate the steps. The scripts are provided as notebooks implemented in Jupyter Notebook and MATLAB Live Editor, providing explanations, justifications, and graphical outputs for the essential steps. Furthermore, we provide suggestions for text and parameter settings to be used in registrations and publications to improve replicability and facilitate pre-registrations. FLUX can be used for education, either in self-study or in guided workshops. We expect that the FLUX pipeline will strengthen the field of MEG by providing some standardization of the basic analysis steps and by aligning approaches across toolboxes. Furthermore, we aim to support new researchers entering the field by providing education and training. The FLUX pipeline is not meant to be static; it will evolve with the development of the toolboxes and with new insights. With the anticipated increase in MEG systems based on optically pumped magnetometers, the pipeline will also evolve to embrace these developments.
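    A minimal illustration, not the FLUX scripts themselves, of the kind of oscillatory-power step the pipeline documents in MNE Python; the file name, event code, and frequency band are placeholders.

```python
# Illustration only (not the FLUX scripts): a typical oscillatory-power step in
# MNE Python. File name, event code, and frequency band are placeholders.
import numpy as np
import mne

raw = mne.io.read_raw_fif("sample_task_raw.fif", preload=True)
raw.filter(l_freq=1.0, h_freq=100.0)                  # broadband pre-processing
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id={"cue_left": 1},
                    tmin=-0.5, tmax=1.5, baseline=(None, 0), preload=True)
freqs = np.arange(8, 31, 2)                           # alpha/beta range
power = mne.time_frequency.tfr_morlet(epochs, freqs=freqs,
                                      n_cycles=freqs / 2.0, return_itc=False)
power.plot_topo(baseline=(-0.5, 0), mode="percent")
```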

  • Article type: Journal Article
    Because of its high value, extra virgin olive oil (EVOO) is frequently blended with inferior vegetable oils. This study presents an optical method for determining the adulteration level of EVOO with soybean oil and peanut oil using LED-induced fluorescence spectroscopy. Eight LEDs with central wavelengths from ultraviolet (UV) to blue are tested to induce the fluorescence spectra of EVOO, peanut oil, and soybean oil, and the 372 nm UV LED is selected for further detection. Samples are prepared by mixing olive oil with different volume fractions of peanut or soybean oil, and their fluorescence spectra are collected. Different pre-processing and regression methods are used to build the prediction model, and good linearity is obtained between the predicted and actual adulteration concentrations. This result, together with the non-destructive and pre-treatment-free character of the measurement, shows that LED-induced fluorescence spectroscopy is a feasible way to investigate the EVOO adulteration level and paves the way for building a handheld device that can be applied to real market conditions in the future.
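    A minimal sketch of one plausible modelling route, using PLS regression as a stand-in for the unspecified pre-processing and regression methods; the data arrays, split, and component count are assumptions.

```python
# Sketch of one plausible modelling route (not the paper's exact models): PLS
# regression from fluorescence spectra to adulterant volume fraction.
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

def fit_adulteration_model(spectra, fractions, n_components=6):
    """spectra: (n_samples, n_channels) spectra under the 372 nm LED;
    fractions: (n_samples,) adulterant volume fraction."""
    X_train, X_test, y_train, y_test = train_test_split(
        spectra, fractions, test_size=0.3, random_state=0)
    pls = PLSRegression(n_components=n_components).fit(X_train, y_train)
    y_pred = pls.predict(X_test).ravel()
    return pls, r2_score(y_test, y_pred)
```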

  • Article type: Journal Article
    The over-exploitation of wild resources has raised growing concern over the quality of wild medicinal plants, making it necessary to develop a rapid method for their evaluation. In this study, the content of total secoiridoids (gentiopicroside, swertiamarin, and sweroside) in Gentiana rigescens from 37 regions in southwest China was analyzed by high-performance liquid chromatography (HPLC). Furthermore, Fourier transform infrared (FT-IR) spectroscopy was adopted to trace the geographical origin (331 individuals) and predict the content of total secoiridoids (273 individuals). In traditional FT-IR analysis, only one scatter correction technique can be selected from a series of preprocessing candidates to decrease the impact of light scattering. Nevertheless, different scatter correction techniques may carry complementary information, so using a single scatter correction technique is sub-optimal. Hence, an emerging ensemble approach to preprocessing fusion, sequential preprocessing through orthogonalization (SPORT), was applied to fuse the complementary information associated with the different preprocessing methods. The results suggested that, compared with the best results obtained with single scatter correction modelling, SPORT increased the accuracy of the test set by 12.8% in the qualitative analysis and decreased the RMSEP by 66.7% in the quantitative analysis.
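    A deliberately simplified stand-in for the idea behind SPORT, not the sequential-orthogonalization algorithm itself: differently scatter-corrected copies of the spectra are treated as complementary blocks and, here, simply concatenated into one PLS model.

```python
# Simplified stand-in for the SPORT idea (not SO-PLS-based SPORT itself):
# fuse complementary scatter corrections by concatenating the corrected
# copies of the spectra before fitting a single PLS model.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def snv(X):
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

def msc(X, reference=None):
    X = np.asarray(X, dtype=float)
    ref = X.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(X)
    for i, row in enumerate(X):
        slope, intercept = np.polyfit(ref, row, 1)   # fit each row against the reference
        corrected[i] = (row - intercept) / slope
    return corrected

def fused_preprocessing_model(X, y, n_components=8):
    X_fused = np.hstack([snv(X), msc(X)])            # complementary pre-processing blocks
    return PLSRegression(n_components=n_components).fit(X_fused, y)
```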