resampling

  • Article type: Journal Article
    Many studies have demonstrated the importance of accurately identifying miRNA-disease associations (MDAs) for understanding disease mechanisms. However, the number of known MDAs is far smaller than the number of unknown pairs. Here, we propose RSANMDA, a subview attention network for predicting MDAs. We first extract miRNA and disease features from multiple similarity matrices. Next, using resampling techniques, we generate different subviews from the known MDAs. Each subview undergoes multi-head graph attention to capture its features, followed by semantic attention to integrate features across subviews. Finally, combining the raw and learned features, we use a multilayer scoring perceptron for prediction. In the experimental section, we conducted comparative experiments against other advanced models on both the HMDD v2.0 and HMDD v3.2 datasets. We also performed a series of ablation studies and parameter-tuning exercises. Comprehensive experiments conclusively demonstrate the superiority of our model. Case studies on lung, breast, and esophageal cancers further validate our method's predictive capability for identifying disease-related miRNAs.
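    The subview-generation step can be sketched as follows: each subview keeps a random subset of the known miRNA-disease association pairs. The subview count, the sampled fraction, and sampling without replacement are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def make_subviews(assoc, n_subviews=3, frac=0.8, seed=0):
    """Generate subviews by resampling the known association pairs:
    each subview is an adjacency matrix keeping a random subset of the
    1-entries of the full miRNA-disease matrix. The count, fraction,
    and sampling scheme here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    pairs = np.argwhere(assoc == 1)          # (miRNA, disease) index pairs
    k = int(frac * len(pairs))
    views = []
    for _ in range(n_subviews):
        idx = rng.choice(len(pairs), size=k, replace=False)
        view = np.zeros_like(assoc)
        view[pairs[idx, 0], pairs[idx, 1]] = 1
        views.append(view)
    return views

# Toy adjacency matrix: 20 miRNAs x 15 diseases, ~10% known associations.
assoc = (np.random.default_rng(1).random((20, 15)) < 0.1).astype(int)
views = make_subviews(assoc)
```

    Each subview would then be passed through its own graph attention module before the semantic attention combines them.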

  • Article type: Journal Article
    OBJECTIVE: The minimum sample size for multistakeholder Delphi surveys remains understudied. Drawing from three large international multistakeholder Delphi surveys, this study aimed to: 1) investigate the effect of increasing sample size on replicability of results; 2) assess whether the level of replicability of results differed with participant characteristics: for example, gender, age, and profession.
    METHODS: We used data from Delphi surveys conducted to develop guidance for improved reporting of health-care intervention trials: the SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) and CONSORT (Consolidated Standards of Reporting Trials) extension for surrogate end points (n = 175, 22 items rated); CONSORT-SPI, the CONSORT extension for social and psychological interventions (n = 333, 77 items rated); and a core outcome set for burn care (n = 553, 88 items rated). Resampling with replacement was used to draw random subsamples from the participant data set in each of the three surveys. For each subsample, the median value of all rated survey items was calculated and compared to the medians from the full participant data set. The median number (and interquartile range) of medians replicated was used to calculate the percentage replicability (and variability). High replicability was defined as ≥80% and moderate as ≥60% and <80%.
    RESULTS: The average median replicability (variability) as a percentage of the total number of items rated across the three datasets was 81% (10%) at a sample size of 60. In one of the datasets (CONSORT-SPI), ≥80% replicability was reached at a sample size of 80. On average, increasing the sample size from 80 to 160 increased the replicability of results by a further 3% and reduced variability by 1%. For subgroup analyses based on participant characteristics (eg, gender, age, professional role), resampled samples of 20 to 100 showed that a sample size of 20 to 30 resulted in moderate replicability levels of 64% to 77%.
    CONCLUSIONS: We found that a minimum sample size of 60-80 participants in multistakeholder Delphi surveys provides a high level of replicability (≥80%) in the results. For Delphi studies limited to individual stakeholder groups (such as researchers, clinicians, patients), a sample size of 20 to 30 per group may be sufficient.
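    The resampling procedure described above (draw subsamples with replacement, compare item medians to the full-sample medians, report the percentage replicated) can be sketched in a few lines. The toy rating scale and the use of the median across draws are assumptions for illustration:

```python
import random
import statistics

def replicability(ratings, subsample_size, n_draws=200, seed=0):
    """Percentage of item medians from resampled subsamples that
    replicate the medians of the full participant data set."""
    rng = random.Random(seed)
    n_items = len(ratings[0])
    full_medians = [statistics.median(r[i] for r in ratings) for i in range(n_items)]
    counts = []
    for _ in range(n_draws):
        # Resampling with replacement, as in the surveys described above.
        sub = [ratings[rng.randrange(len(ratings))] for _ in range(subsample_size)]
        sub_medians = [statistics.median(r[i] for r in sub) for i in range(n_items)]
        counts.append(sum(m == f for m, f in zip(sub_medians, full_medians)))
    # Median number of replicated medians, expressed as a percentage of items.
    return 100 * statistics.median(counts) / n_items

# Toy data: 100 participants rating 10 items on a 1-9 Likert-style scale.
gen = random.Random(1)
ratings = [[gen.randint(1, 9) for _ in range(10)] for _ in range(100)]
pct_at_60 = replicability(ratings, subsample_size=60)
```

    Running this across a grid of subsample sizes reproduces the kind of replicability-versus-sample-size curve the study reports.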

  • Article type: Journal Article
    This chapter shows how to apply the Asymmetric Within-Sample Transformation to single-cell RNA-Seq data matched with a prior dropout imputation. The asymmetric transformation is a special winsorization that flattens low-expressed intensities and preserves highly expressed gene levels. Before a standard hierarchical clustering algorithm is run, an intermediate step removes noninformative genes according to a threshold applied to a per-gene entropy estimate. Following the clustering, a time-intensive algorithm is shown to uncover the molecular features associated with each cluster. This step implements a resampling algorithm to generate a random baseline against which up/downregulated significant genes are measured. To this end, we adopt a GLM model as implemented in the DESeq2 package. We render the results in graphical mode. While the tools are standard heat maps, we introduce some data scaling to clarify the results' reliability.
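    A minimal sketch of the two pre-clustering steps — an asymmetric winsorization that flattens low intensities, and an entropy-based filter for non-informative genes — might look like the following. The quantile cut-off, the binning, and the entropy threshold are illustrative choices, not the chapter's exact rules:

```python
import numpy as np

def asymmetric_winsorize(expr, low_q=0.25):
    """Asymmetric within-sample transformation (sketch): per sample
    (column), raise intensities below a low quantile up to that
    quantile, flattening low-expressed values while leaving highly
    expressed genes untouched. The quantile is an assumed choice."""
    low = np.quantile(expr, low_q, axis=0, keepdims=True)
    return np.maximum(expr, low)

def entropy_filter(expr, threshold, n_bins=10):
    """Keep genes (rows) whose across-cell expression entropy exceeds
    the threshold; low-entropy genes are treated as non-informative."""
    keep = []
    for row in expr:
        counts, _ = np.histogram(row, bins=n_bins)
        p = counts[counts > 0] / counts.sum()
        keep.append(float(-(p * np.log2(p)).sum()) > threshold)
    return expr[np.array(keep)]

rng = np.random.default_rng(0)
expr = rng.gamma(2.0, 1.0, size=(50, 20))   # 50 genes x 20 cells
flattened = asymmetric_winsorize(expr)
informative = entropy_filter(flattened, threshold=1.0)
```

    The filtered matrix would then feed the hierarchical clustering, with the resampling baseline applied per cluster afterwards.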

  • Article type: Journal Article
    BACKGROUND: Missing data are common when analyzing real data. One popular solution is to impute the missing data so that one complete dataset can be obtained for subsequent analysis. In the present study, we focus on missing data imputation using classification and regression trees (CART).
    METHODS: We consider a new perspective on missing data in a CART imputation problem and realize this perspective through several resampling algorithms. Several existing missing data imputation methods using CART are compared through simulation studies, and we aim to identify the methods with better imputation accuracy under various conditions. Some systematic findings are demonstrated and presented. These imputation methods are further applied to two real datasets, the Hepatitis data and the Credit Approval data, for illustration.
    RESULTS: The method that performs best strongly depends on the correlation between variables. For imputing missing ordinal categorical variables, the rpart package with surrogate variables is recommended when correlations are larger than 0 under missing completely at random (MCAR) and missing at random (MAR) conditions. Under missing not at random (MNAR), chi-squared test methods and the rpart package with surrogate variables are suggested. For imputing missing quantitative variables, the iterative imputation method is most recommended under moderate correlation conditions.
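    A bare-bones version of CART-based imputation for a single quantitative column, using scikit-learn's decision tree as the CART implementation, is sketched below. Real tools such as rpart (with surrogate splits) or iterative imputation add refinements this omits, and the toy data are assumptions:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def cart_impute(X, col, random_state=0):
    """One-column CART imputation: fit a regression tree on the rows
    where `col` is observed, then predict the missing entries. Assumes
    the other columns are complete; packages like rpart (with surrogate
    splits) handle the general case."""
    X = X.copy()
    miss = np.isnan(X[:, col])
    other = [j for j in range(X.shape[1]) if j != col]
    tree = DecisionTreeRegressor(random_state=random_state)
    tree.fit(X[~miss][:, other], X[~miss, col])
    X[miss, col] = tree.predict(X[miss][:, other])
    return X

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X[:, 2] += 2 * X[:, 0]                   # make column 2 correlated with column 0
X[rng.random(200) < 0.2, 2] = np.nan     # ~20% missing completely at random (MCAR)
X_imputed = cart_impute(X, col=2)
```

    The correlation injected above is what makes tree-based imputation work well, mirroring the paper's finding that performance depends strongly on between-variable correlation.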

  • Article type: Journal Article
    Corn (Zea mays L.) is the most abundant food/feed crop, making accurate yield estimation a critical data point for monitoring global food production. Sensors with varying spatial/spectral configurations have been used to develop corn yield models from intra-field (0.1 m ground sample distance (GSD)) to regional scales (>250 m GSD). Understanding the spatial and spectral dependencies of these models is imperative to interpreting results and to scaling and deploying models. We leveraged high-spatial-resolution hyperspectral data collected with a sensor mounted on an unmanned aerial system (272 spectral bands from 0.4 to 1 μm at 0.063 m GSD) to estimate silage yield. We subjected our imagery to three band selection algorithms to quantitatively assess the applicability of spectral reflectance features to yield estimation. We then derived 11 spectral configurations, spatially resampled them to multiple GSDs, and applied them to a support vector regression (SVR) yield estimation model. Results indicate that accuracy degrades above 4 m GSD across all configurations, and that a seven-band multispectral sensor sampling the red edge and multiple near-infrared bands achieved higher accuracy in 90% of regression trials. These results bode well for our quest toward a definitive sensor definition for global corn yield modeling, with only temporal dependencies requiring additional investigation.
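    The spatial resampling step — degrading the native 0.063 m GSD cube to coarser GSDs before feeding band subsets to the SVR model — can be approximated by block averaging. The averaging kernel and the toy cube are assumptions; the paper's exact resampling method may differ:

```python
import numpy as np

def spatial_resample(cube, factor):
    """Resample a (rows, cols, bands) reflectance cube to a coarser
    ground sample distance by block averaging; `factor` is the integer
    ratio of target to native GSD. Block averaging is an assumed
    stand-in for the paper's resampling kernel."""
    r, c, b = cube.shape
    r2, c2 = r // factor, c // factor
    trimmed = cube[:r2 * factor, :c2 * factor]   # drop edge pixels that don't fill a block
    return trimmed.reshape(r2, factor, c2, factor, b).mean(axis=(1, 3))

rng = np.random.default_rng(0)
cube = rng.random((64, 64, 272))           # toy cube at the native GSD
coarse = spatial_resample(cube, factor=8)  # 8x coarser GSD
```

    Repeating this over a range of factors, then selecting band subsets per configuration, yields the GSD-versus-accuracy sweep the study reports.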

  • Article type: Journal Article
    The temporal aspect of groundwater vulnerability to contaminants such as nitrate is often overlooked, assuming vulnerability has a static nature. This study bridges this gap by employing machine learning with the Detecting Breakpoints and Estimating Segments in Trend (DBEST) algorithm to reveal the underlying relationships between nitrate, water table, vegetation cover, and precipitation time series, which are related to agricultural activities and groundwater demand in a semi-arid region. The contamination probability of the Lenjanat Plain was mapped by comparing random forest (RF), support vector machine (SVM), and K-nearest-neighbors (KNN) models fed with 32 input variables (DEM-derived factors, physiography, distance and density maps, and time series data). Imbalanced learning and feature selection techniques were also investigated as supplementary methods, adding up to four scenarios. Results showed that the RF model, integrated with forward sequential feature selection (SFS) and the SMOTE-Tomek resampling method, outperformed the other models (F1-score: 0.94, MCC: 0.83). The SFS technique outperformed the other feature selection methods in enhancing model accuracy, at the cost of computational expense, and the cost-sensitive function proved more efficient in tackling imbalanced data than the other investigated methods. The DBEST method identified significant breakpoints within each time series dataset, revealing a clear association between agricultural practices along the Zayandehrood River and substantial nitrate contamination within the Lenjanat region. Additionally, the groundwater vulnerability maps created using the candidate RF model and an ensemble of the best RF, SVM, and KNN models predicted mid to high levels of vulnerability in the central parts and the downhills in the southwest.
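    The forward sequential feature selection (SFS) step paired with the RF model can be sketched with scikit-learn. The toy data and parameter choices below stand in for the study's 32 groundwater predictors:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector

# Toy stand-in for the study's predictors: 100 samples, 8 features,
# with only the first two actually informative for the target.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Forward SFS wrapped around a random forest, mirroring the RF + SFS setup;
# each candidate feature set is scored by cross-validation.
rf = RandomForestClassifier(n_estimators=50, random_state=0)
sfs = SequentialFeatureSelector(rf, n_features_to_select=2,
                                direction="forward", cv=3)
sfs.fit(X, y)
selected = np.flatnonzero(sfs.get_support())
```

    The cross-validated refits at every step are exactly the "computational expense" the abstract notes as the price of SFS's accuracy gains.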

  • Article type: Journal Article
    Breast cancer is the most prevalent type of cancer in women. Risk factor assessment can aid in directing counseling regarding risk reduction and breast cancer surveillance. This research aims to (1) investigate the relationship between various risk factors and breast cancer incidence using the BCSC (Breast Cancer Surveillance Consortium) Risk Factor Dataset and create a prediction model for assessing the risk of developing breast cancer; (2) diagnose breast cancer using the Breast Cancer Wisconsin diagnostic dataset; and (3) analyze breast cancer survivability using the SEER (Surveillance, Epidemiology, and End Results) Breast Cancer Dataset. Applying resampling techniques to the training dataset before using various machine learning techniques can affect the performance of the classifiers. The three breast cancer datasets were examined using a variety of pre-processing approaches and classification models to assess their performance in terms of accuracy, precision, F1 scores, etc. The PCA (principal component analysis) and resampling strategies produced remarkable results. For the BCSC Dataset, the Random Forest algorithm exhibited the best performance among the applied classifiers, with an accuracy of 87.53%. Of the different resampling techniques applied to the training dataset for training the Random Forest classifier, the Tomek Link exhibited the best test accuracy, at 87.47%. We compared all the models used with previously used techniques. After applying the resampling techniques, the accuracy scores on the test data decreased even as the training data accuracy increased. For the Breast Cancer Wisconsin diagnostic dataset, the K-Nearest Neighbor algorithm had the best accuracy with the original dataset test set, at 94.71%, while the PCA dataset test set exhibited 95.29% accuracy for detecting breast cancer. Using the SEER Dataset, this study also explores survival analysis, employing supervised and unsupervised learning approaches to offer insights into the variables affecting breast cancer survivability. This study emphasizes the significance of individualized approaches in the management and treatment of breast cancer by incorporating phenotypic variations and recognizing the heterogeneity of the disease. Through data-driven insights and advanced machine learning, this study contributes significantly to the ongoing efforts in breast cancer research, diagnostics, and personalized medicine.
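    Tomek-link cleaning, the resampling technique that gave the best test accuracy above, removes majority-class points that are mutual nearest neighbors of an opposite-class point. A minimal O(n²) sketch follows; libraries such as imbalanced-learn provide an optimized version, and the toy data are assumptions:

```python
import numpy as np

def remove_tomek_links(X, y):
    """Drop majority-class points that form Tomek links (pairs of
    mutual nearest neighbors with opposite labels). Minimal O(n^2)
    sketch of the cleaning step applied before classifier training."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = d.argmin(axis=1)
    majority = max(set(y.tolist()), key=y.tolist().count)
    drop = set()
    for i, j in enumerate(nn):
        # A Tomek link: i and j are each other's nearest neighbor and
        # carry different labels; remove the majority-class member.
        if nn[j] == i and y[i] != y[j]:
            drop.add(i if y[i] == majority else j)
    keep = np.array([i not in drop for i in range(len(y))])
    return X[keep], y[keep]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (40, 2)), rng.normal(1.0, 1.0, (10, 2))])
y = np.array([0] * 40 + [1] * 10)
X_clean, y_clean = remove_tomek_links(X, y)
```

    Because only majority-class members of each link are dropped, the minority class is left intact while the class boundary is de-noised.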

  • Article type: Journal Article
    The knowledge of the causal mechanisms underlying a single system may not be sufficient to answer certain questions. One can gain additional insight by comparing and contrasting the causal mechanisms underlying multiple systems and uncovering consistent and distinct causal relationships. For example, discovering common molecular mechanisms among different diseases can lead to drug repurposing. The problem of comparing causal mechanisms among multiple systems is non-trivial, since the causal mechanisms are usually unknown and need to be estimated from data. If we estimate the causal mechanisms from data generated by different systems and directly compare them (the naive method), the result can be sub-optimal. This is especially true if the data generated by the different systems differ substantially with respect to their sample sizes. In this case, the quality of the estimated causal mechanisms for the different systems will differ, which can in turn affect the accuracy of the similarities and differences among the systems estimated via the naive method. To mitigate this problem, we introduce the bootstrap estimation and the equal-sample-size resampling estimation methods for estimating the difference between causal networks. Both methods use resampling to assess the confidence of the estimation. We compared these methods with the naive method in a set of systematically simulated experimental conditions with a variety of network structures and sample sizes, using different performance metrics. We also evaluated these methods on various real-world biomedical datasets covering a wide range of data designs.
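    The equal-sample-size resampling idea — resample both data sets down to the smaller sample size before each comparison, so estimation quality is matched — can be illustrated with a deliberately crude network estimator. The correlation threshold below stands in for a real causal discovery algorithm; it is an assumption for illustration only:

```python
import numpy as np

def corr_network(X, thresh=0.3):
    """Crude stand-in for structure learning: draw an edge wherever the
    absolute pairwise correlation exceeds a threshold. The paper's
    actual estimators are causal discovery algorithms, not this."""
    return (np.abs(np.corrcoef(X.T)) > thresh).astype(int)

def equal_sample_size_difference(X1, X2, n_boot=50, seed=0):
    """Resample both data sets (with replacement) down to the size of
    the smaller one before each comparison, so that estimation quality
    is matched; return the per-edge disagreement frequency."""
    rng = np.random.default_rng(seed)
    n = min(len(X1), len(X2))
    diffs = []
    for _ in range(n_boot):
        a = X1[rng.integers(0, len(X1), n)]
        b = X2[rng.integers(0, len(X2), n)]
        diffs.append(np.abs(corr_network(a) - corr_network(b)))
    return np.mean(diffs, axis=0)

rng = np.random.default_rng(1)
X1 = rng.normal(size=(500, 4))
X1[:, 1] += X1[:, 0]             # dependence between variables 0 and 1 in system 1 only
X2 = rng.normal(size=(60, 4))    # a much smaller second system
diff = equal_sample_size_difference(X1, X2)
```

    The resampling frequencies double as a confidence measure: edges whose disagreement frequency is near 0 or 1 are consistently estimated, while intermediate values flag uncertainty driven by sample size.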

  • Article type: Journal Article
    The fast rate of replacement of natural areas by expanding cities is a key threat to wildlife worldwide. Many wild species occur in cities, yet little is known about the dynamics of urban wildlife assemblages due to the species extinctions and colonizations that may occur in response to the rapidly evolving conditions within urban areas. Namely, a species' ability to spread within urban areas, besides its habitat preferences, is likely to shape its fate once it occurs in a city. Here we use a long-term dataset on mammals occurring in one of the largest and most ancient cities in Europe to assess whether and how spatial spread and association with specific habitats drive the probability of local extinction within cities. Our analysis included mammalian records dating between 1832 and 2023, and revealed that local extinctions in urban areas are biased towards species associated with wetlands and species that were naturally rare within the city. Besides highlighting the role of wetlands within urban areas for conserving wildlife, our work also highlights the importance of long-term biodiversity monitoring in highly dynamic habitats such as cities as a key asset to better understand wildlife trends and thus foster more sustainable and biodiversity-friendly cities.

  • Article type: Journal Article
    Several methods in survival analysis are based on the proportional hazards assumption. However, this assumption is very restrictive and often not justifiable in practice. Therefore, effect estimands that do not rely on the proportional hazards assumption are highly desirable in practical applications. One popular example is the restricted mean survival time (RMST). It is defined as the area under the survival curve up to a prespecified time point and thus summarizes the survival curve into a meaningful estimand. For two-sample comparisons based on the RMST, previous research found that the type I error of the asymptotic test is inflated in small samples, and a two-sample permutation test has therefore already been developed. The first goal of the present paper is to extend the permutation test to general factorial designs and general contrast hypotheses by considering a Wald-type test statistic and its asymptotic behavior. Additionally, a groupwise bootstrap approach is considered. Moreover, when a global test detects a significant difference by comparing the RMSTs of more than two groups, it is of interest which specific RMST differences caused the result. However, global tests do not provide this information. Therefore, multiple tests for the RMST are developed in a second step to infer several null hypotheses simultaneously. Here, the asymptotically exact dependence structure between the local test statistics is incorporated to gain more power. Finally, the small-sample performance of the proposed global and multiple testing procedures is analyzed in simulations and illustrated in a real data example.
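    The RMST and the two-sample permutation test it builds on can be sketched as follows. The Kaplan-Meier-based RMST estimator follows the standard definition; the sample sizes, distributions, and permutation count are illustrative assumptions:

```python
import numpy as np

def rmst(times, events, tau):
    """Restricted mean survival time: area under the Kaplan-Meier
    curve up to the horizon tau (events: 1 = event, 0 = censored)."""
    order = np.argsort(times)
    t, e = times[order], events[order]
    surv, at_risk, area, prev = 1.0, len(t), 0.0, 0.0
    for ti, ei in zip(t, e):
        if ti > tau:
            break
        area += surv * (ti - prev)
        if ei:
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1
        prev = ti
    return area + surv * (tau - prev)

def rmst_permutation_test(t1, e1, t2, e2, tau, n_perm=500, seed=0):
    """Two-sample permutation test on the RMST difference: group labels
    are shuffled to build a reference distribution, avoiding the
    small-sample type I error inflation of the asymptotic test."""
    rng = np.random.default_rng(seed)
    obs = rmst(t1, e1, tau) - rmst(t2, e2, tau)
    t, e = np.concatenate([t1, t2]), np.concatenate([e1, e2])
    n1, hits = len(t1), 0
    for _ in range(n_perm):
        idx = rng.permutation(len(t))
        d = rmst(t[idx[:n1]], e[idx[:n1]], tau) - rmst(t[idx[n1:]], e[idx[n1:]], tau)
        hits += abs(d) >= abs(obs)
    return obs, (hits + 1) / (n_perm + 1)

rng = np.random.default_rng(2)
t1, t2 = rng.exponential(1.0, 30), rng.exponential(1.5, 30)
e1, e2 = np.ones(30, dtype=int), np.ones(30, dtype=int)
obs, p = rmst_permutation_test(t1, e1, t2, e2, tau=2.0)
```

    The paper's extension replaces this two-group difference with a Wald-type statistic over general contrasts, but the resampling logic is the same.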