penalized logistic regression

  • Article type: Journal Article
    For finite samples with binary outcomes, penalized logistic regression, such as ridge logistic regression, has the potential to achieve smaller mean squared errors (MSE) of coefficients and predictions than maximum likelihood estimation. There is evidence, however, that ridge logistic regression can result in highly variable calibration slopes in small or sparse data situations.
    In this paper, we elaborate this issue further by performing a comprehensive simulation study, investigating the performance of ridge logistic regression in terms of coefficients and predictions and comparing it to Firth's correction, which has been shown to perform well in low-dimensional settings. In addition to tuned ridge regression, where the penalty strength is estimated from the data by minimizing some measure of the out-of-sample prediction error or an information criterion, we also considered ridge regression with a pre-specified degree of shrinkage. We included 'oracle' models in the simulation study, in which the complexity parameter was chosen based on the true event probabilities (prediction oracle) or regression coefficients (explanation oracle), to demonstrate the capability of ridge regression if the truth were known.
    Performance of ridge regression strongly depends on the choice of complexity parameter. As shown in our simulation and illustrated by a data example, values optimized in small or sparse datasets are negatively correlated with optimal values and suffer from substantial variability which translates into large MSE of coefficients and large variability of calibration slopes. In contrast, in our simulations pre-specifying the degree of shrinkage prior to fitting led to accurate coefficients and predictions even in non-ideal settings such as encountered in the context of rare outcomes or sparse predictors.
    Applying tuned ridge regression in small or sparse datasets is problematic as it results in unstable coefficients and predictions. In contrast, determining the degree of shrinkage according to some meaningful prior assumptions about true effects has the potential to reduce bias and stabilize the estimates.
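    The contrast between tuned and pre-specified shrinkage can be illustrated with a minimal sketch. This is not the paper's implementation; it is a numpy-only Newton-Raphson fit of ridge logistic regression with a fixed, pre-specified penalty strength `lam` (a hypothetical parameter name), showing how a stronger penalty shrinks the slope coefficients toward zero:

    ```python
    import numpy as np

    def ridge_logistic(X, y, lam, n_iter=50):
        """Fit logistic regression with an L2 (ridge) penalty via
        Newton-Raphson. The intercept is left unpenalized."""
        n, p = X.shape
        Xb = np.hstack([np.ones((n, 1)), X])   # prepend intercept column
        beta = np.zeros(p + 1)
        P = lam * np.eye(p + 1)
        P[0, 0] = 0.0                          # do not penalize the intercept
        for _ in range(n_iter):
            mu = 1.0 / (1.0 + np.exp(-Xb @ beta))   # fitted probabilities
            W = mu * (1.0 - mu)                     # IRLS weights
            grad = Xb.T @ (y - mu) - P @ beta       # penalized score
            hess = Xb.T @ (Xb * W[:, None]) + P     # penalized information
            beta += np.linalg.solve(hess, grad)
        return beta

    # simulated small-sample data (illustration only, not the paper's design)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 3))
    true_beta = np.array([1.0, -1.0, 0.5])
    y = (rng.random(40) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)

    b_weak = ridge_logistic(X, y, lam=0.1)
    b_strong = ridge_logistic(X, y, lam=100.0)
    # the stronger pre-specified penalty shrinks the slopes toward zero
    ```

    In the tuned variant, `lam` would instead be chosen by cross-validation on the same small dataset, which is exactly the step the paper identifies as unstable.
    
    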

  • Article type: Journal Article
    Lipidomics is an emerging field of science that holds the potential to provide a readout of biomarkers for the early detection of disease. Our objective was to identify an efficient statistical methodology for lipidomics, especially for finding interpretable and predictive biomarkers useful for clinical practice. In two case studies, we address the need for data preprocessing for regression modeling of a binary response. These are based on a normalization step, in order to remove experimental variability, and on a multiple imputation step, to make full use of the incompletely observed data with potentially informative missingness. Finally, by cross-validation, we compare stepwise variable selection to penalized regression models on stacked multiple imputed data sets and propose the use of a permutation test as a global test of association. Our results show that, depending on the design of the study, these data preprocessing methods modestly improve the precision of classification, and no clear winner among the variable selection methods is found. Lipidomics profiles are found to be highly important predictors in both of the two case studies.
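    A permutation test used as a global test of association can be sketched as follows. This is not the study's actual procedure; as an illustration, the test statistic here is the largest absolute Pearson correlation between any single predictor and the binary outcome (a hypothetical choice of statistic), with the null distribution generated by permuting the outcome labels:

    ```python
    import numpy as np

    def permutation_global_test(X, y, n_perm=999, seed=0):
        """Global test of association between a predictor matrix and a
        binary outcome. Statistic: largest absolute correlation of any
        predictor with the outcome; null distribution by permuting y."""
        rng = np.random.default_rng(seed)
        Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize predictors

        def stat(labels):
            ys = (labels - labels.mean()) / labels.std()
            return np.abs(Xs.T @ ys).max() / len(labels)

        observed = stat(y)
        null = np.array([stat(rng.permutation(y)) for _ in range(n_perm)])
        # add-one correction: the observed value counts as one permutation
        return (1 + (null >= observed).sum()) / (n_perm + 1)

    # simulated data: 10 "lipid" features, only feature 0 is associated
    rng = np.random.default_rng(1)
    X = rng.normal(size=(60, 10))
    prob = 1.0 / (1.0 + np.exp(-2.0 * X[:, 0]))
    y = (rng.random(60) < prob).astype(float)

    p_value = permutation_global_test(X, y)
    ```

    Because the maximum is taken over all predictors jointly, the permutation null automatically accounts for multiplicity, which is what makes it usable as a single global test rather than per-feature tests.
    
    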