Calibration plots for risk prediction models in the presence of competing risks

Research output: Contribution to journal › Journal article › Research › peer-review

Standard

Calibration plots for risk prediction models in the presence of competing risks. / Gerds, Thomas A; Andersen, Per K; Kattan, Michael W.

In: Statistics in Medicine, Vol. 33, No. 18, 15.08.2014, pp. 3191–3203.


Harvard

Gerds, TA, Andersen, PK & Kattan, MW 2014, 'Calibration plots for risk prediction models in the presence of competing risks', Statistics in Medicine, vol. 33, no. 18, pp. 3191–3203. https://doi.org/10.1002/sim.6152

APA

Gerds, T. A., Andersen, P. K., & Kattan, M. W. (2014). Calibration plots for risk prediction models in the presence of competing risks. Statistics in Medicine, 33(18), 3191–3203. https://doi.org/10.1002/sim.6152

Vancouver

Gerds TA, Andersen PK, Kattan MW. Calibration plots for risk prediction models in the presence of competing risks. Statistics in Medicine. 2014 Aug 15;33(18):3191–3203. https://doi.org/10.1002/sim.6152

Author

Gerds, Thomas A ; Andersen, Per K ; Kattan, Michael W. / Calibration plots for risk prediction models in the presence of competing risks. In: Statistics in Medicine. 2014 ; Vol. 33, No. 18. pp. 3191–3203.

Bibtex

@article{2c89bc17011248b5b827f75a67e51209,
title = "Calibration plots for risk prediction models in the presence of competing risks",
abstract = "A predicted risk of 17% can be called reliable if it can be expected that the event will occur to about 17 of 100 patients who all received a predicted risk of 17%. Statistical models can predict the absolute risk of an event such as cardiovascular death in the presence of competing risks such as death due to other causes. For personalized medicine and patient counseling, it is necessary to check that the model is calibrated in the sense that it provides reliable predictions for all subjects. There are three often encountered practical problems when the aim is to display or test if a risk prediction model is well calibrated. The first is lack of independent validation data, the second is right censoring, and the third is that when the risk scale is continuous, the estimation problem is as difficult as density estimation. To deal with these problems, we propose to estimate calibration curves for competing risks models based on jackknife pseudo-values that are combined with a nearest neighborhood smoother and a cross-validation approach to deal with all three problems.",
author = "Gerds, {Thomas A} and Andersen, {Per K} and Kattan, {Michael W}",
note = "Copyright {\textcopyright} 2014 John Wiley {\&} Sons, Ltd.",
year = "2014",
month = aug,
day = "15",
doi = "10.1002/sim.6152",
language = "English",
volume = "33",
pages = "3191–3203",
journal = "Statistics in Medicine",
issn = "0277-6715",
publisher = "John Wiley {\&} Sons Ltd",
number = "18",
}

RIS

TY - JOUR

T1 - Calibration plots for risk prediction models in the presence of competing risks

AU - Gerds, Thomas A

AU - Andersen, Per K

AU - Kattan, Michael W

N1 - Copyright © 2014 John Wiley & Sons, Ltd.

PY - 2014/8/15

Y1 - 2014/8/15

N2 - A predicted risk of 17% can be called reliable if it can be expected that the event will occur to about 17 of 100 patients who all received a predicted risk of 17%. Statistical models can predict the absolute risk of an event such as cardiovascular death in the presence of competing risks such as death due to other causes. For personalized medicine and patient counseling, it is necessary to check that the model is calibrated in the sense that it provides reliable predictions for all subjects. There are three often encountered practical problems when the aim is to display or test if a risk prediction model is well calibrated. The first is lack of independent validation data, the second is right censoring, and the third is that when the risk scale is continuous, the estimation problem is as difficult as density estimation. To deal with these problems, we propose to estimate calibration curves for competing risks models based on jackknife pseudo-values that are combined with a nearest neighborhood smoother and a cross-validation approach to deal with all three problems.

AB - A predicted risk of 17% can be called reliable if it can be expected that the event will occur to about 17 of 100 patients who all received a predicted risk of 17%. Statistical models can predict the absolute risk of an event such as cardiovascular death in the presence of competing risks such as death due to other causes. For personalized medicine and patient counseling, it is necessary to check that the model is calibrated in the sense that it provides reliable predictions for all subjects. There are three often encountered practical problems when the aim is to display or test if a risk prediction model is well calibrated. The first is lack of independent validation data, the second is right censoring, and the third is that when the risk scale is continuous, the estimation problem is as difficult as density estimation. To deal with these problems, we propose to estimate calibration curves for competing risks models based on jackknife pseudo-values that are combined with a nearest neighborhood smoother and a cross-validation approach to deal with all three problems.

U2 - 10.1002/sim.6152

DO - 10.1002/sim.6152

M3 - Journal article

C2 - 24668611

VL - 33

SP - 3191

EP - 3203

JO - Statistics in Medicine

JF - Statistics in Medicine

SN - 0277-6715

IS - 18

ER -

ID: 134780973
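The abstract describes a recipe with three ingredients: jackknife pseudo-values for the cause-specific cumulative incidence (Aalen-Johansen), a nearest-neighbour smoother over the predicted risks, and cross-validation. The following is a minimal NumPy sketch of the first two ingredients only, for orientation; all function names are ours, the cross-validation step is omitted, and this is not the authors' implementation (the first author maintains R packages, e.g. `pec`, for the full method):

```python
import numpy as np

def aalen_johansen(time, status, t, cause=1):
    """Aalen-Johansen estimate of the cause-specific cumulative
    incidence F_cause(t).  status: 0 = censored, 1, 2, ... = causes."""
    order = np.argsort(time)
    time, status = time[order], status[order]
    n = len(time)
    surv = 1.0      # all-cause Kaplan-Meier just before the current time
    cuminc = 0.0
    i = 0
    while i < n and time[i] <= t:
        j = i                      # group tied event times
        while j < n and time[j] == time[i]:
            j += 1
        at_risk = n - i
        d_any = np.sum(status[i:j] > 0)
        d_cause = np.sum(status[i:j] == cause)
        cuminc += surv * d_cause / at_risk
        surv *= 1.0 - d_any / at_risk
        i = j
    return cuminc

def pseudo_values(time, status, t, cause=1):
    """Jackknife pseudo-values: n*F(t) - (n-1)*F^(-i)(t) for each i."""
    n = len(time)
    full = aalen_johansen(time, status, t, cause)
    idx = np.arange(n)
    loo = np.array([aalen_johansen(time[idx != i], status[idx != i], t, cause)
                    for i in range(n)])
    return n * full - (n - 1) * loo

def calibration_curve(pred, pv, k=50):
    """Nearest-neighbour smoother: for each predicted risk, average the
    pseudo-values of the k subjects with the closest predictions."""
    order = np.argsort(pred)
    pred, pv = pred[order], pv[order]
    obs = np.array([pv[np.argsort(np.abs(pred - p))[:k]].mean()
                    for p in pred])
    return pred, obs   # plot obs against pred; the diagonal = calibrated
```

A useful sanity check: with complete (uncensored) data the pseudo-value for subject i reduces exactly to the event indicator 1{T_i ≤ t, cause = 1}, so the smoother is then just a local event-rate estimate.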