Hi,

I am conducting a study using a Cox proportional hazards model to predict 1-year overall survival (OS). As part of the study, I constructed a nomogram to provide an intuitive visualization of the prediction model. The nomogram demonstrated good discrimination, with a C-index of nearly 0.88 and an AUC-ROC of approximately 0.86. However, I am concerned about potential issues with the calibration curve and whether it might reflect underfitting or a systematic overestimation of predicted survival.
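
For context on the discrimination side, here is a minimal sketch of how Harrell's C-index is computed for right-censored data. This is an illustrative, simplified implementation (it ignores ties in event times and uses only pairs anchored on an observed event); the function name and test data are my own, not from your study:

```python
import numpy as np

def harrell_c(times, events, risk_scores):
    """Simplified Harrell's C-index for right-censored data.

    A pair (i, j) is comparable when subject i has an observed event
    (events[i] == 1) and subject j's follow-up time is longer. The pair
    is concordant when the subject who failed earlier has the higher
    predicted risk score. Ties in risk score count as 0.5.
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    risk = np.asarray(risk_scores, dtype=float)

    concordant, tied, comparable = 0, 0, 0
    for i in range(len(times)):
        if events[i] != 1:
            continue  # pairs must be anchored on an observed event
        for j in range(len(times)):
            if times[j] > times[i]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / comparable
```

In practice you would use a vetted implementation (e.g. `concordance_index` in the lifelines package, or `rcorr.cens` in the R `Hmisc` package) rather than a hand-rolled loop, but the sketch makes the pairwise logic behind the 0.88 figure explicit.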

Additionally, I would like to know if there are any statistical tests specifically designed to evaluate calibration in the context of survival analysis. For logistic regression models, I am aware of the Hosmer-Lemeshow test, which assesses goodness-of-fit. Is there a comparable test or method for calibration assessment in Cox proportional hazards models?
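
For survival models, the closest analogue to Hosmer-Lemeshow is the Greenwood-Nam-D'Agostino (GND) test: patients are grouped by predicted survival at the time horizon, and within each group the model-predicted survival is compared to the Kaplan-Meier estimate, with the Greenwood variance used to form a chi-square statistic. Below is a rough, self-contained Python sketch of that idea. It assumes you have a per-patient predicted 1-year survival probability from the Cox model; all function names and the simulated data in the usage check are illustrative, not a reference implementation:

```python
import numpy as np
from scipy.stats import chi2

def km_surv_and_var(times, events, t0):
    """Kaplan-Meier survival at t0 plus its Greenwood variance."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    surv, greenwood = 1.0, 0.0
    for ti in np.unique(times[(events == 1) & (times <= t0)]):
        d = np.sum((times == ti) & (events == 1))  # deaths at ti
        r = np.sum(times >= ti)                    # number at risk
        surv *= 1.0 - d / r
        if r > d:
            greenwood += d / (r * (r - d))
    return surv, surv**2 * greenwood

def gnd_test(pred_surv_t0, times, events, t0, n_groups=5):
    """Greenwood-Nam-D'Agostino style calibration test.

    Groups patients into quantiles of predicted survival at t0 and
    compares the mean predicted survival in each group to the
    Kaplan-Meier observed survival, weighted by Greenwood variance.
    """
    pred = np.asarray(pred_surv_t0, dtype=float)
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)

    edges = np.quantile(pred, np.linspace(0, 1, n_groups + 1))
    group = np.clip(np.searchsorted(edges, pred, side="right") - 1,
                    0, n_groups - 1)

    stat, used = 0.0, 0
    for g in range(n_groups):
        mask = group == g
        if not mask.any():
            continue
        obs, var = km_surv_and_var(times[mask], events[mask], t0)
        if var <= 0:
            continue  # no events in group: skip (a known GND caveat)
        stat += (obs - pred[mask].mean()) ** 2 / var
        used += 1
    p_value = chi2.sf(stat, df=max(used - 1, 1))
    return stat, p_value, used
```

A large p-value is consistent with adequate calibration; a small one signals systematic miscalibration. In R, the `val.surv` and `calibrate` functions in Frank Harrell's `rms` package implement closely related (and better validated) procedures, so I would cross-check any hand-rolled version against those.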

I would greatly appreciate your observations and advice on the following:

  • The overall quality of the calibration curve and the constructed nomogram.
  • Whether there is clear evidence of underfitting or overfitting in this curve.
  • Suggestions on how to address potential calibration issues or improve the nomogram’s predictive accuracy.
  • Whether statistical tests exist to evaluate calibration in survival analysis and how they might be applied to my model.
 
I have attached the calibration curve (Graph 1) for your review. I would be grateful for any feedback, suggestions, or shared experiences that could help refine my approach.
