
To verify four 5-year-old mathematical models to predict the outcome of ICU patients

Background

In 1995 a retrospective study was performed on all patients admitted to our ICU from 2 April 1990 to 31 December 1995 with a length of stay of at least 24 hours. For each patient the APACHE II score was calculated at 24 hours and, depending on the length of ICU stay, on the 5th, 10th and 15th day from admission. The case mix of 1254 patients was subdivided into two series: the first series was used to develop the models and the second series to verify them.

Data from the patients in the first series were used to build four mathematical models (for the 1st, 5th, 10th and 15th day from admission) predicting outcome from the calculated APACHE II score. Stepwise logistic regression (BMDP, Los Angeles) was used to build these four models. For each model, calibration was tested with the Hosmer–Lemeshow goodness-of-fit test and discrimination was tested with the area under the ROC curve. The four models were then validated for calibration and discrimination in the second series.
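The development-and-testing pipeline described above can be sketched as follows. This is a minimal illustration only: the abstract publishes neither the coefficients nor the raw data, the study used stepwise logistic regression in BMDP (here replaced by a simple gradient-descent fit of a univariate logistic model), and all function names are ours.

```python
import numpy as np

def fit_logistic(scores, died, iters=10000, lr=0.005):
    """Fit P(death) = sigmoid(b0 + b1*score) by gradient descent.
    Stand-in for the stepwise logistic regression (BMDP) used in the study."""
    x = np.asarray(scores, dtype=float)
    y = np.asarray(died, dtype=float)
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
        b0 -= lr * np.mean(p - y)          # gradient of the log-loss w.r.t. b0
        b1 -= lr * np.mean((p - y) * x)    # gradient of the log-loss w.r.t. b1
    return b0, b1

def hosmer_lemeshow(p, y, groups=10):
    """Hosmer-Lemeshow statistic (calibration): chi-square comparing observed
    and expected deaths/survivals across risk-ordered groups."""
    order = np.argsort(p)
    stat = 0.0
    for g in np.array_split(order, groups):
        exp_d = p[g].sum()                 # expected deaths in this group
        obs_d = y[g].sum()                 # observed deaths in this group
        exp_s = len(g) - exp_d             # expected survivors
        obs_s = len(g) - obs_d             # observed survivors
        stat += (obs_d - exp_d) ** 2 / max(exp_d, 1e-9)
        stat += (obs_s - exp_s) ** 2 / max(exp_s, 1e-9)
    return stat  # compare against a chi-square with (groups - 2) df for a P value

def roc_auc(p, y):
    """Area under the ROC curve (discrimination), via the Mann-Whitney identity:
    the probability that a random death got a higher predicted risk than a random survivor."""
    pos = p[y == 1]
    neg = p[y == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))
```

In the study one such model was fitted per assessment day (1st, 5th, 10th, 15th) on the first series, then checked with the same two statistics on the second series.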

The aim of this study is to verify these four models in patients admitted to the same ICU during the year 2000 and, in this way, to perform a quality control of ICU care.

Material and methods

A prospective study was performed on patients admitted to our ICU during the year 2000 with a length of stay of at least 24 hours. On the basis of the four old mathematical models, the risk of death was calculated for each of the four assessment days (1st, 5th, 10th and 15th day from admission), and calibration and discrimination were tested.
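Applying a previously fitted model to a new cohort amounts to plugging each patient's APACHE II score into the stored logistic equation. The coefficients below are hypothetical placeholders for illustration only; the abstract does not report the actual fitted values.

```python
import math

def predicted_risk(apache_ii, b0, b1):
    """Risk of death from a stored logistic model: 1 / (1 + exp(-(b0 + b1*score)))."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * apache_ii)))

# Hypothetical day-1 coefficients, for illustration only.
B0_DAY1, B1_DAY1 = -4.0, 0.15

for score in (10, 20, 30):
    print(f"APACHE II {score}: predicted risk of death {predicted_risk(score, B0_DAY1, B1_DAY1):.2f}")
```

These per-patient risks are what feed the calibration (Hosmer–Lemeshow) and discrimination (ROC area) checks reported in the Results.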

Results

Three hundred and fifty-seven patients with more than 24 hours of ICU stay were enrolled in the study. The first model, at 24 hours from admission, showed poor calibration on the Hosmer–Lemeshow test (P = 0.000088), while the area under the ROC curve was 0.74 ± 0.32. The model for the 5th day was also poorly calibrated (P = 0.000588), with an area under the curve of 0.83 ± 0.04. At the 10th day from admission the model was well calibrated (Hosmer–Lemeshow test: P = 0.112247), with an ROC area of 0.89 ± 0.04. Finally, the model for the 15th day was again poorly calibrated (P = 0.001422), but with very good discrimination (area = 0.91 ± 0.06).

Discussion

Further analysis suggests that it was the outcome of neurosurgical and trauma patients that improved, while the outcome of patients with other pathologies remained unchanged. What has improved is therefore not the general quality of ICU care, but only the treatment of neurosurgical and trauma patients. Moreover, for the neurosurgical patients, the introduction of neuroradiological treatment of cerebral aneurysms with Guglielmi detachable coils has contributed to the improvement in their outcome.

Conclusion

These self-made models help the physician to understand how ICU outcomes change over the years and whether increased expenditure is justified by improved outcomes.


Cite this article

Donati, A., Gabbanelli, V., Scala, C. et al. To verify four 5-year-old mathematical models to predict the outcome of ICU patients. Crit Care 6, P240 (2002). https://doi.org/10.1186/cc1708


Keywords

  • Mathematical Model
  • Logistic Regression
  • Emergency Medicine
  • Trauma Patient
  • Model Calibration