
Volume 16 Supplement 3

Sepsis 2012

Use of Centre for Disease Control criteria to classify infections in critically ill patients: results from an interobserver agreement study

Background

Correct classification of the source of infection is important in observational and interventional studies of sepsis. The criteria of the Centers for Disease Control and Prevention (CDC) are most commonly used for this purpose, but the robustness of these definitions in critically ill patients is not known. We determined the interobserver agreement for classifying infections in the ICU.

Methods

Data were collected as part of a prospective cohort of 1,214 critically ill patients admitted to two hospitals in the Netherlands between January 2011 and June 2011. Eight observers assessed a random sample of 168 out of 554 patients who had experienced at least one infectious episode in the ICU. Each patient was assessed by two randomly selected observers who independently scored the source of infection (by affected organ system or site), the plausibility of infection (rated as none, possible, probable, or definite), and the most likely causative pathogen. Assessments were based on a post hoc review of all available clinical, radiological and microbiological evidence. The observed diagnostic agreement for source of infection was classified as partial (that is, matching on organ system or site) or complete (that is, matching on specific diagnostic terms), for plausibility as partial (two-point scale) or complete (four-point scale), and for causative pathogens as an approximate or exact pathogen match. Interobserver agreement was expressed as percentage agreement and as a kappa statistic.
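The kappa statistic reported here corrects raw percentage agreement for the agreement expected by chance alone. As a rough illustration only (this is not the authors' analysis code, the episode labels are made up, and the study's kappa may have been pooled across observer pairs in a way the abstract does not specify), Cohen's kappa for two observers rating the same episodes can be sketched as:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters classifying the same episodes."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed agreement: fraction of episodes where both raters match.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement, from each rater's marginal category frequencies.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two observers classifying six infectious episodes.
a = ["pulmonary", "abdominal", "urinary", "pulmonary", "catheter", "pulmonary"]
b = ["pulmonary", "abdominal", "pulmonary", "pulmonary", "catheter", "pulmonary"]
print(round(cohens_kappa(a, b), 2))  # observed 5/6 agreement, kappa ≈ 0.73
```

A kappa near 1 indicates near-perfect agreement beyond chance; values above roughly 0.8 are conventionally read as excellent, which is the benchmark against which the overall kappa of 0.85 below should be read.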

Results

A total of 206 infectious episodes were observed in 168 patients. Agreement regarding the source of infection was 89% (183/206) for partial and 69% (142/206) for complete diagnostic agreement (Table 1). This resulted in an overall kappa of 0.85 (95% CI = 0.79 to 0.90). Agreement varied from 70% to 100% across major diagnostic subgroups. In the subgroup of 142 episodes with complete diagnostic agreement on source of infection, the interobserver agreement for plausibility of infection was 83% on a two-point scale and 65% on a four-point scale. For causative pathogen, agreement was 78% for an approximate and 70% for an exact pathogen match.

Table 1 Agreement across the various sources of infection

Conclusion

In this study, the overall interobserver agreement in classifying infections using CDC criteria was excellent, although exact agreement between independent observers on all aspects of the diagnosis was limited for some infections.




Cite this article

Klouwenberg, P.K., Ong, D., Bos, L. et al. Use of Centre for Disease Control criteria to classify infections in critically ill patients: results from an interobserver agreement study. Crit Care 16 (Suppl 3), P28 (2012). https://doi.org/10.1186/cc11715


Keywords

  • Interobserver Agreement
  • Kappa Statistic
  • Exact Match
  • Percentage Agreement
  • Causative Pathogen