Observational studies
During the 1980s two retrospective observational studies of patients with acute coronary syndromes questioned for the first time the benefit and safety of PAC monitoring, suggesting higher mortality in patients undergoing PAC placement. Gore and coworkers [3] conducted a retrospective study in 3263 patients with acute myocardial infarction (MI). They found a consistent and significant increase in PAC use in patients with acute MI over time, from 7.2% in 1975 to 19.9% in 1984. They also found use of a PAC to be associated with increased length of hospital stay, irrespective of the development of acute clinical complications, without any long-term benefit. Zion and coworkers [4] analyzed the use of PAC in a registry including 5841 hospitalized patients with acute MI. A total of 371 patients underwent PAC placement, and in-hospital mortality was found to be higher in patients with congestive heart failure (CHF) who received a PAC, irrespective of the presence or absence of 'pump failure'. One may argue that PAC use is reserved for sicker patients, thus inappropriately skewing such data against PAC use. Further analysis supported this assumption, showing that the PAC was indeed used more frequently in sicker patients. However, on adjusting for the severity of CHF, no difference in mortality was found in patients with mild or moderate CHF. Therefore, Zion and coworkers concluded that although higher in-hospital mortality was found in patients receiving a PAC, this excess was probably related to differences in severity of CHF. They considered it unlikely that the PAC itself increased mortality because severity-adjusted mortality rates were similar. The issue of adjustment for severity of illness when comparing groups recurs in later studies. Specifically, how does one adjust for differences in case mix? Severity scores were not calibrated to adjust for comparisons between heterogeneous groups.
In the 1990s more investigators became interested in the relationship between survival and PAC use, although their studies focused more on the potential harm of using PACs. In 1996 Connors and coworkers [5] conducted a retrospective observational study of patients with end-stage terminal disease who were expected to die within 12 months of enrollment, analyzing the subgroup of patients who required intensive care unit (ICU) admission separately. In all, 5735 critically ill patients from a cohort of 9105 SUPPORT (Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments) patients were retrospectively analyzed. Of these, 2184 patients (24% of the full cohort) underwent PAC placement; from this subgroup the investigators identified 1008 patients (11%) for whom matched non-PAC critical care patients were available for analysis. The data suggested that PAC use in the critically ill was associated with increased 30-day mortality (odds ratio [OR] 1.24), increased costs of care (mean difference in cost US$13,600), and increased length of ICU stay (mean difference 3 days), even after adjusting for some of the underlying risk factors for catheterization. Nevertheless, the matching criteria were based on demographic parameters such as sex, age, patient education and type of health care insurer, but not on the need for catecholamines. Thus, it is not clear whether the matching of patients was appropriate. Furthermore, as designed, the propensity score used in that study would be insensitive to a response to therapy within 24 hours. Thus, failure to respond to initial therapy could be an important missing covariate in this propensity score and could account for much of the increased mortality and use of resources associated with PAC use.
The latter study [5] raised many questions about the safety of PAC use, and so it is important to consider the flaws of that work. First, patients who did not respond to initial therapy were more likely to undergo PAC placement, as demonstrated by the TISS (Therapeutic Intervention Scoring System) score employed in that study. Second, patients managed with a PAC were more likely to enter the study with multiple organ failure, acute respiratory failure, CHF and a higher APACHE (Acute Physiology and Chronic Health Evaluation) III score – factors known to be associated with increased mortality. Third, the patients receiving a PAC had lower mean arterial pressures and baseline serum albumin concentrations, two other factors associated with increased mortality. Thus, it is not clear that PAC use increased mortality, even in the SUPPORT cohort. Although the authors were unable to account for this apparent lack of benefit, they suggested that a randomized controlled trial of PAC use might clarify the result.
In 2000 another large retrospective observational study, of 10,217 patients, conducted by Rapoport and coworkers [6], identified independent associations between PAC use and admission to a surgical ICU (a twofold increase), patient race (OR 1.38 for white patients), care delivered by an intensivist (a two-thirds reduction in the probability of catheter use), and private insurance coverage (OR 1.33). They therefore suggested that studies measuring clinical and economic outcomes could help in developing policies for rational use of PACs. Murdoch and coworkers [7] conducted a study similar to that by Connors and coworkers described above. Considering 15 relevant variables in 4182 ICU patients (1849 with PAC use and 2333 without), they derived a propensity score of 0.88, similar to that reported by Connors and colleagues (0.83), indicating good predictive value for insertion of a PAC; the propensity score was itself a strong predictor of death (OR 45, 95% confidence interval [CI] 34.7–58.3). However, use of a PAC was not predictive of death (OR 1.08, 95% CI 0.87–1.33) after correction for treatment bias using this propensity score. This analysis supports the hypothesis that although crude mortality is greater among patients receiving a PAC, the excess mortality is not attributable to the PAC itself.
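To make the logic of this correction concrete, the sketch below simulates treatment bias (sicker patients are both more likely to receive a PAC and more likely to die) and recovers a null PAC effect by stratifying on an estimated propensity score. It is a minimal illustration with synthetic data and hypothetical covariates, not a reconstruction of the Murdoch or Connors analyses.

```python
# Minimal sketch of propensity-score adjustment, in the spirit of the
# Connors and Murdoch analyses. All data are synthetic; the covariates
# are hypothetical stand-ins for severity measures.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical severity covariates (e.g. an APACHE-like score, mean BP).
severity = rng.normal(size=n)
map_mmhg = 75 - 8 * severity + rng.normal(scale=5, size=n)
X = np.column_stack([severity, map_mmhg])

# Sicker patients are more likely to receive a PAC (treatment bias) ...
pac = rng.random(n) < 1 / (1 + np.exp(-(severity - 0.5)))

# ... and sicker patients are more likely to die; by construction the
# PAC itself has no effect on mortality in this simulation.
death = rng.random(n) < 1 / (1 + np.exp(-(1.5 * severity - 1.0)))

def odds_ratio(t, y):
    """Crude 2x2 odds ratio of outcome y for treated (t) vs untreated."""
    a, b = np.sum(t & y), np.sum(t & ~y)
    c, d = np.sum(~t & y), np.sum(~t & ~y)
    return (a * d) / (b * c)

print(f"crude OR: {odds_ratio(pac, death):.2f}")  # >1 from confounding alone

# Propensity score: P(PAC | covariates), fit by logistic regression.
ps = LogisticRegression().fit(X, pac).predict_proba(X)[:, 1]

# Stratify on propensity quintiles and pool with Mantel-Haenszel weights.
strata = np.digitize(ps, np.quantile(ps, [0.2, 0.4, 0.6, 0.8]))
num = den = 0.0
for s in range(5):
    m = strata == s
    t, y, n_s = pac[m], death[m], m.sum()
    a, b = np.sum(t & y), np.sum(t & ~y)
    c, d = np.sum(~t & y), np.sum(~t & ~y)
    num += a * d / n_s
    den += b * c / n_s
print(f"propensity-adjusted (MH) OR: {num / den:.2f}")  # ~1.0
```

In this simulation the crude odds ratio suggests harm even though none exists; conditioning on the propensity score removes the apparent effect, which is precisely the pattern Murdoch and coworkers reported.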
To underscore the concept that patients at higher risk of dying are more likely to receive a PAC, Polanczyk and coworkers [8] reported a prospective observational study of 4059 patients undergoing major elective noncardiac surgery, evaluating the relationship between use of a perioperative PAC and postoperative cardiac complication rates. Multivariate analysis identified a threefold increase in major postoperative cardiac events and a twofold increase in major noncardiac events in patients treated with a PAC. These authors likewise suggested that randomized clinical trials should be conducted to evaluate the impact of this intervention on perioperative care.
Controversies over benefits of PAC: the call for clinical trials
In the mid-1990s, following the controversy regarding the safety of PAC use raised by the Connors study [5], Dalen and Bone [9] suggested that the US Food and Drug Administration should impose a moratorium on PAC use until further studies were done. This prompted a series of letters to the editors of various journals, reflecting disagreement among critical care physicians regarding the utility of and risks associated with PAC monitoring.
A survey of physician members of the Society of Critical Care Medicine in the USA was then conducted to evaluate physicians' attitudes toward and knowledge of the PAC and its use [10]. Despite a return rate of only 22%, the results were notable: 95% of respondents felt that a moratorium on PAC use was not warranted, and 75% favored a prospective, randomized controlled trial involving the PAC. The survey also showed that one-third of respondents incorrectly identified the PA occlusion pressure on a clear tracing. This inability to identify correctly one of the key hemodynamic variables available from PAC monitoring raised another issue, namely the quality of training of ICU specialists in interpreting PAC findings. Subsequently, other studies evaluated whether ICU physicians could predict the hemodynamic values generated by PAC monitoring before performing the procedure; if they could, it was reasoned, then PAC placement was unnecessary. These small series found that physicians correctly predicted PAC-derived values only about 50% of the time, and that PAC-derived variables led to a change in management in about 50% of critically ill patients. Importantly, these changes tended to cluster among patients with circulatory shock who were unresponsive to standard therapeutic measures [11–13]. Interestingly, these were the very patients originally suggested by Swan and coworkers [1] to be those who would benefit from PAC placement.
Other studies [14, 15] compared the utility of PAC and CVP-only monitoring in managing the perioperative period in patients undergoing cardiac and vascular surgery. No differences in clinically relevant outcomes were seen between the two groups (including in a specific subgroup analysis of the highest risk patients); these outcomes included length of ICU stay; occurrence of perioperative cardiac, pulmonary or renal morbidity; in-hospital mortality; major hemodynamic aberrations; and significant noncardiac systemic complications. The one statistically significant difference between groups was the professional fee charged for anesthetic care, which was higher for patients with a PAC than for those with CVP catheters. These studies, along with other limited cost-effectiveness studies [2, 16], suggested that large multicenter randomized clinical trials were needed and recommended that, until such trials were done, even high-risk cardiac and vascular surgical patients could safely be managed without routine PAC placement, with potentially important cost savings.
These controversies stimulated a series of small clinical trials; unfortunately, their discrepant results only exacerbated the uncertainty. Guyatt [17] reported a significant difference in favor of patients not receiving a PAC, whereas Sandham and coworkers [18] found no benefit from PAC-directed therapy over standard care in high-risk surgical patients. At the same time, in an evaluation of all randomized clinical trials of PACs using a random effects model, Ivanov and coworkers [19] reported a relative risk of 0.8 in favor of the PAC, but they also identified serious deficiencies in the trials, including a lack of a priori sample size calculations, unclear definitions of concomitant therapy, inability to blind physicians and patients, and lack of blinded outcome assessment. Collectively, these studies suggested that better designs and large, multicenter randomized clinical trials were needed.
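As a concrete illustration of the random effects pooling used in such meta-analyses, the sketch below implements the standard DerSimonian-Laird estimator. The per-study log relative risks and variances are invented, chosen only to show the arithmetic; this is not a reconstruction of the Ivanov dataset.

```python
# Minimal DerSimonian-Laird random-effects pooling of relative risks.
# Each study contributes a log relative risk and its variance; both
# arrays below are hypothetical.
import numpy as np

log_rr = np.array([-0.35, 0.10, -0.20, -0.45, 0.05])  # hypothetical studies
var = np.array([0.04, 0.09, 0.02, 0.12, 0.06])        # hypothetical variances

w = 1 / var                               # fixed-effect (inverse-variance) weights
mu_fe = np.sum(w * log_rr) / np.sum(w)    # fixed-effect pooled log RR
Q = np.sum(w * (log_rr - mu_fe) ** 2)     # Cochran's Q heterogeneity statistic
k = len(log_rr)

# Between-study variance estimate (DerSimonian-Laird), truncated at zero.
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (var + tau2)                   # random-effects weights
mu_re = np.sum(w_re * log_rr) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))

lo, hi = mu_re - 1.96 * se_re, mu_re + 1.96 * se_re
print(f"pooled RR {np.exp(mu_re):.2f} "
      f"(95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f}), tau^2 = {tau2:.3f}")
```

The between-study variance term tau² widens the confidence interval relative to a fixed-effect analysis, which is why heterogeneous small trials, such as those pooled by Ivanov and coworkers, often yield suggestive but inconclusive summary estimates.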
Multicenter clinical trials
In 2003, Richard and coworkers [20] conducted a multicenter randomized clinical trial of early use of a PAC versus no PAC in the management of patients with shock, acute respiratory distress syndrome, or both. Treatment was left to the discretion of the treating physicians. As with all previous prospective studies, the investigators found no significant impact of the PAC on mortality (at 28 or 90 days) or morbidity (organ failure or ventilator dependence). Valentine [21] and Bender [22] and their groups independently found similar results in vascular surgery patients; they concluded that routine use of PACs for perioperative monitoring during aortic surgery is not beneficial and may even be associated with a higher rate of intraoperative complications.
Recently, the UK National Health Service-sponsored PAC-Man (Pulmonary Artery Catheters in Patient Management in Intensive Care) study [23] reported results comparing PAC-based versus CVP-based management. As with all previous studies, the PAC-Man study found no difference in hospital mortality between patients managed with and those managed without a PAC. It also found that complications associated with insertion of a PAC occurred in fewer than 10% of patients and that none was fatal. Therefore, no clear evidence of benefit or harm from managing critically ill patients with a PAC was found.
Finally, data from the ESCAPE (Evaluation Study of Congestive heart failure and Pulmonary artery catheter Effectiveness) trial [24] were recently published. In that trial, basing the decision to administer vasodilator and diuretic therapy on PAC data plus clinical judgment was not superior to basing the decision on clinical judgment alone, in terms of decreasing mortality or length of hospital stay. The time to resolution of symptoms was shorter with PAC data plus clinical judgment, but this did not translate into other tangible benefits. The authors did note a trend toward better outcomes with PAC-guided therapy in centers admitting greater numbers of patients, and a consistent trend toward greater functional improvement following PAC-guided therapy, which could reflect the close relation between filling pressures and symptoms of CHF. That both ESCAPE and PAC-Man found the PAC to be safe suggests that previous retrospective reports of excess mortality with this monitoring device were confounded by the severity of the clinical settings in which the PAC was applied [5, 23, 24].
Use of PAC to guide treatment protocol
Based on the results of these retrospective and prospective clinical trials, there is level 1 evidence that nonspecific use of the PAC in the general management of critically ill patients is not associated with any change in mortality or morbidity. To a certain extent, these findings are comforting because they demonstrate that critical care physicians can use a highly invasive catheter with a series of potential complications (vide infra) without causing harm. However, these data do not address the fundamental issue regarding the utility of the PAC in patient management. Every study reported above merely compared the presence of a PAC with its absence. None of the studies used PAC-specific data to drive a treatment protocol known to improve outcome.
Results with numerous resuscitation algorithms reflect a consistent theme: benefit or harm depends on the timing and aggressiveness of resuscitation. Resuscitation in shock can be divided into primary and secondary periods. The primary period is the time from initial evaluation through to the first round of resuscitation. The goal during this period is cardiopulmonary/cerebral resuscitation [25]. This encompasses establishment of an adequate airway and, if necessary, mechanical ventilation; restoration of a productive cardiac rhythm and forward blood flow; and attainment of a mean arterial blood pressure above 60 mmHg [26]. Unless this last goal is achieved, all other resuscitative goals are of questionable value and should not be considered in isolation. Once mean arterial blood pressure is sufficient to maintain cerebral and myocardial perfusion, the secondary period of resuscitation begins. The goals during this period are establishment of an adequate perfusion pressure for all organs, establishment of adequate organ blood flow, and establishment of adequate oxygen transport to metabolically active tissues. The first two goals are achieved using volume expansion and vasoactive agents, often guided by data acquired through invasive hemodynamic monitoring via a PAC. The utility of the PAC must be assessed within this context, because no monitoring device can be expected to improve patient outcome if it is not coupled to a treatment that itself improves outcome.
In support of this is the recent, large, single-center clinical trial of early goal-directed therapy for the management of severe sepsis [27]. That study documented markedly improved outcomes when patients were aggressively treated for circulatory shock in the emergency department during the initial 6 hours of hospitalization, rather than waiting for them to be transferred to the ICU for better monitoring. The protocol focused on rapidly achieving an adequate mean arterial pressure, CVP and urine output, as well as an adequate degree of tissue perfusion, as assessed by superior vena caval oxygen saturation. This study stands in stark contrast to the negative findings of numerous large studies that aimed to improve survival using similar resuscitation end-points once patients had been transferred to the ICU [28–31]. These negative studies do not mean that resuscitation is ineffective in supporting life in patients who are in shock. They merely demonstrate that there is no specific level of oxygen transport or mixed venous oxygen saturation that one must attain to ensure a good outcome once shock has induced tissue injury. Thus, the lack of documented benefit of PAC use in these clinical trials probably reflects inadequate study design more than inadequate utility [32].
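The staged logic of the two resuscitation periods can be summarized schematically. In the sketch below, the 60 mmHg mean arterial pressure threshold comes from the text above; the secondary-period targets (CVP 8–12 mmHg, ScvO2 of at least 70%, urine output of at least 0.5 ml/kg per hour) are the commonly cited end-points of the early goal-directed therapy protocol [27]. This is a teaching illustration of the prioritization argument, not a clinical algorithm.

```python
# Schematic sketch of the two-period resuscitation logic: the primary
# pressure goal gates all secondary perfusion/flow/oxygen goals.
from dataclasses import dataclass

@dataclass
class Hemodynamics:
    map_mmhg: float        # mean arterial pressure
    cvp_mmhg: float        # central venous pressure
    scvo2_pct: float       # superior vena caval O2 saturation
    urine_ml_kg_h: float   # urine output

def primary_goal_met(h: Hemodynamics) -> bool:
    """Primary period: restore a perfusing pressure (MAP > 60 mmHg)."""
    return h.map_mmhg > 60

def secondary_goals(h: Hemodynamics) -> dict:
    """Secondary period: organ perfusion, flow and oxygen transport."""
    return {
        "preload (CVP 8-12 mmHg)": 8 <= h.cvp_mmhg <= 12,
        "tissue perfusion (ScvO2 >= 70%)": h.scvo2_pct >= 70,
        "organ flow (urine >= 0.5 ml/kg/h)": h.urine_ml_kg_h >= 0.5,
    }

h = Hemodynamics(map_mmhg=68, cvp_mmhg=6, scvo2_pct=64, urine_ml_kg_h=0.3)
if not primary_goal_met(h):
    print("continue primary resuscitation: restore MAP first")
else:
    for goal, met in secondary_goals(h).items():
        print(f"{goal}: {'met' if met else 'not met'}")
```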
Shoemaker and coworkers [33] conducted a prospective trial of targeting supranormal oxygen delivery in high-risk surgical patients, beginning before surgery and continuing through the operative interval. They then followed these patients during the postoperative period for development of acute lung injury, duration of mechanical ventilation, mortality and total hospital costs. Following initial resuscitation (blood pressure >120/80 mmHg, hematocrit >34%), patients were assigned to one of three treatment groups: a PA control group (n = 30; goals: cardiac index [CI] 2.8–3.5 l/min per m2, global oxygen transport [TO2] 400–550 ml/min per m2, and oxygen consumption [VO2] 120–140 ml/min per m2); a PA protocol group (n = 28; goals: CI >4.5 l/min per m2, TO2 >600 ml/min per m2, and VO2 >170 ml/min per m2); and a CVP control group (no specific additional treatment goals). In each group the targeted levels of CI, TO2 and VO2 were achieved. The PA protocol group, as compared with the PA control group, had less time on the ventilator (2.3 versus 9.4 days), fewer postoperative deaths (1/28 versus 10/30), and fewer ICU days (10.2 versus 15.8 days). Their hospital costs were also the lowest of the three groups. The CVP group fared similarly to the PA control group. These data are in accordance with the large clinical trials reviewed above: when a PAC is present but not used to drive therapy, outcomes are no different than if the PAC is not present.
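The oxygen transport variables used as goals in this and similar studies follow from standard formulas: arterial oxygen content is computed from hemoglobin and saturation, delivery is content times indexed flow, and consumption follows from the Fick principle. The sketch below works through the arithmetic; the patient values are invented for illustration, and only the formulas and the protocol targets quoted above are taken as given.

```python
# Worked arithmetic for PAC-derived oxygen transport goals. Formulas
# are the standard ones; the patient values below are illustrative.

def o2_content(hb_g_dl: float, sat: float, po2_mmhg: float) -> float:
    """Blood O2 content (ml O2/dl): Hb-bound plus dissolved fractions."""
    return 1.34 * hb_g_dl * sat + 0.003 * po2_mmhg

# Illustrative PAC-derived values for a postoperative patient.
ci = 4.6                   # cardiac index, l/min per m2
hb = 12.0                  # hemoglobin, g/dl
sao2, pao2 = 0.98, 100.0   # arterial saturation and tension
svo2, pvo2 = 0.70, 40.0    # mixed venous saturation and tension

cao2 = o2_content(hb, sao2, pao2)   # arterial content, ~16 ml/dl
cvo2 = o2_content(hb, svo2, pvo2)   # mixed venous content

do2i = ci * cao2 * 10               # O2 delivery index, ml/min per m2
vo2i = ci * (cao2 - cvo2) * 10      # O2 consumption index (Fick)

print(f"TO2 (DO2I) = {do2i:.0f} ml/min per m2 (protocol goal > 600)")
print(f"VO2I       = {vo2i:.0f} ml/min per m2 (protocol goal > 170)")
```

With these illustrative values the patient meets both supranormal targets; in practice it is the CI term, measured by PAC thermodilution, that the protocol manipulates with fluids and inotropes.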
Interestingly, Lobo and coworkers [34] examined maximizing oxygen delivery in high-risk elderly surgical patients. They included 37 high-risk patients older than 60 years, comparing a control resuscitation protocol (n = 18; TO2 520–600 ml/min per m2) with a hyper-resuscitation protocol (n = 19; TO2 >600 ml/min per m2). The TO2 goals were achieved in only 13 of the 37 patients ('achievers'). Importantly, there were more postoperative complications in the control group, including infections (12/18 in the control group versus 6/19 in the protocol group; relative risk [RR] 0.47, 95% CI 0.2–0.9) and cardiovascular dysfunction (RR 0.34, 95% CI 0.1–0.8). A mortality benefit was suggested but not demonstrated (mortality at 28 days: 33% in the control group versus 16% in the protocol group; RR 0.32, 95% CI 0.1–0.98). Importantly, these benefits were also observed in the 'non-achiever' subgroup of the protocol group, suggesting that hyper-resuscitation before insult improves outcomes even when it does not increase the measured oxygen delivery.
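The reported infection relative risk can be reproduced directly from the raw counts quoted above. The sketch below does so using the standard log-method confidence interval; the exact interval method used in the original report may differ slightly, which would explain small differences in the CI bounds.

```python
# Reproducing the infection relative risk from Lobo and coworkers [34]
# using the counts given in the text: 6/19 infections in the protocol
# group versus 12/18 in the control group.
import math

def relative_risk(a: int, n1: int, c: int, n2: int):
    """RR of event in group 1 vs group 2, with 95% CI (log method)."""
    rr = (a / n1) / (c / n2)
    se = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)  # SE of log(RR)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

rr, lo, hi = relative_risk(6, 19, 12, 18)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # RR 0.47, consistent with [34]
```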
Gattinoni and coworkers [29] studied 762 patients from 56 centers, all of whom had an Acute Physiology Score above 11. The population included patients undergoing high-risk surgery; those with massive blood loss, sepsis and respiratory failure; and trauma patients. However, the mean time from development of severe shock to enrollment was 23 hours, making this a late recovery protocol rather than an early resuscitation protocol. Patients were randomized to one of three therapeutic goals: CI 2.5–3.5 l/min per m2; CI >4.5 l/min per m2; or mixed venous oxygen saturation >70%. Unfortunately, the therapeutic goal was achieved in fewer than half of the patients in the supranormal CI group. Importantly, the investigators found no difference in ICU or 6-month mortality for any diagnosis, even in the subgroup of patients in whom the targeted goals were achieved. They concluded that therapy aimed at achieving a supranormal CI or a normal mixed venous oxygen saturation cannot be justified in patients who have been in severe circulatory shock for more than 24 hours. Consistent with this lack of benefit, Hayes and coworkers [28] studied 100 critically ill patients with severe circulatory shock, assigning them to either supranormal or normal oxygen delivery goals. They found markedly increased mortality in the treatment group compared with the control group (54% versus 34%; P < 0.05). Thus, aggressive therapy once tissue injury has occurred will not restore organ function but carries its own increased risks. One can also conclude, however, that early aggressive resuscitation before organ injury may be beneficial in high-risk patients, but is probably of marginal benefit in those who are less sick.
Recently, Shah and coworkers [35] conducted a meta-analysis of 5051 patients studied in 13 randomized controlled trials over the past 20 years, in which patients were randomized to use of a PAC or no PAC. They found a significantly higher rate of use of vasodilator and inotropic agents in the PAC groups, but no difference in mortality between groups. Use of the PAC did not improve survival or decrease length of hospital stay. Importantly, none of the studies used PAC-derived variables to drive therapies of proven benefit; they merely noted the impact on outcome of having a PAC in place. The authors stated, as others have, that monitoring without linkage to therapies of proven benefit is unlikely to show benefit. All of these trials excluded patients in whom treating physicians thought a PAC was required for treatment, and so it is possible that patients outside the studied population – such as those being evaluated for heart and lung transplantation – are in fact those who would benefit from PAC placement [35].
The NIH ARDSNet FACTT (Fluids and Catheters Treatment Trial) study, a multicenter clinical trial of PAC versus CVP, has just finished enrollment. It is a study of 2 × 2 factorial design comparing liberal versus conservative fluid management, with specific hemodynamic goals and treatment strategies involving fluids, inotropes, vasopressors and diuretics administered per protocol. This study is the only trial coupling a treatment protocol to use of the PAC. Therefore, if it identifies any difference in outcomes between patients managed by PAC and those managed by CVP, one will be able to conclude that protocolized treatment guided by the PAC is more (or less) effective; the study thus has greater credibility and may justify a change in practice regarding use of the PAC in the ICU. However, FACTT used PAC-derived filling pressures not to guide resuscitation but to limit it, addressing the question of whether limiting resuscitation to avoid worsening pulmonary edema in acute respiratory distress syndrome can improve outcome. Thus, the results of this trial, although important, will not address the broader issue of PAC-guided therapy in the critically ill.
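For clarity, the four arms implied by the 2 × 2 factorial design can be enumerated explicitly: every patient is randomized twice, once to a monitoring device and once to a fluid strategy. The sketch below is purely illustrative of the design, not of the FACTT randomization procedure itself.

```python
# Enumerating the four arms of a 2x2 factorial design such as FACTT:
# monitoring device (PAC vs CVP) crossed with fluid strategy
# (liberal vs conservative).
from itertools import product

catheters = ["PAC", "CVP"]
fluids = ["liberal", "conservative"]

for arm, (cath, fluid) in enumerate(product(catheters, fluids), start=1):
    print(f"arm {arm}: {cath} monitoring + {fluid} fluid strategy")
```

The factorial structure allows the catheter comparison and the fluid comparison to be analyzed independently in the same cohort, which is what makes FACTT able to address both questions at once.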
Summary: impact of PAC on outcome
The combined literature shows that nonspecific placement of a PAC in a critically ill patient does not impair outcome. However, no study has used PAC-derived variables to drive treatment protocols of proven benefit in order to determine whether PAC-specific data result in better outcomes than data derived from less invasive devices, such as the central venous catheter and echocardiography. Such studies are needed, not only to validate or reject use of the PAC, but also to validate or reject the use of any hemodynamic monitoring tool.