
Concept: Acute Myocardial Infarction (AMI) - Adapting the ICES AMI mortality model to Manitoba data


Concept Description

Last Updated: 2004-11-19

Introduction

    The ICES Practice Atlas describes the creation of a logistic model for predicting mortality 30 days and one year after AMI. This model is used to calculate risk-adjusted mortality rates, which form the basis for comparisons between regions and institutions in Ontario. See Tu et al. (1999).

    Of interest to the investigators was how their model would perform on data from other jurisdictions. This note describes an attempt to validate the ICES model on Manitoba AMI data.

1. The ICES model

    The ICES model works with hospitalization records having a most responsible diagnosis of AMI (ICD-9-CM 410¹). From this first cut, several more detailed exclusions were applied to fine-tune the cohort and weed out likely misdiagnoses and other "problem" cases.

    Once the cohort had been defined, risk factors based on patient characteristics and other diagnoses on the record were identified. The definitions were based on prior studies that looked specifically at prediction of short-term AMI survival. An initial group of 40 risk factors was pared down using univariate analyses and clinical considerations, and the final set was selected using multivariate backwards stepwise logistic regression.

    Predictive power of the final models was measured using the area under the ROC curve, and calibration was measured with the Hosmer-Lemeshow goodness of fit test.
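    To make the selection step concrete, here is a minimal sketch of backwards elimination for a logistic model in Python (statsmodels). It is not the ICES code; the DataFrame `ami`, the outcome column, and the candidate risk-factor columns are hypothetical placeholders.

        import statsmodels.api as sm

        def backwards_stepwise(ami, outcome, candidates, p_remove=0.05):
            """Refit the logistic model, dropping the least significant risk factor
            until every remaining predictor has p <= p_remove (illustrative only)."""
            kept = list(candidates)
            while kept:
                X = sm.add_constant(ami[kept])
                fit = sm.Logit(ami[outcome], X).fit(disp=0)
                pvals = fit.pvalues.drop("const")
                if pvals.max() <= p_remove:
                    return fit, kept
                kept.remove(pvals.idxmax())   # eliminate the weakest risk factor
            return None, []

        # fit_30day, selected = backwards_stepwise(ami, "died_30day", candidate_factors)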

2. Inclusions and Exclusions

    The ICES model can be defined for Manitoba data as follows:

    Include: All patients discharged from Manitoba hospitals with a most responsible diagnosis of AMI (ICD-9-CM 410) between fiscal years 1994/95 and 1996/97.

    The years will, of course, depend on the study period; for the purposes of this validation, the same years as in the Practice Atlas are used here.

    The exclusions applied to this cohort are:
    1) Not admitted to an acute care hospital
    2) Age < 20 or age > 105
    3) Non-Manitoba resident
    4) Invalid Registration Number (REGNO)
    5) Admitted to a non-cardiac surgical service
    6) Transferred from another acute care facility
    7) AMI coded as complication
    8) AMI admission within past year
    9) Discharged alive with total LOS <4 days
    10) Miscoded based on hospital chart review
    Of these, only #10 cannot be carried out using only claims data. It was not applied in the validation.

    • For details on how these exclusions were operationalized, please contact Randy Walld.
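
    As a rough illustration only, exclusions 1 through 9 could be applied to a hospital abstract file along the following lines (Python/pandas). The column names are hypothetical placeholders, not the actual Manitoba variable names.

        import pandas as pd

        def apply_exclusions(abstracts: pd.DataFrame) -> pd.DataFrame:
            """Apply exclusions 1-9 in order; exclusion 10 (chart review)
            requires clinical data and is not applied here."""
            df = abstracts
            df = df[df["acute_care"]]                        # 1) acute care hospital
            df = df[(df["age"] >= 20) & (df["age"] <= 105)]  # 2) age 20 to 105
            df = df[df["manitoba_resident"]]                 # 3) Manitoba resident
            df = df[df["regno_valid"]]                       # 4) valid REGNO
            df = df[~df["noncardiac_surgical_service"]]      # 5) not a non-cardiac surgical service
            df = df[~df["transfer_from_acute_care"]]         # 6) not transferred in from acute care
            df = df[~df["ami_coded_as_complication"]]        # 7) AMI not coded as complication
            df = df[~df["ami_admission_past_year"]]          # 8) no AMI admission in past year
            df = df[~(~df["died_in_hospital"] & (df["total_los_days"] < 4))]  # 9) alive with LOS < 4 days
            return df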

3. Risk factors

    Age was coded as a categorical variable with four levels, using age 20 to 49 as the baseline. Sex was coded using male as the baseline.

    The comorbidity risk factors in the original ICES model were based on ICD-9 codes, not ICD-9-CM. While this did not pose a problem for most of them, chronic renal failure seemed to translate less well than the others.

    Risk factor                    ICD-9 code(s)
    Shock                          785.5
    Diabetes with complications    250.1 - 250.9
    Congestive heart failure       428.x
    Malignancy                     140.0 - 208.9
    Cerebrovascular disease        430.0 - 438.x
    Pulmonary edema                518.4, 514.x
    Acute renal failure            584.x, 586.x, 788.5
    Chronic renal failure          585.x, 403.x, 404.x, 996.7, 394.2, 399.4, V45.1
    Cardiac dysrhythmias           427.0 - 427.9

ICD-9 and ICD-9-CM Discrepancies

Correspondence with Jean Agras, who was validating the ICES model on California data, suggested that the codes 394.2 and 399.4 for chronic renal failure make no sense in ICD-9-CM: 394.2 codes "mitral stenosis with insufficiency", while 399.4 does not exist at all. Because of this, these codes were dropped from the analysis.

A further difference between ICD-9 and ICD-9-CM is the presence of fifth digits in the Clinical Modification. Jean pointed out that chronic renal failure can be coded with a fifth digit in code 404, "hypertensive heart and renal disease", and in code 996.73, "[Complications] due to renal dialysis device..."

Using these codes, however, would have identified cases in the Manitoba data which would not be picked up in Ontario. Since the validation requires the data to be treated as similarly as possible between the two sites, this was seen as undesirable. The fifth-digit modifications were therefore not used, even though this threw out some potentially useful information.

The final codes used for chronic renal failure were:

Chronic renal failure 585.x, 403.x, 404.x, 996.7, V45.1
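
For illustration, comorbidity flags of this kind can be built by prefix-matching the diagnosis codes on each abstract against the ICD-9-CM codes listed above (with decimal points removed). The sketch below also shows the age and sex coding described at the start of section 3; the diagnosis column names, and the age cut points other than the 20-49 baseline, are assumptions rather than the ICES specification.

    import pandas as pd

    # Prefixes derived from the code lists above ("428.x" -> "428", "785.5" -> "7855", etc.)
    RISK_FACTOR_CODES = {
        "shock":                 ["7855"],
        "diabetes_complicated":  ["2501", "2502", "2503", "2504", "2505",
                                  "2506", "2507", "2508", "2509"],
        "chf":                   ["428"],
        "malignancy":            [str(c) for c in range(140, 209)],
        "cerebrovascular":       [str(c) for c in range(430, 439)],
        "pulmonary_edema":       ["5184", "514"],
        "acute_renal_failure":   ["584", "586", "7885"],
        "chronic_renal_failure": ["585", "403", "404", "9967", "V451"],
        "dysrhythmia":           ["427"],
    }

    def add_risk_factors(df, dx_cols):
        """Flag each comorbidity if any diagnosis field starts with one of its prefixes,
        and add the age/sex codings (age 20-49 and male are the baselines)."""
        dx = df[dx_cols].fillna("").astype(str)
        for name, prefixes in RISK_FACTOR_CODES.items():
            df[name] = dx.apply(
                lambda row: any(code.startswith(p) for code in row for p in prefixes),
                axis=1,
            ).astype(int)
        # Assumed cut points for the upper three age categories.
        df["age_cat"] = pd.cut(df["age"], bins=[20, 50, 65, 75, 106], right=False,
                               labels=["20-49", "50-64", "65-74", "75+"])
        df["female"] = (df["sex"] == "F").astype(int)
        return df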

4. Results

    A summary of the model fit on Ontario and Manitoba data looks like this:

    Province   Model    N (AMI)   ROC statistic   Hosmer-Lemeshow statistic
    Ontario    30-day   52,616    0.775           120.71 (p=.0001)
    Ontario    1 year   52,616    0.793           154.07 (p=.0001)
    Manitoba   30-day    4,361    0.779           13.078 (p=.1092)
    Manitoba   1 year    4,361    0.791           11.962 (p=.1529)

    Model discrimination, as measured by the area under the ROC curve, is very good and remarkably similar between the two provinces.

    The more than ten-fold difference in sample size makes comparison of the Hosmer-Lemeshow statistics difficult. The Ontario sample has much greater power to detect calibration errors, so it is hard to say on the basis of this which province has the better fit. More years could be added to the Manitoba sample to increase the sample size; an additional three years would roughly double the number of cases.
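
    For reference, both statistics can be computed from the observed outcomes and predicted probabilities roughly as follows. This is a generic sketch (deciles of predicted risk for the Hosmer-Lemeshow grouping), not necessarily the exact grouping used in the Atlas.

        import pandas as pd
        from scipy import stats
        from sklearn.metrics import roc_auc_score

        def hosmer_lemeshow(y_true, y_prob, groups=10):
            """Hosmer-Lemeshow chi-square over groups (deciles) of predicted risk."""
            df = pd.DataFrame({"y": y_true, "p": y_prob})
            df["group"] = pd.qcut(df["p"], groups, duplicates="drop")
            g = df.groupby("group", observed=True)
            obs, exp, n = g["y"].sum(), g["p"].sum(), g["y"].count()
            chi2 = (((obs - exp) ** 2) / (exp * (1 - exp / n))).sum()
            return chi2, 1 - stats.chi2.cdf(chi2, len(obs) - 2)

        # c_statistic = roc_auc_score(y_true, y_prob)   # area under the ROC curve
        # hl_chi2, hl_p = hosmer_lemeshow(y_true, y_prob)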

Cross-validation

    Another method of validation is to apply the actual parameter estimates generated on the Ontario data to the Manitoba cohort. This test indicates whether the model parameters are overfit to the Ontario data and fail to generalize to other, similar samples.

    Model applying Ontario parameter estimates to Manitoba data:


    Model    ROC statistic   Hosmer-Lemeshow statistic
    30-day   0.770           19.69 (p=.0063)
    1 year   0.783           11.86 (p=.1052)

    These indicate slightly poorer fit than the model generated from the Manitoba dataset itself. But the overall fit is still very good, indicating that the model generalizes well, at least to Manitoba data.
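
    Mechanically, this cross-validation amounts to computing a linear predictor from the Ontario coefficients and passing it through the logistic transform, for example as sketched below; the coefficient vector `ontario_beta_30day` and the Manitoba design matrix are hypothetical names.

        import numpy as np
        import pandas as pd

        def predict_with_external_coefficients(X, beta):
            """Predicted probabilities from externally estimated logistic coefficients.
            beta is a Series indexed by predictor name, including an 'intercept' entry."""
            slopes = beta.drop("intercept")
            linpred = beta["intercept"] + X[slopes.index].mul(slopes).sum(axis=1)
            return 1.0 / (1.0 + np.exp(-linpred))

        # p30_manitoba = predict_with_external_coefficients(manitoba_X, ontario_beta_30day)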

    Although the overall fit is good, subsets of the data may be fit less well than these statistics would indicate. For the whole Manitoba sample, the predicted 30-day mortality rate is 16.1% vs. 16.3% actual. But for those with acute renal failure, the predicted death rate of 61.6% seriously underestimates the actual value of 75.4%. This is perhaps to be expected, since the 30-day mortality rate for those with acute renal failure in Ontario is 53.2%.

    That the model predicts a rate of 61.6% rather than 53.2% reflects the differences in demographics and comorbid conditions among those with acute renal failure in the two provinces. The remaining gap of about 14 percentage points (61.6% predicted vs. 75.4% observed) suggests either that those with acute renal failure simply have a higher fatality rate in Manitoba than in Ontario, or that there exist relationships in the data that are not captured by the present model.

Correcting for lack of fit

    The Hosmer-Lemeshow test does not attempt to account for systematic bias in the predicted outcomes; it only measures total lack of fit. A technique which attempts to determine (and correct for) systematic bias is described by Phibbs et al. (1992).

    The authors examine several logistic models for rare events which overestimate the probability of the outcome at the extremes of the risk spectrum, and underestimate it in the middle. To correct for this 'U'-shaped relationship, the data are first modeled with quadratic regression, and the resulting model is then applied to the predicted probabilities to arrive at a set of adjusted predicted outcomes. These are then re-evaluated with a version of the Hosmer-Lemeshow test to see what non-systematic lack of fit remains.
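
    A minimal sketch of that correction is given below, under one plausible reading of the method (regressing the observed outcome on the predicted probability and its square, then using the refitted model to generate adjusted predictions); the details in Phibbs et al. (1992) may differ.

        import numpy as np
        import statsmodels.api as sm

        def quadratic_recalibration(y, p):
            """Refit the outcome on p and p**2 and return adjusted predicted probabilities."""
            X = sm.add_constant(np.column_stack([p, p ** 2]))
            recal = sm.Logit(y, X).fit(disp=0)
            return recal.predict(X)

        # p_adjusted = quadratic_recalibration(died_30day, p30)
        # The adjusted predictions are then re-checked with the Hosmer-Lemeshow test.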

Application to Manitoba data

    When this method was applied to the output of the logistic models, an improvement in fit was seen for both the 30-day and 1-year models, with the 1-year model showing the greater improvement. This suggests that re-specification of the model (by including interaction terms) could be effective in improving calibration, especially for longer-term mortality prediction.

Footnotes:

    ¹ 410 refers to a recent AMI. Robinson et al. (1997) also used 412 (old myocardial infarction), the code for a patient not currently presenting any symptoms. The definitions used in their study "were intentionally constructed to cover any possible case to which the respondent might have given a positive response" (Robinson, Jan. 27/97 email to R. Bond).

More Information

  • Phibbs CS , Romano PS, Luft HS, et al.(1992). Improving the fit of logistic models for mortality and other rare events. San Francisco: Institute for Health Policy Studies, University of California.

References 

  • Robinson JR, Young TK, Roos LL, Gelskey DE. Estimating the burden of disease: Comparing administrative data and self-reports. Med Care 1997;35(9):932-947. [Abstract] (View)
  • Tu JV, Austin PC, Naylor CD, Iron K, Zhang H. "Acute myocardial infarction outcomes in Ontario." In: Naylor CD, et al. (eds). Cardiovascular Health and Services in Ontario: An ICES Atlas. Toronto, ON: Institute for Clinical Evaluative Sciences; 1999. 83-110. (View)
  • Tu JV, Austin PC, Walld R, Roos L, Agras J, McDonald KM. Development and validation of the Ontario acute myocardial infarction mortality prediction rules. J Am Coll Cardiol 2001;37(4):992-997. [Abstract] (View)

Keywords 

  • cardiovascular disease
  • comorbidity
  • hypertension
  • logistic regression
  • risk factors

