Hazard ratio

In survival analysis, the hazard ratio (HR) is the ratio of the hazard rates corresponding to the conditions characterised by two distinct levels of a treatment variable of interest.[1]
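
For reference, a standard formulation of the hazard function underlying this definition, and of the ratio itself, is

$$h(t) = \lim_{\Delta t \to 0} \frac{P(t \le T < t + \Delta t \mid T \ge t)}{\Delta t}, \qquad \mathrm{HR}(t) = \frac{h_1(t)}{h_0(t)},$$

where $T$ is the time to the event and the subscripts denote the two levels of the treatment variable.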

"[1] In essence, the hazard for the composite outcome was 80% lower among the vaccinated relative to those who were unvaccinated in the same study.

Hazard ratios differ from relative risks (RRs) and odds ratios (ORs) in that RRs and ORs are cumulative over an entire study, using a defined endpoint, while HRs represent instantaneous risk over the study time period, or some subset thereof.
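
To make the distinction concrete, the relative risk at a chosen endpoint time $t$ compares cumulative incidences, while the hazard ratio compares instantaneous rates:

$$\mathrm{RR}(t) = \frac{1 - S_1(t)}{1 - S_0(t)}, \qquad \mathrm{HR}(t) = \frac{h_1(t)}{h_0(t)},$$

where $S_i(t)$ is the survival function of group $i$. The RR changes as the endpoint time is moved, even when the HR is constant over time.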

Regression models are used to obtain hazard ratios and their confidence intervals.

For two groups that differ only in treatment condition, the ratio of the hazard functions is given by $e^\beta$, where $\beta$ is the estimate of treatment effect derived from the regression model.[4]

For a continuous explanatory variable, the same interpretation applies to a unit difference.
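
As an illustrative sketch (not taken from the cited sources), such a regression can be fit in Python with the lifelines package; the dataset and column names below come from the example data shipped with lifelines:

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

# Example recidivism dataset bundled with lifelines:
# 'week' is the time to event, 'arrest' the event indicator.
rossi = load_rossi()

cph = CoxPHFitter()
cph.fit(rossi, duration_col="week", event_col="arrest")

# exp(coef) is the hazard ratio for a one-unit difference in each
# covariate, reported with its 95% confidence interval.
print(cph.summary[["exp(coef)",
                   "exp(coef) lower 95%",
                   "exp(coef) upper 95%"]])
```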

In its simplest form, the hazard ratio can be interpreted as the chance of an event occurring in the treatment arm divided by the chance of the event occurring in the control arm (or vice versa).

These chances are usually depicted with a survival curve for each group; the curve represents the odds of an endpoint having occurred at each point in time (the hazard).[3]

When a study reports one hazard ratio per time period, it is assumed that the difference between groups was proportional over that period.

A p-value derived from the Cox model or the log-rank test might then be used to assess the significance of any differences observed in these survival curves.[9]
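
A minimal sketch of such a test, assuming the lifelines package and made-up event times:

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(42)
# Hypothetical event times (e.g. months) for two arms; all events
# are observed here, so the event indicators are all ones.
durations_a = rng.exponential(scale=10.0, size=100)
durations_b = rng.exponential(scale=15.0, size=100)
events_a = np.ones(100, dtype=int)
events_b = np.ones(100, dtype=int)

result = logrank_test(durations_a, durations_b,
                      event_observed_A=events_a,
                      event_observed_B=events_b)
print(result.p_value)  # a small p-value suggests the curves differ
```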

Conventionally, probabilities lower than 0.05 are considered significant, and researchers provide a 95% confidence interval for the hazard ratio, e.g. derived from the standard error of the Cox-model regression coefficient, i.e. $\exp(\hat\beta \pm 1.96 \cdot \mathrm{SE}(\hat\beta))$.[9][10]

Statistically significant hazard ratios cannot include unity (one) in their confidence intervals.
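
A small self-contained sketch of this computation; the numbers are hypothetical:

```python
import math

def cox_hr_ci(beta, se, z=1.96):
    """Hazard ratio and 95% CI from a Cox regression
    coefficient `beta` and its standard error `se`."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical estimates: beta = -1.609 corresponds to HR ~ 0.20.
hr, lo, hi = cox_hr_ci(-1.609, 0.25)
print(f"HR = {hr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# Significant at the 5% level only if the interval excludes 1.
print("significant:", not (lo <= 1.0 <= hi))
```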

For instance, a surgical procedure may have high early risk but excellent long-term outcomes.[citation needed]

If the hazard ratio between groups remains constant, this is not a problem for interpretation.

However, interpretation of hazard ratios becomes impossible when selection bias exists between groups.

The researchers' decision about when to follow up is arbitrary and may lead to very different reported hazard ratios.
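
A simulation sketch of this sensitivity, assuming numpy, pandas, and lifelines; the hazard shapes and follow-up cutoffs are invented purely for illustration:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 5000

# Control arm: constant hazard (exponential event times).
t_control = rng.exponential(scale=10.0, size=n)

# Treatment arm: high hazard before t = 4, low hazard afterwards,
# mimicking a risky surgery with good long-term outcomes.
early = rng.exponential(scale=2.0, size=n)
late = 4.0 + rng.exponential(scale=40.0, size=n)
t_treat = np.where(early < 4.0, early, late)

def hr_at_cutoff(cutoff):
    """Cox hazard ratio with administrative censoring at `cutoff`."""
    times = np.concatenate([t_control, t_treat])
    group = np.concatenate([np.zeros(n), np.ones(n)])
    df = pd.DataFrame({
        "time": np.minimum(times, cutoff),
        "event": (times <= cutoff).astype(int),
        "treatment": group,
    })
    cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    return float(np.exp(cph.params_["treatment"]))

# A short follow-up window captures only the harmful early period,
# while a long one averages in the late benefit, so the two reported
# hazard ratios differ substantially.
print(hr_at_cutoff(2.0))
print(hr_at_cutoff(50.0))
```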

In the Cox model, this can be shown to translate to the following relationship between group survival functions: $S_1(t) = S_0(t)^r$, where $r$ is the hazard ratio.[11]
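
The step from proportional hazards to this power relationship is short: with $h_1(t) = r\,h_0(t)$, the cumulative hazards satisfy $H_1(t) = r\,H_0(t)$, and since $S(t) = e^{-H(t)}$,

$$S_1(t) = e^{-H_1(t)} = e^{-r H_0(t)} = \left(e^{-H_0(t)}\right)^r = S_0(t)^r.$$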

It should be clear that the hazard ratio is a relative measure of effect and tells us nothing about absolute risk.[3]

The treatment effect depends on the underlying survival function of the disease, not just on the hazard ratio.

A statistically significant, but practically insignificant, effect can produce a large hazard ratio, e.g. a treatment increasing the number of one-year survivors in a population from one in 10,000 to one in 1,000 has a hazard ratio of 10.

It is unlikely that such a treatment would have had much impact on the median endpoint time ratio, which likely would have been close to unity, i.e. mortality was largely the same regardless of group membership, and the effect would be clinically insignificant.[citation needed]
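
To spell out the arithmetic behind this example: reading the hazard ratio here as the ratio of one-year survivor proportions (a simplification) gives

$$\frac{1/1{,}000}{1/10{,}000} = 10,$$

while the absolute difference in one-year survival is only $0.0010 - 0.0001 = 0.0009$, i.e. nine in ten thousand.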

By contrast, a treatment group in which 50% of infections are resolved after one week (versus 25% in the control) yields a hazard ratio of two.

If it takes ten weeks for all cases in the treatment group and half of cases in the control group to resolve, the ten-week hazard ratio remains at two, but the median endpoint time ratio is ten, a clinically significant difference.
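
The medians behind this figure: 50% of treated infections resolve by week one, while 50% of control infections resolve only by week ten, so

$$\frac{\text{median time (control)}}{\text{median time (treatment)}} = \frac{10\ \text{weeks}}{1\ \text{week}} = 10.$$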

Figure: Kaplan–Meier curve illustrating overall survival based on volume of brain metastases. Elaimy et al. (2011)[6]