The hazard ratio is a useful measure for comparing groups of patients with respect to a survival outcome. In the context of clinical trials, the terms ‘survival outcome’ or ‘survival time’ are used generically to denote the time ‘survived’ from a specified time origin, such as diagnosis of cancer, start of treatment, or first infarction, to a specific event of interest, such as death, recurrence of cancer, or reinfarction. Broadly speaking, the hazard rate quantifies the risk of experiencing an event at a given point in time. The hazard ratio at that time is the ratio of this risk in one population to the risk in a second population chosen as the reference group. A hazard ratio below 1.0 indicates that the risk of the event is lower in the first group than in the reference group, a hazard ratio of 1.0 indicates equal risks, and a value above 1.0 implies a higher risk in the first group than in the reference group.
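A minimal numeric sketch of this interpretation, using hypothetical hazard rates chosen purely for illustration:

```python
def hazard_ratio(hazard_group, hazard_reference):
    """Ratio of the event risk in one group to the risk in the reference group."""
    return hazard_group / hazard_reference

# Hypothetical rates (events per person-month): suppose the treated group
# experiences events at 0.01 per month and the reference group at 0.02.
hr = hazard_ratio(0.01, 0.02)
print(hr)  # 0.5 -> the treated group's risk is half that of the reference group

print(hazard_ratio(0.02, 0.02))  # 1.0 -> equal risks in both groups
```

A value of 0.5 here would be read as the first group having, at that point in time, half the risk of the event compared with the reference group.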

In general, the hazard ratio between two populations can vary over time. Often it is plausible to assume proportional hazards (see Cox’s proportional hazards model), i.e., to assume that even though the respective hazard rates in each group may change over time, their ratio is constant. In this case, the constant hazard ratio adequately summarizes the relative survival experience in the two groups over the whole time axis. It describes the size of the difference between groups, ...
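The proportional-hazards idea can be illustrated with a standard textbook case: two Weibull hazards that share the same shape parameter but have different scales. Each hazard changes over time, yet their ratio is constant. The shape and scale values below are arbitrary choices for the sketch:

```python
def weibull_hazard(t, shape, scale):
    # Weibull hazard function: h(t) = (shape/scale) * (t/scale)**(shape - 1)
    return (shape / scale) * (t / scale) ** (shape - 1)

SHAPE = 1.5  # shared shape parameter -> hazards rise over time in both groups

for t in (0.5, 1.0, 2.0, 5.0):
    h1 = weibull_hazard(t, SHAPE, scale=4.0)  # group 1
    h2 = weibull_hazard(t, SHAPE, scale=3.0)  # reference group
    # The individual hazards differ at each t, but their ratio is always
    # (3/4)**1.5 ~ 0.6495, i.e., the hazards are proportional.
    print(t, h1 / h2)
```

Algebraically, the shared time-dependent factor cancels in the ratio, leaving the constant (scale2/scale1)**shape; this is exactly the situation in which a single hazard ratio summarizes the whole time axis.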
