University of Copenhagen - Department of Biostatistics
Control for confounders in observational studies was generally handled through stratification and standardization until the 1960s. Standardization typically reweights the stratum-specific rates so that exposure categories become comparable and the resulting summary rates are directly interpretable in comparisons.
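As a minimal illustration of direct standardization (the data, strata, and standard weights below are hypothetical, not from the talk), the stratum-specific rates in each exposure group are averaged using a common standard population, so both groups are compared on the same confounder distribution:

```python
# Hypothetical stratified data: (events, person-years) by age stratum and exposure.
strata_weights = {"young": 0.5, "middle": 0.3, "old": 0.2}  # assumed standard population

exposed   = {"young": (10, 1000), "middle": (30, 1500), "old": (40, 800)}
unexposed = {"young": (5, 2000),  "middle": (20, 1800), "old": (60, 1200)}

def standardized_rate(data, weights):
    """Directly standardized rate: weighted average of stratum-specific rates,
    using the same standard weights for every exposure group."""
    return sum(weights[s] * events / pyears for s, (events, pyears) in data.items())

rate_exposed = standardized_rate(exposed, strata_weights)
rate_unexposed = standardized_rate(unexposed, strata_weights)
```

Because both summary rates refer to the same standard age distribution, their ratio or difference is free of confounding by the stratifying variable.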
With the development first of loglinear models, and soon also of nonlinear regression techniques (logistic regression, failure-time regression) that the emerging computers could handle, regression modeling became the preferred approach, just as was already the case with multiple regression analysis for continuous outcomes.
Calculation of reweighted summary measures is still useful for interpretation purposes, and since the mid-1990s it has become increasingly obvious that weighting methods are sometimes even necessary in the analysis.
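A prominent modern example of such weighting methods is inverse probability weighting. The sketch below (hypothetical data with a single binary confounder L, exposure A, and outcome Y; not material from the talk) estimates the propensity P(A=1|L) empirically and weights each subject by the inverse probability of the exposure actually received:

```python
# Minimal inverse-probability-weighting sketch.
# Each record is (L, A, Y): binary confounder, binary exposure, outcome.
data = [
    (0, 0, 0.0), (0, 0, 1.0), (0, 1, 1.0), (0, 1, 1.0),
    (1, 0, 1.0), (1, 1, 0.0), (1, 1, 1.0), (1, 1, 1.0),
]

def propensity(l):
    """Empirical propensity P(A=1 | L=l)."""
    arms = [a for (li, a, _) in data if li == l]
    return sum(arms) / len(arms)

def ipw_mean(arm):
    """Weighted mean outcome in one exposure arm, each subject weighted by
    1 / P(A = a_i | L = l_i), which balances L across the arms."""
    num = den = 0.0
    for l, a, y in data:
        if a != arm:
            continue
        p = propensity(l) if a == 1 else 1.0 - propensity(l)
        w = 1.0 / p
        num += w * y
        den += w
    return num / den

effect = ipw_mean(1) - ipw_mean(0)  # weighted risk difference
```

The weighted contrast mimics the comparison one would see if exposure were unrelated to L, which is exactly the situation where simple summary rates become interpretable again.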
The talk will highlight selected scenes from the century-long dialogue between the two approaches to confounder control. The talk is based on joint work with David Clayton.