Propensity score matching


In the statistical analysis of observational data, propensity score matching is a statistical matching technique that attempts to estimate the effect of a treatment, policy, or other intervention by accounting for the covariates that predict receiving the treatment. PSM attempts to reduce the bias due to confounding variables that could be found in an estimate of the treatment effect obtained from simply comparing outcomes among units that received the treatment versus those that did not. Paul Rosenbaum and Donald Rubin introduced the technique in 1983.
The possibility of bias arises because a difference in the treatment outcome between treated and untreated groups may be caused by a factor that predicts treatment rather than the treatment itself. In randomized experiments, the randomization enables unbiased estimation of treatment effects; for each covariate, randomization implies that treatment groups will be balanced on average, by the law of large numbers. Unfortunately, for observational studies, the assignment of treatments to research subjects is typically not random. Matching attempts to reduce the treatment assignment bias, and mimic randomization, by creating a sample of units that received the treatment that is comparable on all observed covariates to a sample of units that did not receive the treatment.
For example, one may be interested in knowing the consequences of smoking. An observational study is required since it is unethical to randomly assign people to the treatment 'smoking.' The treatment effect estimated by simply comparing those who smoked to those who did not smoke would be biased by any factors that predict smoking. PSM attempts to control for these biases by making the treated and untreated groups comparable with respect to the observed control variables.

Overview

PSM is for cases of causal inference and simple selection bias in non-experimental settings in which: (1) few units in the non-treatment comparison group are comparable to the treatment units; and (2) selecting a subset of comparison units similar to the treatment units is difficult because units must be compared across a high-dimensional set of pretreatment characteristics.
In normal matching, single characteristics that distinguish treatment and control groups are matched in an attempt to make the groups more alike. But if the two groups do not have substantial overlap, then substantial error may be introduced. For example, if only the worst cases from the untreated "comparison" group are compared to only the best cases from the treatment group, the result may be regression toward the mean, which may make the comparison group look better or worse than reality.
PSM employs a predicted probability of group membership—e.g., treatment versus control group—based on observed predictors, usually obtained from logistic regression, to create a counterfactual group. Propensity scores may be used for matching or as covariates, alone or with other matching variables or covariates.
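As a concrete illustration of this step, the following sketch (in Python, on synthetic data; the variable names, coefficients, and data-generating process are hypothetical, not part of the method's definition) estimates propensity scores by fitting a logistic regression of the treatment indicator on the observed covariates:

```python
# Minimal sketch: estimating propensity scores with a logistic regression
# on synthetic data.  All names and values here are illustrative.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "age": rng.normal(40, 10, n),      # observed pretreatment covariates X
    "income": rng.normal(50, 15, n),   # (income in thousands)
})
# Treatment assignment depends on the covariates, i.e. the data are confounded.
logit = -8 + 0.1 * df["age"] + 0.05 * df["income"]
df["treated"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# The fitted probability of treatment is the estimated propensity score e(x).
model = LogisticRegression(max_iter=1000).fit(df[["age", "income"]], df["treated"])
df["propensity"] = model.predict_proba(df[["age", "income"]])[:, 1]
```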

General procedure

1. Run a logistic regression: the dependent variable is the treatment indicator Z (1 if the unit received the treatment, 0 otherwise), the predictors are the observed pretreatment covariates, and the fitted probability of treatment is the propensity score.
2. Check that the propensity score is balanced across treatment and comparison groups, and check that covariates are balanced across treatment and comparison groups within strata of the propensity score.
3. Match each participant to one or more nonparticipants on the propensity score, using, for example, nearest-neighbor matching, caliper matching, stratification matching, or kernel matching (a sketch of nearest-neighbor matching appears after this list).
4. Verify that covariates are balanced across treatment and comparison groups in the matched or weighted sample.
5. Conduct a multivariate analysis based on the new sample.
Note: when a single treated observation has multiple matches, it is essential to use weighted least squares rather than ordinary least squares.
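A minimal sketch of steps 3 and 4, continuing from the hypothetical data frame and propensity scores of the sketch above (1-to-1 nearest-neighbor matching without replacement, followed by a standardized-mean-difference balance check; the 0.1 threshold mentioned in the comment is a common rule of thumb, not part of the original method description):

```python
# Minimal sketch of steps 3-4: 1-to-1 nearest-neighbor matching on the
# propensity score without replacement, then a covariate balance check.
# Continues from the hypothetical `df` (with a "propensity" column) above.
import numpy as np
import pandas as pd

treated = df[df["treated"] == 1]
control = df[df["treated"] == 0]

# For each treated unit, take the as-yet-unmatched control unit whose
# propensity score is closest (a caliper, i.e. a maximum allowed distance,
# is often added here).
available = control.copy()
pairs = []
for i, score in treated["propensity"].items():
    j = (available["propensity"] - score).abs().idxmin()
    pairs.append((i, j))
    available = available.drop(j)

matched = pd.concat([treated.loc[[i for i, _ in pairs]],
                     control.loc[[j for _, j in pairs]]])

# Step 4: absolute standardized mean difference for each covariate in the
# matched sample; values below roughly 0.1 are commonly taken as "balanced".
for cov in ["age", "income"]:
    t = matched.loc[matched["treated"] == 1, cov]
    c = matched.loc[matched["treated"] == 0, cov]
    smd = abs(t.mean() - c.mean()) / np.sqrt((t.var() + c.var()) / 2)
    print(f"{cov}: SMD = {smd:.3f}")
```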

Formal definitions

Basic settings

The basic case is of two treatments (treatment and control), with N subjects indexed by i = 1, ..., N. Each subject i would respond to the treatment with r1i and to the control with r0i. The quantity to be estimated is the average treatment effect: E[r1] − E[r0]. The variable Zi indicates whether subject i got treatment (Zi = 1) or control (Zi = 0). Let Xi be a vector of observed pretreatment measurements (covariates) for the ith subject. The observations in Xi are made prior to treatment assignment, but the features in Xi may not include all of those used to decide on the treatment assignment. The numbering of the units is assumed to not contain any information beyond what is contained in Xi. The following sections will omit the i index while still discussing the stochastic behavior of some subject.

Strongly ignorable treatment assignment

Let some subject have a vector of covariates X, and potential outcomes r0 and r1 under control and treatment, respectively. Treatment assignment is said to be strongly ignorable if the potential outcomes are independent of treatment (Z) conditional on background variables X. This can be written compactly as
(r0, r1) ⫫ Z | X
where ⫫ denotes statistical independence.

Balancing score

A balancing score b(X) is a function of the observed covariates X such that the conditional distribution of X given b(X) is the same for treated (Z = 1) and control (Z = 0) units:
X ⫫ Z | b(X).
The most trivial balancing score is b(X) = X itself.

Propensity score

A propensity score is the probability of a unit being assigned to a particular treatment given a set of observed covariates. Propensity scores are used to reduce selection bias by equating groups based on these covariates.
Suppose that we have a binary treatment indicator Z, a response variable r, and background observed covariates X. The propensity score is defined as the conditional probability of treatment given background variables:
e(x) = Pr(Z = 1 | X = x).

Main theorems

The following results were first presented, and proven, by Rosenbaum and Rubin in 1983.
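A compact restatement of those results, using the notation of this section (a paraphrase of the 1983 paper rather than a verbatim quotation):

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\newcommand{\indep}{\perp\!\!\!\perp} % statistical independence
\begin{document}
% Rosenbaum--Rubin (1983), paraphrased: e(X) = Pr(Z = 1 | X) is the
% propensity score and b(X) is any balancing score.
\begin{enumerate}
  \item The propensity score is a balancing score: $X \indep Z \mid e(X)$.
  \item $b(X)$ is a balancing score if and only if $e(X) = f(b(X))$ for some
        function $f$; in this sense $e(X)$ is the coarsest balancing score.
  \item If treatment assignment is strongly ignorable given $X$, i.e.
        $(r_0, r_1) \indep Z \mid X$ with $0 < e(X) < 1$, then it is strongly
        ignorable given any balancing score: $(r_0, r_1) \indep Z \mid b(X)$.
\end{enumerate}
\end{document}
```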
If we think of the value of Z as a parameter of the population that impacts the distribution of X, then the balancing score serves as a sufficient statistic for Z. Furthermore, the above theorems indicate that the propensity score is a minimal sufficient statistic if Z is thought of as a parameter of the distribution of X. Lastly, if treatment assignment Z is strongly ignorable given X, then the propensity score is a minimal sufficient statistic for the joint distribution of (r0, r1, Z).

Graphical test for detecting the presence of confounding variables

Judea Pearl has shown that there exists a simple graphical test, called the back-door criterion, which detects the presence of confounding variables. To estimate the effect of treatment, the background variables X must block all back-door paths in the graph. This blocking can be done either by adding the confounding variable as a control in regression, or by matching on the confounding variable.

Advantages and disadvantages

PSM has been shown to increase "imbalance, inefficiency, model dependence, and bias" and is no longer recommended compared to other matching methods. The insights behind the use of matching still hold but should be applied with other matching methods; propensity scores also have other productive uses in weighting and doubly robust estimation.
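As an illustration of the weighting use mentioned above, the following sketch computes a simple inverse-probability-weighted (Horvitz-Thompson-style) estimate of the average treatment effect, continuing from the hypothetical synthetic data and propensity scores of the earlier sketches; the outcome model and its true effect of 2.0 are invented purely for illustration:

```python
# Minimal sketch: inverse-probability weighting (IPW) with the estimated
# propensity scores, continuing from the hypothetical `df` above.
import numpy as np

# Synthetic outcome with a true average treatment effect of 2.0 (illustrative).
rng = np.random.default_rng(1)
df["y"] = 0.05 * df["age"] + 2.0 * df["treated"] + rng.normal(0, 1, len(df))

z = df["treated"].to_numpy()
e = df["propensity"].to_numpy()
y = df["y"].to_numpy()

# Horvitz-Thompson-style ATE estimate: treated units weighted by 1/e(x),
# control units by 1/(1 - e(x)).
ate = np.mean(z * y / e) - np.mean((1 - z) * y / (1 - e))
print(f"IPW estimate of the average treatment effect: {ate:.2f}")
```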
Like other matching procedures, PSM estimates an average treatment effect from observational data. The key advantages of PSM were, at the time of its introduction, that by using a linear combination of covariates for a single score, it balances treatment and control groups on a large number of covariates without losing a large number of observations. If units in the treatment and control were balanced on a large number of covariates one at a time, large numbers of observations would be needed to overcome the "dimensionality problem" whereby the introduction of a new balancing covariate increases the minimum necessary number of observations in the sample geometrically.
One disadvantage of PSM is that it only accounts for observed covariates. Factors that affect assignment to treatment and outcome but that cannot be observed cannot be accounted for in the matching procedure. As the procedure only controls for observed variables, any hidden bias due to latent variables may remain after matching. Another issue is that PSM requires large samples, with substantial overlap between treatment and control groups.
General concerns with matching have also been raised by Judea Pearl, who has argued that hidden bias may actually increase because matching on observed variables may unleash bias due to dormant unobserved confounders. Similarly, Pearl has argued that bias reduction can only be assured by modelling the qualitative causal relationships between treatment, outcome, observed and unobserved covariates. Confounding occurs when the experimenter is unable to control for alternative, non-causal explanations for an observed relationship between independent and dependent variables. Such control should satisfy the "backdoor criterion" of Pearl.

Implementations in statistics packages