For the most basic background on random forests, we refer to this lecture note on how to split a tree node and this one on how to build a forest. We assume familiarity with how the tree finds a split in the regression setting, namely by maximizing the variance reduction of the outcome \(Y\) among the observations at a given internal node. The issue for our treatment decision problem is that splitting on this outcome does not lead to terminal nodes that are homogeneous in terms of the treatment effect or treatment decision. Hence, the tree split needs to facilitate the treatment effect estimation. For that task, the generalized random forest (GRF) by Athey, Tibshirani, and Wager (2019) is a framework that can be used. But before we go into the details of the GRF, let's first review its basic idea.
At each internal node, the random forest tries to find an axis-aligned split that puts higher-value outcomes on one side and lower-value outcomes on the other. This is done through variance reduction. However, when the outcome is not directly observed, we may use a pseudo-outcome motivated by the influence function. Roughly speaking, the influence function of one observation measures how much the estimated parameter would change if we perturb the value \(Y_i\) of that subject. One can naively think of this as re-estimating the parameter without subject \(i\) and then comparing the result with the whole-sample estimate. A simple example is estimating the mean with i.i.d. samples: the parameter of interest is \(\theta = E(Y)\), with
\[ \hat \theta = \frac{1}{n} \sum_{i=1}^n Y_i. \]
and the influence function can be computed as1
\[ \begin{aligned} \text{IF}_i(\hat \theta) &= \frac{\frac{1}{n}\left[ \frac{n-1}{n} \sum_j Y_j + Y_i \right] - \frac{1}{n} \sum_j Y_j}{\frac{1}{n}}\\ &= Y_i - \hat\theta \end{aligned} \]
The interpretation of the influence function is that it measures how much the parameter would change if we perturb the value of \(Y_i\). A positive influence function means that the parameter would increase if we increase \(Y_i\), and vice versa. Hence, we can use this influence function (pseudo-outcome) as the outcome for splitting the tree node, instead of the original outcomes. Of course, in this regression case the split is exactly the same, since the variance reduction on \(Y_i\) is the same as the variance reduction on \(Y_i - \hat\theta\).
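To make this concrete, here is a small numerical check (a sketch added here with illustrative variable names, not part of the original code): we perturb the empirical distribution by \(\epsilon = 1/n\) toward a point mass at \(Y_i\) and compare the scaled change of the sample mean with \(Y_i - \hat\theta\).

# a toy check of the influence function for the sample mean
set.seed(1)
Y = rnorm(10)
n = length(Y)
theta_hat = mean(Y)
eps = 1/n
# estimate under the perturbed distribution (1 - eps) * F_n + eps * delta_{Y_i}
theta_perturbed = (1 - eps) * theta_hat + eps * Y
# the scaled change matches Y_i - theta_hat exactly
cbind((theta_perturbed - theta_hat) / eps, Y - theta_hat)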
For estimating the treatment effect, we can set up our goal using the following procedure. First, we fit a regression model, without covariates, on all data at an internal node:
\[ Y \sim \beta_0 + \tau A \]
This provides a regression estimate \(\hat \tau\) of the average treatment effect for the internal node samples. This estimator has a well-established analytic form from the linear regression literature:
\[ \hat \tau = \frac{\sum_i (A_i - \bar A)(Y_i - \bar Y)}{\sum_i (A_i - \bar A)^2} \]
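As a quick sanity check (a sketch with simulated data and illustrative names, not part of the original code), the closed form above should agree with the slope coefficient returned by lm():

# a toy verification that the closed-form slope matches lm()
set.seed(2)
A = rbinom(50, 1, 0.5) * 2 - 1
Y = 1 + 0.8 * A + rnorm(50)
tau_hat = sum((A - mean(A)) * (Y - mean(Y))) / sum((A - mean(A))^2)
c(closed_form = tau_hat, lm_coef = unname(coef(lm(Y ~ A))["A"]))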
Then we can apply the same influence function idea introduced previously to \(\hat \tau\), which turns out to give
\[ \text{IF}_i(\hat \tau) = \frac{(A_i - \bar A)\left(Y_i - \bar Y - \hat \tau (A_i - \bar A)\right)}{\sum_j (A_j - \bar A)^2} \]
At each internal node, we need to re-calculate these influence functions, and they are then used as the outcome for evaluating splits on all covariates. Please also note that in an observational study, the propensity score would be used to adjust the estimation, and an outcome model of \(E[Y \mid X]\) would be used to adjust the outcome. The effect of that is similar to the variance reduction property we discussed previously.
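Continuing the small simulated example above (again a sketch with illustrative names, not part of the original code), the pseudo-outcomes can be computed directly from the formula. Since the lm() residual equals \(Y_i - \bar Y - \hat\tau (A_i - \bar A)\), the same quantity can also be obtained from the fitted model, and the pseudo-outcomes sum to zero by the normal equations.

# pseudo-outcomes from the influence function formula, using A, Y, tau_hat above
if_tau = (A - mean(A)) * (Y - mean(Y) - tau_hat * (A - mean(A))) / sum((A - mean(A))^2)
# the same quantity via the lm() residuals
fit = lm(Y ~ A)
if_tau2 = (A - mean(A)) * resid(fit) / sum((A - mean(A))^2)
all.equal(if_tau, unname(if_tau2))
sum(if_tau)   # numerically zero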
These procedures are implemented in the grf package.
# generate data
set.seed(1)
n = 800
p = 5
x = matrix(runif(n*p), n, p)
# propensity score depends on x3; treatment a is coded as -1/1
ps = exp(x[,3]-0.5)/(1+exp(x[,3]-0.5))
a = rbinom(n, 1, ps)*2-1
# the optimal treatment side is determined by a sine boundary in (x1, x2)
side = sign(x[,2] - sin(2*pi*x[,1])/3-0.5)
# outcome is higher when the assigned treatment matches the optimal side
R <- rnorm(n, mean = ifelse(side==a, 2, 0.5) + 3*x[, 4] + 3*x[, 5], sd = 0.5)
# fit grf
library(grf)
c.forest <- causal_forest(x, R, a)
table(c.forest$predictions > 0, side)
## side
## -1 1
## FALSE 382 12
## TRUE 36 370
mean((c.forest$predictions > 0) == (side == 1))
## [1] 0.94
# plot the estimated treatment regime
par(mar=rep(2, 4))
plot(x[,1], x[,2], pch = 19, xaxt = "n", yaxt = "n", xlab = "", ylab = "",
col = ifelse(c.forest$predictions > 0, "deepskyblue", "darkorange"))
legend("topright", c("Treatment = 1", "Treatment = -1"), pch = 19,
col = c("deepskyblue", "darkorange"), cex = 1.3)
A nice thing about using random forests is that we can easily obtain variance estimates and confidence intervals for the conditional treatment effect. A sequence of papers has discussed how to estimate the variance of random forests, through the infinitesimal jackknife (Wager, Hastie, and Efron 2014) and U-statistics (Mentch and Hooker 2016). Some details are provided in this lecture note.
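As a rough sketch of how this could be done with the forest fitted above (assuming a version of the grf package in which predict() accepts estimate.variance = TRUE and the forest was fit with the default settings, as above), pointwise normal-approximation intervals follow from the reported variance estimates.

# obtain out-of-bag predictions and variance estimates from the fitted forest
pred <- predict(c.forest, estimate.variance = TRUE)
# pointwise 95% confidence intervals for the conditional treatment effect
ci_lower <- pred$predictions - 1.96 * sqrt(pred$variance.estimates)
ci_upper <- pred$predictions + 1.96 * sqrt(pred$variance.estimates)
head(cbind(ci_lower, estimate = pred$predictions, ci_upper))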
This can be directly calculated from the definition of the influence function, which is \(\lim_{\epsilon \rightarrow 0} \frac{\hat\theta\big((1-\epsilon) F + \epsilon\delta_x\big) - \hat\theta(F)}{\epsilon}\), where \(\hat\theta(\cdot)\) is the estimator viewed as a functional of the distribution, \(F\) is the distribution of the sample, and \(\delta_x\) is the distribution that puts all of its mass at \(x\). In our case of estimating the mean, we can take \(\epsilon\) to be \(1/n\) and \(x\) to be \(Y_i\).