Instructions

Students are encouraged to work together on homework. However, sharing, copying, or providing any part of a homework solution or code is an infraction of the University’s rules on Academic Integrity. Any violation will be punished as severely as possible. Final submissions must be uploaded to compass2g. No email or hardcopy will be accepted. For late submission policy and grading rubrics, please refer to the course website.

About HW5

We use the coordinate descent algorithm introduced in class to implement the entire Lasso solution path. For coordinate descent, you may also want to review HW4. This HW involves two steps: in the first step, we solve for the solution at a fixed \(\lambda\) value; in the second step, we consider a sequence of \(\lambda\) values and solve along it using path-wise coordinate descent.

Question 1 [50 Points] Lasso solution for fixed \(\lambda\)

For this question, you cannot use functions from any additional library, except the MASS package, which is used to generate multivariate normal data. Following HW4, we use this version of the objective function:

\[\arg\min_{\beta} \frac{1}{n} \lVert \mathbf{y} - \mathbf{X}\beta \rVert^2 + \lambda \lVert \beta \rVert_1\] The following data is used to fit this model. If you work in Python, you can generate analogous data with similar functions. We use

  library(MASS)
  set.seed(10)
  n = 100
  p = 200

  # generate data
  V = matrix(0.3, p, p)
  diag(V) = 1
  X_org = as.matrix(mvrnorm(n, mu = rep(0, p), Sigma = V))
  true_b = c(1:3, -3:-1, rep(0, p-6))
  y_org = X_org %*% true_b + rnorm(n)

  # pre-scaling and centering X and y
  X = scale(X_org)*sqrt(n/(n-1))
  y = scale(y_org)*sqrt(n/(n-1))
  lambda = 0.3

  1. [10 pts] State the solution \(x\) of the following problem \[ \underset{x}{\arg \min} \,\, (x-b)^{2}+\lambda|x|, \quad \lambda>0 \] Then, implement a function in the form of soft_th <- function(b, lambda) to return the result of the above problem. Note that in the coordinate descent algorithm discussed in the slides, \(b\) is the OLS estimator for the current coordinate and \(\lambda\) is the penalty parameter; a hedged sketch of this function is given after this question. Report the function output for the following testing cases with \(\lambda = 0.3\): 1) \(b = 1\); 2) \(b = -1\); 3) \(b = -0.1\).

  2. [40 pts] We will use the pre-scaled and centered data X and y for this question, hence no intercept is needed. Write a Lasso algorithm function myLasso(X, y, lambda, beta_init, tol, maxitr), which returns two outputs (as a list with two components):

    • a vector of \(\beta\) values without the intercept
    • number of iterations

    You need to consider the following while completing this question:

    • Do not use functions from any additional library
    • Start with a vector beta_init: \(\boldsymbol \beta = \mathbf{0}_{p \times 1}\)
    • Use the soft-threshold function in the iteration when performing the coordinate-wise update.
    • Use the efficient \(\mathbf{r}\) updating approach (we discussed this in lecture and HW4) in the iteration; see the sketch at the end of this question.
    • Run your coordinate descent algorithm for a maximum of maxitr = 100 iterations. Each iteration should loop through all variables.
    • You should implement the early stopping rule with tol. This means terminating the algorithm when the \(\boldsymbol \beta\) value of the current iteration is sufficiently similar to the previous one, i.e., \(\lVert \boldsymbol \beta^{(k)} - \boldsymbol \beta^{(k-1)} \rVert^2 \leq \text{tol}\).

    After completing your code, run it on the data we generated previously. Provide the following results:

    • Print out the first 8 coefficients and the number of iterations.
    • Check and compare your answer to the glmnet package using the following code. You should report its first 8 coefficients and the \(L_1\) norm of the difference \(\| \, \hat{\boldsymbol\beta}^\text{glmnet}_{[1:8]} - \hat{\boldsymbol\beta}^\text{yours}_{[1:8]} \, \|_1\).
  library(glmnet)
  # glmnet uses a different loss function, so use lambda / 2 as the penalty
  glmnetfit = glmnet(X, y, lambda = lambda / 2, intercept = FALSE)
  glmnetfit$beta[1:8]
## [1]  0.000000000  0.158233309  0.429648259 -0.519431173 -0.171467909 -0.006652507
## [7]  0.000000000  0.000000000
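
As a point of reference, here is a hedged sketch of the soft-thresholding function, not an official solution: it assumes the minimizer of \((x-b)^{2}+\lambda|x|\) takes the usual soft-thresholding form with threshold \(\lambda/2\) (the factor of 2 arises because the squared term carries no \(1/2\)).

  # sketch of soft-thresholding: shrink b toward zero by lambda / 2
  soft_th <- function(b, lambda) {
    sign(b) * pmax(abs(b) - lambda / 2, 0)
  }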
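
Similarly, a minimal sketch of the coordinate descent routine referenced in the bullet list above. It assumes soft_th() as sketched and the pre-scaled X, whose columns satisfy crossprod(X[, j]) = n, so the coordinate-wise OLS estimate on the partial residual is sum(X[, j] * r) / n + beta[j].

  # sketch of coordinate descent with efficient residual updating
  myLasso <- function(X, y, lambda, beta_init, tol, maxitr) {
    n <- nrow(X)
    p <- ncol(X)
    beta <- beta_init
    r <- y - X %*% beta                        # residual at the current beta
    for (k in 1:maxitr) {
      beta_old <- beta
      for (j in 1:p) {
        b_j <- sum(X[, j] * r) / n + beta[j]   # OLS estimate on the partial residual
        b_new <- soft_th(b_j, lambda)          # coordinate-wise soft-threshold update
        r <- r - X[, j] * (b_new - beta[j])    # O(n) residual update, no refit
        beta[j] <- b_new
      }
      itr <- k
      if (sum((beta - beta_old)^2) <= tol) break   # early stopping rule
    }
    list(beta = beta, itr = itr)
  }

On the generated data this would be called as myLasso(X, y, lambda, beta_init = rep(0, p), tol = 1e-7, maxitr = 100); the tol value here is an assumption, not specified above.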

Question 2 [50 Points] Path-wise Coordinate Descent

Let’s perform path-wise coordinate descent. The idea is simple: we solve on a sequence of \(\lambda\) values, starting from the largest one in the sequence. The initial \(\boldsymbol\beta\) for the first (largest) \(\lambda\) is still the zero vector. After obtaining the optimal \(\boldsymbol \beta\) for a given \(\lambda\), we simply use this solution as the initial value for the next, smaller \(\lambda\). This is referred to as a warm start in optimization. We will consider the following sequence of \(\lambda\) values borrowed from glmnet. Note that this is a decreasing sequence, from large to small values.

  glmnetfit = glmnet(X, y, intercept = FALSE)

  # Again, twice lambda is used for our function
  lambda_all = glmnetfit$lambda * 2
  
  # a matplot of the first 8 coefficients vs log scale of lambda
  matplot(log(lambda_all), t(glmnetfit$beta[1:8, ]), type = "l", lwd = 2, 
          xlab = "Log Lambda",ylab = "Estimated Beta", main = "glmnet")
  legend("topleft", paste("beta", 1:8, "=", c(1:3, -3:-1, 0, 0)), 
         col = 1:8, lty = 1:8, lwd = 2)

  1. [20 pts] Write a function myLasso_pw <- function(X, y, lambda_all, tol, maxitr), which outputs a \(p \times N_{\lambda}\) matrix, where \(N_{\lambda}\) is the number of unique \(\lambda\) values. Follow the instructions at the beginning of this question to include the warm start for the path-wise solution (see the sketch after this question). Your myLasso_pw should make use of your myLasso in Question 1.

  2. [5 pts] Provide the same plot as the above glmnet solution-path plot for the first 8 parameters in your solution path. Make the two plots side-by-side (e.g., par(mfrow = c(1, 2)) in R) with glmnet on the left and your solution path on the right.

  3. [5 pts] Based on your plot, if we decrease \(\lambda\) from its maximum value, which two variables enter the model first (i.e., start to take nonzero values)? You may denote your covariates as \(X_1, ..., X_8\).

  4. [5 pts] In Question 1, we calculated the \(L_1\) norm discrepancy between our solution and glmnet on the first 8 parameters. In this question, we will calculate the discrepancy over all coefficients and all \(\lambda\) values. After calculating the discrepancies, show a scatter plot of log(\(\lambda\)) vs. discrepancy. Comment on what you observe.

  5. [15 pts] Based on the solution path you obtained above, recover the unscaled coefficients using the formula in HW4 (a hedged sketch of this step is given after this question). Then compare the first 9 coefficients (including the intercept term) with the following, using a plot similar to the one in part 2. Report the maximum value of the discrepancy (see part 4) across all \(\lambda\) values.

  glmnetfit2 = glmnet(X_org, y_org, lambda = lambda_all/2*sd(y_org)*sqrt(n/(n-1)))
  lassobeta2 = coef(glmnetfit2)[1:9, ]
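
A hedged sketch of the path-wise wrapper referenced in part 1, assuming the myLasso() sketched in Question 1; the only new ingredient is the warm start, where each solution seeds the next, smaller \(\lambda\).

  # sketch of path-wise coordinate descent with warm starts
  myLasso_pw <- function(X, y, lambda_all, tol, maxitr) {
    p <- ncol(X)
    beta_mat <- matrix(0, p, length(lambda_all))
    beta_init <- rep(0, p)                   # all-zero start at the largest lambda
    for (l in seq_along(lambda_all)) {
      fit <- myLasso(X, y, lambda_all[l], beta_init, tol, maxitr)
      beta_mat[, l] <- fit$beta
      beta_init <- fit$beta                  # warm start for the next, smaller lambda
    }
    beta_mat
  }

A usage sketch for parts 2 and 4 (the tol value and object names are assumptions):

  beta_mat = myLasso_pw(X, y, lambda_all, tol = 1e-7, maxitr = 100)
  par(mfrow = c(1, 2))
  matplot(log(lambda_all), t(glmnetfit$beta[1:8, ]), type = "l", lwd = 2, main = "glmnet")
  matplot(log(lambda_all), t(beta_mat[1:8, ]), type = "l", lwd = 2, main = "myLasso_pw")
  # L1 discrepancy against glmnet at each lambda (part 4)
  discrepancy = colSums(abs(as.matrix(glmnetfit$beta) - beta_mat))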
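
For the unscaling step in part 5, here is a hedged sketch. It assumes the HW4 formula takes the standard form \(\hat\beta_j^\text{org} = \hat\beta_j \cdot \text{sd}(y)/\text{sd}(x_j)\) with intercept \(\bar{y} - \sum_j \hat\beta_j^\text{org} \bar{x}_j\) (the \(\sqrt{n/(n-1)}\) factors cancel between X and y), and that beta_mat is the \(p \times N_{\lambda}\) output of myLasso_pw above.

  # sketch: map the scaled-data path back to the original scale
  sd_x = apply(X_org, 2, sd)
  sd_y = sd(y_org)
  beta_org = beta_mat * (sd_y / sd_x)        # p-vector recycled down each column
  intercept = mean(y_org) - colSums(beta_org * colMeans(X_org))
  beta_full = rbind(intercept, beta_org)     # first 9 rows compare with lassobeta2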