Instruction

Please remove this section when submitting your homework.

Students are encouraged to work together on homework and/or utilize advanced AI tools. However, there are two basic rules:

Final submissions must be uploaded to Gradescope. No email or hard copy will be accepted. Please refer to the course website for late submission policy and grading rubrics.

Question 1 [65 pts]: Linear Discriminant Analysis

Load the same MNIST dataset from HW5, in the same way as in the previous HW.

  load("mnist_first2000.RData")
  dim(mnist)
## [1] 2000  785
  1. [25 pts] We aim to fit an LDA (Linear Discriminant Analysis) model with our own function, following our understanding of LDA. An issue with this dataset, as we saw earlier, is that some pixels display little or no variation across all observations. This zero-variance issue poses a problem when inverting the estimated covariance matrix. Do the following to address this issue and fit the LDA model; a sketch of one possible implementation is given after the steps below.

    • Within the first 1000 observations, extract digits 1, 4, and 9 as the training dataset. Do the same with the second 1000 observations to form the testing dataset.
    • To remove variables (pixels) with low variance, follow the same procedure as before to select the top 300 pixels with the highest variance.
    • Following our lecture notes, the LDA model requires the estimation of several quantities: \(\Sigma\), \(\mu_k\), and \(\pi_k\). Perform these estimations and calculate the linear decision score \(x^T w_k + b_k\), where \(w_k = \Sigma^{-1}\mu_k\) and \(b_k = -\frac{1}{2}\mu_k^T \Sigma^{-1}\mu_k + \log \pi_k\).
    • Use the linear decision score to predict the class label on the testing data. Report the prediction error and the confusion matrix based on the testing data.
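
The following is a minimal sketch of one possible implementation, not a definitive solution. It assumes the class label is stored in the first column of mnist (adjust the indexing if your label column is elsewhere), computes the pixel variances on the training data, and uses the discriminant formulas above; all object names are illustrative.

  # a sketch assuming the class label is in the first column of `mnist`;
  # adjust the indexing if your label column is stored differently
  train_raw <- mnist[1:1000, ]
  test_raw  <- mnist[1001:2000, ]
  keep_tr   <- train_raw[, 1] %in% c(1, 4, 9)
  keep_te   <- test_raw[, 1] %in% c(1, 4, 9)
  y_train <- train_raw[keep_tr, 1]; x_train <- as.matrix(train_raw[keep_tr, -1])
  y_test  <- test_raw[keep_te, 1];  x_test  <- as.matrix(test_raw[keep_te, -1])

  # keep the 300 pixels with the highest variance (computed on the training data)
  top300  <- order(apply(x_train, 2, var), decreasing = TRUE)[1:300]
  x_train <- x_train[, top300]
  x_test  <- x_test[, top300]

  # estimate pi_k, mu_k, and the pooled covariance Sigma
  classes <- sort(unique(y_train))
  n       <- nrow(x_train)
  pi_k    <- sapply(classes, function(k) mean(y_train == k))
  mu_k    <- sapply(classes, function(k) colMeans(x_train[y_train == k, ]))  # 300 x 3
  Sigma   <- Reduce(`+`, lapply(classes, function(k) {
    xc <- scale(x_train[y_train == k, ], center = TRUE, scale = FALSE)
    t(xc) %*% xc
  })) / (n - length(classes))
  Sigma_inv <- solve(Sigma)  # may be numerically unstable; part 2 adds a ridge penalty

  # linear decision scores x^T w_k + b_k, evaluated on the testing data
  w_k    <- Sigma_inv %*% mu_k                                   # 300 x 3
  b_k    <- -0.5 * colSums(mu_k * (Sigma_inv %*% mu_k)) + log(pi_k)
  scores <- x_test %*% w_k + matrix(b_k, nrow(x_test), length(classes), byrow = TRUE)
  pred   <- classes[max.col(scores)]

  table(Predicted = pred, Actual = y_test)  # confusion matrix
  mean(pred != y_test)                      # prediction error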
  2. [20 pts] The result may not be ideal; compared with the one-vs-one SVM model from our previous HW, it is probably worse. Let's try to improve it. One issue could be that the inverse of the covariance matrix is not very stable. As we discussed in class, one possible choice is to add a ridge penalty to \(\Sigma\). Carry out this approach using \(\lambda = 1\) and re-calculate the confusion matrix and prediction error. Then try a few different penalty values of \(\lambda\) to observe how the prediction error changes. Comment on the effect of \(\lambda\), specifically in the context of this model. Is this related to the bias-variance trade-off?
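
A possible sketch of the ridge-regularized fit, reusing the objects (Sigma, mu_k, pi_k, classes, x_test, y_test) from the part 1 sketch above; interpreting the ridge penalty as \(\Sigma + \lambda I\) is an assumption about the exact form discussed in class.

  # ridge-regularized covariance: Sigma + lambda * I (assumed form), with lambda = 1
  lambda    <- 1
  Sigma_reg <- Sigma + lambda * diag(ncol(Sigma))
  Sinv_reg  <- solve(Sigma_reg)

  w_k    <- Sinv_reg %*% mu_k
  b_k    <- -0.5 * colSums(mu_k * (Sinv_reg %*% mu_k)) + log(pi_k)
  scores <- x_test %*% w_k + matrix(b_k, nrow(x_test), length(classes), byrow = TRUE)
  pred   <- classes[max.col(scores)]
  table(Predicted = pred, Actual = y_test)
  mean(pred != y_test)

  # repeat over a small grid of lambda values to see how the test error changes
  sapply(c(0.1, 1, 10, 100), function(lambda) {
    Sinv <- solve(Sigma + lambda * diag(ncol(Sigma)))
    w <- Sinv %*% mu_k
    b <- -0.5 * colSums(mu_k * (Sinv %*% mu_k)) + log(pi_k)
    s <- x_test %*% w + matrix(b, nrow(x_test), length(classes), byrow = TRUE)
    mean(classes[max.col(s)] != y_test)
  })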

  3. [20 pts] Another approach is to perform PCA at the very beginning of this analysis, instead of screening for the top 300 variables, and then carry out the same type of analysis as in part 1, but with the principal components as your variables (in both the training and testing data). A sketch of the PCA step is given after this part.

    • Start with the original complete mnist data, and take digits 1, 6, and 7. Perform PCA on the pixels.
    • Take the first 50 PCs as your new dataset, and then split the data back into training and testing data, using the same training/testing split as in part 1.
    • Perform the same analysis as in part 1 and report the confusion matrix and prediction error.

Comment on why you think this approach would work well.
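
One way to sketch the PCA step, again assuming the label sits in the first column of mnist; the LDA estimation and prediction steps from the part 1 sketch can then be rerun unchanged on the 50-column PC matrices constructed below.

  # sketch of the PCA-based pipeline; again assumes the label is in column 1
  keep   <- mnist[, 1] %in% c(1, 6, 7)
  labels <- mnist[keep, 1]
  pixels <- as.matrix(mnist[keep, -1])

  # PCA on the complete (training + testing) pixel data, as described above;
  # no scaling, since some pixels have near-zero variance
  pc  <- prcomp(pixels, center = TRUE, scale. = FALSE)
  pcs <- pc$x[, 1:50]  # first 50 principal components

  # split back into training (first 1000 rows of mnist) and testing (second 1000)
  is_train <- which(keep) <= 1000
  x_train <- pcs[is_train, ];  y_train <- labels[is_train]
  x_test  <- pcs[!is_train, ]; y_test  <- labels[!is_train]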

Question 2 [35 pts]: Classification Trees

We will use the same PCA data you constructed in Question 1, part 3. Use the training data for model fitting and the testing data for model evaluation. Fit a CART model with 5-fold cross-validation and answer the following question. There are many packages that can do this; you could consider the rpart package, which provides cross-validation functionality and easy plotting. Do not print excessive output when using this package.
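
A sketch of one way to do this with rpart, assuming x_train, x_test, y_train, and y_test hold the 50-PC training and testing data from Question 1, part 3; the 5-fold cross-validation is requested through xval = 5 in rpart.control, and the tree is pruned at the cp value with the smallest cross-validated error.

  # sketch using rpart's built-in cross-validation; assumes x_train/x_test and
  # y_train/y_test are the 50-PC data from Question 1, part 3
  library(rpart)
  train_df <- data.frame(Digit = factor(y_train), x_train)
  test_df  <- data.frame(x_test)

  # 5-fold cross-validation is requested via xval = 5
  fit <- rpart(Digit ~ ., data = train_df, method = "class",
               control = rpart.control(xval = 5, cp = 0.001))

  # prune at the cp value with the smallest cross-validated error
  best_cp <- fit$cptable[which.min(fit$cptable[, "xerror"]), "CP"]
  pruned  <- prune(fit, cp = best_cp)

  pred <- predict(pruned, newdata = test_df, type = "class")
  table(Predicted = pred, Actual = y_test)
  mean(pred != y_test)

  plotcp(fit)                 # cross-validation error versus cp
  plot(pruned); text(pruned)  # compact plot of the pruned tree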