Instructions

Students are encouraged to work together on homework. However, sharing, copying, or providing any part of a homework solution or code is an infraction of the University’s rules on Academic Integrity. Any violation will be punished as severely as possible. Final submissions must be uploaded to Gradescope; no email or hardcopy submissions will be accepted. For the late-submission policy and grading rubrics, please refer to the course website.

About Homework 1

You will receive a 100% score for HW1 as long as you submit a completed version before the deadline. The goal of HW1 is to check your prerequisite knowledge. All of this material should already have been covered in courses such as Stat 410, Stat 425, and other basic mathematics courses. These concepts and skills will be used extensively in our course.

In addition, this HW also checks your basic programming knowledge, such as writing a function, setting a random seed, and using LaTeX. Please note that you must typeset all formulas in LaTeX in all future homework. Failing to do so will lead to a penalty.
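
For example, the lines below are a minimal LaTeX illustration of typesetting a formula (the sample mean, used here only as an example), first inline and then in display mode:

The sample mean is \(\bar{X}_n = \frac{1}{n} \sum_{i=1}^n X_i\), or equivalently, in display form,
\[
\bar{X}_n = \frac{1}{n} \sum_{i=1}^n X_i .
\]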

Question 1: Basic calculus

  1. Calculate the derivative of \(f(x)\) for each of the following:

    1. \(f(x) = e^x\)
    2. \(f(x) = \log(1 + x)\)
    3. \(f(x) = \log(1 + e^x)\)
  2. Taylor expansion. Let \(f: \mathbb{R} \rightarrow \mathbb{R}\) be a twice differentiable function. Please write down the first three terms of its Taylor expansion at the point \(x = 1\).

  3. For the infinite sum \(\sum_{n=1}^\infty \frac{1}{n^\alpha}\), where \(\alpha\) is a positive real number, give the exact range of \(\alpha\) such that the series converges.

Question 2: Linear algebra

  1. What is the eigendecomposition of a real symmetric matrix \(A_{n \times n}\)? Write down one form of the decomposition and explain each term in your formula. Assuming all eigenvalues are positive, use these terms to derive \(A^{-1/2}\).

  2. What is a symmetric positive definite matrix \(A_{n \times n}\)? Give one of the equivalent definitions and explain your notation.

  3. True/False. If you claim a statement is false, explain why. For two real matrices \(A_{m \times n}\) and \(B_{n \times m}\):

    1. \(\mathrm{rank}(A) = \min\{m, n\}\)
    2. If \(m = n\), then \(\mathrm{trace}(A) = \sum_{i=1}^n A_{ii}\)
    3. If \(A\) is a symmetric matrix, then all eigenvalues of \(A\) are real
    4. If \(A\) is a symmetric matrix, \(\lambda_1\) and \(\lambda_2\) are two distinct eigenvalues, and \(v_1, v_2\) are the corresponding eigenvectors, then it is possible that \(v_1^T v_2 > 0\).
    5. If \(A\) is a symmetric matrix and \(v_1, v_2\) are two distinct eigenvectors corresponding to the same eigenvalue \(\lambda\), then it is possible that \(v_1^T v_2 > 0\).
    6. \(\mathrm{trace}(ABAB) = \mathrm{trace}(AABB)\).

Question 3: Statistics

  1. \(X_1\), \(X_2\), \(\ldots\), \(X_n\) are i.i.d. \({\cal N}(\mu, \sigma^2)\) random variables, where \(\mu \in \mathbb{R}\) and \(\sigma > 0\) is finite. Let \(\bar{X}_n = \frac{1}{n} \sum_{i=1}^n X_i\).

    1. What is an unbiased estimator? Is \(\bar{X}_n\) an unbiased estimator of \(\mu\)?
    2. What is \(E[(\bar{X}_n)^2]\) in terms of \(n, \mu, \sigma\)?
    3. Give an unbiased estimator of \(\sigma^2\).
    4. What is a consistent estimator? Is \(\bar{X}_n\) a consistent estimator of \(\mu\)?
  2. Suppose \(X_{p \times 1}\) is a vector of covariates, \(\beta_{p \times 1}\) is a vector of unknown parameters, and \(\epsilon\) is the unobserved random noise, and we assume the linear model relationship \(y = X^T \beta + \epsilon\). Suppose we have \(n\) i.i.d. samples from this linear model, so that the observed data can be written in matrix form as \(\mathbf{y}_{n \times 1} = \mathbf{X}_{n\times p} \beta_{p \times 1} + \boldsymbol \epsilon_{n \times 1}\) (this stacked notation is spelled out in the sketch after this list).

    1. If we want to estimate the unknown \(\beta\) using the least squares method, what is the objective/loss function \(L(\beta)\) that is minimized to obtain \(\widehat \beta\)?
    2. What is the solution \(\widehat \beta\)? Express it in terms of the observed data \(\mathbf{y}\) and \(\mathbf{X}_{n\times p}\). You may assume that \(\mathbf{X}^T \mathbf{X}\) is invertible.
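
For reference, the matrix form above simply stacks the \(n\) individual equations \(y_i = x_i^T \beta + \epsilon_i\); a sketch of this notation, writing \(x_i^T\) for the \(i\)-th row of \(\mathbf{X}\), is
\[
\mathbf{y} = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix}, \qquad
\mathbf{X} = \begin{pmatrix} x_1^T \\ \vdots \\ x_n^T \end{pmatrix}, \qquad
\boldsymbol\epsilon = \begin{pmatrix} \epsilon_1 \\ \vdots \\ \epsilon_n \end{pmatrix}, \qquad
y_i = x_i^T \beta + \epsilon_i \ \text{ for } i = 1, \ldots, n.
\]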

Question 4: Programming

  1. Use the following code to generate a set of \(n\) observations \(\mathbf{y}_{n \times 1}\) and \(\mathbf{X}_{n\times p}\). Then use the formula derived in Question 3 to solve for the least squares estimator \(\widehat \beta\). Note that you must write your own code instead of using existing functions such as lm() (an optional sanity check with lm() is sketched after the code below). In addition, what should you do if you are asked to add an intercept term \(\beta_0\) to your estimation (even though the true \(\beta_0 = 0\) in our data generator)?
set.seed(1)                              # fix the random seed so the generated data are reproducible
n = 100; p = 5                           # sample size and number of covariates
X = matrix(rnorm(n * p), n, p)           # n x p design matrix with standard normal entries
y = X %*% c(1, 0, 0, 1, -1) + rnorm(n)   # response with true beta = (1, 0, 0, 1, -1) and N(0, 1) noise
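
As an optional sanity check only (not a substitute for your own implementation), you may compare your hand-coded estimate against R's built-in lm(); one possible call, assuming X and y are exactly as generated above, is:

fit = lm(as.numeric(y) ~ X - 1)   # "- 1" omits the intercept, matching the data-generating model above
coef(fit)                         # compare these five coefficients with your own least squares estimate
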
  2. Perform a simulation study to check the consistency of the sample mean estimator \(\bar{X}_n\). Please save your random seed so that the results can be replicated by others (a minimal seed-setting sketch follows this list).

    1. Generate a set of \(n = 20\) i.i.d. observations from the Uniform(0, 1) distribution and calculate the sample mean \(\bar{X}_n\).
    2. Repeat the previous step 1000 times to collect 1000 such sample means, and plot them using a histogram.
    3. How many of these 1000 sample means are at least 0.1 away from the true mean, which is 0.5 for Uniform(0, 1)?
    4. Repeat the three steps above with \(n = 100\) and \(n = 500\). What conclusion can you draw?
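
The lines below are a minimal sketch of the seed-saving requirement only (the seed value 2024 is an arbitrary example; report whichever value you actually use):

set.seed(2024)   # set (and report) one seed at the very top of your script, not inside the repetition loop
x = runif(20)    # one replicate of n = 20 Uniform(0, 1) observations
mean(x)          # its sample mean; the full study repeats this calculation 1000 times for each n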