Maximum likelihood interpretation of least squares
I will now show you how to derive least squares from the maximum likelihood principle. Recall that the maximum likelihood principle states that you should pick the model parameters that maximize the probability of the data conditioned on the parameters.
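In symbols, if \(\theta\) collects all the model parameters and \(\mathcal{D}\) denotes the data (placeholder symbols; below, the parameters will be \(\mathbf{w}\) and \(\sigma^2\) and the data will be the observed outputs), the principle says to pick

\[
\theta^* = \arg\max_{\theta} p(\mathcal{D} | \theta).
\]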
Just like before, assume that we have \(N\) observations of inputs \(\mathbf{x}_{1:N}\) and outputs \(\mathbf{y}_{1:N}\). We model the map between inputs and outputs using a generalized linear model with \(M\) basis functions:
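\[
y \approx \sum_{j=1}^{M} w_j \phi_j(\mathbf{x}) = \mathbf{w}^T\boldsymbol{\phi}(\mathbf{x}),
\]

where \(\boldsymbol{\phi}(\mathbf{x}) = \left(\phi_1(\mathbf{x}), \dots, \phi_M(\mathbf{x})\right)\) is the vector of basis functions and \(\mathbf{w} = \left(w_1, \dots, w_M\right)\) is the weight vector.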
Now here is the difference from what we did before. Instead of directly picking a loss function to minimize, we come up with a probabilistic description of the measurement process. In particular, we model the measurement process using a likelihood function \(p(\mathbf{y}_{1:N} | \mathbf{x}_{1:N}, \mathbf{w})\).
What is the interpretation of the likelihood function? Well, \(p(\mathbf{y}_{1:N} | \mathbf{x}_{1:N}, \mathbf{w})\) tells us how plausible it is to observe \(\mathbf{y}_{1:N}\) at inputs \(\mathbf{x}_{1:N}\), if we know that the model parameters are \(\mathbf{w}\).
The most common choice for the likelihood of a single measurement is a Normal distribution. This corresponds to the belief that our measurement is around the model prediction \(\mathbf{w}^{T}\boldsymbol{\phi}(\mathbf{x})\), but is contaminated with Gaussian noise of variance \(\sigma^2\). Mathematically, we have:
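\[
p(y_i | \mathbf{x}_i, \mathbf{w}, \sigma^2) = N\left(y_i \middle| \mathbf{w}^T\boldsymbol{\phi}(\mathbf{x}_i), \sigma^2\right),
\]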
where \(\sigma^2\) models the variance of the measurement noise. Note that here I used the notation \(N(y|\mu,\sigma^2)\) to denote the PDF of a Normal with mean \(\mu\) and variance \(\sigma^2\), i.e.,
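\[
N(y|\mu,\sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left\{-\frac{(y-\mu)^2}{2\sigma^2}\right\}.
\]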
Since, in almost all the cases we encounter, the measurements are independent conditioned on the model, the likelihood of the data factorizes as follows:
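\[
p(\mathbf{y}_{1:N} | \mathbf{x}_{1:N}, \mathbf{w}, \sigma^2)
= \prod_{i=1}^{N} p(y_i | \mathbf{x}_i, \mathbf{w}, \sigma^2)
= \prod_{i=1}^{N} N\left(y_i \middle| \mathbf{w}^T\boldsymbol{\phi}(\mathbf{x}_i), \sigma^2\right)
= N\left(\mathbf{y}_{1:N} \middle| \boldsymbol{\Phi}\mathbf{w}, \sigma^2\mathbf{I}_N\right),
\]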
where \(\boldsymbol{\Phi}\) is the \(N\times M\) design matrix with entries \(\Phi_{ij} = \phi_j(\mathbf{x}_i)\).
Now we are ready to apply the maximum likelihood principle to find all the parameters. This includes both the weight vector \(\mathbf{w}\) and the measurement variance \(\sigma^2\). Because the logarithm is a monotone function, maximizing the likelihood is the same as maximizing the log-likelihood, which is more convenient to work with. We need to solve this:
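\[
\max_{\mathbf{w},\sigma^2} \log p(\mathbf{y}_{1:N} | \mathbf{x}_{1:N}, \mathbf{w}, \sigma^2)
= \max_{\mathbf{w},\sigma^2}\left\{
-\frac{N}{2}\log(2\pi\sigma^2)
- \frac{1}{2\sigma^2}\sum_{i=1}^{N}\left(y_i - \mathbf{w}^T\boldsymbol{\phi}(\mathbf{x}_i)\right)^2
\right\}.
\]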
Notice that the rightmost term is, up to a positive factor that does not depend on \(\mathbf{w}\), the negative of the sum of square errors. So, by maximizing the likelihood with respect to \(\mathbf{w}\), we are actually minimizing the sum of square errors. This means that the maximum likelihood weights and the least-squares weights are exactly the same! We do not even have to do anything further. The weights should satisfy this linear system:
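\[
\boldsymbol{\Phi}^T\boldsymbol{\Phi}\mathbf{w} = \boldsymbol{\Phi}^T\mathbf{y}_{1:N}.
\]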
This is nice. The probabilistic interpretation above gives the same solution as least squares! But there is more. Notice that it can also give us an estimate of the measurement noise variance \(\sigma^2\). All you have to do is maximize the likelihood with respect to \(\sigma^2\). If you take the derivative of the log-likelihood with respect to \(\sigma^2\), set it equal to zero, and solve for \(\sigma^2\), you get:
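\[
\sigma^2 = \frac{1}{N}\sum_{i=1}^{N}\left(y_i - \mathbf{w}^T\boldsymbol{\phi}(\mathbf{x}_i)\right)^2
= \frac{1}{N}\left\|\mathbf{y}_{1:N} - \boldsymbol{\Phi}\mathbf{w}\right\|^2,
\]

where \(\mathbf{w}\) is the maximum likelihood weight vector found above.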
Finally, you can incorporate this measurement uncertainty when you are making predictions. This is done through the point predictive distribution, which is Normal in our case:
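\[
p(y | \mathbf{x}, \mathbf{w}, \sigma^2) = N\left(y \middle| \mathbf{w}^T\boldsymbol{\phi}(\mathbf{x}), \sigma^2\right).
\]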
In other words, your prediction about the measured output \(y\) is that it will be Normally distributed around your model prediction with variance \(\sigma^2\). You can use this to find a 95% credible interval. Let’s demonstrate with an example.
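Here is a minimal sketch of what such a demonstration could look like. It assumes one-dimensional inputs, polynomial basis functions, and synthetic data; all variable names and modeling choices below are illustrative, not part of the original example.

```python
import numpy as np

# Synthetic data (illustrative): a cubic trend plus Gaussian noise.
rng = np.random.default_rng(0)
N = 50
x = rng.uniform(-1.0, 1.0, size=N)
sigma_true = 0.2
y = 0.5 - 1.0 * x + 2.0 * x ** 3 + sigma_true * rng.standard_normal(N)

def design_matrix(x, degree):
    """Polynomial design matrix Phi with Phi[i, j] = x_i ** j, j = 0, ..., degree."""
    return np.vander(x, degree + 1, increasing=True)

# Maximum likelihood weights = least-squares weights (solve the normal equations).
degree = 3
Phi = design_matrix(x, degree)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Maximum likelihood estimate of the noise variance sigma^2: mean squared residual.
residuals = y - Phi @ w
sigma2 = np.mean(residuals ** 2)
sigma = np.sqrt(sigma2)

# Point predictive distribution at new inputs: Normal(w^T phi(x), sigma^2).
x_star = np.linspace(-1.0, 1.0, 100)
y_mean = design_matrix(x_star, degree) @ w

# 95% credible interval for the measured output: mean +/- 1.96 * sigma.
lower = y_mean - 1.96 * sigma
upper = y_mean + 1.96 * sigma

print("ML weights:", w)
print("ML noise standard deviation:", sigma, "(true value:", sigma_true, ")")
```

The fitted noise standard deviation should land close to the value used to generate the data, and the interval `[lower, upper]` should contain roughly 95% of fresh measurements at the corresponding inputs.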