Why Linear Regression Is More Profound Than You Think: A Journey Through Estimation Theory
Linear regression is often the first algorithm encountered in machine learning, prized for its simplicity and interpretability. But this apparent simplicity is deceptive. Beneath the surface, linear regression serves as a gateway to a rich, interconnected world of geometric, probabilistic, and dynamic systems concepts.
This post revisits the classic least-squares problem to uncover the deeper mathematical structures at its core. We will journey through four distinct but convergent perspectives:
- Geometric Intuition: Viewing regression as a projection in vector spaces.
- Probabilistic Rigor: Framing it as a maximum likelihood estimation problem and invoking the Gauss-Markov theorem.
- Bayesian Inference: Understanding regularization as the incorporation of prior beliefs.
- Sequential Estimation: Evolving the batch solution into a recursive one suitable for real-time systems, leading us to the Kalman Filter and its surprising links to reinforcement learning.
1. The Geometric View: Regression as Projection
The most elegant and fundamental perspective on linear regression is geometric. We are given a set of input vectors $\mathbf{x}_i \in \mathbb{R}^d$ and corresponding scalar outputs $y_i \in \mathbb{R}$, for $i = 1, \dots, n$. Our goal is to find a linear combination of the features that best approximates the outputs.
Let's assemble our data:
- The design matrix $X \in \mathbb{R}^{n \times d}$, whose rows are the feature vectors $\mathbf{x}_i^\top$.
- The target vector $\mathbf{y} = (y_1, \dots, y_n)^\top \in \mathbb{R}^n$.
- The parameter vector $\mathbf{w} \in \mathbb{R}^d$.
Our model predicts $\hat{\mathbf{y}} = X\mathbf{w}$. The vector $\hat{\mathbf{y}}$ is, by definition, a linear combination of the columns of $X$. This means $\hat{\mathbf{y}}$ must lie in the column space of $X$, denoted $\mathcal{C}(X)$.
The ordinary least squares (OLS) problem seeks to find the parameter vector that minimizes the squared Euclidean distance between the prediction and the true targets:
$$\hat{\mathbf{w}} = \arg\min_{\mathbf{w}} \|\mathbf{y} - X\mathbf{w}\|_2^2$$
Geometrically, this is equivalent to finding the vector in the column space $\mathcal{C}(X)$ that is closest to $\mathbf{y}$. This vector is the orthogonal projection of $\mathbf{y}$ onto $\mathcal{C}(X)$.
This geometric condition implies that the residual vector, $\mathbf{y} - X\hat{\mathbf{w}}$, must be orthogonal to every vector in $\mathcal{C}(X)$. This can be stated succinctly as:
$$X^\top (\mathbf{y} - X\hat{\mathbf{w}}) = \mathbf{0}$$
Rearranging this gives the celebrated normal equations:
$$X^\top X \hat{\mathbf{w}} = X^\top \mathbf{y}$$
If the columns of $X$ are linearly independent, the Gram matrix $X^\top X$ is invertible, and we obtain the unique solution:
$$\hat{\mathbf{w}} = (X^\top X)^{-1} X^\top \mathbf{y}$$
The Pseudoinverse and SVD
When $X^\top X$ is singular or ill-conditioned, we use the Moore-Penrose pseudoinverse $X^+$. Via the singular value decomposition (SVD), write:
$$X = U \Sigma V^\top$$
where $U$ and $V$ are orthogonal, and $\Sigma$ contains the singular values $\sigma_1 \ge \sigma_2 \ge \dots \ge 0$. The pseudoinverse is:
$$X^+ = V \Sigma^+ U^\top$$
where $\Sigma^+$ replaces each nonzero $\sigma_i$ with $1/\sigma_i$. The solution $\hat{\mathbf{w}} = X^+ \mathbf{y}$ exists for any $X$ and gives the minimum-norm solution when the system is underdetermined.
Conditioning: The condition number $\kappa(X) = \sigma_{\max} / \sigma_{\min}$ measures sensitivity to perturbations. Large $\kappa(X)$ indicates ill-conditioning, amplifying noise and motivating regularization.
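To make this concrete, here is a minimal NumPy sketch (dimensions, seed, and variable names are purely illustrative) comparing the normal-equation solution, the SVD-based pseudoinverse, and NumPy's least-squares solver, and reporting the condition number:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 3
X = rng.normal(size=(n, d))                 # design matrix
w_true = np.array([1.5, -2.0, 0.5])         # ground-truth parameters
y = X @ w_true + 0.1 * rng.normal(size=n)   # noisy targets

# Normal equations: (X^T X) w = X^T y  (fine when X^T X is well-conditioned)
w_normal = np.linalg.solve(X.T @ X, X.T @ y)

# SVD-based pseudoinverse: w = X^+ y  (handles rank deficiency, minimum-norm solution)
w_pinv = np.linalg.pinv(X) @ y

# Orthogonal-factorization-based solver, preferred in practice
w_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.allclose(w_normal, w_pinv), np.allclose(w_pinv, w_lstsq))
print("condition number of X:", np.linalg.cond(X))
```

Solvers based on QR or the SVD avoid forming $X^\top X$ explicitly, since squaring the matrix squares its condition number.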
The Hat Matrix and Leverage
The hat matrix (or projection matrix) is:
$$H = X (X^\top X)^{-1} X^\top$$
It projects $\mathbf{y}$ onto $\mathcal{C}(X)$: $\hat{\mathbf{y}} = H\mathbf{y}$. The diagonal elements $h_{ii}$ are the leverage values, quantifying the influence of observation $i$ on its own prediction. High leverage indicates potential outliers or influential points. The residual $(I - H)\mathbf{y}$ is orthogonal to $\mathcal{C}(X)$, as required.
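A short sketch, on an arbitrary toy dataset, that verifies the projection properties of the hat matrix and inspects the leverage values:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 20, 2
X = rng.normal(size=(n, d))
y = X @ np.array([2.0, -1.0]) + rng.normal(size=n)

# Hat matrix H = X (X^T X)^{-1} X^T projects y onto the column space of X
H = X @ np.linalg.solve(X.T @ X, X.T)
y_hat = H @ y
leverage = np.diag(H)        # h_ii: influence of observation i on its own prediction
residual = y - y_hat

print("H is idempotent:            ", np.allclose(H @ H, H))
print("residual orthogonal to C(X):", np.allclose(X.T @ residual, 0.0))
print("leverages sum to d:         ", np.isclose(leverage.sum(), d))
```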
2. The Probabilistic View: Maximum Likelihood & The Gauss-Markov Theorem
The geometric view is deterministic. To introduce statistical properties, we model the data-generating process. A standard assumption is that the outputs $y_i$ are generated by a linear model with additive, zero-mean Gaussian noise:
$$y_i = \mathbf{x}_i^\top \mathbf{w} + \epsilon_i, \qquad \epsilon_i \sim \mathcal{N}(0, \sigma^2)$$
In vector form, this is $\mathbf{y} = X\mathbf{w} + \boldsymbol{\epsilon}$, where $\boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \sigma^2 I)$.
From this, we can ask: what parameter vector $\mathbf{w}$ most likely generated the observed data $\mathbf{y}$? The likelihood function gives us the probability density of the data given the parameters:
$$p(\mathbf{y} \mid X, \mathbf{w}) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(y_i - \mathbf{x}_i^\top \mathbf{w})^2}{2\sigma^2}\right)$$
To find the Maximum Likelihood Estimator (MLE), we typically maximize the log-likelihood, which is equivalent and mathematically simpler:
$$\log p(\mathbf{y} \mid X, \mathbf{w}) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\|\mathbf{y} - X\mathbf{w}\|_2^2$$
Maximizing this expression is equivalent to minimizing $\|\mathbf{y} - X\mathbf{w}\|_2^2$. The solution is identical to the OLS estimator:
$$\hat{\mathbf{w}}_{\text{MLE}} = (X^\top X)^{-1} X^\top \mathbf{y}$$
This reveals that OLS is not merely a convenient heuristic; it is the statistically optimal estimator under the Gaussian noise assumption. This allows us to analyze its properties:
- Unbiasedness: The estimator is correct on average: $\mathbb{E}[\hat{\mathbf{w}}] = \mathbf{w}$.
- Covariance: The uncertainty in our estimate is $\operatorname{Cov}(\hat{\mathbf{w}}) = \sigma^2 (X^\top X)^{-1}$.
- Efficiency: The estimator achieves the Cramér-Rao Lower Bound, meaning it is the most precise unbiased estimator possible.
Crucially, the Gauss-Markov Theorem provides a weaker but more general guarantee: even if the noise is not Gaussian, as long as it is uncorrelated and has zero mean and constant variance (homoscedastic), the OLS estimator is the Best Linear Unbiased Estimator (BLUE). It has the minimum variance among all linear unbiased estimators.
Gauss-Markov Theorem: A Sketch
Setup: Consider any linear unbiased estimator of the form $\tilde{\mathbf{w}} = C\mathbf{y}$, where $C \in \mathbb{R}^{d \times n}$. For unbiasedness:
$$\mathbb{E}[\tilde{\mathbf{w}}] = C\,\mathbb{E}[\mathbf{y}] = C X \mathbf{w} = \mathbf{w} \quad \text{for all } \mathbf{w}$$
This requires $CX = I$.
Variance: The covariance of $\tilde{\mathbf{w}}$ is:
$$\operatorname{Cov}(\tilde{\mathbf{w}}) = \sigma^2 C C^\top$$
Optimality: Among all matrices $C$ satisfying $CX = I$, the choice $C = (X^\top X)^{-1} X^\top$ minimizes the variance (in the sense of the Loewner ordering). This gives:
$$\operatorname{Cov}(\tilde{\mathbf{w}}) \succeq \sigma^2 (X^\top X)^{-1} = \operatorname{Cov}(\hat{\mathbf{w}}_{\text{OLS}})$$
Thus, OLS is BLUE.
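The unbiasedness and covariance claims are easy to check numerically. Below is a small Monte Carlo sketch (problem sizes are arbitrary) that holds $X$ fixed, redraws the noise many times, and compares the empirical mean and covariance of the OLS estimates to $\mathbf{w}$ and $\sigma^2 (X^\top X)^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, sigma = 50, 3, 0.5
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -1.0, 2.0])

# Hold X fixed, redraw the noise many times, refit OLS each time
trials = 20_000
W = np.empty((trials, d))
for k in range(trials):
    y = X @ w_true + sigma * rng.normal(size=n)
    W[k] = np.linalg.solve(X.T @ X, X.T @ y)

print("empirical mean (should be close to w_true):", W.mean(axis=0))
print("theoretical covariance sigma^2 (X^T X)^-1:\n", sigma**2 * np.linalg.inv(X.T @ X))
print("empirical covariance:\n", np.cov(W, rowvar=False))
```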
3. The Bayesian View: From Regularization to Priors
The MLE perspective assumes we know nothing about $\mathbf{w}$ beforehand. Bayesian inference allows us to incorporate prior beliefs. We treat $\mathbf{w}$ as a random variable with a prior distribution $p(\mathbf{w})$.
Let's assume a zero-mean Gaussian prior for $\mathbf{w}$, which encodes a belief that smaller parameter values are more likely:
$$p(\mathbf{w}) = \mathcal{N}(\mathbf{w} \mid \mathbf{0}, \Sigma_p)$$
Using Bayes' rule, the posterior distribution of $\mathbf{w}$ after observing the data is:
$$p(\mathbf{w} \mid \mathbf{y}, X) = \frac{p(\mathbf{y} \mid X, \mathbf{w})\, p(\mathbf{w})}{p(\mathbf{y} \mid X)}$$
The Maximum A Posteriori (MAP) estimate maximizes this posterior probability. Taking the negative log of the posterior gives:
$$-\log p(\mathbf{w} \mid \mathbf{y}, X) = \frac{1}{2\sigma^2}\|\mathbf{y} - X\mathbf{w}\|_2^2 + \frac{1}{2}\mathbf{w}^\top \Sigma_p^{-1} \mathbf{w} + \text{const.}$$
If we assume a simple spherical prior $\Sigma_p = \tau^2 I$, the MAP estimate becomes the minimizer of:
$$\|\mathbf{y} - X\mathbf{w}\|_2^2 + \lambda \|\mathbf{w}\|_2^2, \qquad \lambda = \frac{\sigma^2}{\tau^2}$$
The solution to this is:
$$\hat{\mathbf{w}}_{\text{MAP}} = (X^\top X + \lambda I)^{-1} X^\top \mathbf{y}$$
This is exactly Ridge Regression. The regularization term, often seen as an ad-hoc penalty to prevent overfitting, is now revealed to be the consequence of a Gaussian prior on the parameters.
Posterior Covariance: The full posterior distribution is also Gaussian, with covariance:
$$\Sigma_{\text{post}} = \left(\frac{1}{\sigma^2} X^\top X + \Sigma_p^{-1}\right)^{-1}$$
For the isotropic case, this becomes:
$$\Sigma_{\text{post}} = \sigma^2 (X^\top X + \lambda I)^{-1}$$
Notice that regularization reduces posterior uncertainty (smaller eigenvalues) at the cost of introducing bias.
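A brief sketch of the MAP/ridge estimate and the isotropic posterior covariance (the helper names `ridge_map` and `posterior_covariance`, and all constants, are illustrative):

```python
import numpy as np

def ridge_map(X, y, lam):
    """MAP estimate under a spherical Gaussian prior: (X^T X + lam I)^{-1} X^T y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def posterior_covariance(X, sigma2, lam):
    """Isotropic-case posterior covariance: sigma^2 (X^T X + lam I)^{-1}."""
    return sigma2 * np.linalg.inv(X.T @ X + lam * np.eye(X.shape[1]))

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5) + 0.3 * rng.normal(size=100)

for lam in (0.0, 1.0, 10.0):
    cov = posterior_covariance(X, 0.3**2, lam)
    # Larger lam shrinks both the coefficients and the largest posterior eigenvalue
    print(f"lam={lam:5.1f}  w={np.round(ridge_map(X, y, lam), 3)}  "
          f"max posterior eigenvalue={np.linalg.eigvalsh(cov).max():.4f}")
```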
Bias-Variance Decomposition
For a new test point $\mathbf{x}_*$, the prediction $\hat{y}_* = \mathbf{x}_*^\top \hat{\mathbf{w}}$ has expected squared error that decomposes into three components:
$$\mathbb{E}\big[(y_* - \hat{y}_*)^2\big] = \underbrace{\big(\mathbb{E}[\hat{y}_*] - \mathbf{x}_*^\top \mathbf{w}\big)^2}_{\text{bias}^2} + \underbrace{\operatorname{Var}(\hat{y}_*)}_{\text{variance}} + \underbrace{\sigma^2}_{\text{irreducible error}}$$
where:
- Bias: $\mathbb{E}[\hat{y}_*] - \mathbf{x}_*^\top \mathbf{w}$ measures systematic error.
- Variance: $\operatorname{Var}(\hat{y}_*)$ measures sensitivity to the training data.
- Irreducible error: $\sigma^2$ from the noise.
For OLS, bias is zero but variance can be large (especially when $\kappa(X)$ is large). Ridge regression introduces bias by shrinking coefficients, but reduces variance, often improving overall test error. This is the fundamental bias-variance tradeoff.
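The tradeoff can be seen directly in simulation. The sketch below (with arbitrary dimensions and $\lambda$ values) redraws the training noise many times and estimates the squared bias and variance of the prediction at one fixed test point, for OLS ($\lambda = 0$) and two ridge settings:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, sigma = 30, 10, 1.0
A = rng.normal(size=(d, d))                 # correlated features -> ill-conditioned X^T X
X = rng.normal(size=(n, d)) @ A
w_true = rng.normal(size=d)
x_test = rng.normal(size=d) @ A             # one fixed test point

def fit(X, y, lam):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

trials = 5_000
for lam in (0.0, 1.0, 10.0):                # lam = 0 is plain OLS
    preds = np.empty(trials)
    for k in range(trials):
        y = X @ w_true + sigma * rng.normal(size=n)
        preds[k] = x_test @ fit(X, y, lam)
    bias2 = (preds.mean() - x_test @ w_true) ** 2
    var = preds.var()
    # Reducible part of the test error (excludes the irreducible sigma^2)
    print(f"lam={lam:5.1f}  bias^2={bias2:8.4f}  variance={var:8.4f}  sum={bias2 + var:8.4f}")
```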
Other priors lead to different regularizers. For instance, a Laplacian prior $p(\mathbf{w}) \propto \exp(-\lambda \|\mathbf{w}\|_1)$ results in an $\ell_1$ penalty, leading to LASSO regression, which encourages sparse solutions.
4. The Information-Theoretic View
Linear regression connects deeply to information theory through the Fisher information matrix and mutual information in linear Gaussian channels.
Fisher Information and the Cramér-Rao Bound
The Fisher information matrix quantifies how much information the data $\mathbf{y}$ carries about the parameters $\mathbf{w}$. For the linear Gaussian model with $\boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \sigma^2 I)$, it is:
$$\mathcal{I}(\mathbf{w}) = \frac{1}{\sigma^2} X^\top X$$
The Cramér-Rao Lower Bound (CRLB) states that for any unbiased estimator $\tilde{\mathbf{w}}$:
$$\operatorname{Cov}(\tilde{\mathbf{w}}) \succeq \mathcal{I}(\mathbf{w})^{-1} = \sigma^2 (X^\top X)^{-1}$$
The MLE/OLS estimator achieves this bound exactly, making it efficient—no unbiased estimator can do better.
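As a quick numerical confirmation of this claim (sizes chosen arbitrarily), the bound computed from the Fisher information matrix coincides with the OLS covariance $\sigma^2 (X^\top X)^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, sigma = 200, 4, 0.2
X = rng.normal(size=(n, d))

fisher = X.T @ X / sigma**2                    # Fisher information matrix
crlb = np.linalg.inv(fisher)                   # Cramér-Rao lower bound
ols_cov = sigma**2 * np.linalg.inv(X.T @ X)    # theoretical OLS covariance

print("OLS attains the CRLB:", np.allclose(crlb, ols_cov))
```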
Mutual Information in Linear Gaussian Channels
Consider $\mathbf{w}$ as a random signal with prior $\mathcal{N}(\mathbf{0}, \tau^2 I)$, and the observation model $\mathbf{y} = X\mathbf{w} + \boldsymbol{\epsilon}$. The mutual information between $\mathbf{w}$ and $\mathbf{y}$ is:
$$I(\mathbf{w}; \mathbf{y}) = \frac{1}{2} \log\det\!\left(I + \frac{\tau^2}{\sigma^2} X X^\top\right) = \frac{1}{2} \sum_i \log\!\left(1 + \frac{\tau^2 \sigma_i^2}{\sigma^2}\right)$$
This quantifies how much observing $\mathbf{y}$ reduces uncertainty about $\mathbf{w}$. Via the SVD of $X$, this depends on the singular values $\sigma_i$ and the prior variance $\tau^2$ along each singular direction. Regularization (small $\tau^2$, i.e., large $\lambda$) reduces mutual information but improves numerical stability and generalization.
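A small sketch evaluating the mutual information formula above through the singular values of an arbitrary random design, for several prior variances $\tau^2$:

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(50, 5))
sigma2 = 0.25                                  # noise variance
s = np.linalg.svd(X, compute_uv=False)         # singular values of X

for tau2 in (0.01, 0.1, 1.0, 10.0):            # prior variance of w
    # I(w; y) = 1/2 * sum_i log(1 + tau^2 s_i^2 / sigma^2), in nats
    mi = 0.5 * np.sum(np.log1p(tau2 * s**2 / sigma2))
    print(f"tau^2={tau2:6.2f}  I(w; y)={mi:6.2f} nats")
```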
5. Signal Estimation Theory Connections
Linear regression is a special case of broader signal estimation frameworks.
Generalized Least Squares (GLS)
When the noise is correlated, with covariance $\operatorname{Cov}(\boldsymbol{\epsilon}) = \Sigma_\epsilon$, OLS is no longer BLUE. The Generalized Least Squares (GLS) estimator is:
$$\hat{\mathbf{w}}_{\text{GLS}} = (X^\top \Sigma_\epsilon^{-1} X)^{-1} X^\top \Sigma_\epsilon^{-1} \mathbf{y}$$
GLS is BLUE for correlated noise, transforming the problem via the whitening map $\Sigma_\epsilon^{-1/2}$ to recover the OLS structure.
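A sketch of GLS under an assumed AR(1)-style noise covariance, computed both by Cholesky whitening (used here in place of the symmetric square root $\Sigma_\epsilon^{-1/2}$) and by the closed form, to confirm they agree:

```python
import numpy as np

rng = np.random.default_rng(7)
n, d = 80, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, 2.0, -1.0])

# AR(1)-style noise covariance: Sigma_ij = rho^|i-j|
rho = 0.8
Sigma = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
L = np.linalg.cholesky(Sigma)
y = X @ w_true + L @ rng.normal(size=n)        # correlated noise

# GLS by whitening: multiply X and y by L^{-1} so the noise becomes white, then run OLS
X_w = np.linalg.solve(L, X)
y_w = np.linalg.solve(L, y)
w_gls, *_ = np.linalg.lstsq(X_w, y_w, rcond=None)

# Closed form: (X^T Sigma^{-1} X)^{-1} X^T Sigma^{-1} y
Si_X = np.linalg.solve(Sigma, X)
w_direct = np.linalg.solve(X.T @ Si_X, Si_X.T @ y)

print("whitened OLS matches closed-form GLS:", np.allclose(w_gls, w_direct))
```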
Linear Minimum Mean Square Error (LMMSE) Estimation
When $\mathbf{w}$ is random with known prior $\mathcal{N}(\mathbf{0}, \Sigma_p)$, the LMMSE estimator (also the Bayesian posterior mean) is:
$$\hat{\mathbf{w}}_{\text{LMMSE}} = \left(X^\top \Sigma_\epsilon^{-1} X + \Sigma_p^{-1}\right)^{-1} X^\top \Sigma_\epsilon^{-1} \mathbf{y}$$
This is equivalent to the MAP estimator under Gaussian assumptions and minimizes the mean squared error $\mathbb{E}\big[\|\hat{\mathbf{w}} - \mathbf{w}\|_2^2\big]$ over all linear estimators (biased or unbiased).
Wiener Filtering and Matched Filtering
- Wiener Filter: In the frequency domain (for stationary signals), the LMMSE estimator becomes the Wiener filter, which shapes the spectrum based on signal and noise power spectral densities.
- Matched Filter: For detection in AWGN, the optimal filter maximizes SNR by correlating with the known signal template—geometrically, this is projection onto the signal direction, exactly the OLS perspective.
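To illustrate the matched-filter-as-projection point, here is a toy sketch (sinusoidal template, arbitrary noise level) that estimates a signal amplitude by projecting the received waveform onto the known template, which is exactly one-dimensional least squares:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 256
t = np.arange(n)
template = np.sin(2 * np.pi * 5 * t / n)       # known signal template s
amplitude_true = 0.7
received = amplitude_true * template + rng.normal(size=n)   # AWGN channel

# Matched filter / correlator: project the received waveform onto the template
# direction. This is one-dimensional least squares with "design matrix" s.
amplitude_hat = (template @ received) / (template @ template)
print("estimated amplitude:", amplitude_hat)
```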
6. Sequential Estimation: RLS and the Kalman Filter
So far, our solutions are "batch" — they require all data at once. In many real-world systems (robotics, finance, online services), data arrives sequentially. We need an efficient way to update our estimate.
Recursive Least Squares (RLS)
Assumptions: The parameter $\mathbf{w}$ is static, and observations arrive sequentially with measurement model $y_t = \mathbf{x}_t^\top \mathbf{w} + \epsilon_t$, where $\epsilon_t$ is zero-mean noise with variance $\sigma^2$.
The RLS algorithm updates the estimate $\hat{\mathbf{w}}_t$ and its error covariance $P_t$ recursively:
Innovation (prediction error):
$$e_t = y_t - \mathbf{x}_t^\top \hat{\mathbf{w}}_{t-1}$$
Gain:
$$\mathbf{k}_t = \frac{P_{t-1} \mathbf{x}_t}{1 + \mathbf{x}_t^\top P_{t-1} \mathbf{x}_t}$$
Parameter update:
$$\hat{\mathbf{w}}_t = \hat{\mathbf{w}}_{t-1} + \mathbf{k}_t e_t$$
Covariance update:
$$P_t = (I - \mathbf{k}_t \mathbf{x}_t^\top) P_{t-1}$$
where $P_t$ satisfies $P_t = (X_{1:t}^\top X_{1:t})^{-1}$ (up to a noise variance factor). The gain $\mathbf{k}_t$ balances prior uncertainty and new information.
Forgetting factor: A variant uses a forgetting factor $0 < \lambda \le 1$ to exponentially discount old data, improving tracking in non-stationary environments.
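A compact RLS sketch following the update equations above, with an optional forgetting factor (the function name `rls_step` and all constants are illustrative):

```python
import numpy as np

def rls_step(w, P, x, y, lam=1.0):
    """One RLS update; lam in (0, 1] is the forgetting factor (1.0 = no forgetting)."""
    e = y - x @ w                        # innovation (prediction error)
    k = P @ x / (lam + x @ P @ x)        # gain
    w = w + k * e                        # parameter update
    P = (P - np.outer(k, x) @ P) / lam   # covariance update
    return w, P

rng = np.random.default_rng(9)
d = 3
w_true = np.array([0.5, -1.0, 2.0])
w, P = np.zeros(d), 1e3 * np.eye(d)      # large initial uncertainty

for t in range(500):
    x = rng.normal(size=d)
    y = x @ w_true + 0.1 * rng.normal()
    w, P = rls_step(w, P, x, y)

print("RLS estimate:", w)
```

Passing `lam < 1.0` down-weights old observations exponentially, which helps when $\mathbf{w}$ drifts over time.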
The Kalman Filter: A General Framework
The Kalman filter generalizes RLS to time-varying, dynamic systems. Consider a linear state-space model:
Process model:
$$\mathbf{s}_t = A_t \mathbf{s}_{t-1} + \mathbf{q}_t, \qquad \mathbf{q}_t \sim \mathcal{N}(\mathbf{0}, Q_t)$$
Measurement model:
$$\mathbf{z}_t = H_t \mathbf{s}_t + \mathbf{r}_t, \qquad \mathbf{r}_t \sim \mathcal{N}(\mathbf{0}, R_t)$$
The Kalman filter provides the optimal (LMMSE) estimate of the state $\mathbf{s}_t$ given the measurements $\mathbf{z}_{1:t}$ via a predict-correct cycle:
Prediction:
$$\hat{\mathbf{s}}_{t \mid t-1} = A_t \hat{\mathbf{s}}_{t-1 \mid t-1}, \qquad P_{t \mid t-1} = A_t P_{t-1 \mid t-1} A_t^\top + Q_t$$
Innovation and its covariance:
$$\mathbf{e}_t = \mathbf{z}_t - H_t \hat{\mathbf{s}}_{t \mid t-1}, \qquad S_t = H_t P_{t \mid t-1} H_t^\top + R_t$$
Kalman gain:
$$K_t = P_{t \mid t-1} H_t^\top S_t^{-1}$$
Correction:
$$\hat{\mathbf{s}}_{t \mid t} = \hat{\mathbf{s}}_{t \mid t-1} + K_t \mathbf{e}_t, \qquad P_{t \mid t} = (I - K_t H_t) P_{t \mid t-1}$$
RLS as a special case: For static parameters, set $A_t = I$, $Q_t = 0$, and $H_t = \mathbf{x}_t^\top$ (with $R_t = \sigma^2$), and recover the RLS update exactly. The innovation $\mathbf{e}_t$ is the new information not predicted by the model, and the gain $K_t$ optimally weights it based on the relative uncertainties $P_{t \mid t-1}$ and $R_t$.
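Below is a minimal Kalman filter sketch following the predict-correct equations above, applied to a hypothetical 1-D constant-velocity tracking problem with noisy position measurements (all model matrices and noise levels are made up for illustration):

```python
import numpy as np

def kalman_step(s, P, z, A, Q, H, R):
    """One predict-correct cycle of the linear Kalman filter."""
    # Prediction
    s_pred = A @ s
    P_pred = A @ P @ A.T + Q
    # Innovation and its covariance
    e = z - H @ s_pred
    S = H @ P_pred @ H.T + R
    # Kalman gain and correction
    K = P_pred @ H.T @ np.linalg.inv(S)
    s_new = s_pred + K @ e
    P_new = (np.eye(len(s)) - K @ H) @ P_pred
    return s_new, P_new

# Toy 1-D constant-velocity tracking with noisy position measurements
rng = np.random.default_rng(10)
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])    # state: [position, velocity]
Q = 1e-3 * np.eye(2)                     # process noise covariance
H = np.array([[1.0, 0.0]])               # we observe position only
R = np.array([[0.5]])                    # measurement noise covariance

s_true = np.array([0.0, 1.0])
s_hat, P = np.zeros(2), 10.0 * np.eye(2)
for _ in range(50):
    s_true = A @ s_true + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ s_true + rng.multivariate_normal(np.zeros(1), R)
    s_hat, P = kalman_step(s_hat, P, z, A, Q, H, R)

print("true state:", s_true, " estimate:", s_hat)
```

With $A_t = I$, $Q_t = 0$, and a $1 \times d$ measurement matrix $H_t = \mathbf{x}_t^\top$ (and $R_t = \sigma^2$), the same `kalman_step` reproduces the RLS recursion above.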
7. The Bridge to Reinforcement Learning
This recursive, error-driven update structure forms a conceptual bridge to modern reinforcement learning. Consider the temporal-difference (TD) learning update for a value function $V(s)$:
$$V(s_t) \leftarrow V(s_t) + \alpha \big[\, r_{t+1} + \gamma V(s_{t+1}) - V(s_t) \,\big]$$
The structure mirrors the Kalman/RLS update:
$$\text{new estimate} = \text{old estimate} + \text{gain} \times \text{innovation}$$
The TD error $\delta_t = r_{t+1} + \gamma V(s_{t+1}) - V(s_t)$ acts as the innovation signal, correcting the value estimate based on new experience. This parallel is not merely an analogy. When using linear function approximation for value functions ($V(s) \approx \boldsymbol{\phi}(s)^\top \mathbf{w}$), many RL algorithms (e.g., LSTD, TD($\lambda$)) are forms of recursive least-squares estimation. Both fields share the same core principle: using noisy, sequential data to recursively estimate hidden quantities, whether physical states or optimal values.
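To make the parallel concrete, here is a minimal TD(0) sketch with linear (one-hot) value features on a made-up random-walk chain; the fixed step size $\alpha$ plays the role of the gain, and the TD error plays the role of the innovation:

```python
import numpy as np

rng = np.random.default_rng(11)
n_states, gamma, alpha = 5, 0.9, 0.1
w = np.zeros(n_states)                   # value weights, V(s) = phi(s) . w

def phi(s):
    """One-hot features; any feature map works with linear TD."""
    f = np.zeros(n_states)
    f[s] = 1.0
    return f

for episode in range(2000):
    s = int(rng.integers(n_states))
    for _ in range(20):
        s_next = int(np.clip(s + rng.choice([-1, 1]), 0, n_states - 1))
        r = 1.0 if s_next == n_states - 1 else 0.0   # reward when the next state is the last one
        # TD error = innovation: one-step target minus current prediction
        delta = r + gamma * phi(s_next) @ w - phi(s) @ w
        # Update mirrors: new estimate = old estimate + gain * innovation
        w = w + alpha * delta * phi(s)
        s = s_next

print("learned state values:", np.round(w, 2))
```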
8. Conclusion
Linear regression is far more than a simple curve-fitting tool. It is a microcosm of the core principles of estimation, learning, and information processing. By viewing it through multiple lenses, we uncover a profound web of connections:
- Geometry reveals the solution as an orthogonal projection, with the SVD providing insight into ill-conditioning and the pseudoinverse.
- Probability justifies it as the maximum likelihood estimator under Gaussian noise and proves it is BLUE via the Gauss-Markov theorem.
- Bayesian Inference recasts regularization as encoding prior beliefs, with the bias-variance tradeoff governing generalization.
- Information Theory connects Fisher information, the Cramér-Rao bound, and mutual information in linear Gaussian channels, revealing fundamental limits on estimation.
- Signal Estimation extends the framework to GLS (correlated noise), LMMSE (random parameters), and Wiener/matched filtering.
- Sequential Estimation transforms batch solutions into recursive algorithms (RLS, Kalman filter) that operate in real-time with precise covariance updates.
- Reinforcement Learning shares the same recursive, innovation-driven structure, unifying value estimation with state estimation.
This journey reveals a unifying theme: learning is a process of updating beliefs from data under uncertainty. Linear regression, in its elegance and depth, is our first and most fundamental guide to this landscape—a bridge connecting geometry, probability, information theory, signal processing, control, and modern machine learning.