Eigenvalue and SVD Computation

How spectral problems become numerical algorithms, why eigenvalue computation is usually iterative, and why the SVD is the stable computational lens behind low-rank approximation, PCA, and least squares.
Modified: April 26, 2026

Keywords

eigenvalues, SVD, power method, QR iteration, low rank

1 Role

This is the fifth page of the Numerical Methods module.

Its job is to explain why spectral decompositions are not only formulas from linear algebra, but numerical targets that require their own algorithms, convergence ideas, and error intuition.

2 First-Pass Promise

Read this page after Numerical Least Squares and Regularization.

If you stop here, you should still understand:

  • why eigenvalue and singular-value computations are usually iterative
  • what power iteration, orthogonal or subspace iteration, and QR iteration are trying to do
  • why SVD is the key computational lens for low-rank structure
  • why spectral gap matters for convergence and stability

3 Why It Matters

Eigenvalues and singular values appear across the whole site:

  • PCA and covariance estimation
  • low-rank approximation
  • spectral clustering
  • stability of iterative methods
  • Hessian and curvature analysis
  • matrix conditioning and perturbation theory

But unlike the textbook exercise of solving a small characteristic polynomial symbolically, realistic spectral computation has to deal with:

  • large matrices
  • structured or sparse matrices
  • partial spectra rather than all eigenpairs
  • sensitivity of eigenvectors
  • the need for stable low-rank approximations rather than exact decompositions on paper

That is why numerical linear algebra treats spectral computation as an algorithmic subject in its own right.

4 Prerequisite Recall

  • matrix factorizations such as QR and Cholesky already appeared in earlier pages
  • the least-squares page used SVD as a lens for weak directions and ill-posedness
  • matrix analysis already introduced operator norms, PSD structure, and perturbation language

5 Intuition

5.1 Why Eigenvalue Computation Is Different

For linear systems, we often have a clear right-hand side and a factorization route.

For eigenvalue problems, the unknown is part of the equation:

\[ Av=\lambda v. \]

That makes the problem nonlinear in the unknowns, since \lambda and v enter as a product, even though A itself is linear.

5.2 Power Iteration

If one eigenvalue dominates in magnitude, repeated multiplication by A tends to amplify its eigenvector direction.

That is the core intuition behind power iteration:

keep multiplying by A, keep renormalizing, and let the dominant direction emerge.

5.3 From One Vector To A Whole Invariant Subspace

When one direction is not enough, orthogonal or subspace iteration tracks several directions at once.

This is the bridge from simple repeated multiplication to more serious spectral algorithms.
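To make the bridge concrete, here is a minimal sketch of orthogonal (subspace) iteration in NumPy. The function name and the symmetric setting are illustrative assumptions, not this page's canon; library implementations use far more refined variants.

```python
import numpy as np

def subspace_iteration(A, k, num_iters=200, seed=0):
    """Track the leading k-dimensional invariant subspace of A.

    Multiply a block of k vectors by A, then re-orthonormalize with a
    thin QR factorization so the columns do not all collapse onto the
    single dominant direction.
    """
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((A.shape[0], k)))
    for _ in range(num_iters):
        Q, _ = np.linalg.qr(A @ Q)
    # For symmetric A with well-separated eigenvalues, Q approximately
    # spans the leading eigenspace and Q.T @ A @ Q is nearly diagonal.
    return Q, Q.T @ A @ Q
```

The QR step is the whole point: without it, every column would converge to the same dominant eigenvector, exactly as in power iteration.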

5.4 QR Iteration

QR iteration is the workhorse idea for dense eigenvalue computation.

It repeatedly applies orthogonal changes of basis in a way that drives the matrix toward a form where the eigenvalues become visible.
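A minimal unshifted sketch, assuming NumPy; production implementations first reduce the matrix to Hessenberg form and add shifts and deflation, so treat this as the bare idea only.

```python
import numpy as np

def qr_iteration(A, num_iters=500):
    """Unshifted QR iteration.

    Factor A_k = Q_k R_k, then recombine as A_{k+1} = R_k Q_k.
    Since R_k Q_k = Q_k.T @ A_k @ Q_k, each step is an orthogonal
    similarity transform: eigenvalues are preserved while the matrix
    drifts toward (nearly) triangular form.
    """
    Ak = np.array(A, dtype=float)
    for _ in range(num_iters):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q
    return Ak

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(np.diag(qr_iteration(A)))  # approx. (5 ± sqrt(5)) / 2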

5.5 Why SVD Is So Central

The SVD is numerically valuable because it organizes a matrix into orthogonal directions and nonnegative singular values.

That makes it the natural computational tool for:

  • low-rank approximation
  • least squares
  • PCA
  • rank-revealing analysis

6 Formal Core

Definition 1 (Eigenvalue Problem) The eigenvalue problem asks for nonzero vectors v and scalars \lambda such that

\[ Av=\lambda v. \]

The computational task is often not to recover everything, but to find the dominant eigenvalues or a stable invariant subspace.

Definition 2 (Power Iteration) Starting from a vector x^{(0)}, power iteration repeatedly computes

\[ x^{(k+1)}=\frac{Ax^{(k)}}{\|Ax^{(k)}\|}. \]

When there is a unique dominant eigenvalue in magnitude and the start has a component in its direction, the iterates tend toward the dominant eigenvector direction.
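A hedged NumPy sketch of Definition 2 (the helper name, iteration count, and Rayleigh-quotient readout are illustrative choices):

```python
import numpy as np

def power_iteration(A, num_iters=100, seed=0):
    """Approximate the dominant eigenpair of A by repeated multiplication."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])  # random start: almost surely has a
    x /= np.linalg.norm(x)               # component along the dominant direction
    for _ in range(num_iters):
        y = A @ x
        x = y / np.linalg.norm(y)        # renormalize to avoid overflow/underflow
    lam = x @ A @ x                      # Rayleigh quotient estimate (||x|| = 1)
    return lam, x

A = np.array([[3.0, 0.0],
              [0.0, 1.0]])
lam, v = power_iteration(A)
print(lam, v)  # approx. 3.0 and ±[1, 0]
```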

Theorem 1 (Theorem Idea: The Spectral Gap Controls Convergence) The separation between the dominant eigenvalue and the next competing eigenvalue strongly influences how quickly power-style methods settle into the leading invariant direction.

So spectral computation is not only about eigenvalues themselves, but about how separated they are.
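In the simplest diagonalizable case, with eigenvalues ordered so that |\lambda_1| > |\lambda_2| \ge \cdots, the standard way to quantify this idea is a geometric rate: the directional error of power iteration after k steps behaves like

\[ \text{error}_k = O\!\left( \left| \frac{\lambda_2}{\lambda_1} \right|^{k} \right), \]

so a wide gap means fast convergence and a narrow gap means slow convergence.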

Definition 3 (Singular Value Decomposition) The singular value decomposition writes

\[ A=U\Sigma V^T, \]

where U and V are orthogonal and \Sigma is diagonal with nonnegative entries, the singular values, conventionally listed in decreasing order.

The singular values are the square roots of the eigenvalues of A^TA.

Theorem 2 (Theorem Idea: Truncated SVD Gives Best Low-Rank Approximation) Keeping only the largest singular values yields the best rank-k approximation in both spectral and Frobenius norm.

That is why the SVD is the computational backbone of low-rank modeling.
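A small NumPy check of both Definition 3 and Theorem 2, under the assumption that a random dense test matrix and the rank choice k = 2 are representative enough for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 6))

# Thin SVD: A = U @ diag(s) @ Vt, with s sorted in decreasing order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Singular values are the square roots of the eigenvalues of A^T A.
print(np.allclose(np.sort(s**2), np.linalg.eigvalsh(A.T @ A)))

# Truncated SVD: keep only the k largest singular values.
k = 2  # illustrative rank choice
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Eckart-Young: the spectral-norm error of the best rank-k
# approximation is exactly the (k+1)-th singular value.
print(np.linalg.norm(A - A_k, 2), s[k])
```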

Theorem 3 (Theorem Idea: QR Iteration Turns Orthogonal Similarity Into Spectral Information) QR iteration repeatedly factors and recombines a matrix in a way that preserves eigenvalues while driving the matrix toward a simpler nearly triangular form.

This is the core dense-matrix eigenvalue workhorse.

7 A Small Worked Example

Consider

\[ A= \begin{bmatrix} 3 & 0 \\ 0 & 1 \end{bmatrix}. \]

Start power iteration from

\[ x^{(0)}= \begin{bmatrix} 1 \\ 1 \end{bmatrix}. \]

After one multiplication,

\[ Ax^{(0)}= \begin{bmatrix} 3 \\ 1 \end{bmatrix}. \]

After another,

\[ A^2x^{(0)}= \begin{bmatrix} 9 \\ 1 \end{bmatrix}. \]

The first coordinate dominates more strongly with every step because 3^k grows while 1^k stays constant.

So after normalization, the iterates move toward the dominant eigenvector direction

\[ \begin{bmatrix} 1 \\ 0 \end{bmatrix}. \]

The important lesson is not that this example is hard. It is that the algorithm is exploiting repeated amplification of the dominant direction, and the speed depends on the gap between 3 and 1.
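The same behavior can be checked numerically; a minimal NumPy loop (the step count is arbitrary):

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [0.0, 1.0]])
x = np.array([1.0, 1.0])

for k in range(1, 6):
    x = A @ x
    x = x / np.linalg.norm(x)
    # The non-dominant coordinate shrinks by a factor of 1/3 per step.
    print(k, x, x[1] / x[0])
```

The printed ratio is exactly 3^{-k}: the spectral-gap rate from the Formal Core with \lambda_2/\lambda_1 = 1/3.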

8 Computation Lens

When you face a spectral computation, ask:

  1. do I need all eigenvalues or only the leading few?
  2. is the matrix dense, sparse, symmetric, or positive semidefinite?
  3. is the useful object really the eigen-decomposition, or is the SVD the safer viewpoint?
  4. is there a spectral gap large enough to support fast convergence of simple iterations?

Those questions usually matter more than the symbolic decomposition formula.
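In code, these questions map directly onto solver choices. A hedged sketch, assuming NumPy and SciPy are available; both routines below exist under these names, but the test matrix is illustrative:

```python
import numpy as np
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200))
A = M + M.T  # symmetric test matrix

# Need ALL eigenvalues of a dense symmetric matrix: direct dense solver.
all_vals = np.linalg.eigvalsh(A)

# Need only a few leading eigenpairs: iterative Lanczos-type solver,
# the natural choice when A is large, sparse, or only available
# through matrix-vector products.
few_vals, few_vecs = eigsh(A, k=3, which="LM")
```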

9 Application Lens

9.1 PCA And High-Dimensional Statistics

Principal components are spectral objects, so numerical PCA is really about stable singular-value or eigenvalue computation.
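A minimal sketch of that point, assuming data rows are samples (the helper name is illustrative): computing PCA through the SVD of the centered data matrix, rather than by eigen-decomposing the covariance matrix, avoids squaring the condition number.

```python
import numpy as np

def pca_via_svd(X, k):
    """Top-k principal components of X (rows = samples, columns = features)."""
    Xc = X - X.mean(axis=0)                    # center each feature
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                        # principal directions
    scores = Xc @ components.T                 # data projected onto them
    explained_var = s[:k] ** 2 / (len(X) - 1)  # variance along each direction
    return components, scores, explained_var
```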

9.2 Low-Rank Approximation

Compression, denoising, embeddings, and matrix completion all rely on singular values and their truncated approximations.

9.3 Optimization And Dynamics

Curvature, stability, and dominant modes are often spectral questions, which is why iterative spectral algorithms show up throughout applied math.

10 Stop Here For First Pass

If you can now explain:

  • why spectral computation is usually iterative
  • what power iteration is exploiting
  • why spectral gap matters
  • why SVD is the computational workhorse for low-rank structure and many least-squares problems

then this page has done its job.
