Perturbation and Stability
perturbation theory, Weyl inequality, singular value stability, Davis-Kahan, spectral gap
1 Role
This is the fourth page of the Matrix Analysis module.
The previous page turned symmetric matrix questions into variational max/min problems.
This page asks the next natural question:
if the matrix changes a little, how much do the spectral objects change?
That is the core perturbation viewpoint.
2 First-Pass Promise
Read this page after Spectral Inequalities and Variational Principles.
If you stop here, you should still understand:
- why operator norm is the natural first-pass size for perturbations
- why symmetric eigenvalues and singular values are stable under small perturbations
- why eigenspaces need a spectral gap in order to stay stable
- why covariance, PCA, kernels, and Hessian arguments keep talking about perturbation
3 Why It Matters
Real matrices are almost never observed exactly.
They come from:
- noisy samples
- finite precision
- estimated covariance matrices
- approximate Hessians
- truncated kernels and low-rank approximations
So advanced arguments constantly ask:
- if the matrix estimate is close, are the top eigenvalues close?
- do the leading eigenvectors stay meaningful?
- how large does the spectral gap need to be?
- what quantity is actually stable enough to trust?
Perturbation theory is the language for turning "the matrices are close" into "the spectra are close."
4 Prerequisite Recall
- operator norm measures worst-case amplification
- Rayleigh quotients give variational formulas for extreme eigenvalues
- singular values measure operator size even for nonsymmetric matrices
- PSD and symmetric structure make spectral reasoning much cleaner
5 Intuition
5.1 Stable Objects Differ By Type
Not every spectral quantity reacts to perturbation in the same way.
At a first pass:
- symmetric eigenvalues are stable
- singular values are stable
- eigenspaces are stable only when separated by a gap
That is why perturbation arguments usually start by saying what object they want to control.
5.2 Why Symmetry Helps
For symmetric matrices, eigenvalues come from max/min principles.
That variational structure forces the spectrum to move in a controlled way when the matrix changes by a small operator-norm amount.
For general nonsymmetric matrices, eigenvalues can be much more delicate.
So the clean first-pass stability story lives in the symmetric and singular-value settings.
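A minimal numpy sketch of this contrast (all matrices here are illustrative): a symmetric matrix barely reacts to a size-\(10^{-8}\) perturbation, while a \(2\times 2\) Jordan block moves its eigenvalues by the square root of the perturbation size.
```python
import numpy as np

# Symmetric case: the eigenvalue shift is capped by ||E||_op = 1e-8.
A_sym = np.diag([2.0, 1.0])
E_sym = 1e-8 * np.array([[0.0, 1.0], [1.0, 0.0]])
shift_sym = np.max(np.abs(np.linalg.eigvalsh(A_sym + E_sym)
                          - np.linalg.eigvalsh(A_sym)))

# Nonsymmetric case: a 2x2 Jordan block. Perturbing one entry by eps
# moves the eigenvalues to +/- sqrt(eps), vastly larger than eps.
eps = 1e-8
A_jordan = np.array([[0.0, 1.0], [0.0, 0.0]])
E_jordan = np.array([[0.0, 0.0], [eps, 0.0]])
shift_jordan = np.max(np.abs(np.linalg.eigvals(A_jordan + E_jordan)))

print(f"symmetric shift:    {shift_sym:.1e}")     # ~1e-16, far below 1e-8
print(f"nonsymmetric shift: {shift_jordan:.1e}")  # 1e-4 = sqrt(1e-8)
```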
5.3 Why Gaps Matter
Suppose the top two eigenvalues are almost tied.
Then a tiny perturbation can rotate the corresponding top eigenspace a lot, because the matrix has almost no preference between those nearby directions.
If the gap is large, the preferred direction is much more robust.
So eigenspace stability is really a story about perturbation size versus spectral gap.
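A small numpy sketch of this effect (the matrices are illustrative): the same perturbation barely rotates the top eigenvector when the gap is large, and rotates it dramatically when the top two eigenvalues are almost tied.
```python
import numpy as np

def top_eigvec(M):
    """Unit eigenvector for the largest eigenvalue of a symmetric matrix."""
    w, V = np.linalg.eigh(M)  # eigh sorts eigenvalues in ascending order
    return V[:, -1]

E = 0.01 * np.array([[0.0, 1.0], [1.0, 0.0]])  # perturbation, ||E||_op = 0.01

tied   = np.diag([1.00, 0.99])  # gap 0.01, comparable to ||E||_op
gapped = np.diag([2.00, 1.00])  # gap 1.00, much larger than ||E||_op

for label, A in [("tiny gap ", tied), ("large gap", gapped)]:
    v0, v1 = top_eigvec(A), top_eigvec(A + E)
    angle = np.degrees(np.arccos(min(1.0, abs(v0 @ v1))))
    print(f"{label}: top eigenvector rotates by {angle:4.1f} degrees")
```
The tiny-gap matrix rotates its top eigenvector by tens of degrees; the large-gap matrix moves by well under one degree.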
6 Formal Core
Definition 1 (Definition: Perturbation Size) If a matrix \(A\) is replaced by \(A+E\), then \(E\) is the perturbation.
At a first pass, the most common size measure is the operator norm
\[ \|E\|_{\mathrm{op}}. \]
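In numpy, the operator norm is the largest singular value, exposed as `ord=2`; a quick sketch:
```python
import numpy as np

E = np.array([[0.0, 0.2], [0.2, 0.0]])

# Operator norm = largest singular value.
print(np.linalg.norm(E, ord=2))               # 0.2
print(np.linalg.svd(E, compute_uv=False)[0])  # same value, via the SVD
```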
Theorem 1 (Theorem Idea: Symmetric Eigenvalues Are Stable In Operator Norm) If \(A\) and \(A+E\) are real symmetric matrices, then their ordered eigenvalues satisfy
\[ |\lambda_i(A+E)-\lambda_i(A)| \le \|E\|_{\mathrm{op}} \]
for every index \(i\).
This is Weyl's inequality, the first-pass eigenvalue stability theorem to remember.
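A quick numerical sanity check of the bound on a random symmetric pair (the size, scale, and seed are arbitrary):
```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric A and a small symmetric perturbation E.
n = 6
M = rng.standard_normal((n, n)); A = (M + M.T) / 2
N = rng.standard_normal((n, n)); E = 0.05 * (N + N.T) / 2

# eigvalsh returns sorted eigenvalues, matching the ordered pairing above.
shift = np.max(np.abs(np.linalg.eigvalsh(A + E) - np.linalg.eigvalsh(A)))
bound = np.linalg.norm(E, ord=2)
print(f"max shift {shift:.4f} <= ||E||_op {bound:.4f}: {shift <= bound}")
```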
Theorem 2 (Theorem Idea: Singular Values Are Stable In Operator Norm) For any matrices \(A\) and \(A+E\) of the same size,
\[ |\sigma_i(A+E)-\sigma_i(A)| \le \|E\|_{\mathrm{op}} \]
for every singular value index \(i\).
So singular values inherit a strong Lipschitz-type stability under operator-norm perturbations.
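The same check runs for rectangular matrices, since singular values need no symmetry (sizes and seed are again arbitrary):
```python
import numpy as np

rng = np.random.default_rng(1)

A = rng.standard_normal((5, 3))          # rectangular on purpose
E = 0.05 * rng.standard_normal((5, 3))

# svd returns singular values in descending order for both matrices,
# so the index-wise differences compare correctly ordered values.
shift = np.max(np.abs(np.linalg.svd(A + E, compute_uv=False)
                      - np.linalg.svd(A, compute_uv=False)))
bound = np.linalg.norm(E, ord=2)
print(f"max shift {shift:.4f} <= ||E||_op {bound:.4f}: {shift <= bound}")
```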
Definition 2 (Definition: Spectral Gap) A spectral gap is a separation between the eigenvalue or singular-value block you care about and the nearby spectrum.
At first pass, think of it as:
how far is the target spectral cluster from the rest?
Theorem 3 (Theorem Idea: Eigenspace Stability Needs A Gap) If a symmetric matrix is perturbed by \(E\), then the invariant subspace for a well-separated eigenvalue block rotates (measured by the sine of the principal angles) by an amount on the order of
\[ \frac{\|E\|_{\mathrm{op}}}{\text{gap}}. \]
This is the core Davis-Kahan idea.
The message is not that subspaces are automatically stable.
The message is:
- small perturbation helps
- large spectral gap helps
- without a gap, eigendirections can rotate a lot
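A minimal numpy sketch of the Davis-Kahan scaling (matrices, gaps, and seed are illustrative; this checks the trend, not the exact constant): shrink the gap while holding \(\|E\|_{\mathrm{op}}\) fixed and watch the top eigenvector rotate more.
```python
import numpy as np

rng = np.random.default_rng(2)

# Fixed symmetric perturbation, rescaled so that ||E||_op = 0.1 exactly.
S = rng.standard_normal((4, 4)); S = (S + S.T) / 2
E = 0.1 * S / np.linalg.norm(S, ord=2)

for gap in [2.0, 0.5, 0.2]:
    A = np.diag([1.0 + gap, 1.0, 0.5, 0.0])  # top eigenvalue separated by `gap`
    v = np.linalg.eigh(A + E)[1][:, -1]      # perturbed top eigenvector
    # Unperturbed top eigenvector is e1, so sin(angle) = sqrt(1 - v[0]^2).
    sin_angle = np.sqrt(max(0.0, 1.0 - v[0] ** 2))
    print(f"gap {gap:3.1f}: sin(angle) = {sin_angle:.3f}, "
          f"||E||/gap = {0.1 / gap:.3f}")
```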
7 Worked Example
Consider the symmetric matrix
\[ A= \begin{bmatrix} 5 & 0\\ 0 & 1 \end{bmatrix} \]
and perturb it by
\[ E= \begin{bmatrix} 0 & 0.2\\ 0.2 & 0 \end{bmatrix}. \]
Then
\[ \|E\|_{\mathrm{op}}=0.2. \]
The perturbed matrix is
\[ A+E= \begin{bmatrix} 5 & 0.2\\ 0.2 & 1 \end{bmatrix}. \]
Its eigenvalues satisfy \(\lambda^2-6\lambda+4.96=0\) (trace \(6\), determinant \(4.96\)), so they are
\[ 3 \pm \sqrt{4.04}, \]
which are approximately
\[ 5.01 \quad \text{and} \quad 0.99. \]
So each eigenvalue moves by only about \(0.01\), far inside the operator-norm bound \(0.2\), exactly as Weyl's inequality permits.
The original gap was
\[ 5-1=4, \]
which is large compared with the perturbation size.
That is why the top eigendirection only rotates a little: the matrix still strongly prefers the first coordinate direction.
This is the core first-pass perturbation picture:
- eigenvalues move by at most operator-norm scale
- eigendirections stay reliable when the gap is comfortably larger than the perturbation
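The whole example takes a few lines to verify in numpy:
```python
import numpy as np

A = np.array([[5.0, 0.0], [0.0, 1.0]])
E = np.array([[0.0, 0.2], [0.2, 0.0]])

w0, V0 = np.linalg.eigh(A)      # eigenvalues ascending: [1, 5]
w1, V1 = np.linalg.eigh(A + E)

print("perturbed eigenvalues:", w1)  # approx [0.99, 5.01]
print("shifts:", np.abs(w1 - w0))    # approx [0.01, 0.01], << 0.2
angle = np.degrees(np.arccos(abs(V0[:, -1] @ V1[:, -1])))
print(f"top eigenvector rotation: {angle:.1f} degrees")  # small: gap 4 >> 0.2
```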
8 Computation Lens
When a theorem says that an estimated matrix is close to a target matrix, ask:
- close in what norm?
- do we care about eigenvalues, singular values, or eigenspaces?
- is there a spectral gap?
- is the perturbation small relative to that gap?
Those four questions usually reveal the actual stability mechanism.
9 Application Lens
9.1 PCA And Covariance Estimation
Sample covariance matrices are perturbations of population covariance matrices. Perturbation theory explains when top principal components remain meaningful.
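A minimal synthetic sketch of that claim, assuming numpy (the population covariance, sample size, and seed are all illustrative): with a clear gap, the estimated top principal component aligns closely with the true one.
```python
import numpy as np

rng = np.random.default_rng(3)

# Population covariance with a clear gap: top eigenvalue 4, the rest 1.
Sigma = np.diag([4.0, 1.0, 1.0, 1.0])
X = rng.multivariate_normal(np.zeros(4), Sigma, size=2000)

Sigma_hat = np.cov(X, rowvar=False)  # sample covariance = Sigma + E

print("||E||_op:", np.linalg.norm(Sigma_hat - Sigma, ord=2))  # small
v_hat = np.linalg.eigh(Sigma_hat)[1][:, -1]  # estimated top principal component
print("alignment with true top PC:", abs(v_hat[0]))           # close to 1
```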
9.2 Optimization
Approximate Hessians are useful only when curvature summaries such as eigenvalues and dominant eigenspaces stay stable under approximation.
9.3 Kernels And Spectral ML
Kernel matrices, graph Laplacians, and normalized operators are often estimated from finite data. Perturbation bounds explain when the learned spectral structure is trustworthy.
10 Stop Here For First Pass
If you can now explain:
- why operator norm is the default perturbation scale
- why symmetric eigenvalues and singular values are stable
- why eigenspace stability needs a gap
- why perturbation theorems appear constantly in PCA, kernels, and random-matrix arguments
then this page has done its job.
11 Optional Deeper Reading After First Pass
The strongest current references connected to this page are:
- Cornell CS6210 perturbation theory notes - official current notes for first-order eigenvalue sensitivity and perturbation language. Checked 2026-04-25.
- Cornell CS6210 matrix computations notes - official current notes that frame conditioning and operator-norm reasoning clearly. Checked 2026-04-25.
- MIT 18.065: Computing Eigenvalues and Singular Values - official lecture page tied to spectral computation and stability ideas. Checked 2026-04-25.
- MIT 18.06 lecture notes - official notes with the cleanest linear-algebra background for symmetric and singular-value structure. Checked 2026-04-25.
12 Sources and Further Reading
- Cornell CS6210 perturbation theory notes - first pass; official perturbation notes with direct sensitivity formulas and stability framing. Checked 2026-04-25.
- MIT 18.065: Computing Eigenvalues and Singular Values - first pass; official lecture page that helps connect spectral structure to numerical stability questions. Checked 2026-04-25.
- Cornell CS6210 matrix computations notes - second pass; official notes for matrix-conditioning language and operator-level reasoning. Checked 2026-04-25.
- MIT 18.06 lecture notes - second pass; official notes for the linear-algebra background behind the perturbation story. Checked 2026-04-25.