Estimation, Kalman Filtering, and the Separation Principle

How noisy measurements become state estimates, why the Kalman filter alternates prediction and correction, and how estimation and control separate cleanly in the linear-quadratic-Gaussian setting.
Modified: April 26, 2026

Keywords

estimation, Kalman filter, observer, covariance, separation principle

1 Role

This is the fifth page of the Control and Dynamics module.

Its job is to explain what we do when the full state is not directly available:

we estimate the hidden state from noisy measurements, then use that estimate inside the controller

This is the estimator side of the same systems story; LQR plays the control side.

2 First-Pass Promise

Read this page after Linear Quadratic Regulation and Riccati Intuition.

If you stop here, you should still understand:

  • why state estimation is needed
  • how the Kalman filter alternates prediction and correction
  • why covariance matters for estimator trust
  • what the separation principle says at a first pass

3 Why It Matters

Many systems are controlled through incomplete and noisy measurements.

We may measure:

  • position but not velocity
  • some sensors but not the whole internal state
  • a delayed or noisy proxy instead of the exact quantity we care about

So even if a good feedback law exists in terms of the true state x, the controller often cannot use x directly.

It must use an estimate \hat x.

That is why estimation is not an optional add-on. It is one half of practical feedback design.

This page is where the site turns:

state-space + noisy outputs

into

recursive state estimation + control-ready state estimates

4 Prerequisite Recall

  • a state-space model distinguishes hidden state, input, and measured output
  • observability asks whether output data can reveal the state
  • LQR solved the control side by balancing state deviation against control effort
  • covariance measures uncertainty and is the natural language for mean-square estimation

5 Intuition

5.1 Estimation Is About Hidden State, Not Just Sensor Denoising

The point is not only to smooth noisy measurements.

The point is to infer the hidden internal state that the controller still needs.

So an estimator uses both:

  • the measurements
  • the model of how the state evolves

5.2 Prediction And Correction Are The Two Motions Of The Kalman Filter

At each step, the filter:

  1. predicts the next state using the dynamics model
  2. corrects that prediction using the new measurement

This is the core rhythm of the algorithm.
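
In code, that rhythm is just a loop. Here is a minimal sketch, where predict and correct stand in for the concrete steps defined in the Formal Core below (the helper names are ours, not a standard API):

    def run_filter(x_hat, P, inputs, measurements, predict, correct):
        # One predict-correct cycle per measurement.
        for u, y in zip(inputs, measurements):
            x_hat, P = predict(x_hat, P, u)   # 1. push the estimate through the dynamics
            x_hat, P = correct(x_hat, P, y)   # 2. pull it toward the new measurement
        return x_hat, P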

5.3 Covariance Tells Us How Much To Trust Model Versus Measurement

If process noise is large, then model-based prediction becomes less trustworthy.

If measurement noise is small, then new data should be weighted more heavily.

The Kalman gain formalizes that balance.

5.4 The Separation Principle Is The Clean Bridge Back To Control

LQR solved:

what feedback law is best if the state is known

Kalman filtering solves:

what estimate is best if the measurements are noisy

The separation principle says these two designs can be combined cleanly in the standard linear-quadratic-Gaussian setting.

6 Formal Core

Definition 1 (Linear State-Space Model With Process And Measurement Noise) At a first pass, the discrete-time estimation model is

\[ x_{k+1}=Ax_k+Bu_k+w_k, \qquad y_k=Cx_k+v_k, \]

where:

  • w_k is process noise
  • v_k is measurement noise

In the standard Kalman setup, these noises are taken to be zero-mean and white, with known covariances W and V, and uncorrelated with each other and with the initial state.
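
As a concrete instance, here is an illustrative discretized constant-velocity model in which position is measured but velocity is hidden; the matrices and noise levels are our choices for illustration, not values from the references:

    import numpy as np

    dt = 0.1                                # illustrative sample time
    A = np.array([[1.0, dt], [0.0, 1.0]])   # position integrates velocity
    B = np.array([[0.0], [dt]])             # input accelerates the hidden velocity state
    C = np.array([[1.0, 0.0]])              # only position is measured
    W = 0.01 * np.eye(2)                    # process-noise covariance (model trust)
    V = np.array([[0.25]])                  # measurement-noise covariance (sensor trust)

This is exactly the "position but not velocity" situation from Section 3: the estimator must reconstruct velocity from position measurements plus the model.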

Definition 2 (Prediction Step) The predicted state and covariance are

\[ \hat x_{k|k-1}=A\hat x_{k-1|k-1}+Bu_{k-1}, \]

\[ P_{k|k-1}=AP_{k-1|k-1}A^T+W. \]

Definition 3 (Innovation) The innovation is the measurement mismatch

\[ r_k = y_k - C\hat x_{k|k-1}. \]

This is the new information not already explained by the prediction.

Definition 4 (Kalman Update) The Kalman gain is

\[ L_k = P_{k|k-1}C^T(CP_{k|k-1}C^T+V)^{-1}, \]

and the corrected estimate is

\[ \hat x_{k|k}=\hat x_{k|k-1}+L_kr_k. \]

The corrected covariance, which closes the recursion for the next prediction step, is

\[ P_{k|k}=(I-L_kC)P_{k|k-1}. \]
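
A minimal numpy sketch of one full cycle, following Definitions 2 through 4 (the function name kalman_step is ours; it works with matrices shaped like the example under Definition 1):

    import numpy as np

    def kalman_step(x_hat, P, u, y, A, B, C, W, V):
        # Prediction (Definition 2): push estimate and covariance through the model.
        x_pred = A @ x_hat + B @ u                     # \hat x_{k|k-1}
        P_pred = A @ P @ A.T + W                       # P_{k|k-1}
        # Innovation (Definition 3): the part of y the prediction cannot explain.
        r = y - C @ x_pred
        # Gain and correction (Definition 4): weight the innovation by uncertainty.
        S = C @ P_pred @ C.T + V                       # innovation covariance
        L = P_pred @ C.T @ np.linalg.inv(S)
        x_new = x_pred + L @ r                         # \hat x_{k|k}
        P_new = (np.eye(len(x_hat)) - L @ C) @ P_pred  # P_{k|k}
        return x_new, P_new

Numerically robust implementations often prefer the Joseph form of the covariance update, but the simple form above matches the definitions.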

Theorem 1 (Kalman Filter Optimality, Idea) Under the standard linear-Gaussian assumptions, the Kalman filter gives the minimum-mean-square estimate of the state among estimators based on the available measurements.

Theorem 2 (Separation Principle, Idea) In the standard linear-quadratic-Gaussian setting, optimal state-feedback control and optimal state estimation can be designed separately:

  • LQR designs the control gain
  • Kalman filtering designs the estimator gain

Then the controller uses the estimated state in place of the true state.

At a first pass, this means the estimator and controller are two Riccati-based halves of one larger design.
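
At a first pass, the combination is a few lines. Here is a sketch of the certainty-equivalence loop, assuming an LQR gain K designed offline and the kalman_step function sketched above:

    def lqg_step(x_hat, P, y, u_prev, K, A, B, C, W, V):
        # Estimator half: fold the latest measurement into the state estimate.
        x_hat, P = kalman_step(x_hat, P, u_prev, y, A, B, C, W, V)
        # Controller half: apply the LQR law to the estimate as if it were the true state.
        u = -K @ x_hat
        return u, x_hat, P

The prediction inside kalman_step uses the previous input u_prev, matching Definition 2.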

7 Worked Example

Consider the scalar random-walk model

\[ x_{k+1}=x_k+w_k, \qquad y_k=x_k+v_k, \]

where

\[ \mathbb E[w_k]=0, \qquad \mathbb E[v_k]=0, \qquad \operatorname{Var}(w_k)=q, \qquad \operatorname{Var}(v_k)=r. \]

Suppose our prior estimate at step k has variance P^-.

Then the scalar Kalman gain is

\[ L = \frac{P^-}{P^-+r}. \]

The updated estimate is

\[ \hat x^+ = \hat x^- + L(y-\hat x^-). \]

This simple formula already shows the right intuition:

  • if r is small, the measurement is trusted more, so L is larger
  • if P^- is large, the prediction is uncertain, so the filter leans more on the new measurement
  • if r is large, the filter trusts the model more and corrects less aggressively

So the Kalman filter is not just “averaging.”

It is averaging with weights determined by uncertainty.
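
A minimal simulation of this scalar filter makes the weighting visible; the noise levels q and r below are illustrative choices, not values from the references:

    import numpy as np

    rng = np.random.default_rng(0)
    q, r = 0.01, 1.0              # illustrative process/measurement noise variances
    x, x_hat, P = 0.0, 0.0, 1.0   # true state, estimate, estimate variance

    for _ in range(200):
        x += rng.normal(scale=np.sqrt(q))     # true random walk
        y = x + rng.normal(scale=np.sqrt(r))  # noisy measurement
        P += q                                # predict: model says "stay put," variance grows
        L = P / (P + r)                       # scalar Kalman gain L = P^- / (P^- + r)
        x_hat += L * (y - x_hat)              # correct toward the measurement
        P *= (1 - L)                          # posterior variance shrinks

    print(f"gain settled near {L:.3f}, estimation error {abs(x - x_hat):.3f}")

With q much smaller than r, the gain settles near sqrt(q/r) (roughly 0.1 here), a quick sanity check that the filter weights by uncertainty rather than averaging naively.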

8 Computation Lens

When you see a Kalman-filtering problem, ask:

  1. what part of the state is hidden and why can it not be measured directly?
  2. what dynamics model produces the one-step prediction?
  3. what are the process-noise and measurement-noise covariances trying to encode?
  4. is the system observable enough for the measurements to inform the hidden state directions?
  5. are we in a linear-Gaussian regime where the plain Kalman filter is justified, or is this really an EKF/UKF/particle-filter situation?

These questions matter more than memorizing the recursion alone.

9 Application Lens

9.1 Signal Processing And Tracking

Recursive filtering is the systems-language version of sequential inference under uncertainty.

9.2 Bridge To LQG And MPC

This page sets up the next control idea naturally:

once control and estimation both exist, we can combine them, and then later impose constraints through MPC.

10 Stop Here For First Pass

If you can now explain:

  • why control often needs a state estimate instead of the true state
  • how prediction and correction alternate in the Kalman filter
  • why covariance determines the trust balance
  • what the innovation is
  • why estimation and control separate cleanly in the standard LQG setting

then this page has done its job.

11 Go Deeper

The next natural step in this module is combining estimation and control into LQG, and then adding constraints through MPC. The strongest adjacent reading is collected in the next two sections.

12 Optional Deeper Reading After First Pass

The strongest current references connected to this page are:

  • MIT 16.323 Lecture 11: Estimators/Observers - official lecture notes page for estimators, observers, and optimal estimators. Checked 2026-04-25.
  • MIT 16.323 lecture notes - official notes index for estimators, stochastic optimal control, and LQG context. Checked 2026-04-25.
  • MIT 16.30/31 Recitation 10 - official recitation notes for the Kalman filter and innovation-gain interpretation. Checked 2026-04-25.
  • Stanford EE363 bulletin - official current course description connecting LQR and the Kalman filter in one linear-systems arc. Checked 2026-04-25.
  • Stanford AA273 bulletin - official current course description for state estimation and filtering in robotics and autonomy. Checked 2026-04-25.

13 Sources and Further Reading

  • MIT 16.323 Lecture 11: Estimators/Observers - First pass - official notes page for observer and optimal-estimation structure. Checked 2026-04-25.
  • MIT 16.30/31 Recitation 10 - First pass - official notes for the Kalman gain, covariance recursion, and innovation form. Checked 2026-04-25.
  • MIT 16.323 lecture notes - Second pass - official notes index for the broader estimator, stochastic optimal control, and LQG context. Checked 2026-04-25.
  • Stanford EE363 bulletin - Second pass - official current course description connecting regulator design and Kalman filtering. Checked 2026-04-25.
  • Stanford AA273 bulletin - Second pass - official current course description for modern filtering and state-estimation applications. Checked 2026-04-25.