Filtering, Smoothing, and Hidden-State Inference
Keywords: filtering, smoothing, hidden-state, Kalman, HMM
1 Application Snapshot
Many inference problems are not static.
The hidden quantity changes over time, observations arrive sequentially, and each new measurement should update what we believe about the current state.
That is the setting of hidden-state inference:
- a latent state evolves
- observations reveal it only indirectly
- beliefs must be updated over time
This is where optimization and inference stop looking like one-shot recovery and start looking like sequential estimation.
2 Problem Setting
A hidden-state model usually has two pieces:
\[ x_{t+1} \sim p(x_{t+1}\mid x_t), \qquad y_t \sim p(y_t\mid x_t). \]
Here:
- \(x_t\) is the hidden state at time \(t\)
- \(y_t\) is the observation at time \(t\)
The first part says how the state evolves. The second part says how the observation is generated from the state.
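The two pieces can be made concrete by simulating a small linear-Gaussian instance of the model. This is a minimal sketch, not a model from the text: the matrices `A`, `H`, `Q`, `R` are illustrative assumptions chosen so that the transition and observation steps map directly onto \(p(x_{t+1}\mid x_t)\) and \(p(y_t\mid x_t)\).

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed linear-Gaussian instance of the two-piece model:
#   x_{t+1} = A x_t + process noise,   y_t = H x_t + measurement noise
A = np.array([[1.0, 1.0],   # position accumulates velocity
              [0.0, 1.0]])  # velocity drifts as a random walk
H = np.array([[1.0, 0.0]])  # only position is observed
Q = 0.01 * np.eye(2)        # process-noise covariance (assumed)
R = np.array([[0.25]])      # measurement-noise covariance (assumed)

T = 50
x = np.zeros(2)
states, observations = [], []
for t in range(T):
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)  # draw from p(x_{t+1} | x_t)
    y = H @ x + rng.multivariate_normal(np.zeros(1), R)  # draw from p(y_t | x_t)
    states.append(x)
    observations.append(y)

states = np.array(states)              # hidden trajectory, never seen directly
observations = np.array(observations)  # what the estimator actually receives
```

Everything downstream (prediction, filtering, smoothing) consumes only `observations`; `states` exists here just so a simulation can check an estimator against ground truth.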
This creates three closely related but different inference questions:
- prediction: what do we believe about the next state before seeing the next measurement?
- filtering: what do we believe about the current state after seeing observations up to now?
- smoothing: what do we believe about an earlier state after future observations have arrived too?
3 Why This Math Appears
This page reuses several math layers that are already live on the site:
- Probability and Statistics: conditional distributions and posterior beliefs
- Stochastic Processes: Markov structure and evolving uncertainty
- Signal Processing and Estimation: state-space models, filtering, and noisy observations
- Control and Dynamics: hidden state, sensing, and partial observability
- Stochastic Control and Dynamic Programming: belief updates and sequential decision-making
So filtering and smoothing are not a side branch. They are the sequential version of the same inference story that began with measurement models and MAP estimation.
4 Math Objects In Use
- hidden state \(x_t\)
- observation \(y_t\)
- transition model \(p(x_{t+1}\mid x_t)\)
- observation model \(p(y_t\mid x_t)\)
- filtered belief \(p(x_t\mid y_{1:t})\)
- smoothed belief \(p(x_t\mid y_{1:T})\)
At the application level, the belief itself becomes a computational object.
Instead of storing the whole history naively, we update a compact summary of uncertainty as new data arrive.
5 A Small Worked Walkthrough
Imagine tracking a vehicle whose hidden state contains position and velocity:
\[ x_t = \begin{bmatrix} \text{position}_t \\ \text{velocity}_t \end{bmatrix}. \]
Suppose the sensor only reports noisy position:
\[ y_t = \text{position}_t + v_t. \]
Now the sequential inference tasks separate cleanly:
- Prediction: use the dynamics model to estimate where the vehicle will be at the next step.
- Filtering: combine the dynamics prediction with the newest noisy position measurement to estimate the current state.
- Smoothing: after later measurements arrive, revise what you think the vehicle state was a few time steps ago.
This already explains why the same problem gives rise to different algorithms:
- in linear-Gaussian models, Kalman filtering is the load-bearing recursion
- in discrete hidden-state models, forward-backward inference plays the same structural role
The observation model has not changed. What changes is how much of the observation sequence you allow the estimator to use.
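For the linear-Gaussian case, the load-bearing recursion mentioned above can be sketched in a few lines. The dynamics and noise matrices here are assumed toy values for the position-velocity model, and the three measurements in the usage example are made up; the point is the predict-then-update shape of the Kalman recursion, not the numbers.

```python
import numpy as np

# Assumed toy matrices for the position-velocity model.
A = np.array([[1.0, 1.0], [0.0, 1.0]])  # constant-velocity dynamics
H = np.array([[1.0, 0.0]])              # sensor reports noisy position only
Q = 0.01 * np.eye(2)                    # process-noise covariance
R = np.array([[0.25]])                  # measurement-noise covariance

def kalman_step(mean, cov, y):
    """One predict + filter update of the Gaussian belief p(x_t | y_{1:t})."""
    # Predict: push the current belief through the dynamics model.
    mean_pred = A @ mean
    cov_pred = A @ cov @ A.T + Q
    # Update: fold in the newest measurement.
    innovation = y - H @ mean_pred
    S = H @ cov_pred @ H.T + R               # innovation covariance
    K = cov_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    mean_new = mean_pred + K @ innovation
    cov_new = (np.eye(2) - K @ H) @ cov_pred
    return mean_new, cov_new

# Usage: start from a vague belief and fold in measurements one at a time.
mean, cov = np.zeros(2), 10.0 * np.eye(2)
for y in [np.array([1.1]), np.array([2.0]), np.array([2.9])]:
    mean, cov = kalman_step(mean, cov, y)
```

Note that the belief is carried forward as just `(mean, cov)`: the compact summary of uncertainty from Section 4, rather than the full observation history.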
6 Implementation or Computation Note
The main computational choices here are:
- Recursive belief update: good when data arrive online and you need a current estimate quickly.
- Forward-backward or smoothing pass: good when you can use future observations to revise earlier hidden states.
- Optimization-based trajectory inference: good when you want a single most likely hidden trajectory instead of a full belief distribution.
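The first two choices above can be contrasted on the discrete hidden-state side with a small forward-backward sketch. The two-state HMM parameters below are invented toy numbers; the forward pass produces the filtered beliefs \(p(x_t\mid y_{1:t})\), and the backward pass folds in future evidence to give the smoothed beliefs \(p(x_t\mid y_{1:T})\).

```python
import numpy as np

# Assumed toy 2-state HMM with 2 observation symbols.
pi = np.array([0.5, 0.5])            # initial state distribution
Trans = np.array([[0.9, 0.1],        # p(x_{t+1} | x_t)
                  [0.2, 0.8]])
Emit = np.array([[0.8, 0.2],         # p(y_t | x_t)
                 [0.3, 0.7]])

def forward_backward(obs):
    T = len(obs)
    # Forward pass: filtered beliefs, renormalized each step for stability.
    alpha = np.zeros((T, 2))
    alpha[0] = pi * Emit[:, obs[0]]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ Trans) * Emit[:, obs[t]]
        alpha[t] /= alpha[t].sum()
    # Backward pass: likelihood of the future observations given each state.
    beta = np.ones((T, 2))
    for t in range(T - 2, -1, -1):
        beta[t] = Trans @ (Emit[:, obs[t + 1]] * beta[t + 1])
        beta[t] /= beta[t].sum()
    # Smoothed beliefs: forward and backward evidence combined.
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    return alpha, gamma

filtered, smoothed = forward_backward([0, 0, 1, 1])
```

At the final time step the two estimates coincide (there is no future evidence left), while at earlier steps the smoothed beliefs revise the filtered ones, which is exactly the filtered-versus-retrospective distinction from the failure modes below.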
7 Failure Modes
- treating filtering, smoothing, and prediction as if they were the same question
- forgetting that observations may reveal only part of the state
- using a one-shot estimator when the real problem is sequential and stateful
- confusing a filtered online estimate with a smoothed retrospective estimate
- ignoring how model mismatch in the dynamics can distort all later belief updates
8 Paper Bridge
- EE278 / Introduction to Statistical Signal Processing. First pass: useful once recursive estimation becomes the main lens. Checked 2026-04-26.
- 16.322 / Stochastic Estimation and Control. Bridge to state estimation: useful once hidden-state models and belief updates become operational. Checked 2026-04-26.
9 Sources and Further Reading
- 6.438 / Algorithms for Inference. First pass: official MIT course anchor for recursive inference and graphical-model viewpoints. Checked 2026-04-26.
- Lecture 13: Kalman Filtering and Smoothing. First pass: compact official MIT notes on the exact filtering-versus-smoothing distinction used here. Checked 2026-04-26.
- 16.322 / Stochastic Estimation and Control. Bridge to model-based estimation: official MIT anchor for state estimation under uncertainty. Checked 2026-04-26.
- EE278 / Introduction to Statistical Signal Processing. Second pass: official Stanford course anchor for probabilistic estimation from sequential noisy data. Checked 2026-04-26.
- Stats366 HMM Notes. Second pass: Stanford notes that make the forward-backward hidden-state story explicit. Checked 2026-04-26.
- Stats366 Underlying Algorithms. Algorithm bridge: Stanford notes for the forward-backward and recursion viewpoint. Checked 2026-04-26.