Measurements, Models, and Hidden Variables
measurements, hidden-variables, likelihood, posterior, inference
1 Application Snapshot
A large fraction of inference can be summarized in one sentence:
you do not observe the quantity you care about directly, so you build a model that connects hidden structure to noisy data.
That sentence already contains the main objects:
- measurements
- a forward model
- hidden variables
- uncertainty or approximation choices
This page is the shortest bridge from the site’s math modules into that shared inference language.
2 Problem Setting
A generic inference problem starts with:
- observed data \(y\)
- a hidden quantity \(x\) or \(z\) that you actually care about
- a model describing how observations arise from the hidden quantity
In a linear measurement model, this often looks like
\[ y = Hx + \eta, \]
where \(H\) is a measurement map and \(\eta\) is noise.
In a probabilistic form, the same idea is written as
\[ y \sim p(y \mid x) \]
or, with a latent variable \(z\),
\[ z \sim p(z), \qquad x \sim p(x \mid z). \]
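Both notations describe the same generative direction, from hidden to observed. A minimal simulation sketch of the two forms, with all sizes, names, and distributions (`n`, `m`, `sigma`, the Gaussian choices) picked purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, sigma = 8, 4, 0.1            # hidden size, measurement count, noise level (illustrative)

# Linear measurement form: y = H x + eta
x = rng.standard_normal(n)         # hidden signal we actually care about
H = rng.standard_normal((m, n))    # measurement map; m < n, so it observes only part of x
eta = sigma * rng.standard_normal(m)
y = H @ x + eta                    # the observed data

# Latent-variable form: z ~ p(z), x ~ p(x | z), with one simple choice of p(x | z)
z = rng.standard_normal()          # latent cause
x_from_z = z + 0.5 * rng.standard_normal(n)
```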
The point is the same in both languages:
- the observation is not the target
- the model tells you how they are related
- inference is the work of going backward from data to hidden structure
3 Why This Math Appears
This language keeps reusing several math layers already on the site:
- Statistics: likelihoods, priors, posteriors, estimation targets
- Optimization: MAP estimation, regularized recovery, constraints
- Signal Processing and Estimation: noisy measurements, filtering, inverse problems
- Stochastic Processes: hidden-state evolution, sequential uncertainty, MCMC bridges
- Information Theory: compression, uncertainty, and information limits
So optimization and inference are not separate subjects glued together late. They are two views of the same recurring problem: hidden structure must be recovered from imperfect evidence.
4 Math Objects In Use
- observed data \(y\)
- hidden variable, parameter, signal, or state \(x\) or \(z\)
- forward model or likelihood \(p(y \mid x)\)
- sometimes a prior \(p(x)\) or structural assumption
- posterior quantity \(p(x \mid y)\) when uncertainty matters
- objective function when the problem is solved as an optimization task
In many papers, a posterior question is converted into an optimization problem by taking negative logs:
\[ \hat{x}_{\mathrm{MAP}} = \arg\max_x p(x \mid y) = \arg\min_x \bigl[-\log p(y \mid x) - \log p(x)\bigr]. \]
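For the common Gaussian case this conversion has a closed form. A minimal sketch, assuming a Gaussian likelihood \(y \sim \mathcal{N}(Hx, \sigma^2 I)\) and a Gaussian prior \(x \sim \mathcal{N}(0, \tau^2 I)\) (illustrative assumptions, not fixed by the text); under those assumptions MAP estimation reduces to ridge regression:

```python
# MAP via negative logs, assuming y ~ N(Hx, sigma^2 I) and x ~ N(0, tau^2 I).
# The objective -log p(y|x) - log p(x) becomes
#   ||y - H x||^2 / (2 sigma^2) + ||x||^2 / (2 tau^2) + const,
# whose minimizer is the ridge-regression solution computed below.
import numpy as np

def map_estimate(H, y, sigma=0.1, tau=1.0):
    lam = (sigma / tau) ** 2                 # effective regularization weight
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)
```

The weight \(\lambda = \sigma^2 / \tau^2\) makes the prior-as-regularizer correspondence explicit: a tighter prior (smaller \(\tau\)) means heavier regularization.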
That is why estimation and optimization keep appearing together.
5 A Small Worked Walkthrough
Suppose \(x\) is an unknown signal, \(H\) measures only part of it, and the observed data are
\[ y = Hx + \eta. \]
Now several different inference questions appear immediately:
- Point estimate: find a single best guess \(\hat{x}\).
- Posterior uncertainty: characterize how uncertain we still are about \(x\) after seeing \(y\).
- Sequential update: if new measurements arrive over time, update the belief about \(x\) repeatedly.
- Active measurement: if we can choose what to measure next, decide which observation would be most useful.
The observation model has not changed. Only the downstream question has changed.
That is the main organizing idea of this section:
one measurement model can lead to optimization, filtering, variational approximation, or sampling, depending on what answer is needed.
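Under the same illustrative Gaussian assumptions as before, the first two questions in the list above differ only in what is read off the posterior. A minimal sketch, not a general recipe:

```python
# One Gaussian model, two answers. Assumes (as above, illustratively)
# y ~ N(Hx, sigma^2 I) and x ~ N(0, tau^2 I), so p(x | y) is Gaussian
# with the mean and covariance computed here.
import numpy as np

def gaussian_posterior(H, y, sigma=0.1, tau=1.0):
    n = H.shape[1]
    precision = H.T @ H / sigma**2 + np.eye(n) / tau**2
    cov = np.linalg.inv(precision)        # posterior uncertainty: question 2
    mean = cov @ (H.T @ y) / sigma**2     # point estimate (here MAP = mean): question 1
    return mean, cov
```

The diagonal of `cov` already hints at the active-measurement question: directions of \(x\) that \(H\) never touches keep their prior variance \(\tau^2\), so they are natural targets for the next measurement.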
6 Implementation or Computation Note
In practice, the main computational forks are:
- Optimize an objective: use this when you want a point estimate such as least squares, MAP, or regularized recovery.
- Update beliefs sequentially: use this when data arrive over time and hidden state evolves.
- Approximate a posterior: use this when the full posterior is too hard to compute exactly, so you need variational or sampling-based methods.
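A minimal sketch of the sequential fork, assuming the hidden \(x\) is static and scalar measurements \(y_t = h_t^\top x + \eta_t\) arrive one at a time (all assumptions mine); this is the Gaussian belief update written in information form, not a full filter:

```python
# Sequential belief update for a static hidden x, assuming scalar
# measurements y_t = h_t^T x + noise with known sigma (illustrative).
# The belief is carried in information form: precision matrix + information vector.
import numpy as np

def init_belief(n, tau=1.0):
    return np.eye(n) / tau**2, np.zeros(n)             # prior x ~ N(0, tau^2 I)

def update_belief(precision, info, h, y_t, sigma=0.1):
    precision = precision + np.outer(h, h) / sigma**2  # fold in one measurement
    info = info + h * y_t / sigma**2
    mean = np.linalg.solve(precision, info)            # current best estimate of x
    return precision, info, mean
```

After all measurements are folded in, this reproduces the batch posterior from the previous section; the point of the sequential fork is that the belief is carried forward instead of re-solved from scratch.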
Strong next bridges already live on the site; see the Paper Bridge and Sources sections below.
7 Failure Modes
- treating the measurement as if it were the target quantity itself
- ignoring whether the forward model \(H\) loses information or creates non-identifiability
- confusing a regularized point estimate with a full uncertainty description
- forgetting that priors and regularizers encode assumptions about hidden structure
- choosing a computational method before deciding whether the real goal is optimization, uncertainty quantification, or sequential belief updating
8 Paper Bridge
- STATS 305B / Applied Statistics II - First pass: useful once likelihoods, regularization, and posterior questions begin to blur together. Checked 2026-04-26.
- EE278 / Introduction to Statistical Signal Processing - Bridge to estimation: useful when the hidden-variable question is driven by noisy measurements and filtering. Checked 2026-04-26.
9 Sources and Further Reading
- 6.011 / Signals, Systems and Inference - First pass: official MIT course that makes the measurement-to-inference story explicit. Checked 2026-04-26.
- 16.322 / Stochastic Estimation and Control - First pass: official MIT anchor for hidden-state inference and noisy observation models. Checked 2026-04-26.
- EE278 / Introduction to Statistical Signal Processing - Second pass: official Stanford course anchor for estimation from noisy data. Checked 2026-04-26.
- STATS 202 / Data Mining and Analysis - Second pass: official Stanford bridge for statistical modeling, estimation, and inference choices. Checked 2026-04-26.
- STATS 305B / Applied Statistics II - Second pass: official Stanford bridge for regularization and modern estimation viewpoints. Checked 2026-04-26.
- CS238 / Decision Making Under Uncertainty - Bridge to sequential inference: useful once hidden-state estimation begins to overlap with planning and action. Checked 2026-04-26.