Measurements, Models, and Hidden Variables

A bridge page showing how observations, forward models, and hidden quantities create the common backbone of optimization and inference problems.
Modified: April 26, 2026

Keywords

measurements, hidden-variables, likelihood, posterior, inference

1 Application Snapshot

A large fraction of inference can be summarized in one sentence:

you do not observe the quantity you care about directly, so you build a model that connects hidden structure to noisy data

That sentence already contains the main objects:

  • measurements
  • a forward model
  • hidden variables
  • uncertainty or approximation choices

This page is the shortest bridge from the site’s math modules into that shared inference language.

2 Problem Setting

A generic inference problem starts with:

  • observed data \(y\)
  • a hidden quantity \(x\) or \(z\) that you actually care about
  • a model describing how observations arise from the hidden quantity

In a linear measurement model, this often looks like

\[ y = Hx + \eta, \]

where \(H\) is a measurement map and \(\eta\) is noise.
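As a concrete sketch, the forward model is trivial to simulate (dimensions and noise level here are illustrative assumptions, not values from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 5, 3                      # hidden dimension n, measurement dimension m < n
x = rng.normal(size=n)           # hidden quantity we actually care about
H = rng.normal(size=(m, n))      # measurement map: each row is one linear sensor
eta = 0.1 * rng.normal(size=m)   # additive measurement noise

y = H @ x + eta                  # observed data: a noisy, lossy view of x
print(y.shape)                   # (3,)
```

Note that \(y\) has fewer entries than \(x\): the forward direction is easy to run, while inference runs the arrow backward.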

In a probabilistic form, the same idea is written as

\[ y \sim p(y \mid x) \]

or, with a latent variable \(z\),

\[ z \sim p(z), \qquad x \sim p(x \mid z). \]

The point is the same in both languages:

  • the observation is not the target
  • the model tells you how they are related
  • inference is the work of going backward from data to hidden structure
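A minimal sketch of the latent-variable form, with illustrative (assumed) Gaussian choices for \(p(z)\), \(p(x \mid z)\), and the observation noise:

```python
import numpy as np

rng = np.random.default_rng(4)

z = rng.normal()                   # latent cause: z ~ N(0, 1)
x = 2.0 * z + 0.1 * rng.normal()   # hidden quantity: x ~ N(2z, 0.1^2)
y = x + 0.3 * rng.normal()         # observation: y ~ N(x, 0.3^2)

# The generative direction (z -> x -> y) is one line each;
# inference is the work of running the chain backward from y.
```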

3 Why This Math Appears

This language keeps reusing several math layers already on the site:

  • Statistics: likelihoods, priors, posteriors, estimation targets
  • Optimization: MAP estimation, regularized recovery, constraints
  • Signal Processing and Estimation: noisy measurements, filtering, inverse problems
  • Stochastic Processes: hidden-state evolution, sequential uncertainty, MCMC bridges
  • Information Theory: compression, uncertainty, and information limits

So optimization and inference are not separate subjects glued together late. They are two views of the same recurring problem: hidden structure must be recovered from imperfect evidence.

4 Math Objects In Use

  • observed data \(y\)
  • hidden variable, parameter, signal, or state \(x\) or \(z\)
  • forward model or likelihood \(p(y \mid x)\)
  • sometimes a prior \(p(x)\) or structural assumption
  • posterior quantity \(p(x \mid y)\) when uncertainty matters
  • objective function when the problem is solved as an optimization task

In many papers, a posterior question is converted into an optimization problem by taking negative logs:

\[ \hat{x}_{\mathrm{MAP}} = \arg\max_x p(x \mid y) = \arg\min_x \bigl[-\log p(y \mid x) - \log p(x)\bigr]. \]

That is why estimation and optimization keep appearing together.
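A hedged sketch of that conversion: assume Gaussian noise with variance \(\sigma^2\) and a Gaussian prior with variance \(\tau^2\) (both illustrative choices). The negative-log-posterior objective is then a ridge problem, and a generic minimizer recovers the same answer as the closed form:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, m = 4, 6
H = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
sigma2, tau2 = 0.05, 1.0          # assumed noise and prior variances
y = H @ x_true + np.sqrt(sigma2) * rng.normal(size=m)

def neg_log_posterior(x):
    # -log p(y|x) - log p(x), constants dropped
    return np.sum((y - H @ x) ** 2) / (2 * sigma2) + np.sum(x ** 2) / (2 * tau2)

x_map = minimize(neg_log_posterior, np.zeros(n)).x

# Closed-form ridge solution of the same problem, for comparison
x_closed = np.linalg.solve(H.T @ H / sigma2 + np.eye(n) / tau2,
                           H.T @ y / sigma2)

print(np.allclose(x_map, x_closed, atol=1e-4))  # True
```

The two computations agree because, for Gaussian likelihood and prior, taking negative logs turns the posterior-maximization question into a quadratic optimization.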

5 A Small Worked Walkthrough

Suppose \(x\) is an unknown signal, \(H\) measures only part of it, and the observed data are

\[ y = Hx + \eta. \]

Now several different inference questions appear immediately:

  1. Point estimate: find a single best guess \(\hat{x}\).

  2. Posterior uncertainty: characterize how uncertain we still are about \(x\) after seeing \(y\).

  3. Sequential update: if new measurements arrive over time, update the belief about \(x\) repeatedly.

  4. Active measurement: if we can choose what to measure next, decide which observation would be most useful.

The observation model has not changed. Only the downstream question has changed.

That is the main organizing idea of this section:

one measurement model can lead to optimization, filtering, variational approximation, or sampling, depending on what answer is needed
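Under Gaussian assumptions (variances and dimensions below are illustrative), the first two questions above come from one posterior: its mean answers the point-estimate question, and its covariance answers the uncertainty question.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 8
H = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
sigma2, tau2 = 0.1, 1.0           # assumed noise and prior variances
y = H @ x_true + np.sqrt(sigma2) * rng.normal(size=m)

precision = H.T @ H / sigma2 + np.eye(n) / tau2  # posterior precision (Gaussian case)
cov = np.linalg.inv(precision)                   # question 2: remaining uncertainty
x_hat = cov @ (H.T @ y / sigma2)                 # question 1: point estimate (posterior mean)

print(np.sqrt(np.diag(cov)))  # marginal posterior standard deviations
```

The observation model never changes between the two questions; only which piece of the posterior we read off changes.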

6 Implementation or Computation Note

In practice, the main computational forks are:

  1. Optimize an objective: use this when you want a point estimate such as least squares, MAP, or regularized recovery.

  2. Update beliefs sequentially: use this when data arrive over time and the hidden state evolves.

  3. Approximate a posterior: use this when the full posterior is too hard to compute exactly, so you need variational or sampling-based methods.
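A minimal sketch of fork 2, assuming a static scalar \(x\) observed repeatedly as \(y_t = x + \eta_t\) with known Gaussian noise (all numbers illustrative). Each arrival tightens a Gaussian belief \((\mu, v)\) via a conjugate update:

```python
import numpy as np

rng = np.random.default_rng(3)
x_true, sigma2 = 2.0, 0.5    # true hidden value and known noise variance
mu, v = 0.0, 10.0            # broad initial Gaussian belief about x

for _ in range(50):
    y_t = x_true + np.sqrt(sigma2) * rng.normal()
    v_new = 1.0 / (1.0 / v + 1.0 / sigma2)   # precisions add: uncertainty only shrinks
    mu = v_new * (mu / v + y_t / sigma2)     # precision-weighted average of belief and data
    v = v_new

# After 50 observations the belief has concentrated near x_true.
```

This is the scalar core of filtering: the same two-line update, with matrices in place of scalars and a dynamics step in between, becomes a Kalman-style recursion.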

Strong next bridges already live elsewhere on the site.

7 Failure Modes

  • treating the measurement as if it were the target quantity itself
  • ignoring whether the forward model \(H\) loses information or creates non-identifiability
  • confusing a regularized point estimate with a full uncertainty description
  • forgetting that priors and regularizers encode assumptions about hidden structure
  • choosing a computational method before deciding whether the real goal is optimization, uncertainty quantification, or sequential belief updating
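The second failure mode is easy to demonstrate: when \(H\) has a nontrivial nullspace, distinct hidden signals produce identical measurements, so no amount of data can separate them (the matrix below is an illustrative assumption):

```python
import numpy as np

H = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])   # 2 measurements of a 3-dimensional signal

x_a = np.array([1.0, 2.0, 3.0])
x_b = x_a + np.array([1.0, -1.0, 0.0])  # shifted along the nullspace of H

print(np.allclose(H @ x_a, H @ x_b))  # True: the data cannot tell them apart
```

This is exactly the non-identifiability the bullet warns about; a prior or regularizer is what breaks the tie between `x_a` and `x_b`.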

8 Paper Bridge

9 Sources and Further Reading
