State, Sensing, and Actuation
Tags: state, sensing, actuation, state-space, control
1 Application Snapshot
A large fraction of systems work can be summarized in one sentence:
something evolves over time, you only measure part of it, and you try to influence it through inputs.
That sentence already contains the three basic objects:
- state
- sensing
- actuation
This page is the shortest bridge from the site’s math modules into the language used in control, robotics, navigation, and sequential decision-making.
2 Problem Setting
A system is usually described by:
- a state \(x\), which stores the information needed to predict future evolution
- an input \(u\), which represents what we can command or apply
- an output \(y\), which is what sensors actually report
In continuous time, a common model is
\[ \dot{x}(t) = f(x(t), u(t)), \qquad y(t) = h(x(t)). \]
In discrete time, the same idea becomes
\[ x_{t+1} = f(x_t, u_t), \qquad y_t = h(x_t). \]
If the world is noisy, we often add process and measurement noise:
\[ x_{t+1} = f(x_t, u_t, w_t), \qquad y_t = h(x_t) + v_t. \]
The key point is that the output need not reveal the full state.
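That gap between state and output can be made concrete with a tiny simulation. The sketch below uses an assumed toy system (1-D constant-velocity motion observed only through position) with fixed noise sequences so the run is reproducible; none of these numbers come from the page itself.

```python
# Toy instance of x_{t+1} = f(x_t, u_t, w_t), y_t = h(x_t) + v_t.

def step(x, u, w):
    """State update: position advances by velocity; velocity absorbs u and w."""
    pos, vel = x
    return (pos + vel, vel + u + w)

def observe(x, v):
    """Output: the sensor reports position only, corrupted by v."""
    pos, _ = x
    return pos + v

process_noise = [0.0, 0.01, -0.01, 0.0, 0.02]
meas_noise = [0.05, -0.02, 0.0, 0.03, -0.01]

x = (0.0, 1.0)    # true state: position 0, hidden velocity 1
ys = []
for w, v in zip(process_noise, meas_noise):
    ys.append(observe(x, v))   # only a noisy position leaves the system
    x = step(x, u=0.0, w=w)

print(ys)  # the hidden velocity never appears directly in any y_t
```

The outputs drift upward at roughly one unit per step, which is the only trace the hidden velocity leaves in the measurements.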
3 Why This Math Appears
This language reuses several math layers already on the site:
- Linear Algebra: states, inputs, and outputs are often vectors; system models are often matrices or linear maps
- ODEs and Dynamical Systems: the state evolves according to differential or difference equations
- Control and Dynamics: feedback laws act on the state or its estimate
- Signal Processing and Estimation: sensors are noisy, delayed, partial, or filtered
- Stochastic Control and Dynamic Programming: decisions are made repeatedly under uncertainty
So state-space language is not a side topic. It is the common translation layer between math and real systems.
4 Math Objects In Use
- state vector \(x\)
- input or control \(u\)
- output or measurement \(y\)
- dynamics law \(f\)
- observation law \(h\)
- sometimes disturbance or process noise \(w\)
- sometimes measurement noise \(v\)
In linear time-invariant form, these objects often become
\[ \dot{x} = Ax + Bu, \qquad y = Cx + Du \]
or the discrete-time analog
\[ x_{t+1} = Ax_t + Bu_t, \qquad y_t = Cx_t + Du_t. \]
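The discrete-time LTI recursion is a few lines of numpy. The matrices below are an assumed double-integrator example (position-velocity state, acceleration input, position-only sensor), chosen for illustration rather than taken from the page.

```python
import numpy as np

dt = 0.1
A = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([[0.0],
              [dt]])
C = np.array([[1.0, 0.0]])   # sensor sees position only
D = np.array([[0.0]])

def lti_step(x, u):
    """One step of x_{t+1} = A x_t + B u_t, y_t = C x_t + D u_t."""
    y = C @ x + D @ u
    x_next = A @ x + B @ u
    return x_next, y

x = np.array([[0.0], [1.0]])   # position 0, velocity 1
u = np.array([[0.0]])          # no input: the state just drifts
for t in range(10):
    x, y = lti_step(x, u)

print(x.ravel())  # after 1 s of drift: position near 1.0, velocity 1.0
```

Note that the output dimension (here 1) can be smaller than the state dimension (here 2), which is the LTI version of the point that \(y\) need not reveal all of \(x\).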
5 A Small Worked Walkthrough
Consider a simple vertical-motion model for a drone:
\[ \dot{h} = v, \qquad \dot{v} = -g + \alpha u + d(t), \qquad y = h + \eta. \]
Here:
- \(h\) is height
- \(v\) is vertical velocity
- \(u\) is commanded thrust
- \(d(t)\) is an unmodeled disturbance such as wind
- \(y\) is a noisy height measurement
The natural state is
\[ x = \begin{bmatrix} h \\ v \end{bmatrix}. \]
This example makes the roles clear:
- state: height and velocity together determine future motion
- actuation: thrust changes the acceleration
- sensing: the sensor only reports a noisy version of height
So even if height is observed, velocity may still be hidden and must be estimated or inferred.
That is exactly why state estimation and feedback appear so quickly after the first state-space model.
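A forward-Euler simulation of the drone model makes the actuation role visible. The constants here are assumptions for illustration (\(g = 9.81\), \(\alpha = 1\), no wind, noise-free sensor): with the hover input \(u = g/\alpha\) the net vertical acceleration is zero, so height and velocity hold steady.

```python
# Assumed constants, not taken from the page:
g, alpha, dt = 9.81, 1.0, 0.01

def drone_step(h, v, u, d=0.0):
    """Forward-Euler step of dh/dt = v, dv/dt = -g + alpha*u + d."""
    h_next = h + dt * v
    v_next = v + dt * (-g + alpha * u + d)
    return h_next, v_next

h, v = 1.0, 0.0          # hovering at 1 m
u_hover = g / alpha      # thrust that exactly cancels gravity
for _ in range(1000):    # 10 s of simulated time
    h, v = drone_step(h, v, u_hover)

print(h, v)  # height stays at 1.0, velocity stays at 0.0
```

Any nonzero disturbance `d` breaks this balance, which is why open-loop hover fails in practice and feedback on the (estimated) state is needed.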
6 Implementation or Computation Note
The main practical questions start immediately after the model is written:
- Feedback: how should the input depend on the current state or estimated state?
- Estimation: if only noisy outputs are measured, how do we reconstruct the hidden state?
- Sampling: if the controller runs on a computer, how does the continuous model become a discrete update?
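All three questions show up together in even the simplest digital control loop. The sketch below closes the loop on the drone model with a PD law around hover; the gains, sampling period, and the crude finite-difference velocity estimate are all assumptions for illustration, not a recommended design.

```python
# Toy digital control loop for the drone model: the controller sees only
# height, estimates velocity by differencing (Estimation), applies a PD
# law (Feedback), and runs at a fixed period dt (Sampling).
g, alpha, dt = 9.81, 1.0, 0.02
kp, kd = 8.0, 4.0        # assumed PD gains
h_ref = 2.0              # target height in meters

h, v = 0.0, 0.0          # true (hidden) state
y_prev = h               # previous measurement, for the difference estimate
for _ in range(2000):    # 40 s of simulated time
    y = h                              # noise-free sensor in this sketch
    v_est = (y - y_prev) / dt          # crude velocity estimate
    u = g / alpha + kp * (h_ref - y) - kd * v_est
    y_prev = y
    # Plant update at the sampling period dt (forward Euler):
    h += dt * v
    v += dt * (-g + alpha * u)

print(round(h, 3))  # settles near the 2.0 m target
```

Adding measurement noise to `y` immediately exposes the weakness of the difference-based velocity estimate, which is the usual motivation for a proper observer or filter.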
7 Failure Modes
- treating the sensor output as if it were the full state
- forgetting that hidden variables can still drive future evolution
- confusing disturbances with control inputs
- writing down a feedback law before deciding what is actually measured
- ignoring sampling and actuator limits when jumping from theory to implementation
8 Paper Bridge
- 6.241J / Dynamic Systems and Control - First pass - official MIT course notes that make the state-space point of view operational. Checked 2026-04-25.
- 16.322 / Stochastic Estimation and Control - Paper bridge - useful once sensing, hidden state, and uncertainty start to matter more than bare dynamics. Checked 2026-04-25.
9 Sources and Further Reading
- 6.241J / Dynamic Systems and Control - First pass - official MIT state-space lecture sequence. Checked 2026-04-25.
- Lecture 7: State-Space Models - First pass - compact official MIT notes for the exact objects used on this page. Checked 2026-04-25.
- 16.30 / Feedback Control Systems - Second pass - official MIT control framing for sensing, actuation, and feedback. Checked 2026-04-25.
- EE363 / Linear Dynamical Systems - Second pass - official Stanford course anchor for state-space systems and control. Checked 2026-04-25.
- EE278 / Introduction to Statistical Signal Processing - Bridge to estimation - useful once you want the sensing side, noise models, and estimation viewpoint to become more explicit. Checked 2026-04-25.
- AA228 / Decision Making Under Uncertainty - Bridge to planning - useful once control language starts to merge with planning and sequential decisions. Checked 2026-04-25.