Control and Dynamics
control, state-space, feedback, observability, LQR, MPC
1 Why This Module Matters
Dynamics asks:
- what trajectory follows from a law of change
- where the equilibria are
- whether nearby trajectories stay close
Control asks one layer more:
- what input should we apply
- what state can we infer from measurements
- what feedback stabilizes the system
- what objective are we optimizing over trajectories
So this module is where the site turns dynamics you analyze into dynamics you can shape.
It is the bridge from ODEs and numerical simulation into feedback, estimation, optimal control, and learning-based systems work.
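The "what feedback stabilizes the system" question can be seen in the smallest possible setting. A minimal sketch, using an assumed scalar example rather than anything from the module pages: the open-loop system x_{k+1} = a x_k with a > 1 diverges, while the feedback u_k = -k x_k gives x_{k+1} = (a - k) x_k, which is stable whenever |a - k| < 1.

```python
a = 1.5          # open-loop dynamics: unstable since |a| > 1
k = 1.2          # feedback gain: closed loop a - k = 0.3, stable

def simulate(gain, x0=1.0, steps=20):
    """Run x_{k+1} = a*x_k + u_k with u_k = -gain*x_k; return the final state."""
    x = x0
    for _ in range(steps):
        u = -gain * x
        x = a * x + u
    return x

print(abs(simulate(0.0)))   # open loop: grows like 1.5**20
print(abs(simulate(k)))     # closed loop: decays like 0.3**20
```

The same input applied open loop (a precomputed sequence) could not correct for an unknown initial state; feedback can, because it reacts to the state itself.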
2 First Pass Through This Module
The intended first-pass spine for this module is:
- State-Space Models, Inputs, and Outputs
- Controllability, Reachability, and Observability
- Feedback, Stability, and Pole Placement
- Linear Quadratic Regulation and Riccati Intuition
- Estimation, Kalman Filtering, and the Separation Principle
- Model Predictive Control and Constraint Handling
- Learning-Based Control, System Identification, and RL Bridges
The module now has a complete seven-page first-pass spine. The pages move from state-space modeling to reachability and observability, then into feedback design, LQR, Kalman filtering, MPC, and finally the bridge into learning-based control and RL-flavored sequential decision-making.
3 How To Use This Module
A good first pass through the live pages is:
- start with State-Space Models, Inputs, and Outputs
- continue to Controllability, Reachability, and Observability
- continue to Feedback, Stability, and Pole Placement
- continue to Linear Quadratic Regulation and Riccati Intuition
- continue to Estimation, Kalman Filtering, and the Separation Principle
- continue to Model Predictive Control and Constraint Handling
- continue to Learning-Based Control, System Identification, and RL Bridges
- pair those pages with Discretization, Time-Stepping, and the Bridge to Control
- use Linear Systems, Matrix Exponentials, and Modes as the continuous-time systems companion
- use Time-Stepping for ODEs and Stability to keep the continuous versus sampled distinction clear
- return to Optimization once the route reaches LQR or MPC
The design goal is to make the state-input-output viewpoint feel natural before introducing heavier control theorems.
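To make the state-input-output viewpoint concrete, here is a minimal sketch using an assumed example (a sampled double integrator, not a model taken from the module pages): the state is (position, velocity), the input is acceleration, and the output is position only.

```python
# Discrete-time state-space model: x_{k+1} = A x_k + B u_k,  y_k = C x_k
dt = 0.1
A = [[1.0, dt], [0.0, 1.0]]   # position integrates velocity
B = [0.5 * dt * dt, dt]       # acceleration enters both states (zero-order hold)
C = [1.0, 0.0]                # we only measure position

def step(x, u):
    """One sampled-time update x_{k+1} = A x_k + B u_k."""
    return [A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u,
            A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u]

def output(x):
    """Measurement y_k = C x_k."""
    return C[0] * x[0] + C[1] * x[1]

x = [0.0, 0.0]
for _ in range(10):           # constant unit acceleration for 1 second
    x = step(x, 1.0)
print(output(x))              # position after 1 s at u = 1: 0.5 * t**2 = 0.5
```

Note that the output hides the velocity; that gap between state and measurement is exactly what the observability and estimation pages address.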
4 Core Concepts
- State-Space Models, Inputs, and Outputs: the opening page that explains what a state is, how inputs and outputs enter the model, and why control uses this viewpoint instead of only raw differential equations.
- Controllability, Reachability, and Observability: the page that tests whether inputs can move the state where needed and whether outputs can reveal the hidden state.
- Feedback, Stability, and Pole Placement: the page that turns controllability into actual feedback design and stability shaping.
- Linear Quadratic Regulation and Riccati Intuition: the page that turns feedback design into an explicit optimization problem with quadratic cost and Riccati structure.
- Estimation, Kalman Filtering, and the Separation Principle: the page that turns noisy outputs into recursive state estimates and connects estimator design back to LQR through the separation principle.
- Model Predictive Control and Constraint Handling: the page that turns constrained optimal control into an online receding-horizon optimization problem.
- Learning-Based Control, System Identification, and RL Bridges: the page that distinguishes learned models, learned controllers, and RL objectives while connecting them back to Bellman, MPC, and optimal control.
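The LQR-to-Riccati step in the list above can be sketched in the scalar case, where everything is a single number. This is an assumed illustrative example, not the module's notation: for x_{k+1} = a x_k + b u_k with cost sum(q x^2 + r u^2), iterate the discrete Riccati recursion to a fixed point and read off the optimal gain.

```python
a, b = 1.2, 1.0       # unstable open loop (|a| > 1)
q, r = 1.0, 1.0       # state and input cost weights

p = q                 # Riccati iterate, initialized at the stage cost
for _ in range(200):
    # Scalar discrete Riccati recursion:
    # p <- q + a^2 p - (a b p)^2 / (r + b^2 p)
    p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)

k = a * b * p / (r + b * b * p)   # optimal feedback gain, u = -k x
print(abs(a - b * k) < 1.0)       # closed-loop pole a - b*k is stable: True
```

The fixed point p plays the role of the Riccati solution: it encodes the optimal cost-to-go q x^2 + ... as p x^2, and the gain k falls out of it rather than being placed by hand, which is the contrast with pole placement.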
5 After This First Pass
Natural next directions after the completed spine are:
- stochastic control and dynamic programming
- robust or adaptive control
- deeper reinforcement learning theory
6 Applications
6.1 Robotics And Autonomous Systems
Control turns motion models into systems that can track, stabilize, and reject disturbances.
6.2 Estimation And Sensing
The same state-space language also supports observers, filtering, and hidden-state estimation.
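A minimal sketch of that recursive estimation idea, using an assumed scalar example: a Kalman filter for x_{k+1} = a x_k + w_k with measurement y_k = x_k + v_k, which maintains an estimate and its error variance and corrects both with each noisy measurement.

```python
import random

random.seed(0)
a = 0.9                    # stable scalar dynamics
Qn, Rn = 0.01, 0.25        # process and measurement noise variances

x = 1.0                    # true (hidden) state
xh, P = 0.0, 1.0           # estimate and its error variance

for _ in range(100):
    # simulate the true system and a noisy measurement
    x = a * x + random.gauss(0.0, Qn ** 0.5)
    y = x + random.gauss(0.0, Rn ** 0.5)

    # predict: propagate estimate and variance through the dynamics
    xh, P = a * xh, a * a * P + Qn
    # update: blend prediction and measurement via the Kalman gain
    K = P / (P + Rn)
    xh = xh + K * (y - xh)      # innovation-weighted correction
    P = (1.0 - K) * P

print(P)   # steady-state error variance, far below the prior P = 1.0
```

The variance recursion is deterministic, so P settles to a steady-state value regardless of the particular noise draws; the gain K is exactly the innovation-gain object the recitation notes below work through.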
6.3 ML And Sequential Decision-Making
Modern ML keeps rediscovering control language through model-based RL, system identification, differentiable simulators, and continuous-time generative viewpoints.
7 Go Deeper By Topic
The strongest adjacent live pages right now are:
- ODEs and Dynamical Systems
- Discretization, Time-Stepping, and the Bridge to Control
- Research Bridges: Reverse-Time SDEs, Probability-Flow ODEs, Flow Matching, and Control
- Numerical Methods
- Time-Stepping for ODEs and Stability
- Controllability, Reachability, and Observability
- Feedback, Stability, and Pole Placement
- Linear Quadratic Regulation and Riccati Intuition
- Estimation, Kalman Filtering, and the Separation Principle
- Model Predictive Control and Constraint Handling
- Learning-Based Control, System Identification, and RL Bridges
- Optimization
8 Optional Deeper Reading After First Pass
The strongest current references connected to this module are:
- MIT 6.241J: Dynamic Systems and Control - official lecture-note index spanning state-space models, reachability, observability, feedback stabilization, and optimal control. Checked 2026-04-25.
- MIT 6.241J Lecture 7: State-space Models - official lecture notes for the state definition and continuous/discrete state-space form. Checked 2026-04-25.
- MIT 6.241J Lecture 8: Solutions of State-space Models - official lecture notes for zero-input versus forced response and state-transition language. Checked 2026-04-25.
- MIT 6.241J Lecture 20: Reachability and Observability - official lecture notes for the next control test layer after state-space form. Checked 2026-04-25.
- MIT 6.241J Lecture 23: Feedback stabilization - official lecture notes for the feedback-stabilization viewpoint. Checked 2026-04-25.
- MIT 16.30 Topic 9: State-space model features - official lecture notes emphasizing controllability, observability, and minimal realization language. Checked 2026-04-25.
- MIT 16.30 Topic 11: Full-state feedback control - official lecture notes for turning controllability into feedback design. Checked 2026-04-25.
- MIT 16.30 Topic 12: Pole placement approach - official lecture notes for where to place poles and why. Checked 2026-04-25.
- MIT 16.30 Topic 18: Linear Quadratic Regulator - official lecture notes for the LQR setup and Riccati interpretation. Checked 2026-04-25.
- MIT 16.323: Principles of Optimal Control lecture notes - official lecture-note index for dynamic programming, continuous/discrete LQR, and optimal-control structure. Checked 2026-04-25.
- MIT 16.323 Lecture 11: Estimators/Observers - official lecture notes page for estimators, observers, and optimal estimators. Checked 2026-04-25.
- MIT 16.323 Lecture 16: Model Predictive Control - official lecture notes page for receding-horizon control and constrained optimal-control structure. Checked 2026-04-25.
- MIT 16.30/31 Recitation 10 - official recitation notes for the Kalman filter and innovation-gain interpretation. Checked 2026-04-25.
- MIT 6.435: System Identification - official course page for learning dynamics from observations. Checked 2026-04-25.
- MIT 6.435 lecture notes - official lecture index for identifiability, prediction-error methods, and recursive estimation. Checked 2026-04-25.
- MIT 16.30: Feedback Control Systems - official course page showing the standard state-space and feedback-control arc. Checked 2026-04-25.
- Stanford EE363 bulletin - official current course description connecting state-space models, reachability, observability, LQR, and Kalman filtering. Checked 2026-04-25.
- Stanford EE263 bulletin - official archived bulletin describing the linear-dynamical-systems prerequisite layer into control and estimation. Checked 2026-04-25.
- Stanford AA203 bulletin - official archived course description for optimal and learning-based control. Checked 2026-04-25.
- Stanford AA273 bulletin - official current course description for state estimation and filtering in robotics and autonomy. Checked 2026-04-25.
- Stanford EE364b course information - official recent course page for optimization methods with control-facing applications. Checked 2026-04-25.
- Stanford EE364b: Lecture Slides and Notes - official lecture index including MPC and stochastic MPC materials. Checked 2026-04-25.
- Stanford EE364b: Model Predictive Control slides - official Stanford slides for linear time-invariant convex optimal control and receding-horizon design. Checked 2026-04-25.
- Stanford EE365: Stochastic Control - official course page for Bellman, policy iteration, and stochastic-control bridges to RL. Checked 2026-04-25.
- Stanford EE365 lecture slides - official lecture index for dynamic programming and value-function viewpoints. Checked 2026-04-25.
- Stanford ENGR105 bulletin - official current course description centered on feedback control design. Checked 2026-04-25.
- Stanford ENGR205 bulletin - official current course description explicitly naming state-feedback regulator design and pole placement. Checked 2026-04-25.
9 Sources and Further Reading
- MIT 6.241J: Dynamic Systems and Control - First pass - official lecture-note index for the state-space-to-control route. Checked 2026-04-25.
- MIT 6.241J Lecture 7: State-space Models - First pass - official notes for what state means and how continuous/discrete state-space models are written. Checked 2026-04-25.
- MIT 6.241J Lecture 8: Solutions of State-space Models - First pass - official notes for state-transition and forced-response language. Checked 2026-04-25.
- MIT 6.241J Lecture 20: Reachability and Observability - First pass - official notes for the first input/output feasibility tests in the module. Checked 2026-04-25.
- MIT 6.241J Lecture 23: Feedback stabilization - First pass - official notes for why stabilization needs feedback rather than only open-loop steering. Checked 2026-04-25.
- MIT 16.30 Topic 9: State-space model features - First pass - official notes emphasizing controllability, observability, and hidden state directions. Checked 2026-04-25.
- MIT 16.30 Topic 11: Full-state feedback control - First pass - official notes for shaping the closed-loop poles with state feedback. Checked 2026-04-25.
- MIT 16.30 Topic 12: Pole placement approach - First pass - official notes for practical pole-placement tradeoffs. Checked 2026-04-25.
- MIT 16.30 Topic 18: Linear Quadratic Regulator - First pass - official notes for the LQR problem and the Riccati viewpoint. Checked 2026-04-25.
- MIT 16.323 Lecture 11: Estimators/Observers - First pass - official notes page for observer and optimal-estimation structure. Checked 2026-04-25.
- MIT 16.323 Lecture 16: Model Predictive Control - First pass - official notes page for finite-horizon receding-horizon control. Checked 2026-04-25.
- MIT 16.30/31 Recitation 10 - First pass - official notes for the Kalman gain, covariance recursion, and innovation form. Checked 2026-04-25.
- MIT 6.435: System Identification - First pass - official course page for the system-identification side of learning-based control. Checked 2026-04-25.
- MIT 6.435 lecture notes - Second pass - official notes index for identifiability, prediction-error methods, and recursive estimation. Checked 2026-04-25.
- MIT 6.435 syllabus - Second pass - official syllabus showing the broader structure of a full system-identification course. Checked 2026-04-25.
- MIT 16.30: Feedback Control Systems - Second pass - official course page for the broader feedback-design arc. Checked 2026-04-25.
- MIT 16.323: Principles of Optimal Control lecture notes - Second pass - official notes index for discrete/continuous LQR and optimal-control formulations. Checked 2026-04-25.
- Stanford EE363 bulletin - Second pass - official current course description connecting the module's later topics in one place. Checked 2026-04-25.
- Stanford EE263 bulletin - Second pass - official archived description of the state-space prerequisite layer. Checked 2026-04-25.
- Stanford AA203 bulletin - Second pass - official archived description connecting optimal control with learning-based directions. Checked 2026-04-25.
- Stanford AA273 bulletin - Second pass - official current course description for modern state estimation and filtering applications. Checked 2026-04-25.
- Stanford EE364b course information - Second pass - official recent course page for optimization methods that feed directly into control applications. Checked 2026-04-25.
- Stanford EE364b: Lecture Slides and Notes - Second pass - official lecture index showing where MPC fits inside convex optimization and control. Checked 2026-04-25.
- Stanford EE364b: Model Predictive Control slides - Second pass - official slides for convex finite-horizon control and receding-horizon policy design. Checked 2026-04-25.
- Stanford EE365: Stochastic Control - Second pass - official course page for Bellman-style sequential decision-making and stochastic control. Checked 2026-04-25.
- Stanford EE365 lecture slides - Second pass - official lecture index for value functions, policy iteration, and stochastic-control foundations. Checked 2026-04-25.
- Stanford ENGR105 bulletin - Second pass - official current description of feedback control design for stability and response specifications. Checked 2026-04-25.
- Stanford ENGR205 bulletin - Second pass - official current description that explicitly includes pole placement and observer design. Checked 2026-04-25.