Math for Research

Applications

Why This Section Exists

Math becomes far easier to retain when you keep seeing where it actually appears.

This section organizes application bridges by destination area rather than by pure math topic.

Start here when you want to answer:

  • Where does this math actually show up?
  • Which pages are the best bridge from foundations into a field?
  • What should I read if I care about ML, systems, control, or scientific computing?

Available Application Hubs

Machine Learning

Representation learning, optimization, generalization, kernels, graphs, and generative modeling bridges.

Open Machine Learning

Control and Dynamics

State, sensing, feedback, estimation, constraints, and learning-aware control.

Open Control and Dynamics

Optimization and Inference

Hidden variables, MAP estimation, filtering, variational inference, sampling, and information gathering.

Open Optimization and Inference

Scientific Computing

Models, discretization, simulation loops, inverse problems, and computation-aware scientific reasoning.

Open Scientific Computing

Signal and Communication

Channels, noise, filtering, sampling, detection, sensing, and representation-learning bridges.

Open Signal and Communication

Best First Routes By Goal

  • If you want the broadest ML-facing bridge, start with Machine Learning.
  • If you care about physical systems, start with Control and Dynamics.
  • If your main bottleneck is hidden variables, uncertainty, or posterior approximation, start with Optimization and Inference.
  • If your work begins with a model equation or simulation loop, start with Scientific Computing.
  • If your work begins with signals, channels, or noisy measurements, start with Signal and Communication.

The design rule is simple: every serious application page should point back to the exact math objects it uses.
