Math for Research
  • Start Here
    • How to Use This Site
    • Math Diagnostic
    • Notation and Symbols
    • How to Write Proofs
    • How to Read Theory Papers
  • Roadmaps
    • Core Foundation Roadmap
    • Theory Reading Track
    • AI / ML Theory Roadmap
    • Engineering Systems Roadmap
  • Topics
    • Foundation Topics
    • Proofs
      • Statements and Quantifiers
      • Direct Proof
      • Contrapositive and Contradiction
      • Induction
      • Counterexamples and Proof Debugging
      • Proof-writing Clinic
    • Logic
      • Propositional Logic
      • Logical Equivalence and Negation
      • Predicate Logic
      • Sets, Functions, and Relations
      • Translation Between English and Symbols
    • Algebra Repair
      • Expressions and Equations
      • Functions and Graph Reading
      • Exponents, Logarithms, and Growth
      • Trig and Complex Numbers
      • Symbol Manipulation Lab
    • Discrete Math
      • Counting and Combinatorics
      • Recurrences and Asymptotics
      • Graphs and Trees
      • Discrete Probability Bridge
      • Number Theory Basics
    • Single-Variable Calculus
      • Limits and Continuity
      • Derivatives and Local Approximation
      • Integrals and Accumulation
      • Sequences and Series
      • Taylor Expansion
    • Multivariable Calculus
      • Partial Derivatives and Gradients
      • Chain Rule and Linearization
      • Jacobians and Hessians
      • Multiple Integrals
      • Constrained Optimization
      • Vector Fields and Divergence / Curl
    • Linear Algebra
      • Vectors and Linear Combinations
      • Matrices and Linear Maps
      • Subspaces, Basis, and Dimension
      • Orthogonality and Least Squares
      • Eigenvalues and Diagonalization
      • SVD and Low-Rank Approximation
    • Probability
      • Sample Spaces, Events, and Conditioning
      • Random Variables and Distributions
      • Expectation, Variance, Covariance
      • Joint, Conditional, and Bayes
      • Law of Large Numbers and CLT
      • Concentration and Common Inequalities
    • Statistics
      • Descriptive Statistics and Data Models
      • Estimation and Bias-Variance
      • Maximum Likelihood and Bayesian Basics
      • Confidence Intervals and Hypothesis Testing
      • Regression and Classification Basics
      • Experimental Design and Model Evaluation
    • Advanced Topics
    • Real Analysis
      • Rigorous Convergence
      • Continuity, Compactness, and Completeness
      • Sequences and Series of Functions
      • Differentiation and Integration as Theorems
      • Fixed-Point, Implicit, and Inverse Function Ideas
    • Optimization
      • Convex Sets and Separation
      • Convex Functions and Subgradients
      • Constrained Optimization, KKT, and Lagrangians
      • Unconstrained First-Order Methods
      • Duality and Certificates
    • Numerical Methods
      • Floating-Point, Conditioning, and Backward Error
      • Numerical Linear Systems and Factorizations
      • Iterative Methods and Preconditioning
      • Numerical Least Squares and Regularization
      • Eigenvalue and SVD Computation
      • Approximation, Differentiation, Integration, and Error Control
      • Time-Stepping for ODEs and Stability
    • Signal Processing and Estimation
      • Signals, Convolution, and Linear Time-Invariant Systems
      • Fourier Analysis, Frequency Response, and Spectral Views
      • Sampling, Aliasing, and Reconstruction
      • Noise Models, Wiener Filtering, and MMSE Estimation
      • State Estimation, Smoothing, and Hidden-State Inference
      • Inverse Problems, Deconvolution, and Regularized Recovery
      • Signal Processing Bridges to Communication, Sensing, and Modern ML
    • ODEs and Dynamical Systems
      • First-Order ODEs, Existence, and Solution Curves
      • Second-Order Systems, State Variables, and Reduction to First Order
      • Linear Systems, Matrix Exponentials, and Modes
      • Phase Portraits, Equilibria, and Local Stability
      • Lyapunov Functions, Invariant Sets, and Long-Time Behavior
      • Discretization, Time-Stepping, and the Bridge to Control
      • Research Bridges: Reverse-Time SDEs, Probability-Flow ODEs, Flow Matching, and Control
    • Control and Dynamics
      • State-Space Models, Inputs, and Outputs
      • Controllability, Reachability, and Observability
      • Feedback, Stability, and Pole Placement
      • Linear Quadratic Regulation and Riccati Intuition
      • Estimation, Kalman Filtering, and the Separation Principle
      • Model Predictive Control and Constraint Handling
      • Learning-Based Control, System Identification, and RL Bridges
    • Matrix Analysis
      • Norms and Operator Norms
      • Positive Semidefinite Matrices and Quadratic Forms
      • Spectral Inequalities and Variational Principles
      • Perturbation and Stability
      • Trace, Determinant, and Matrix Functions
    • Learning Theory
      • ERM, Population Risk, and Hypothesis Classes
      • PAC Learning, Sample Complexity, and the Learning Setup
      • VC Dimension and Shattering
      • Uniform Convergence and Generalization Bounds
      • Rademacher Complexity and Data-Dependent Capacity
      • Algorithmic Stability and Regularization
      • Generalization in Modern Regimes
    • High-Dimensional Probability
      • Concentration Beyond Basics
      • Sub-Gaussian and Sub-Exponential Variables
      • Random Vectors, Isotropy, and Norms
      • Random Matrices and Spectral Concentration
      • High-Dimensional Phenomena
      • High-Dimensional Probability for Learning Theory and Modern ML
    • High-Dimensional Statistics
      • Sparsity and Regularization
      • Lasso and Compressed Sensing Basics
      • Design Geometry: Restricted Eigenvalues, Coherence, and RIP
      • High-Dimensional Regression
      • Covariance, PCA, and Spectral Estimation in High Dimension
      • Minimax and Lower Bounds
      • Inference in High Dimension
    • Information Theory
      • Entropy, Cross-Entropy, and KL Divergence
      • Mutual Information, Conditional Entropy, and Data Processing
      • Typicality, Source Coding, and Compression Intuition
      • Channel Coding, Capacity, and Converse Proofs
      • Rate-Distortion and Representation Tradeoffs
      • Variational Objectives, ELBO, and Information Bounds
      • Information-Theoretic Lower Bounds in Statistics, Learning, and Communication
    • Stochastic Processes
      • Markov Chains and Stationary Distributions
      • Martingales and Optional Stopping Intuition
      • Poisson Processes and Counting Models
      • Brownian Motion and Diffusion Intuition
      • SDEs and Ito Intuition
      • Mixing, Ergodicity, and MCMC Bridges
    • Stochastic Control and Dynamic Programming
      • Controlled Markov Models, Policies, and Cost Functionals
      • Finite-Horizon Dynamic Programming and Backward Induction
      • Infinite-Horizon Value Functions, Bellman Equations, and Contractions
      • Value Iteration, Policy Iteration, and Approximate Dynamic Programming
      • Stochastic Linear Systems, LQG, and the Separation Principle
      • Continuous-Time Stochastic Control and Hamilton-Jacobi-Bellman Intuition
      • Partial Observability, Belief States, and RL/Control Bridges
  • Applications
    • Machine Learning
      • Core Route
        • Supervised Learning, Losses, and Empirical Risk
        • Optimization for Machine Learning
        • Generalization, Overfitting, and Validation
        • Regularization, Implicit Bias, and Model Complexity
        • Uncertainty Calibration and Predictive Confidence
      • Model Components
        • Representation Learning and Geometry of Embeddings
        • Linear Probes and Representation Diagnostics
        • Backpropagation and Computation Graphs
        • Attention, Softmax, and Weighted Mixtures
        • In-Context Learning and Linearization
      • Graphs
        • Graph Diffusion and Message Passing
        • Oversmoothing, Depth, and Graph Sampling
        • Graph Rewiring, Homophily, and Heterophily
        • Long-Range Dependence and Oversquashing in Graphs
      • Kernels and Surrogates
        • Kernel Methods and Similarity Geometry
        • Kernel Ridge and Gaussian-Process Intuition
        • Bayesian Optimization and Surrogate Modeling
      • Generative Models
        • Diffusion Models and Denoising
        • Score Matching and the SDE View of Diffusion
        • Flow Matching and Transport Views of Generation
    • Control and Dynamics
      • State, Sensing, and Actuation
      • Feedback and Stability in Real Systems
      • Estimation under Noise
      • Optimal Control and Trajectory Planning
      • Constraints, MPC, and Safe Operation
      • Learning, Identification, and RL Bridges
    • Optimization and Inference
      • Measurements, Models, and Hidden Variables
      • Likelihoods, Priors, and MAP Estimation
      • Filtering, Smoothing, and Hidden-State Inference
      • Variational Inference, ELBO, and Tractable Approximation
      • Sampling, Mixing, and MCMC for Inference
      • Bayesian Optimization, Active Sensing, and Information Gathering
    • Scientific Computing
      • Models, Discretization, and Simulation Loops
      • Time-Stepping, Stiffness, and Solver Choice
      • Linear Systems, Conditioning, and Stable Computation
      • Approximation, Quadrature, and Error Control in Practice
      • Inverse Problems, Parameter Estimation, and Data Assimilation
      • Scientific ML, Surrogates, and Computation-Physics Bridges
    • Signal and Communication
      • Signals, Channels, and Noisy Measurements
      • Filtering, Denoising, and Estimation in Communication Systems
      • Sampling, Bandwidth, and Reconstruction in Practice
      • Detection, Decoding, and Error Tradeoffs
      • Inverse Problems, Sensing, and Reconstruction
      • Modern Bridges: Representation Learning, Sensing, and Communication
  • Paper Lab
    • How to Read a Paper
    • Notation Translation
    • Theorem Decoder
    • Dependency Maps
    • Reading Trails
  • Research
    • Theorem Families
    • Directions
    • Surveys
    • Venues
  • Publication
    • Venue Map
    • How Top-Venue Papers Are Shaped
    • Claim-Evidence Matrix
    • Related Work and Positioning
    • Theorem-to-Experiment Alignment
    • Writing Theory Sections
    • Writing Experiment Sections
    • Reproducibility Checklist
    • Review and Rebuttal
  • Library
    • Books
    • Courses
    • Notes
    • Papers
  • Notes
    • When a Topic Starts to Feel Reusable

Math for CS / AI / Engineering Research

A public knowledge system for foundations, theory-heavy papers, and research workflow

Build the math that lets you read hard papers calmly

This site is designed for people who want more than scattered notes. The goal is to make each topic understandable on its own while still connecting it to proofs, algorithms, applications, and research directions.

Start Here · See the Roadmaps · Open Paper Lab

Foundations · Proofs · Applications · Paper Reading · Publication Workflow

What This Site Is Trying To Do

This site is built around a simple promise: each topic should help you answer three questions.

  1. What is the math?
  2. Why does it matter?
  3. How is it used in papers, models, and research?

The structure is intentionally topic-first. Books, courses, blogs, and school-specific syllabi are used as references inside pages, not as the organizing principle of the site.

Choose a Starting Path

Build foundations

If your algebra, proof writing, calculus, or probability feels uneven, start with the guided foundation track and repair weak links early.

Open the Foundation Roadmap

Understand a topic

If you already know roughly what you need, jump into a topic page and use the built-in flow: intuition, formal core, proof pattern, application, paper bridge.

Browse Topics

Read a hard paper

If you are blocked by notation, theorem dependencies, or hidden assumptions, use the paper lab and research pages to backfill exactly what you need.

Open Paper Lab

The Site Spine

Start Here

Diagnostic pages, notation, proof writing, and how to read theory-heavy papers without getting lost.

Open Start Here

Topics

Foundation and advanced modules that move from definitions to theorems to applications and research.

Open Topics

Applications

Machine learning, optimization, scientific computing, signal processing, control, and related bridges.

Open Applications

Paper Lab

Reading trails, theorem decoding, assumption tracing, and research-guided walkthroughs.

Open Paper Lab

Publication

How strong papers align claims, theorems, experiments, and positioning without turning the site into prestige theater.

Open Publication

Library

Curated books, notes, and courses grouped by use case rather than dumped as an unstructured list.

Open Library

First Pages Worth Opening

  • How to Use This Site
  • Math Diagnostic
  • Core Foundation Roadmap
  • Proofs Module Overview
  • Linear Algebra Module Overview
  • Probability Module Overview
  • Claim-Evidence Matrix

Design Rule

The site only works if each mature page respects the same loop:

Learn -> Prove -> Solve -> Apply -> Read

That is how the site stays useful for both deep study and urgent research reading.
