Machine Learning
machine learning, applications, roadmap, theory bridge
1 Why This Section Exists
Many readers do not want math in the abstract. They want to know:
- where it shows up in ML
- which math topics matter first
- what to read next if they care about theory rather than only implementation
This hub answers those questions without trying to become a full ML textbook.
The rule for this section is simple:
every ML page should point back to the exact math objects it uses.
2 What Machine Learning Keeps Reusing
Across models, training pipelines, and theory papers, the same mathematical objects keep returning:
- vectors, linear maps, projections, and low-rank structure
- probability, conditioning, and concentration
- estimators, validation, and generalization language
- optimization objectives, gradients, and regularization
- graph and spectral structure in structured or relational data
If you can identify those objects quickly, ML papers stop feeling like disconnected tricks.
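As a minimal toy sketch of three of those recurring objects working together (an objective, its gradient, and an L2 regularizer), here is gradient descent on a one-dimensional regularized quadratic. The function names and constants are illustrative, not taken from any page on the site:

```python
# Toy objective: squared error around 3.0 plus an L2 penalty on w.
def loss(w, lam=0.1):
    return (w - 3.0) ** 2 + lam * w ** 2

def grad(w, lam=0.1):
    # Derivative of the loss above with respect to w.
    return 2.0 * (w - 3.0) + 2.0 * lam * w

w = 0.0
for _ in range(200):
    w -= 0.1 * grad(w)  # plain gradient descent, step size 0.1

# The regularizer pulls the minimizer below the unregularized optimum w = 3;
# the closed form here is w* = 3 / (1 + lam) ≈ 2.727.
```

The point of the sketch is the pattern, not the numbers: the regularization term changes which solution the optimizer prefers, which is exactly the theme the later "Regularization, Implicit Bias, and Model Complexity" page develops.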
3 Start Here By Interest
3.1 If You Want The Shortest Math-to-ML Entry
Start in this order:
3.2 If You Want The Canonical Bridge Route Inside This Section
Start in this order:
- Supervised Learning, Losses, and Empirical Risk
- Optimization for Machine Learning
- Generalization, Overfitting, and Validation
- Regularization, Implicit Bias, and Model Complexity
- Uncertainty Calibration and Predictive Confidence
This is the cleanest first bridge from the site’s math modules into recurring ML questions.
3.3 If You Want Concrete Model Components First
Start with the linear-algebra-heavy bridges:
3.4 If You Want Theory-Flavored ML
Start with:
4 Application Families
4.1 Linear Models, Embeddings, and Projections
These pages show the most reusable linear-algebra layer inside modern ML:
- Representation Learning and Geometry of Embeddings
- Linear Probes and Representation Diagnostics
- Vector Mixtures in Embeddings and Attention
- Learned Linear Projections in Transformers
- Linear Regression Through Projection
- PCA Through SVD
- Low-Dimensional Subspace Models
Use this family when you want to see how vectors, matrices, orthogonality, and low-rank structure become real model components.
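As a minimal sketch of the "regression as projection" view, here is a least-squares fit of a target vector onto a single feature vector, which is exactly the orthogonal projection of y onto span(x). The data and function name are illustrative:

```python
def project_onto(x, y):
    # Projection coefficient: <x, y> / <x, x>.
    coef = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
    return coef, [coef * xi for xi in x]

x = [1.0, 2.0, 3.0]
y = [2.0, 4.1, 5.9]
coef, y_hat = project_onto(x, y)

# The residual is orthogonal to x (inner product ~ 0), which is the
# defining property of the least-squares solution as a projection.
residual = [yi - yh for yi, yh in zip(y, y_hat)]
dot = sum(xi * ri for xi, ri in zip(x, residual))
```

The same geometric picture generalizes to multiple features (projection onto a column space) and, via the SVD, to the low-rank approximations behind PCA.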
4.2 Transformers, Prompting, and In-Context Learning
Use this family when you want to understand how prompts act like temporary task descriptions and why attention-based models can sometimes behave like small learners at inference time:
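One recurring object in this family is attention as a convex mixture of value vectors: scores from query-key dot products, softmax weights, then a weighted sum. A stripped-down, single-query sketch (all names and vectors are illustrative, and real attention adds scaling, multiple heads, and learned projections):

```python
import math

def attention_mixture(query, keys, values):
    # Scores: dot product of the query with each key.
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    # Softmax over scores (shifted by the max for numerical stability).
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Output: convex mixture of the value vectors under those weights.
    dim = len(values[0])
    out = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
    return out, weights

query = [1.0, 0.0]
keys = [[4.0, 0.0], [0.0, 4.0]]
values = [[1.0, 0.0], [0.0, 1.0]]
out, weights = attention_mixture(query, keys, values)
# The query aligns with the first key, so the mixture leans toward values[0].
```

Seen this way, "prompting" changes which keys the query aligns with, and therefore which values dominate the mixture at inference time.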
4.3 Graph and Spectral ML
Use this family when the model depends on graph structure, diffusion, or repeated linear updates:
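The "repeated linear updates" idea can be sketched with a few lines of diffusion on a tiny graph: each step, every node moves toward the average of its neighbors, and repeated application converges toward consensus. The graph and parameter here are illustrative:

```python
def diffuse(values, neighbors, alpha=0.5):
    # One diffusion step: blend each node's value with its neighbor average.
    new = []
    for i, v in enumerate(values):
        avg = sum(values[j] for j in neighbors[i]) / len(neighbors[i])
        new.append((1 - alpha) * v + alpha * avg)
    return new

# Triangle graph: every node neighbors the other two.
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
x = [1.0, 0.0, 0.0]
for _ in range(50):
    x = diffuse(x, neighbors)
# Each step preserves the total mass, so values converge to the mean 1/3.
```

The rate of that convergence is governed by the spectrum of the underlying update matrix, which is why spectral tools keep appearing in graph ML.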
4.4 Training, Complexity, and Generalization
Use this family when the main question is not only fitting the sample, but understanding why training prefers some solutions over others:
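The core contrast in this family can be made concrete with a toy experiment: a lookup-table "memorizer" achieves zero training error on pure-noise targets but loses to a trivial mean predictor on held-out data. Everything here (data, seed, model names) is an illustrative assumption:

```python
import random

rng = random.Random(1)
# Targets are pure noise: there is nothing real to learn.
train = [(i, rng.gauss(0.0, 1.0)) for i in range(200)]
val = [(i, rng.gauss(0.0, 1.0)) for i in range(200)]

memo = dict(train)                                 # memorizes every target
mean = sum(y for _, y in train) / len(train)       # one-number baseline

def mse(data, predict):
    return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

train_memo = mse(train, lambda x: memo[x])   # exactly 0: perfect training fit
val_memo = mse(val, lambda x: memo[x])       # poor: it memorized noise
val_mean = mse(val, lambda x: mean)          # better held out, despite a worse fit
```

Zero training error says nothing by itself; the gap between training and validation error is the quantity this family's pages give language for.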
4.5 Similarity and Kernel ML
Use this family when the model is built from comparisons, similarities, or geometry induced by a kernel:
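The "prediction from similarities" idea can be sketched with a kernel-weighted average of training targets (a Nadaraya-Watson-style smoother, one of the simplest kernel methods; the RBF kernel and data below are illustrative):

```python
import math

def rbf(a, b, gamma=1.0):
    # Similarity decays with squared distance; gamma sets the length scale.
    return math.exp(-gamma * (a - b) ** 2)

def kernel_predict(x_new, xs, ys, gamma=1.0):
    # Prediction is a similarity-weighted average of training targets.
    weights = [rbf(x_new, x, gamma) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

xs = [0.0, 1.0, 2.0]
ys = [0.0, 1.0, 4.0]
pred = kernel_predict(1.0, xs, ys, gamma=5.0)
# With a sharp kernel, the prediction at a training point stays near its label.
```

The model never leaves the geometry the kernel induces: every prediction lies between the smallest and largest training target, weighted by closeness.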
4.6 Surrogate Modeling and Sequential Design
Use this family when each evaluation is expensive and the model must decide where to gather information next:
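The "decide where to gather information next" loop can be sketched with a deliberately crude uncertainty proxy: query the candidate farthest from anything already evaluated. Real surrogate methods use a model's posterior uncertainty instead; this toy rule and its data are assumptions for illustration only:

```python
def next_query(candidates, evaluated):
    # Pick the candidate whose distance to the nearest evaluated point
    # is largest -- a crude stand-in for "where uncertainty is highest".
    return max(candidates, key=lambda c: min(abs(c - e) for e in evaluated))

candidates = [i / 10 for i in range(11)]   # grid on [0, 1]
evaluated = [0.0, 1.0]                     # expensive evaluations so far
nxt = next_query(candidates, evaluated)    # midpoint is farthest from both
```

Swapping the distance proxy for a Gaussian-process posterior variance, or for an acquisition function that trades exploration against exploitation, gives the methods this family's pages cover.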
4.7 Generative Modeling and Denoising
Use this family when the model learns to generate, denoise, or reverse a stochastic corruption process:
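The forward-corruption / reverse-denoising pattern can be sketched in one dimension: corrupt samples with Gaussian noise, then denoise with a linear shrinkage rule. The shrinkage factor below is the MSE-optimal one for a zero-mean Gaussian signal prior; the variances and seed are illustrative assumptions:

```python
import random

def corrupt(x, sigma, rng):
    # Forward process: y = x + Gaussian noise.
    return x + rng.gauss(0.0, sigma)

def denoise(y, signal_var, noise_var):
    # Linear shrinkage toward 0; optimal in MSE for a Gaussian signal prior.
    return (signal_var / (signal_var + noise_var)) * y

rng = random.Random(0)
signal_var, noise_var = 1.0, 0.25
xs = [rng.gauss(0.0, signal_var ** 0.5) for _ in range(20000)]
ys = [corrupt(x, noise_var ** 0.5, rng) for x in xs]

mse_raw = sum((y - x) ** 2 for x, y in zip(xs, ys)) / len(xs)
mse_den = sum((denoise(y, signal_var, noise_var) - x) ** 2
              for x, y in zip(xs, ys)) / len(xs)
# Shrinking toward the prior mean reduces error relative to the noisy input.
```

Diffusion models iterate a learned, nonlinear version of this step many times; the toy linear case shows why reversing a known corruption is a well-posed estimation problem at all.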
4.8 Research Bridges Already On The Site
These pages are useful when you want the first step from stable math into current ML-flavored research:
5 How To Move Beyond The First Bridge Route
After the canonical bridge route, branch by family instead of reading the whole section linearly.
- choose Transformers, Prompting, and In-Context Learning if you care about sequence models and inference-time adaptation
- choose Training, Complexity, and Generalization if you care about why models fit and generalize
- choose Graph and Spectral ML if you care about structured data and message passing
- choose Similarity and Kernel ML if you care about nonlinear prediction through geometry
- choose Generative Modeling and Denoising if you care about diffusion, score, and transport views
The family blocks above are the main navigation layer. They are better first guides than a long page inventory.
6 How To Use This Section
- Use Topics when you want the math itself.
- Use Applications > Machine Learning when you want the math-to-ML translation layer.
- Use AI / ML Theory Roadmap when you want a dependency-aware study order.
- Use Paper Lab when you want to practice reading actual papers in parallel.
7 Sources and Further Reading
- CS229: Machine Learning - First pass - official current Stanford ML course hub showing the breadth of mainstream ML topics. Checked 2026-04-24.
- CS 189 Syllabus - First pass - official Berkeley syllabus with a strong math-to-ML framing and prerequisite expectations. Checked 2026-04-24.
- Mathematics for Machine Learning - Second pass - a stable math bridge for readers moving from foundations into ML-facing formulations. Checked 2026-04-24.
- CS229T / Statistical Learning Theory - Paper bridge - a compact theory-facing anchor for readers aiming beyond introductory ML. Checked 2026-04-24.