Numerical Methods
numerical methods, floating point, conditioning, stability, scientific computing
1 Why This Module Matters
Pure mathematics usually presents objects as exact:
- a matrix is exactly A
- a solution is exactly x
- a derivative is exactly f'(x)
- an eigenvalue is exactly \lambda
The study of numerical methods asks what survives when those objects are computed on a real machine.
That shift changes the questions.
Now we care about:
- floating-point approximation
- cancellation and roundoff
- conditioning of the underlying problem
- stability of the algorithm
- approximation error versus discretization error
- whether an answer is accurate enough for the use case, not merely symbolic
This module is the bridge from exact mathematical language to trustworthy computation.
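To make the cancellation-and-roundoff bullet concrete, here is a minimal sketch in Python. The function and the evaluation point are chosen purely for illustration: the naive formula for (1 - cos x)/x^2 cancels catastrophically for tiny x, while an algebraically equivalent rewrite survives.

```python
import math

x = 1e-9

# Naive formula: cos(x) rounds to 1.0 in double precision, so the
# subtraction destroys all the information in the numerator.
naive = (1.0 - math.cos(x)) / x**2

# Equivalent rewrite using the half-angle identity
# 1 - cos(x) = 2 sin^2(x/2), which avoids the cancellation.
stable = 2.0 * (math.sin(x / 2.0) / x) ** 2

print(naive)   # typically 0.0: the cancellation wiped out the answer
print(stable)  # very close to the true value 0.5
```

Same mathematics, very different floating-point behavior: this is the kind of distinction the first page of the module is about.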
2 First Pass Through This Module
The intended first-pass spine for this module is:
- Floating-Point, Conditioning, and Backward Error
- Numerical Linear Systems and Factorizations
- Iterative Methods and Preconditioning
- Numerical Least Squares and Regularization
- Eigenvalue and SVD Computation
- Approximation, Differentiation, Integration, and Error Control
This six-page first-pass spine is now complete. Together, it covers numerical error language, direct and iterative linear-system methods, least squares, regularization, spectral computation, and the approximation viewpoint behind differentiation, integration, and practical error control.
One optional extension page pushes the module toward dynamics and simulation: Time-Stepping for ODEs and Stability.
3 How To Use This Module
Read this module as a computation layer rather than as a replacement for exact math.
The default reading path is:
- start with Floating-Point, Conditioning, and Backward Error
- continue to Numerical Linear Systems and Factorizations
- continue to Iterative Methods and Preconditioning
- continue to Numerical Least Squares and Regularization
- continue to Eigenvalue and SVD Computation
- continue to Approximation, Differentiation, Integration, and Error Control
- if you want the ODE and simulation bridge, continue to Time-Stepping for ODEs and Stability
- use nearby live pages in Optimization, Matrix Analysis, and High-Dimensional Statistics whenever a page talks about conditioning, spectra, or estimation error
The module should stay focused on a compact first-pass toolkit:
- finite precision
- conditioning and stability
- core linear algebra computations
- approximation and error control
4 Core Concepts
- Floating-Point, Conditioning, and Backward Error: the opening page that explains finite precision, forward versus backward viewpoints, and why conditioning multiplies numerical trouble.
- Numerical Linear Systems and Factorizations: the page where exact linear algebra becomes LU, QR, and Cholesky computation.
- Iterative Methods and Preconditioning: the page where direct factorization gives way to stationary iterations, Krylov intuition, and preconditioning.
- Numerical Least Squares and Regularization: the bridge from overdetermined fitting problems to projection methods, QR/SVD computation, and Tikhonov-style stabilization.
- Eigenvalue and SVD Computation: the page where eigenvalues and singular values stop being only exact decompositions and become numerical targets with their own iterative algorithms.
- Approximation, Differentiation, Integration, and Error Control: the page where interpolation, finite differences, quadrature, and adaptive error control turn approximation into an explicit numerical design problem.
- Time-Stepping for ODEs and Stability: the optional extension page that turns local approximation ideas into explicit and implicit ODE solvers, stability regions, and stiffness intuition.
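As a taste of the "eigenvalues become numerical targets" idea, here is a minimal sketch of power iteration, the simplest iterative eigenvalue algorithm. The 2x2 matrix, starting vector, and iteration count are illustrative choices, not anything prescribed by the pages above.

```python
def matvec(A, v):
    # Dense matrix-vector product for a list-of-lists matrix.
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def power_iteration(A, v, steps=100):
    # Repeatedly apply A and renormalize; the iterate aligns with the
    # eigenvector of the eigenvalue largest in magnitude.
    for _ in range(steps):
        w = matvec(A, v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient estimate of the dominant eigenvalue.
    Av = matvec(A, v)
    lam = sum(x * y for x, y in zip(Av, v))
    return lam, v

A = [[2.0, 1.0], [1.0, 2.0]]   # symmetric; eigenvalues are 3 and 1
lam, v = power_iteration(A, [1.0, 0.0])
print(lam)  # converges to the dominant eigenvalue 3
```

The convergence rate depends on the eigenvalue ratio (here 1/3 per step), which is exactly the kind of behavior the Eigenvalue and SVD Computation page analyzes.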
5 Proof Patterns In This Module
- Backward-error view: reinterpret a computed answer as the exact answer to a nearby problem.
- Conditioning amplifies perturbations: separate the sensitivity of the problem from the quality of the algorithm.
- Error decomposition: split total error into approximation, roundoff, and iteration or discretization pieces.
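The first two patterns can be seen together in a small sketch. The 2x2 system and the perturbed "computed" solution below are made-up illustrations: the perturbation is chosen along the ill-conditioned direction, so the residual (the backward-error proxy) is tiny while the forward error is orders of magnitude larger.

```python
A = [[1.0, 1.0], [1.0, 1.0001]]   # nearly singular, so badly conditioned
b = [2.0, 2.0001]
x_exact = [1.0, 1.0]              # A applied to x_exact gives b exactly
x_hat = [1.01, 0.99]              # a hypothetical computed answer

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# Residual: how far x_hat is from solving a nearby problem.
residual = [bi - yi for bi, yi in zip(b, matvec(A, x_hat))]
res_norm = max(abs(r) for r in residual)

# Forward error: how far x_hat is from the true solution.
fwd_err = max(abs(xh - xe) for xh, xe in zip(x_hat, x_exact))

print(res_norm)  # tiny, about 1e-6: backward view says "nearly solved"
print(fwd_err)   # 0.01: the forward error is about four orders larger
```

The gap between the two numbers is the condition number at work: the problem's sensitivity, not the algorithm, produced the poor forward answer.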
6 Applications
6.1 Scientific Computing
This is the module where exact mathematics becomes implementable computation with accuracy and cost tradeoffs.
6.2 Optimization And Statistics
Least squares, Hessian methods, covariance estimation, and spectral algorithms all depend on conditioning and solver behavior.
6.3 Engineering And Dynamics
Later ODE, control, and simulation pages need numerical stability language, not only exact differential-equation theory.
7 Go Deeper By Topic
The main starting path is:
- Floating-Point, Conditioning, and Backward Error
- Numerical Linear Systems and Factorizations
- Iterative Methods and Preconditioning
- Numerical Least Squares and Regularization
- Eigenvalue and SVD Computation
- Approximation, Differentiation, Integration, and Error Control
If you want the bridge from numerical methods into ODEs, dynamics, or control, the next live page is Time-Stepping for ODEs and Stability.
The strongest adjacent pages are in Optimization, Matrix Analysis, and High-Dimensional Statistics.
8 Optional Deeper Reading After First Pass
The strongest current references connected to this module are:
- MIT 18.335J: Introduction to Numerical Methods - official current MIT OCW course page covering floating point, conditioning, stability, and core numerical linear algebra. Checked 2026-04-25.
- Cornell CS4220: Numerical Analysis - official current course introduction describing the scientific-computing viewpoint of solving continuous-math problems fast enough and accurately enough. Checked 2026-04-25.
- Stanford CME108 bulletin - current official course description showing a numerical-methods path that connects error analysis, linear systems, optimization, and ODEs. Checked 2026-04-25.
9 Study Order
For the current module state, read:
- Floating-Point, Conditioning, and Backward Error
- Numerical Linear Systems and Factorizations
- Iterative Methods and Preconditioning
- Numerical Least Squares and Regularization
- Eigenvalue and SVD Computation
- Approximation, Differentiation, Integration, and Error Control
and then, if you are heading toward simulation or ODEs, Time-Stepping for ODEs and Stability,
before trying to reason casually about solver output, residuals, or floating-point implementations elsewhere on the site.
You are ready to move deeper into this module when you can:
- explain why finite precision is a modeling issue, not only a programming nuisance
- distinguish forward error, backward error, and residual
- explain why conditioning belongs to the problem while stability belongs to the algorithm
- explain why a backward-stable algorithm can still produce a poor forward answer on an ill-conditioned problem
- explain why numerical linear systems are usually solved through factorizations rather than through an explicit matrix inverse
- explain why iterative methods become attractive for large sparse systems and why preconditioning changes convergence behavior
- explain why least squares should usually be computed through QR or SVD thinking rather than by blindly forming normal equations
- explain why eigenvalue and singular-value computations need dedicated iterative ideas rather than just symbolic decomposition formulas
- explain how truncation or discretization error joins roundoff error in differentiation, integration, and interpolation
- explain why time stepping for ODEs adds a new concern: stability over many steps, not only local approximation at one step
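The checklist item about truncation error joining roundoff error can be checked directly. Here is a minimal sketch using a forward difference on f(x) = exp(x) at x = 0 (so the exact derivative is 1); the step sizes are illustrative picks on each side of the tradeoff.

```python
import math

def fwd_diff(f, x, h):
    # First-order forward difference approximation to f'(x).
    return (f(x + h) - f(x)) / h

# Absolute error in the derivative estimate at three step sizes.
errors = {h: abs(fwd_diff(math.exp, 0.0, h) - 1.0)
          for h in (1e-1, 1e-8, 1e-14)}

# h = 1e-1  : truncation error dominates (roughly h/2, about 5e-2)
# h = 1e-8  : near the sweet spot where the two error sources balance
# h = 1e-14 : roundoff dominates, and the error grows again
for h, err in errors.items():
    print(h, err)
</...```

Shrinking h reduces truncation error but amplifies cancellation in the numerator: total error is a sum of two competing pieces, which is the error-decomposition pattern in action.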
10 Sources and Further Reading
- MIT 18.335J: Introduction to Numerical Methods - First pass - official current numerical-methods course page with the right opening emphasis on accuracy and efficiency. Checked 2026-04-25.
- Cornell CS4220: Numerical Analysis - First pass - official current course introduction with a clean scientific-computing framing. Checked 2026-04-25.
- Stanford CME108 bulletin - Second pass - official course description showing how the module naturally grows into interpolation, integration, optimization, and ODEs. Checked 2026-04-25.