Numerical Methods

How exact mathematical objects turn into finite-precision computations, and how conditioning, stability, and error analysis decide whether those computations can be trusted.
Modified

April 26, 2026

Keywords

numerical methods, floating point, conditioning, stability, scientific computing

1 Why This Module Matters

Pure mathematics usually presents objects as exact:

  • a matrix is exactly A
  • a solution is exactly x
  • a derivative is exactly f'(x)
  • an eigenvalue is exactly λ

Numerical methods ask what survives when those objects are computed on an actual machine.

That shift changes the questions.

Now we care about:

  • floating-point approximation
  • cancellation and roundoff
  • conditioning of the underlying problem
  • stability of the algorithm
  • approximation error versus discretization error
  • whether an answer is accurate enough for the use case, not merely symbolic

This module is the bridge from exact mathematical language to trustworthy computation.
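
To make the cancellation point concrete before the first page, here is a minimal Python sketch (standard library only) of two algebraically identical expressions for 1 - cos(x), one of which loses every significant digit in double precision:

```python
import math

# Catastrophic cancellation: for small x, cos(x) rounds to 1.0 in
# double precision, so 1 - cos(x) returns exactly 0.0, while the
# algebraically equal form 2*sin(x/2)**2 keeps full precision.
x = 1e-8

naive = 1.0 - math.cos(x)             # 0.0: all significant digits lost
stable = 2.0 * math.sin(x / 2) ** 2   # ~5e-17: accurate to full precision

print(naive)   # 0.0
print(stable)  # ~5e-17
```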

Prerequisites Linear Algebra and Single-Variable Calculus should come first. Multivariable Calculus, Optimization, and Matrix Analysis make the later pages much more useful, but they are not required to start the opening page.

Unlocks Scientific computing, solver literacy, error analysis, numerical linear algebra, stability language for ODEs and control

Research Use Reading papers that discuss conditioning, backward stability, iterative solvers, approximation error, discretization, or computational tradeoffs

2 First Pass Through This Module

The intended first-pass spine for this module is:

  1. Floating-Point, Conditioning, and Backward Error
  2. Numerical Linear Systems and Factorizations
  3. Iterative Methods and Preconditioning
  4. Numerical Least Squares and Regularization
  5. Eigenvalue and SVD Computation
  6. Approximation, Differentiation, Integration, and Error Control

This six-page first-pass spine is now complete. Together, these pages cover numerical error language, direct and iterative linear-system methods, least squares, regularization, spectral computation, and the approximation viewpoint behind differentiation, integration, and practical error control.

One optional extension page pushes the module toward dynamics and simulation:

  1. Time-Stepping for ODEs and Stability

3 How To Use This Module

Read this module as a computation layer rather than as a replacement for exact math.

The default reading path is:

  1. start with Floating-Point, Conditioning, and Backward Error
  2. continue to Numerical Linear Systems and Factorizations
  3. continue to Iterative Methods and Preconditioning
  4. continue to Numerical Least Squares and Regularization
  5. continue to Eigenvalue and SVD Computation
  6. continue to Approximation, Differentiation, Integration, and Error Control
  7. if you want the ODE and simulation bridge, continue to Time-Stepping for ODEs and Stability
  8. use nearby live pages in Optimization, Matrix Analysis, and High-Dimensional Statistics whenever a page talks about conditioning, spectra, or estimation error

The module should stay focused on a compact first-pass toolkit:

  • finite precision
  • conditioning and stability
  • core linear algebra computations
  • approximation and error control

4 Core Concepts

5 Proof Patterns In This Module

  • Backward-error view: reinterpret a computed answer as the exact answer to a nearby problem.
  • Conditioning amplifies perturbations: separate the sensitivity of the problem from the quality of the algorithm (illustrated in the sketch after this list).
  • Error decomposition: split total error into approximation, roundoff, and iteration or discretization pieces.
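
A minimal NumPy sketch of the first two patterns, using a Hilbert matrix as a stock ill-conditioned example (the size n = 12 is an arbitrary illustrative choice): the computed solution has a tiny residual, so it is the near-exact answer to a nearby problem, yet the forward error is amplified by roughly cond(A).

```python
import numpy as np

# Conditioning vs. stability on a classically ill-conditioned solve.
n = 12
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = A @ x_true

x_hat = np.linalg.solve(A, b)  # LU with partial pivoting under the hood

residual = np.linalg.norm(b - A @ x_hat) / np.linalg.norm(b)
forward = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)

print(f"cond(A)       ~ {np.linalg.cond(A):.2e}")  # ~1e16
print(f"residual      ~ {residual:.2e}")           # tiny: backward picture is fine
print(f"forward error ~ {forward:.2e}")            # large: the problem is sensitive
```

The small residual says the algorithm behaved well; the large forward error says the problem itself is sensitive. That separation is exactly the backward-error view.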

6 Applications

6.1 Scientific Computing

This is the module where exact mathematics becomes implementable computation with accuracy and cost tradeoffs.

6.2 Optimization And Statistics

Least squares, Hessian methods, covariance estimation, and spectral algorithms all depend on conditioning and solver behavior.
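
As one illustration of that dependence, here is a hedged NumPy sketch comparing an SVD-based least-squares solve against the normal equations on a deliberately ill-conditioned monomial design matrix (the sizes are arbitrary illustrative choices). Forming A^T A squares the condition number:

```python
import numpy as np

# Normal equations square the condition number: cond(A^T A) = cond(A)^2.
m, n = 100, 8
t = np.linspace(0.0, 1.0, m)
A = np.vander(t, n, increasing=True)  # monomial basis: ill-conditioned on purpose
x_true = np.ones(n)
b = A @ x_true

x_svd, *_ = np.linalg.lstsq(A, b, rcond=None)  # SVD-based solver
x_ne = np.linalg.solve(A.T @ A, A.T @ b)       # normal equations

print(f"cond(A)         ~ {np.linalg.cond(A):.2e}")
print(f"cond(A^T A)     ~ {np.linalg.cond(A.T @ A):.2e}")  # roughly the square
print(f"lstsq error     ~ {np.linalg.norm(x_svd - x_true):.2e}")
print(f"normal-eq error ~ {np.linalg.norm(x_ne - x_true):.2e}")
```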

6.3 Engineering And Dynamics

Later ODE, control, and simulation pages need numerical stability language, not only exact differential-equation theory.

7 Go Deeper By Topic

The main starting path is:

  1. Floating-Point, Conditioning, and Backward Error
  2. Numerical Linear Systems and Factorizations
  3. Iterative Methods and Preconditioning
  4. Numerical Least Squares and Regularization
  5. Eigenvalue and SVD Computation
  6. Approximation, Differentiation, Integration, and Error Control

If you want the bridge from numerical methods into ODEs, dynamics, or control, the next live page is:

  1. Time-Stepping for ODEs and Stability

The strongest adjacent pages are:

  • Optimization
  • Matrix Analysis
  • High-Dimensional Statistics

8 Optional Deeper Reading After First Pass

The strongest current references connected to this module are:

  • MIT 18.335J: Introduction to Numerical Methods - official current MIT OCW course page covering floating point, conditioning, stability, and core numerical linear algebra. Checked 2026-04-25.
  • Cornell CS4220: Numerical Analysis - official current course introduction describing the scientific-computing viewpoint of solving continuous-math problems fast enough and accurately enough. Checked 2026-04-25.
  • Stanford CME108 bulletin - current official course description showing a numerical-methods path that connects error analysis, linear systems, optimization, and ODEs. Checked 2026-04-25.

9 Study Order

For the current module state, read:

  1. Floating-Point, Conditioning, and Backward Error
  2. Numerical Linear Systems and Factorizations
  3. Iterative Methods and Preconditioning
  4. Numerical Least Squares and Regularization
  5. Eigenvalue and SVD Computation
  6. Approximation, Differentiation, Integration, and Error Control

and then, if you are heading toward simulation or ODEs,

  1. Time-Stepping for ODEs and Stability

before reasoning about solver output, residuals, or floating-point implementations elsewhere on the site.

You are ready to move deeper into this module when you can:

  • explain why finite precision is a modeling issue, not only a programming nuisance
  • distinguish forward error, backward error, and residual
  • explain why conditioning belongs to the problem while stability belongs to the algorithm
  • explain why a backward-stable algorithm can still produce a poor forward answer on an ill-conditioned problem
  • explain why numerical linear systems are usually solved through factorizations rather than through an explicit matrix inverse
  • explain why iterative methods become attractive for large sparse systems and why preconditioning changes convergence behavior
  • explain why least squares should usually be computed through QR or SVD thinking rather than by blindly forming normal equations
  • explain why eigenvalue and singular-value computations need dedicated iterative ideas rather than just symbolic decomposition formulas
  • explain how truncation or discretization error joins roundoff error in differentiation, integration, and interpolation (see the sketch after this list)
  • explain why time stepping for ODEs adds a new concern: stability over many steps, not only local approximation at one step
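
The truncation-plus-roundoff interplay in the differentiation item can be seen directly. A minimal NumPy sketch, using f(x) = exp(x) as an arbitrary test function: the forward-difference error shrinks with h while truncation dominates, then grows again once cancellation in f(x+h) - f(x) lets roundoff take over, with the best h near sqrt(eps) ~ 1e-8.

```python
import numpy as np

# Forward difference (f(x+h) - f(x)) / h for f = exp at x = 1.
# Total error is roughly h/2 * |f''(x)| + eps * |f(x)| / h:
# truncation falls with h, roundoff rises as 1/h.
x = 1.0
exact = np.exp(x)  # d/dx exp(x) = exp(x)

for h in [1e-2, 1e-4, 1e-6, 1e-8, 1e-10, 1e-12]:
    approx = (np.exp(x + h) - np.exp(x)) / h
    print(f"h = {h:.0e}   error = {abs(approx - exact):.2e}")
# The error bottoms out near h ~ 1e-8 and then grows as roundoff dominates.
```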

10 Sources and Further Reading

  • MIT 18.335J: Introduction to Numerical Methods - First pass - official current numerical-methods course page with the right opening emphasis on accuracy and efficiency. Checked 2026-04-25.
  • Cornell CS4220: Numerical Analysis - First pass - official current course introduction with a clean scientific-computing framing. Checked 2026-04-25.
  • Stanford CME108 bulletin - Second pass - official course description showing how the module naturally grows into interpolation, integration, optimization, and ODEs. Checked 2026-04-25.