Stochastic Processes

Randomness evolving over time through Markov chains, martingales, counting processes, Brownian motion, SDEs, and the long-run behavior that underpins MCMC.
Modified April 26, 2026

Keywords

stochastic processes, Markov chains, martingales, Brownian motion, SDEs

1 Why This Module Matters

Classical probability teaches random variables, conditioning, expectations, concentration, and a few asymptotic theorems.

Stochastic processes ask a different kind of question:

  • what if randomness evolves over time?
  • what if the present depends on the past?
  • what if long-run behavior matters more than one-step uncertainty?
  • what if the right object is not one random variable, but an entire random path?

That is why stochastic-process language keeps reappearing in:

  • queues and networks
  • hidden-state models and filtering
  • stochastic control and RL
  • Brownian motion and diffusion
  • MCMC and sampling algorithms

This module is the bridge from ordinary probability into time-indexed randomness.

Prerequisites Probability should come first. Linear Algebra helps because transition operators and stationary distributions are often matrix objects. Real Analysis helps later once paths, limits, and continuous-time objects become more explicit.
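To make the Linear Algebra connection concrete: a stationary distribution is a left eigenvector of the transition matrix with eigenvalue 1. A minimal sketch, assuming an illustrative 3-state transition matrix (the matrix itself is not from this module):

```python
import numpy as np

# A hypothetical 3-state transition matrix (each row sums to 1).
P = np.array([
    [0.9, 0.1, 0.0],
    [0.2, 0.7, 0.1],
    [0.1, 0.3, 0.6],
])

# A stationary distribution pi satisfies pi P = pi, i.e. pi is a left
# eigenvector of P with eigenvalue 1 -- equivalently, an eigenvector of P.T.
eigvals, eigvecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(eigvals - 1.0))  # pick the eigenvalue closest to 1
pi = np.real(eigvecs[:, idx])
pi = pi / pi.sum()  # normalize to a probability vector

print(pi)  # invariant under one step: pi @ P == pi
```

This is exactly the "transition operators and stationary distributions are matrix objects" point: the long-run behavior of the chain is read off from the spectrum of a matrix.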

Unlocks Markov chains, martingales, Poisson processes, Brownian motion, SDEs, long-run behavior, and MCMC bridges

Research Use Reading papers or courses on stochastic control, sequential inference, diffusion models, MCMC, random walks, and stochastic modeling

2 First Pass Through This Module

The first-pass spine for this module is:

  1. Markov Chains and Stationary Distributions
  2. Martingales and Optional Stopping Intuition
  3. Poisson Processes and Counting Models
  4. Brownian Motion and Diffusion Intuition
  5. SDEs and Ito Intuition
  6. Mixing, Ergodicity, and MCMC Bridges
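As a small preview of item 3 above, a Poisson counting process can be simulated from its defining property that inter-arrival times are independent exponentials. The rate and horizon below are illustrative choices, not values from the module:

```python
import numpy as np

rng = np.random.default_rng(0)

# Poisson process with rate lam: inter-arrival gaps are iid Exponential(lam).
lam, horizon = 2.0, 1000.0
gaps = rng.exponential(scale=1.0 / lam, size=int(3 * lam * horizon))
arrivals = np.cumsum(gaps)          # arrival times of the counting process
count = np.searchsorted(arrivals, horizon)  # N(horizon): arrivals before t

# Long-run check: the empirical rate N(t)/t should be close to lam.
print(count / horizon)  # roughly 2.0
```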

All six pages in this first-pass spine are now live.

3 How To Use This Module

The best current first-pass path is:

  1. start with Markov Chains and Stationary Distributions
  2. read Martingales and Optional Stopping Intuition when conditional expectation, stopping rules, or “fairness given current information” becomes the main idea
  3. continue to Poisson Processes and Counting Models and Brownian Motion and Diffusion Intuition if you want the two main continuous-time randomness lenses: jumps and diffusion
  4. use SDEs and Ito Intuition once drift-plus-noise language starts appearing in control or diffusion-style ML
  5. use Mixing, Ergodicity, and MCMC Bridges once long-run sampling, dependent averages, or MCMC becomes the main question
  6. keep Probability and Linear Algebra nearby whenever conditioning, expectation, transition operators, or stationary distributions need re-grounding
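The drift-plus-noise language in step 4 can be previewed with a basic Euler-Maruyama loop. This is a sketch under illustrative assumptions: an Ornstein-Uhlenbeck SDE dX = -theta X dt + sigma dW, with parameter values chosen only for demonstration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler-Maruyama discretization of dX = -theta * X dt + sigma dW.
# theta, sigma, dt, and the step count are illustrative assumptions.
theta, sigma, dt, n_steps = 1.0, 0.5, 0.01, 10_000
x = np.empty(n_steps + 1)
x[0] = 2.0  # start away from equilibrium
for k in range(n_steps):
    dW = rng.normal(scale=np.sqrt(dt))          # Brownian increment
    x[k + 1] = x[k] + (-theta * x[k]) * dt + sigma * dW

# The drift pulls the path toward 0; the noise keeps it fluctuating
# around it (stationary variance sigma**2 / (2 * theta) in theory).
print(x[-2000:].mean())
```

The same drift-plus-noise pattern, with learned drift terms, is what diffusion-style ML builds on.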

The design goal is to make stochastic evolution itself feel natural before the module branches into martingales, diffusions, and MCMC-like long-run behavior.

4 Core Concepts

5 Proof Patterns In This Module

  • One-step to long-run: use a local transition rule to study global behavior over many steps.
  • Conditional expectation as structure: replace pathwise complexity with cleaner conditional objects.
  • Invariant or stationary objects: identify distributions or functionals that remain stable under the dynamics.
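The first and third patterns can be seen numerically at once: iterating a one-step transition rule at the distribution level drives any starting distribution toward the invariant one. A sketch with an illustrative 2-state matrix (not from the module text):

```python
import numpy as np

# "One-step to long-run": apply the local transition rule many times
# and watch the global distribution stabilize.
P = np.array([[0.5, 0.5],
              [0.2, 0.8]])
mu = np.array([1.0, 0.0])  # start in state 0 with probability 1

for _ in range(50):
    mu = mu @ P  # one step of the chain, at the distribution level

# mu has converged to the stationary distribution (2/7, 5/7),
# the invariant object for this dynamics.
print(mu)
```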

6 Applications

6.1 Random Dynamics Over Time

The module organizes systems where randomness is not one-shot but evolves step by step or continuously over time.

6.2 Sequential Inference And Hidden State

Markov structure is the natural backbone for filtering, HMM-style reasoning, and many latent-state models.

6.3 Control, RL, Diffusion, And MCMC

This is the language that later supports Markov decision processes, stochastic control, diffusion modeling, and sampling algorithms.
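A minimal sampling sketch of that bridge: a random-walk Metropolis chain whose long-run, dependent averages recover the target distribution. The target (a standard normal), step size, and sample counts are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def log_target(x):
    # Unnormalized log-density of a standard normal (illustrative target).
    return -0.5 * x * x

# Random-walk Metropolis: propose a Gaussian step, accept with
# probability min(1, target(prop) / target(x)).
x, samples = 0.0, []
for _ in range(50_000):
    prop = x + rng.normal(scale=1.0)
    if np.log(rng.uniform()) < log_target(prop) - log_target(x):
        x = prop
    samples.append(x)

samples = np.array(samples[5_000:])  # drop burn-in
print(samples.mean(), samples.var())  # near 0 and 1 for a long run
```

The chain's samples are dependent, which is exactly why mixing and ergodicity questions matter: they govern how fast these time averages converge.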

7 Go Deeper By Topic

The strongest adjacent live pages right now are:

8 Optional Deeper Reading After First Pass

The strongest current references connected to this module are:

  • MIT 18.445 lecture notes page - official MIT route through Markov chains, martingales, mixing, Poisson processes, and beyond. Checked 2026-04-25.
  • MIT 6.262 Discrete Stochastic Processes - official MIT course hub for discrete-time chains, recurrence, and countable-state behavior. Checked 2026-04-25.
  • Stanford Stats 218 - official Stanford stochastic-processes page with martingales and diffusion-facing topics. Checked 2026-04-25.
  • Stanford MS&E 221 - official Stanford stochastic-modeling course page covering discrete- and continuous-time Markov chains and renewal-style models. Checked 2026-04-25.

9 Sources and Further Reading

  • MIT 18.445 lecture notes page - First pass - official MIT lecture hub for a broad first route through stochastic processes. Checked 2026-04-25.
  • MIT 18.445 lecture 2 - First pass - official MIT note on Markov chains and stationary distributions. Checked 2026-04-25.
  • MIT 6.262 Discrete Stochastic Processes - Second pass - official MIT course hub for discrete stochastic-process language. Checked 2026-04-25.
  • MIT 6.262 chapter 5 resource - Second pass - official MIT chapter resource for countable-state Markov chains. Checked 2026-04-25.
  • Stanford Stats 366 Markov chains notes - Second pass - useful Stanford note for a concise first-pass view of discrete-time chains and stationary behavior. Checked 2026-04-25.
  • Stanford Stats 218 - Bridge outward - useful Stanford page once the module broadens into martingales and diffusions. Checked 2026-04-25.
  • Stanford MS&E 221 - Bridge outward - useful Stanford stochastic-modeling anchor once discrete and continuous-time models are both in play. Checked 2026-04-25.