Stochastic Processes
stochastic processes, Markov chains, martingales, Brownian motion, SDEs
1 Why This Module Matters
Classical probability teaches random variables, conditioning, expectations, concentration, and a few asymptotic theorems.
Stochastic processes ask a different kind of question:
- what if randomness evolves over time?
- what if the present depends on the past?
- what if long-run behavior matters more than one-step uncertainty?
- what if the right object is not one random variable, but an entire random path?
That is why stochastic-process language keeps reappearing in:
- queues and networks
- hidden-state models and filtering
- stochastic control and RL
- Brownian motion and diffusion
- MCMC and sampling algorithms
This module is the bridge from ordinary probability into time-indexed randomness.
2 First Pass Through This Module
The first-pass spine for this module is:
- Markov Chains and Stationary Distributions
- Martingales and Optional Stopping Intuition
- Poisson Processes and Counting Models
- Brownian Motion and Diffusion Intuition
- SDEs and Ito Intuition
- Mixing, Ergodicity, and MCMC Bridges
All six pages in this spine are now live, from Markov Chains and Stationary Distributions through Mixing, Ergodicity, and MCMC Bridges.
3 How To Use This Module
The best current first-pass path is:
- start with Markov Chains and Stationary Distributions
- read Martingales and Optional Stopping Intuition when conditional expectation, stopping rules, or “fairness given current information” becomes the main idea
- continue to Poisson Processes and Counting Models and Brownian Motion and Diffusion Intuition if you want the two main continuous-time randomness lenses: jumps and diffusion
- use SDEs and Ito Intuition once drift-plus-noise language starts appearing in control or diffusion-style ML
- use Mixing, Ergodicity, and MCMC Bridges once long-run sampling, dependent averages, or MCMC becomes the main question
- keep Probability and Linear Algebra nearby whenever conditioning, expectation, transition operators, or stationary distributions need re-grounding
The design goal is to make stochastic evolution itself feel natural before the module branches into martingales, diffusions, and MCMC-like long-run behavior.
4 Core Concepts
- Markov Chains and Stationary Distributions: the opening page that explains the Markov property, transition matrices, stationary distributions, and first-pass long-run behavior.
- Martingales and Optional Stopping Intuition: the live second page where stochastic fairness, conditional expectation, and stopping ideas become reusable.
- Poisson Processes and Counting Models: the live counting-process page where random event streams, independent increments, and exponential waiting times appear.
- Brownian Motion and Diffusion Intuition: the live continuous-time randomness page where Gaussian increments, diffusion scaling, and path roughness become central.
- SDEs and Ito Intuition: the live fifth page where drift-diffusion models, Ito corrections, and Euler-Maruyama simulation become explicit.
- Mixing, Ergodicity, and MCMC Bridges: the live sixth page where long-run convergence, ergodic averages, and MCMC-style sampling viewpoints become central.
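As a concrete taste of the opening page's ideas, here is a minimal sketch of a stationary distribution: iterating a start distribution through a transition matrix converges to a fixed point pi with pi P = pi. The two-state matrix below is an invented illustration, not an example from the module.

```python
# Minimal two-state Markov chain (invented example): push a start
# distribution through the transition matrix until it stabilizes.
P = [[0.9, 0.1],   # row i holds the transition probabilities out of state i
     [0.5, 0.5]]

def step(pi, P):
    """One application of the transition matrix: pi -> pi P."""
    return [sum(pi[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]

pi = [1.0, 0.0]          # start deterministically in state 0
for _ in range(200):     # long-run behavior emerges from the one-step rule
    pi = step(pi, P)

# At the fixed point, another step changes nothing: pi P = pi.
# For this matrix the stationary distribution is (5/6, 1/6).
stationary = step(pi, P)
```

Solving pi P = pi by hand here gives 0.1 * pi[0] = 0.5 * pi[1], so pi = (5/6, 1/6); the iteration recovers the same answer numerically.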
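The drift-plus-noise language of the SDE page can be previewed with a single Euler-Maruyama loop. The sketch below simulates an Ornstein-Uhlenbeck process dX = -theta X dt + sigma dW; the parameter values theta = 1, sigma = 1, and the step counts are illustrative choices, not values from the module.

```python
import random

random.seed(0)

def euler_maruyama_ou(theta=1.0, sigma=1.0, x0=0.0, T=5.0, n=500):
    """Euler-Maruyama for dX = -theta * X dt + sigma dW; returns X(T)."""
    dt = T / n
    x = x0
    for _ in range(n):
        # drift step plus a Gaussian increment scaled by sqrt(dt)
        x += -theta * x * dt + sigma * (dt ** 0.5) * random.gauss(0.0, 1.0)
    return x

# Terminal values across many paths; the stationary variance of this
# process is sigma^2 / (2 * theta) = 0.5, which the ensemble should approach.
samples = [euler_maruyama_ou() for _ in range(2000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The sqrt(dt) scaling of the noise term is exactly the diffusion-scaling point the Brownian motion page makes: over a step of length dt, Brownian increments have standard deviation sqrt(dt), not dt.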
5 Proof Patterns In This Module
- One-step to long-run: use a local transition rule to study global behavior over many steps.
- Conditional expectation as structure: replace pathwise complexity with cleaner conditional objects.
- Invariant or stationary objects: identify distributions or functionals that remain stable under the dynamics.
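The conditional-expectation pattern can be checked numerically. For a fair plus-or-minus-one walk stopped when it first hits a boundary (or when a time cap runs out), optional stopping suggests the stopped mean should stay near the starting value of zero. The boundary of 5, the horizon of 1000 steps, and the trial count are arbitrary illustration values.

```python
import random

random.seed(1)

def stopped_walk(boundary=5, horizon=1000):
    """Fair +/-1 random walk, stopped at +/-boundary or after `horizon` steps."""
    s = 0
    for _ in range(horizon):
        s += random.choice((-1, 1))
        if abs(s) == boundary:
            break
    return s

stopped = [stopped_walk() for _ in range(10000)]
mean_stopped = sum(stopped) / len(stopped)
# The walk is a martingale, so E[S_tau] should stay close to S_0 = 0
# even though the stopping time tau depends on the path.
```

The point of the simulation is the contrast: the stopped values themselves are almost always at the boundary, yet their average is pinned near zero by the martingale property.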
6 Applications
6.1 Random Dynamics Over Time
The module organizes systems where randomness is not one-shot but evolves step by step or continuously over time.
6.2 Control, RL, Diffusion, And MCMC
This is the language that later supports Markov decision processes, stochastic control, diffusion modeling, and sampling algorithms.
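A first bridge toward those sampling algorithms is a random-walk Metropolis sampler. The sketch below targets a standard normal distribution; the proposal scale, chain length, and burn-in fraction are illustrative assumptions, not prescriptions from the module.

```python
import math
import random

random.seed(2)

def log_target(x):
    """Unnormalized log-density of a standard normal target."""
    return -0.5 * x * x

def metropolis(n_steps=20000, scale=1.0):
    """Random-walk Metropolis: propose x + N(0, scale), accept with prob min(1, ratio)."""
    x, chain = 0.0, []
    for _ in range(n_steps):
        prop = x + random.gauss(0.0, scale)
        log_ratio = log_target(prop) - log_target(x)
        if random.random() < math.exp(min(0.0, log_ratio)):
            x = prop            # accept the proposal
        chain.append(x)         # on rejection the chain repeats the old state
    return chain

chain = metropolis()
burned = chain[2000:]           # discard burn-in before taking ergodic averages
mean = sum(burned) / len(burned)
var = sum((v - mean) ** 2 for v in burned) / len(burned)
```

This is the "invariant objects" proof pattern in executable form: the accept/reject rule is built so that the target density is stationary for the chain, and ergodic averages over the chain then estimate expectations under the target.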
7 Go Deeper By Topic
The strongest adjacent live pages right now are:
8 Optional Deeper Reading After First Pass
The strongest current references connected to this module are:
- MIT 18.445 lecture notes page - official MIT route through Markov chains, martingales, mixing, Poisson processes, and beyond. Checked 2026-04-25.
- MIT 6.262 Discrete Stochastic Processes - official MIT course hub for discrete-time chains, recurrence, and countable-state behavior. Checked 2026-04-25.
- Stanford Stats 218 - official Stanford stochastic-processes page with martingales and diffusion-facing topics. Checked 2026-04-25.
- Stanford MS&E 221 - official Stanford stochastic-modeling course page covering discrete- and continuous-time Markov chains and renewal-style models. Checked 2026-04-25.
9 Sources and Further Reading
- MIT 18.445 lecture notes page - First pass - official MIT lecture hub for a broad first route through stochastic processes. Checked 2026-04-25.
- MIT 18.445 lecture 2 - First pass - official MIT note on Markov chains and stationary distributions. Checked 2026-04-25.
- MIT 6.262 Discrete Stochastic Processes - Second pass - official MIT course hub for discrete stochastic-process language. Checked 2026-04-25.
- MIT 6.262 chapter 5 resource - Second pass - official MIT chapter resource for countable-state Markov chains. Checked 2026-04-25.
- Stanford Stats366 Markov chains notes - Second pass - useful Stanford note for a concise first-pass view of discrete-time chains and stationary behavior. Checked 2026-04-25.
- Stanford Stats 218 - Bridge outward - useful Stanford page once the module broadens into martingales and diffusions. Checked 2026-04-25.
- Stanford MS&E 221 - Bridge outward - useful Stanford stochastic-modeling anchor once discrete and continuous-time models are both in play. Checked 2026-04-25.