Machine Learning Algorithms Ladder

Who This Is For

Use this ladder when you want one deliberate, textbook-breadth spillover from algorithms into machine learning, while keeping the route narrow and algorithmic.

Warm-Up

  • state the classifier as sign(w · x + b)
  • explain what linearly separable means
  • say exactly when the perceptron updates
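The warm-up items above can be pinned down in a few lines. This is a minimal sketch, not repo code: `predict` is the classifier sign(w · x + b), and `needs_update` states exactly when the perceptron updates, namely when y(w · x + b) ≤ 0 for labels in {+1, −1}.

```python
import numpy as np

def predict(w, b, x):
    """Classify x as +1 or -1 via sign(w . x + b); treat 0 as +1."""
    return 1 if np.dot(w, x) + b >= 0 else -1

def needs_update(w, b, x, y):
    """The perceptron updates exactly on a mistake:
    when y * (w . x + b) <= 0 (labels are +1 / -1)."""
    return y * (np.dot(w, x) + b) <= 0
```

Note the `<= 0` rather than `< 0`: a point lying exactly on the boundary still triggers an update.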

Core

  • deterministic perceptron
  • mistake-driven online updates
  • convergence only under separability
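The three core bullets fit in one loop. Below is a hedged sketch (the function name and toy data are illustrative, not from the repo): a deterministic, mistake-driven online perceptron that sweeps the data and updates only on mistakes. If the data are linearly separable it reaches a pass with zero mistakes; otherwise it simply stops at `max_epochs`, which is the point of "convergence only under separability".

```python
import numpy as np

def perceptron_train(X, y, max_epochs=100):
    """Mistake-driven perceptron with bias. Labels must be +1 / -1.
    Terminates early only when a full pass makes zero mistakes,
    which can happen only if the data are linearly separable."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) + b) <= 0:  # mistake (or on boundary)
                w += yi * xi                   # the whole update rule
                b += yi
                mistakes += 1
        if mistakes == 0:
            return w, b  # converged: every point is classified correctly
    return w, b  # no clean pass within max_epochs

# toy separable data: class is the sign of the first coordinate
X = np.array([[2.0, 1.0], [1.0, -1.0], [-2.0, 1.0], [-1.0, -2.0]])
y = np.array([1, 1, -1, -1])
w, b = perceptron_train(X, y)
```

The update is deterministic: the same data in the same order always yields the same weights.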

Repo Anchors

Stretch

  • compare the perceptron with logistic-regression-style thinking without pretending they are the same route
  • read one official course note on margins and explain what the repo lane still intentionally does not cover
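For the first stretch item, the contrast is easiest to see side by side. This sketch (function names are mine, not the repo's) puts the two single-example updates next to each other: the perceptron takes a hard step only on a mistake, while logistic regression via SGD takes a soft step on every example, scaled by σ(−y(w · x + b)), so confidently correct points barely move the weights. Same linear model, genuinely different route.

```python
import numpy as np

def perceptron_step(w, b, x, y, lr=1.0):
    """Hard update: fires only on a mistake (y in {+1, -1})."""
    if y * (np.dot(w, x) + b) <= 0:
        w = w + lr * y * x
        b = b + lr * y
    return w, b

def logistic_step(w, b, x, y, lr=0.1):
    """Soft SGD update on the log-loss log(1 + exp(-y(w.x + b))):
    the step is scaled by sigma(-margin), never exactly zero."""
    margin = y * (np.dot(w, x) + b)
    g = 1.0 / (1.0 + np.exp(margin))  # sigma(-margin)
    w = w + lr * g * y * x
    b = b + lr * g * y
    return w, b
```

On a correctly classified point the perceptron does nothing at all, while the logistic step still nudges the weights a little; that difference is why logistic regression has a well-defined optimum even on non-separable data.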

Compare Points

This ladder is intentionally tiny.

The point is not to turn the repo into an ML curriculum. The point is to prevent this source-backed breadth topic from staying forgotten or fuzzy.

Exit Criteria

You are ready to move on when you can:

  • implement the perceptron with bias correctly
  • explain why separability matters for convergence
  • say clearly what this lane does not cover

External Practice