Ch. | Topic | Code‑Lab Highlights | Why ML cares
---|---|---|---|
1 | Limits & Continuity | zoom‐in plots, ε–δ checker | numerical stability, vanishing grads |
2 | Derivatives | finite diff vs autograd on torch.sin (sketch below) | gradients drive learning
3 | Fundamental Theorem of Calculus | trapezoid & Simpson vs autograd.grad | loss ↔ derivatives ↔ integrals
4 | 1‑D Optimization | hand‑rolled gradient descent | baby training loop |
5 | Taylor/Maclaurin | animated truncations | activation approx., positional encodings |
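
Chapter 2's lab compares numerical and automatic differentiation. A minimal sketch of that comparison, using an arbitrary test point and step size rather than the lab's own code:

```python
import torch

# Reverse-mode autograd: gradient of sin at an arbitrary test point.
x = torch.tensor(1.2, requires_grad=True)
torch.sin(x).backward()
autograd_grad = x.grad.item()

# Central finite-difference estimate with an illustrative step size h.
h = 1e-4
x0 = x.detach()
fd_grad = ((torch.sin(x0 + h) - torch.sin(x0 - h)) / (2 * h)).item()

print(f"autograd      : {autograd_grad:.6f}")
print(f"finite diff   : {fd_grad:.6f}")
print(f"analytic cos x: {torch.cos(x0).item():.6f}")
```

Both estimates should agree with cos(x) to several decimals; pushing h much smaller eventually makes the finite-difference result noisier from round-off, which ties back to the numerical-stability theme of Chapter 1.
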
Ch. | Topic | Code‑Lab Highlights | Why ML cares
---|---|---|---|
6 | Vectors & ∇ | quiver plot of ∇f | visual back‑prop intuition
7 | Jacobian & Hessian | tiny‑MLP Hessian spectrum | curvature, second‑order opt.
8 | Multiple Integrals | Monte‑Carlo integration of a 2‑D Gaussian (sketch below) | expected loss, ELBO
9 | Change of Variables | affine flow, log‑det via autograd | flow‑based generative models |
10 | Line & Surface Integrals | streamplots, path work | RL trajectories, gradient flow |
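
Chapter 8's lab uses Monte‑Carlo sampling to approximate integrals. A small sketch of the idea, estimating an expectation under a 2‑D standard Gaussian (the test function and sample count are illustrative choices, not the lab's):

```python
import torch

torch.manual_seed(0)
n = 100_000
x = torch.randn(n, 2)            # n samples from the 2-D standard Gaussian N(0, I)
f = (x ** 2).sum(dim=1)          # f(x) = ||x||^2, whose exact expectation is 2
estimate = f.mean().item()
std_err = (f.std() / n ** 0.5).item()

print(f"Monte-Carlo estimate: {estimate:.3f} ± {std_err:.3f}  (exact: 2.0)")
```

The same sample-mean trick is what turns an otherwise intractable expected loss or ELBO into something a training loop can actually evaluate.
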
Ch. | Topic | Code‑Lab Highlights | Why ML cares
---|---|---|---|
11 | Divergence, Curl, Laplacian | heat equation on a grid (sketch below) | diffusion models, graph Laplacian
12 | ODEs | train Neural‑ODE on spirals | continuous‑time nets |
13 | PDEs | finite‑diff wave equation | physics‑informed nets, vision kernels |
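
Chapter 11's grid lab revolves around the discrete Laplacian. As a rough sketch (grid size, step ratio, and initial condition are made-up choices, not the lab's), here is an explicit finite-difference update for the 1‑D heat equation:

```python
import torch

n, steps, alpha = 100, 500, 0.4          # alpha = dt/dx^2; the explicit scheme needs alpha <= 0.5
u = torch.zeros(n)
u[n // 2] = 1.0                          # point heat source in the middle of the rod

for _ in range(steps):
    lap = torch.zeros_like(u)
    lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]   # discrete Laplacian, zero (Dirichlet) boundaries
    u = u + alpha * lap                        # explicit Euler step of u_t = u_xx

print(f"remaining heat: {u.sum().item():.4f}, peak temperature: {u.max().item():.4f}")
```

The stencil `u[i+1] - 2*u[i] + u[i-1]` is, up to sign convention, the graph Laplacian of a path graph applied on a regular grid, which is presumably why the chapter's "why ML cares" column points at diffusion models and the graph Laplacian.
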
Ch. | Topic | Code‑Lab Highlights | Why ML cares
---|---|---|---|
14 | Functional Derivatives | gradient of a functional | weight decay as a variational problem
15 | Back‑prop from Scratch | 50‑line reverse‑mode engine (sketch below) | demystify autograd
16 | Hessian‑Vector / Newton | SGD vs L‑BFGS, BFGS sketch | faster second‑order ideas |
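
Chapter 15 builds a reverse-mode engine from scratch. The pared-down sketch below (not the chapter's actual 50-line engine) shows the core mechanism: operator overloading records a local derivative rule per op, and `backward()` replays those rules in reverse topological order.

```python
class Value:
    """Scalar node in a computation graph; knows how to push gradients to its parents."""
    def __init__(self, data, parents=()):
        self.data, self.grad = data, 0.0
        self._parents = parents
        self._backward = lambda: None

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad            # d(a + b)/da = 1
            other.grad += out.grad           # d(a + b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad   # d(a * b)/da = b
            other.grad += self.data * out.grad   # d(a * b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule from output back to inputs.
        topo, seen = [], set()
        def build(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    build(p)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

# d/dx of (x*y + x) at x=2, y=3 is y + 1 = 4; d/dy is x = 2.
x, y = Value(2.0), Value(3.0)
z = x * y + x
z.backward()
print(x.grad, y.grad)   # -> 4.0 2.0
```

This is conceptually what torch.autograd does as well: every primitive op stores its local derivative, and the backward pass is the chain rule walked over the recorded graph.
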