© 2026 Greg T. Chism · MIT License

Gradient Descent & Cost Functions — Interactive Explorer

Animate optimization on 2D loss landscapes — tune learning rate, compare GD variants, discover saddle points and flat regions
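
Each animation frame applies one step of the plain gradient descent update, w ← w − η∇L(w), at the current learning rate. A minimal sketch of that loop, assuming a simple quadratic-bowl landscape and a gradient-norm stopping threshold (the landscape, the function names, and the tolerance are illustrative assumptions, not the explorer's actual internals):

```python
import numpy as np

def loss(w):
    """Illustrative landscape: L(w1, w2) = w1^2 + 2*w2^2, a quadratic bowl."""
    return w[0] ** 2 + 2.0 * w[1] ** 2

def grad(w):
    """Exact gradient of the bowl: (2*w1, 4*w2)."""
    return np.array([2.0 * w[0], 4.0 * w[1]])

def gradient_descent(w0, eta=0.1, max_steps=500, tol=1e-6):
    """Repeat w <- w - eta * grad(w); stop once the gradient norm drops below tol."""
    w = np.asarray(w0, dtype=float)
    path = [w.copy()]
    for _ in range(max_steps):
        g = grad(w)
        if np.linalg.norm(g) < tol:  # the quantity shown in the ‖∇L‖ readout
            break
        w = w - eta * g              # step downhill, scaled by the learning rate
        path.append(w.copy())
    return w, np.array(path)

w_final, path = gradient_descent([2.5, -1.5], eta=0.1)
print(f"steps: {len(path) - 1}, final loss: {loss(w_final):.2e}")
```

For this bowl the Hessian eigenvalues are 2 and 4, so the loop converges only for η < 2/4 = 0.5: push the slider past that and the iterates overshoot and diverge, while a much smaller η gives a slow, steady creep toward the minimum.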


[Controls] Learning Rate η slider (shown: η = 0.100, "moderate — steady convergence") · Gradient Descent Variant (shown: exact gradient — smooth deterministic path) · Cost Function (landscape selector) · Simulation (click the landscape to reposition the ball)
[Explainer] "What's happening?" panel: Select a landscape and press Play.
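
The default variant, the exact (full-batch) gradient, produces the smooth deterministic path the hint describes. The page does not say which other variants the selector offers; momentum and a stochastic (noisy-gradient) variant are common choices, so the sketch below shows both under that assumption, reusing the bowl from the previous example (simulating minibatch noise with Gaussian jitter is itself an assumption):

```python
import numpy as np

def grad(w):
    """Same illustrative bowl as before: L(w1, w2) = w1^2 + 2*w2^2."""
    return np.array([2.0 * w[0], 4.0 * w[1]])

def momentum_descent(w0, eta=0.1, beta=0.9, steps=150):
    """Heavy-ball momentum: v <- beta*v - eta*grad(w); w <- w + v."""
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(steps):
        v = beta * v - eta * grad(w)
        w = w + v
    return w

def noisy_descent(w0, eta=0.1, sigma=0.5, steps=150, seed=0):
    """SGD-like path: exact gradient plus Gaussian jitter (assumed noise model)."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float)
    for _ in range(steps):
        w = w - eta * (grad(w) + rng.normal(0.0, sigma, size=2))
    return w

print(momentum_descent([2.5, -1.5]))  # overshoots and spirals, then settles near (0, 0)
print(noisy_descent([2.5, -1.5]))     # never settles: jitters in a small cloud around (0, 0)
```

Momentum trades the monotone descent of the exact gradient for longer, curving steps that can overshoot and spiral in, while the noisy path never fully settles, which is why the two trace such different curves on the contour plot.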
[Readouts] Step · Loss · ‖∇L‖ (gradient norm)
[Panel] Loss Landscape (w₁, w₂) — contours & optimization path (click to set start · dragging resets; legend: Low → High loss · Ball · ★ Global min)
[Panel] Loss Curve — iteration vs. L(w)
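
Saddle points, the other feature the tagline advertises, show up here as a flattening loss curve while ‖∇L‖ decays toward zero even though the ball is nowhere near a minimum. A minimal sketch, assuming the classic saddle L(w₁, w₂) = w₁² − w₂² (whether that exact landscape is among the Cost Function presets is an assumption):

```python
import numpy as np

def grad_saddle(w):
    """Gradient of L(w1, w2) = w1^2 - w2^2: (2*w1, -2*w2)."""
    return np.array([2.0 * w[0], -2.0 * w[1]])

def run(w0, eta=0.1, steps=100):
    w = np.asarray(w0, dtype=float)
    for _ in range(steps):
        w = w - eta * grad_saddle(w)
    return w

# Started exactly on the w1-axis, the ball slides into the saddle and stalls:
print(run([2.0, 0.0]))    # -> approx (0, 0); ‖∇L‖ -> 0 and the loss curve goes flat

# Any off-axis perturbation grows by (1 + 2*eta) per step and escapes downhill:
print(run([2.0, 1e-4]))   # -> w2 has grown to roughly 8e3, far from the saddle
```

Gradient noise supplies that off-axis kick automatically, which is one reason stochastic variants tend to escape saddles that a deterministic path can linger near for many steps.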