© 2026 Greg T. Chism · MIT License

LIME — Local Interpretable Model-Agnostic Explanations

Click any point in the decision space to generate a local linear explanation — see how complex model decisions can be approximated locally


Model & Data
Model
Dataset
LIME Parameters
Neighborhood Radius
r = 0.30
Smaller radius = more local; larger = more global
Neighbor Samples
n = 150
More samples = more stable explanation
Explain a Point
Click anywhere on the decision boundary
Selected point — click to select —
What's happening?
Click any point on the decision boundary to run LIME and see why the model makes that prediction at that location.
Key Concepts
What is LIME? LIME fits a simple interpretable model (linear) locally around a prediction — it explains what the complex model is doing near that specific point, not globally.
Local linearity: complex models are often locally linear — zoom in far enough and any smooth boundary looks like a line. LIME exploits this by fitting a line only in the small neighborhood of interest.
Neighborhood radius: too small = too few neighbors, noisy explanation. Too large = the linear approximation poorly captures the nonlinear boundary. Tune for your problem.
Model-agnostic: LIME only needs predict(x), so it works on any black-box model: neural nets, random forests, SVMs, even remote APIs. It never looks inside the model; a minimal code sketch of this predict-only interface follows this list.
Interpreting weights: positive weight = feature pushes toward class 1. Negative weight = feature pushes toward class 0. Magnitude = strength of influence at this specific point.
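The concepts above are short enough to sketch in code. Below is a minimal, illustrative setup in Python with NumPy and scikit-learn, mirroring this demo's Neural Net on Moons configuration; the library choice, hyperparameters, and variable names are assumptions, not the demo's actual implementation. Note that all LIME will ever touch is the prediction function f.

```python
# Minimal sketch: a black-box classifier that LIME sees only through f(z).
# Assumptions: scikit-learn MLP on the moons dataset, illustrative settings.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
black_box = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000,
                          random_state=0).fit(X, y)

# The only access LIME needs: f(z) -> class probabilities. This could just
# as well wrap a random forest, an SVM, or a remote prediction API.
f = black_box.predict_proba
```

The step sketches in the How LIME Works tab below continue from this setup.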
Decision Boundary — Neural Net on Moons dataset
Decision boundary: click to select a point and run LIME
Legend: Model → Class 0 · Model → Class 1 · Train class 0 · Train class 1 · LIME neighbors · Local boundary
LIME Feature Importance — local linear approximation
Selected point in decision space
Mini decision boundary: select a point on Tab 1
Feature weights (push toward class →)
Feature importance bars rendered by D3
Pushes toward class 1 · Pushes toward class 0
Select a point on the Decision Boundary tab, then switch here to see the LIME feature importance explanation.
How LIME Works — 5-step algorithm
1
Select a point to explain
Pick any input point x* whose prediction you want to understand; LIME will explore the local neighborhood around it.
decision boundary + selected point x* rendered by D3
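As a sketch of this step, continuing the setup from Key Concepts (the coordinates of x* here are an arbitrary assumption):

```python
# Step 1 (sketch): choose the point x* whose prediction we want explained.
x_star = np.array([0.5, 0.25])          # any point in the 2-D input space
p_star = f(x_star.reshape(1, -1))[0]    # query the black box once at x*
print(f"P(class 1 | x*) = {p_star[1]:.3f}")
```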
2
Sample neighbors in the neighborhood
Randomly sample points z in the neighborhood of x* (within radius r), then query the black-box model f(z) to get their predicted classes.
neighborhood circle + sampled points rendered by D3
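One plausible implementation of the sampling, continuing the sketch (uniform sampling over the disk of radius r is an assumption; the demo's exact scheme may differ):

```python
# Step 2 (sketch): draw n neighbors uniformly from the disk of radius r
# around x*, then query the black box at each one.
n, r = 150, 0.30                                   # the demo's default n and r
theta = rng.uniform(0.0, 2.0 * np.pi, size=n)      # random directions
rad = r * np.sqrt(rng.uniform(0.0, 1.0, size=n))   # sqrt => uniform over disk
Z = x_star + np.column_stack((rad * np.cos(theta), rad * np.sin(theta)))
fZ = f(Z)[:, 1]                                    # P(class 1) at each neighbor
```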
3
Weight neighbors by proximity
Assign weight πₓ*(z) = exp(−d(x*,z)²/σ²) to each neighbor — points closer to x* get higher weight. This ensures the local model focuses on the immediate region.
neighbor points sized by proximity weight rendered by D3
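The kernel translates directly to code, continuing the sketch (the width σ is an assumption, tied here to the radius):

```python
# Step 3 (sketch): proximity weights pi_x*(z) = exp(-d(x*, z)^2 / sigma^2).
sigma = r / 2.0                            # assumed kernel width
d2 = np.sum((Z - x_star) ** 2, axis=1)     # squared distances d(x*, z)^2
pi = np.exp(-d2 / sigma ** 2)              # closer neighbors weigh more
```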
4
Fit a weighted linear model
Solve ξ(x*) = argmin_{g∈G} L(f, g, πₓ*) + Ω(g): the loss L measures how far g is from f on the proximity-weighted neighbors, and Ω(g) penalizes complexity. The solution is the simplest linear model g that best approximates f locally; the weights πₓ* ensure fidelity near x*.
local linear boundary (dashed) fitted to neighbors rendered by D3
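One standard way to realize this minimization, continuing the sketch: a proximity-weighted ridge regression, where the weighted squared error plays the role of L and the small ℓ2 penalty stands in for Ω(g). The demo may solve it differently.

```python
# Step 4 (sketch): fit g(z) ~ f(z) by weighted least squares; sample_weight
# applies the proximity weights pi, and alpha is a mild complexity penalty.
from sklearn.linear_model import Ridge

g = Ridge(alpha=1e-3)
g.fit(Z, fZ, sample_weight=pi)
w1, w2 = g.coef_                           # the local linear coefficients
```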
5
Coefficients = the explanation
The linear model coefficients w₁, w₂ are the LIME explanation. w₁ > 0 means feature x₁ pushes toward class 1; w₁ < 0 means it pushes toward class 0. Magnitude shows importance.
feature weight bar chart — the explanation rendered by D3
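Reading off the explanation is then just inspecting the fitted coefficients, continuing the sketch:

```python
# Step 5 (sketch): the coefficients are the explanation.
for name, w in zip(["x1", "x2"], g.coef_):
    direction = "class 1" if w > 0 else "class 0"
    print(f"{name}: weight {w:+.3f}  -> pushes toward {direction}")
```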
Selected Point
x₁
x₂
Class
Confidence
LIME Explanation
x₁
x₂
→ class 1 · → class 0
Local Model Fit
R² of local linear model
R² near 1.0 = local model fits well. Low R² = neighborhood too large or boundary too curved.
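In the running sketch, this diagnostic can be computed as a π-weighted R² (an assumed but standard definition):

```python
# Weighted R^2 of the local fit: 1 - weighted SSE / weighted total SS.
pred = g.predict(Z)
ybar = np.average(fZ, weights=pi)
r2 = 1.0 - np.sum(pi * (fZ - pred) ** 2) / np.sum(pi * (fZ - ybar) ** 2)
print(f"local weighted R^2 = {r2:.3f}")
```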
Neighborhood
radius r = 0.30
samples n = 150