What is LIME? LIME (Local Interpretable Model-agnostic Explanations) fits a simple interpretable model, typically a sparse linear one, locally around a single prediction. It explains what the complex model is doing near that specific point, not globally.
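The core recipe can be sketched in a few lines of NumPy: sample perturbations around the point, weight them by proximity, and fit a weighted linear model to the black box's outputs. The black-box `predict`, the sampling scale, and the kernel width below are all illustrative toys, not the library's defaults.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy black box standing in for the complex model; LIME only calls it.
def predict(X):
    return 1 / (1 + np.exp(-(X[:, 0] ** 2 - X[:, 1])))  # P(class 1)

x = np.array([1.0, 0.5])                      # the prediction to explain

# 1. Sample perturbations around x.
Z = x + rng.normal(scale=0.5, size=(500, 2))

# 2. Weight each sample by proximity to x (exponential kernel).
w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * 0.5 ** 2))

# 3. Fit a weighted linear model to the black box's outputs.
A = np.column_stack([np.ones(len(Z)), Z])     # intercept + features
sw = np.sqrt(w)[:, None]
coef, *_ = np.linalg.lstsq(A * sw, predict(Z) * sw[:, 0], rcond=None)
intercept, w0, w1 = coef
print(w0, w1)  # w0 > 0 and w1 < 0, matching the black box's local gradient
```

The two fitted weights are the explanation: locally, feature 0 raises the class-1 score and feature 1 lowers it.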
Local linearity: complex models are often locally linear — zoom in far enough and any smooth boundary looks like a line. LIME exploits this by fitting a line only in the small neighborhood of interest.
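A quick way to see this zoom-in effect, using a sine curve as a stand-in for any smooth score surface (the function and window sizes are arbitrary choices for illustration):

```python
import numpy as np

# A smooth nonlinear function standing in for a model's score surface.
f = np.sin

def best_line_gap(radius, center=1.0, n=200):
    # Fit the best line to f over a window and report the worst-case gap.
    x = np.linspace(center - radius, center + radius, n)
    slope, intercept = np.polyfit(x, f(x), 1)
    return np.max(np.abs(f(x) - (slope * x + intercept)))

wide, narrow = best_line_gap(2.0), best_line_gap(0.1)
print(wide, narrow)  # the narrow window is dramatically closer to linear
```

Shrinking the window by 20x shrinks the worst-case gap far more than 20x, which is exactly the property LIME exploits.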
Neighborhood radius: too small and too few samples carry meaningful weight, so the explanation is noisy and unstable. Too large and the linear fit averages over the nonlinear boundary, missing the local behavior. In LIME this is controlled by the kernel width, and there is no universal default: tune it for your problem.
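The "too large" failure mode is easy to demonstrate on a 1-D toy: with a cubic black box (chosen here only because its true local slope is known), a narrow kernel recovers the slope at the point, while a wide kernel blends in far-away curvature and reports a very different number. The function, widths, and sample counts are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: x ** 3        # toy nonlinear black box (1-D for clarity)
x0 = 1.0                    # true local slope: f'(1) = 3

def local_slope(width, n=2000):
    # Sample broadly, then let the kernel width define the "neighborhood".
    z = rng.normal(x0, 2.0, n)
    w = np.exp(-((z - x0) ** 2) / (2 * width ** 2))
    A = np.column_stack([np.ones(n), z])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], f(z) * sw, rcond=None)
    return coef[1]

small, large = local_slope(0.1), local_slope(3.0)
print(small, large)  # narrow kernel ≈ 3 (faithful); wide kernel drifts far off
```

The opposite failure (too narrow) shows up in practice when predictions are expensive or noisy, leaving too few effective samples for a stable fit.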
Model-agnostic: LIME only needs predict(x) — it works on any black-box model: neural nets, random forests, SVMs, even remote APIs. It never looks inside the model.
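Model-agnosticism falls out of the recipe directly: the explainer takes any callable and never touches its internals. In the sketch below, `explain`, the two stand-in models, and all scales are hypothetical names for illustration; the same function handles a linear model and a bumpy nonlinear one identically.

```python
import numpy as np

rng = np.random.default_rng(2)

def explain(predict, x, width=0.5, n=500):
    # Works for ANY callable predict(X) -> scores; never inspects internals.
    Z = x + rng.normal(scale=width, size=(n, x.size))
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * width ** 2))
    A = np.column_stack([np.ones(n), Z])
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, predict(Z) * sw[:, 0], rcond=None)
    return coef[1:]           # per-feature local weights

x = np.array([0.5, -0.2])
linear_model = lambda X: X @ np.array([2.0, -1.0])        # stand-in "SVM"
bumpy_model  = lambda X: np.tanh(X[:, 0]) - X[:, 1] ** 2  # stand-in "ensemble"
wl = explain(linear_model, x)
wb = explain(bumpy_model, x)
print(wl)  # recovers the true coefficients [2, -1]
print(wb)  # both local weights positive near this x
```

A remote API would plug in the same way: `predict` just needs to map a batch of inputs to scores.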
Interpreting weights: a positive weight means the feature pushes the prediction toward the explained class (class 1 here); a negative weight pushes toward class 0. Magnitude is the strength of influence at this specific point, so the same feature can carry a different sign or magnitude for a different instance.
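A worked reading, on a toy scorer built so the ground truth is known: feature 0 helps class 1 strongly, feature 1 hurts it weakly, and feature 2 does nothing. The local weights should reproduce those signs and that magnitude ordering (the model, point, and scales are assumptions for the demo).

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy scorer: feature 0 pushes strongly toward class 1, feature 1 weakly
# toward class 0, feature 2 is irrelevant.
def predict(X):
    return 1 / (1 + np.exp(-(2 * X[:, 0] - 0.5 * X[:, 1])))

x = np.zeros(3)
Z = x + rng.normal(scale=0.3, size=(1000, 3))
w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * 0.3 ** 2))
A = np.column_stack([np.ones(len(Z)), Z])
sw = np.sqrt(w)[:, None]
coef, *_ = np.linalg.lstsq(A * sw, predict(Z) * sw[:, 0], rcond=None)
weights = coef[1:]
print(weights)
# Read it off: weights[0] > 0 (toward class 1), weights[1] < 0 (toward
# class 0), and |weights[0]| > |weights[1]| > |weights[2]| ≈ 0.
```

The near-zero weight on the irrelevant feature is as informative as the large ones: it says the model ignored that feature here.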