K-Nearest Neighbors
Predict by averaging the target values of the nearest training examples
KNN Regressor predicts the target value for each new point by averaging the targets of its K nearest training examples. No model is learned explicitly; prediction requires querying the stored training set.
When to use:
- Local pattern regression where nearby examples are the best predictors
- Small datasets with no clear functional form
- Situations where adding training data directly improves predictions without retraining
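The core mechanic above can be sketched in a few lines: store the training data, and at prediction time find the K closest rows and average their targets. This is a minimal illustration, not the tool's actual implementation; the function name and use of plain Euclidean distance are assumptions for the sketch.

```python
import numpy as np

def knn_predict(X_train, y_train, x_new, k=5):
    """Predict by averaging the targets of the k nearest training examples."""
    # Euclidean distance from the query point to every stored training row
    dists = np.linalg.norm(X_train - x_new, axis=1)
    # Indices of the k closest training examples
    nearest = np.argsort(dists)[:k]
    # Prediction is the mean target of those neighbors
    return y_train[nearest].mean()

X = np.array([[1.0], [2.0], [3.0], [10.0]])
y = np.array([1.0, 2.0, 3.0, 10.0])
# The 3 nearest neighbors of 2.1 are x = 1, 2, 3, so the prediction is their mean
print(knn_predict(X, y, np.array([2.1]), k=3))  # -> 2.0
```

Note that "adding training data without retraining" falls out of this design: appending rows to `X_train` and `y_train` immediately changes future predictions, because there are no fitted parameters to update.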
Input: Tabular data with the feature columns defined during training
Output: Continuous predicted value
Model Settings (set during training, used at inference)
N Neighbors (default: 5)
Number of nearest neighbors to average. Fewer neighbors give a more local, noisier model; more neighbors smooth predictions.
Weights (default: uniform)
- uniform: all neighbors count equally
- distance: closer neighbors have more influence
Metric (default: minkowski)
Distance metric for neighbor lookup. The default minkowski with its usual power of 2 is equivalent to euclidean, the standard choice for continuous features.
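The three settings above map directly onto scikit-learn's `KNeighborsRegressor`; whether this tool wraps scikit-learn is an assumption, but the example below shows how the same knobs behave in that library.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 1.0, 2.0, 3.0])

# n_neighbors, weights, and metric correspond to the settings described above
model = KNeighborsRegressor(n_neighbors=2, weights="distance", metric="minkowski")
model.fit(X, y)  # "fitting" just stores the data in a neighbor-lookup structure

# Query point 1.5 is equidistant from x=1 and x=2, so even with
# distance weighting the prediction is their plain average, 1.5
print(model.predict(np.array([[1.5]])))
```

With `weights="uniform"` the same query would also return 1.5 here; the two modes differ only when the chosen neighbors sit at unequal distances.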
Inference Settings
No dedicated inference-time settings. Each prediction queries the stored training set for neighbors, so prediction cost grows with the size of the training data.