Support Vector Regression
Support vector machine adapted for continuous value prediction
Support Vector Regression (SVR) finds a function that deviates from the training targets by at most epsilon for most points, while staying as flat as possible; only points outside this epsilon tube incur a penalty. With kernel transformations it is effective for nonlinear regression.
When to use:
- Small-to-medium datasets with nonlinear feature-target relationships
- When a robust fit within a tolerance band is desired
- High-dimensional feature spaces
Input: Tabular data with the feature columns defined during training
Output: Continuous predicted value
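A minimal end-to-end sketch of training and prediction. This assumes a scikit-learn backend (the `SVR` class and parameter names below match sklearn's API, which mirrors the defaults in this page, but the actual implementation behind this model may differ):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic nonlinear data: y = sin(x) plus noise
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, size=200)

# Scaling matters for RBF kernels; the settings match this page's defaults
model = make_pipeline(
    StandardScaler(),
    SVR(kernel="rbf", C=1.0, epsilon=0.1, gamma="scale"),
)
model.fit(X, y)

# Inference: tabular input with the same feature columns, continuous output
preds = model.predict([[0.5], [-1.2]])
```

Standardizing features before an RBF-kernel SVR is a common practice, since gamma-based kernels are sensitive to feature scale.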
Model Settings (set during training, used at inference)
Kernel (default: rbf)
Kernel function. rbf is the standard nonlinear choice; linear for large sparse datasets.
C (default: 1.0)
Penalty for points outside the epsilon tube. Higher values fit the training data more closely.
Epsilon (default: 0.1)
Width of the no-penalty tube around the prediction. Larger values create smoother models.
Gamma (default: scale)
Kernel coefficient for the rbf and poly kernels. Lower values give each support vector a broader influence; scale derives gamma from the number of features and their variance.
Degree (default: 3)
Degree of the polynomial for the poly kernel; ignored by other kernels.
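The interaction between C and Epsilon can be seen directly in how many support vectors the model keeps. A sketch, again assuming a scikit-learn-style `SVR` (hypothetical stand-in for this model's backend):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = np.linspace(-3, 3, 100).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(0, 0.1, size=100)

# Narrow tube + high C: almost every point lies outside the tube
# and becomes a support vector, so the fit tracks the data closely
tight = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)

# Wide tube: most points fall inside the no-penalty band, leaving
# fewer support vectors and a smoother, sparser model
loose = SVR(kernel="rbf", C=1.0, epsilon=0.5).fit(X, y)

print(len(tight.support_), len(loose.support_))
```

Widening the tube trades accuracy on individual points for a sparser model, which is why larger Epsilon values are described above as producing smoother fits.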
Inference Settings
No dedicated inference-time settings. The trained support vectors define the regression function.
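Because the regression function is fully determined by the stored support vectors and kernel settings, a trained model can be persisted and reloaded for inference with no additional configuration. A sketch assuming a scikit-learn-style model and standard pickle serialization (the actual persistence mechanism here may differ):

```python
import pickle

import numpy as np
from sklearn.svm import SVR

X = np.linspace(0, 10, 50).reshape(-1, 1)
y = (X.ravel() ** 2) / 10

model = SVR(kernel="rbf", C=10.0).fit(X, y)

blob = pickle.dumps(model)        # persist the trained model
restored = pickle.loads(blob)     # load it elsewhere for inference

# No inference-time settings: predict() just evaluates the kernel
# against the stored support vectors
same = np.allclose(restored.predict(X), model.predict(X))
```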