Support Vector Machine
Hyperplane-based classifier with kernel support for nonlinear boundaries
Support Vector Machine (SVM) finds the maximum-margin hyperplane separating classes. With kernel functions, it can learn highly nonlinear decision boundaries, making it powerful for complex classification tasks.
When to use:
- High-dimensional feature spaces (e.g., text features, gene expression)
- Small-to-medium datasets where a clear margin separation exists
- When a strong nonlinear boundary is needed and data is not too large
Input: Tabular data with the feature columns defined during training
Output: Predicted class label and class probabilities (when probability=True)
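The settings above mirror scikit-learn's SVC parameters, so a minimal training and inference sketch might look like the following (assuming a scikit-learn backend; the synthetic data is purely illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical tabular data: 200 rows, 5 feature columns
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# probability=True enables predict_proba at inference time
# (fitted internally via Platt scaling, which adds a cross-validated fit)
model = SVC(C=1.0, kernel="rbf", gamma="scale", probability=True, random_state=0)
model.fit(X_train, y_train)

labels = model.predict(X_test)       # predicted class labels
probs = model.predict_proba(X_test)  # per-class probabilities, rows sum to 1
```

Note that enabling probability estimates slows training and the probabilities may be slightly inconsistent with the raw decision function.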
Model Settings (set during training, used at inference)
C (default: 1.0) Regularization parameter. Lower values widen the margin and tolerate more training misclassifications; higher values fit the training data more tightly and risk overfitting.
Kernel (default: rbf)
Kernel function for feature transformation. rbf handles most nonlinear problems; linear is faster for high-dimensional sparse data.
Gamma (default: scale)
Kernel coefficient for rbf, poly, and sigmoid. scale uses 1/(n_features * X.var()); smaller values create smoother boundaries.
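The scale value can be reproduced by hand; a model fit with the resolved numeric gamma behaves identically (a sketch assuming scikit-learn's documented formula, with a tiny made-up dataset):

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0, 1.0], [2.0, 3.0], [4.0, 5.0], [6.0, 7.0]])
y = np.array([0, 0, 1, 1])

# 'scale' resolves to 1 / (n_features * X.var())
g = 1.0 / (X.shape[1] * X.var())

a = SVC(kernel="rbf", gamma="scale").fit(X, y)
b = SVC(kernel="rbf", gamma=g).fit(X, y)

# Both models produce identical decision values
print(np.allclose(a.decision_function(X), b.decision_function(X)))
```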
Degree (default: 3)
Degree of the polynomial for the poly kernel; ignored by all other kernels.
Class Weight (default: null)
Set to balanced to weight each class inversely proportional to its frequency in the training data; useful for imbalanced datasets.
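Under scikit-learn semantics (an assumption about the backend), balanced rescales C per class by n_samples / (n_classes * class_count), so rare classes incur a larger misclassification penalty:

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic 90/10 imbalance: class 0 dominates
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.array([0] * 90 + [1] * 10)
X[y == 1] += 2.0  # shift the minority class so it is learnable

model = SVC(kernel="rbf", class_weight="balanced").fit(X, y)

# Effective per-class multipliers of C: n_samples / (n_classes * count)
weights = 100 / (2 * np.bincount(y))
print(weights)  # the minority class is weighted 9x the majority
```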
Inference Settings
No dedicated inference-time settings. Predictions are fully determined by the support vectors and parameters learned during training.