LDA
Supervised linear projection maximizing class separability
Linear Discriminant Analysis (LDA) finds the linear combinations of features that best separate the target classes. Unlike PCA, it uses class labels during training, producing projections optimized for classification.
When to use:
- Supervised dimensionality reduction before a classifier
- Visualizing class separation in 2D or 3D
- When class label information should guide the reduction
Input: Tabular data with the feature columns defined during training (no labels needed at inference)
Output: LDA-projected coordinates for each row
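As a sketch of the workflow above, here is how the supervised fit-then-project step might look with scikit-learn's `LinearDiscriminantAnalysis` (the library choice is an assumption; the platform's backend may differ):

```python
# Minimal sketch of LDA as supervised dimensionality reduction,
# using scikit-learn (an assumption, not the platform's actual backend).
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)  # 4 features, 3 classes

# n_components is capped at min(n_classes - 1, n_features) = 2 here.
lda = LinearDiscriminantAnalysis(n_components=2)
Z = lda.fit_transform(X, y)  # class labels are required at fit time

print(Z.shape)  # one 2D projected coordinate pair per input row
```

Unlike PCA, the `fit` step consumes `y`: the projection axes are chosen to maximize between-class scatter relative to within-class scatter, not total variance.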
Model Settings (set during training, used at inference)
N Components (default: null — min(n_classes-1, n_features)) Number of discriminant components. Maximum is min(n_classes - 1, n_features).
Solver (default: svd)
Algorithm used to fit the model. svd is the standard choice and avoids computing the covariance matrix; lsqr and eigen support shrinkage regularization.
Shrinkage (default: null)
Regularization strength for the covariance estimate, applicable only with the lsqr or eigen solvers. auto uses the Ledoit-Wolf estimator; fixed values between 0 and 1 are also accepted. Useful when the sample count is small relative to the feature count.
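A sketch of the small-sample case the shrinkage setting targets, again assuming a scikit-learn backend; note that the svd solver would reject the `shrinkage` argument:

```python
# Shrinkage requires the lsqr or eigen solver; "auto" selects the
# Ledoit-Wolf estimate. Sketch only -- scikit-learn backend assumed.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Small-sample regime: 20 rows, 50 features, so the empirical
# covariance matrix is singular and needs regularization.
X = rng.normal(size=(20, 50))
y = rng.integers(0, 2, size=20)

lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
lda.fit(X, y)
preds = lda.predict(X)  # fits despite n_features > n_samples
print(preds.shape)
```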
Inference Settings
No dedicated inference-time settings. New rows are projected onto the discriminant vectors learned during training; class labels are not required.
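The train/inference split described above can be sketched as follows (scikit-learn assumed; `load_wine` and the split are illustrative data choices):

```python
# Sketch: labels guide the fit, but projecting unseen rows needs none.
from sklearn.datasets import load_wine
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

X, y = load_wine(return_X_y=True)  # 13 features, 3 classes
X_train, X_new, y_train, _ = train_test_split(X, y, random_state=0)

lda = LinearDiscriminantAnalysis(n_components=2)
lda.fit(X_train, y_train)      # labels used only here
Z_new = lda.transform(X_new)   # inference: projection of unseen rows

print(Z_new.shape)  # 2 discriminant coordinates per new row
```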