LightGBM
Fast leaf-wise gradient boosting for regression
LightGBM Regressor uses leaf-wise growth and histogram binning for fast, memory-efficient training. It handles large datasets and high-cardinality categoricals natively.
When to use:
- Large datasets where training speed is critical
- High-cardinality categorical features
- Competitive accuracy with low training cost
Input: Tabular data with the feature columns defined during training
Output: Continuous predicted value
Model Settings (set during training, used at inference)
N Estimators (default: 100) Number of boosting rounds.
Learning Rate (default: 0.1) Shrinkage per step.
Num Leaves (default: 31) Maximum leaves per tree. Key LightGBM parameter; more leaves fit more complex functions.
Max Depth (default: -1, unlimited) Tree depth limit.
Min Child Samples (default: 20) Minimum data per leaf. Higher values regularize the model.
Subsample (default: 1.0) Row sampling fraction.
Objective (default: regression) Loss function: regression for MSE, regression_l1 for MAE, huber for robust regression.
Inference Settings
No dedicated inference-time settings.