Gradient Boosting

Sequential tree boosting with high accuracy on regression tasks

Gradient Boosting Regressor builds an additive model by fitting each new tree to the residuals of the previous ensemble, minimizing a differentiable loss function.

When to use:

  • High-accuracy regression where training time is acceptable
  • Complex nonlinear feature interactions
  • Feature importance alongside strong predictions

Input: Tabular data with the feature columns defined during training
Output: Continuous predicted value
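The residual-fitting loop described above can be sketched in plain Python. This is an illustrative toy, not the library implementation: it uses a hypothetical one-split "stump" as the base learner and squared-error loss, where the residual y - F(x) is the negative gradient.

```python
# Toy sketch of gradient boosting for squared error on a 1-D feature.
# fit_stump and boost are illustrative helpers, not library functions.

def fit_stump(xs, residuals):
    """Find the single threshold split that best fits the residuals."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lmean = sum(left) / len(left)
        rmean = sum(right) / len(right)
        sse = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda x: lmean if x <= t else rmean

def boost(xs, ys, n_estimators=50, learning_rate=0.1):
    base = sum(ys) / len(ys)  # initial model: predict the mean
    preds = [base] * len(ys)
    stumps = []
    for _ in range(n_estimators):
        # For squared error, the residual is the negative gradient.
        residuals = [y - p for y, p in zip(ys, preds)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        # Each tree's contribution is shrunk by the learning rate.
        preds = [p + learning_rate * stump(x) for p, x in zip(preds, xs)]
    return lambda x: base + learning_rate * sum(s(x) for s in stumps)

# Fit a noiseless step function; the ensemble converges toward it.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]
model = boost(xs, ys)
```

With 50 stages at learning rate 0.1, the remaining residual shrinks geometrically, so the predictions land close to 1.0 on the left of the step and 5.0 on the right.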

Model Settings (set during training, used at inference)

N Estimators (default: 100) Number of boosting stages.

Learning Rate (default: 0.1) Shrinks each tree's contribution. Lower values paired with more estimators usually generalize better.

Max Depth (default: 3) Depth of individual trees. Shallow trees (3–5) are standard.

Subsample (default: 1.0) Fraction of training samples drawn for each tree. Values below 1.0 enable stochastic gradient boosting, which can reduce variance.

Loss (default: squared_error) Loss function to minimize. squared_error for MSE; absolute_error for MAE; huber for robust regression.

Inference Settings

No dedicated inference-time settings.

