Documentation

Gradient Boosting

Sequential tree boosting with strong accuracy on tabular data

Gradient Boosting builds an additive model by fitting new trees to the residual errors of the previous ensemble. It consistently achieves high accuracy on structured data with proper tuning.
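The residual-fitting loop described above can be sketched for the squared-error regression case, where the negative gradient is exactly the residual. This is an illustrative sketch on hypothetical toy data, not the product's implementation; the variable names and data are assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Hypothetical toy regression data standing in for tabular features.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

n_estimators, learning_rate, max_depth = 100, 0.1, 3

# Start from the mean prediction, then fit each new tree to the
# residual errors of the current ensemble.
pred = np.full_like(y, y.mean())
trees = []
for _ in range(n_estimators):
    residuals = y - pred  # negative gradient of squared error
    tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residuals)
    pred += learning_rate * tree.predict(X)  # shrink each tree's contribution
    trees.append(tree)

def predict(X_new):
    """Sum the shrunken contributions of all trees on top of the base value."""
    out = np.full(len(X_new), y.mean())
    for tree in trees:
        out += learning_rate * tree.predict(X_new)
    return out
```

Each iteration reduces the training residuals, which is why many small steps (low learning rate, many estimators) tend to fit more smoothly than few large ones.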

When to use:

  • High-accuracy classification where training time is acceptable
  • Structured data with complex nonlinear feature interactions
  • When you need feature importance alongside strong performance

Input: Tabular data with the feature columns defined during training

Output: Predicted class label and class probabilities


Model Settings (set during training, used at inference)

N Estimators (default: 100) Number of boosting stages. More estimators improve fit on the training set but can overfit; pair large values with a low learning rate.

Learning Rate (default: 0.1) Shrinks each tree's contribution. Lower values with more estimators often generalize better.

Max Depth (default: 3) Depth of individual trees. Shallow trees (3–5) are typical for gradient boosting.

Min Samples Split (default: 2) Minimum samples to split an internal node.

Subsample (default: 1.0) Fraction of training samples used per tree. Values < 1.0 introduce stochastic gradient boosting.

Max Features (default: null) Number of features considered per split. null considers all features; restricting this adds randomness and can reduce overfitting.
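The defaults listed above match scikit-learn's GradientBoostingClassifier, so a configuration with these settings can be sketched as follows. This assumes a scikit-learn-style model; the toy dataset is hypothetical.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical toy data standing in for your tabular training set.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

clf = GradientBoostingClassifier(
    n_estimators=100,     # N Estimators: number of boosting stages
    learning_rate=0.1,    # Learning Rate: shrinkage applied to each tree
    max_depth=3,          # Max Depth: shallow trees are typical
    min_samples_split=2,  # Min Samples Split
    subsample=1.0,        # Subsample: values < 1.0 give stochastic boosting
    max_features=None,    # Max Features: None considers all features per split
    random_state=0,
).fit(X, y)

# Feature importance is available alongside the fitted model.
importances = clf.feature_importances_
```

The importance scores sum to 1 across features, which makes them easy to compare when deciding which columns drive predictions.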

Inference Settings

No dedicated inference-time settings. The additive ensemble of trees produces predictions.
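Inference on a fitted ensemble then reduces to two calls, one for the class label and one for the class probabilities. A minimal sketch, again assuming a scikit-learn-style model and hypothetical toy data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical data with the same feature columns used during training.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X, y)

labels = clf.predict(X[:3])       # predicted class label per row
probs = clf.predict_proba(X[:3])  # class probabilities, each row sums to 1
```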

