XGBoost
Optimized gradient boosting for regression
XGBoost Regressor provides fast, regularized gradient boosting. It handles missing values natively and includes built-in L1/L2 regularization for strong out-of-the-box performance.
When to use:
- High-accuracy regression on structured tabular data
- Missing values in features (handled natively)
- When regularization is needed to reduce overfitting
Input: Tabular data with the feature columns defined during training
Output: Continuous predicted value
Model Settings (set during training, used at inference)
N Estimators (default: 100) Number of boosting rounds.
Max Depth (default: 6) Maximum tree depth.
Learning Rate / eta (default: 0.3) Step size shrinkage applied per boosting round.
Subsample (default: 1.0) Row sampling ratio per tree.
Col Sample By Tree (default: 1.0) Feature sampling ratio per tree.
Objective (default: reg:squarederror)
Loss function. reg:squarederror minimizes squared error (aligned with RMSE); reg:absoluteerror minimizes absolute error (aligned with MAE).
Lambda (default: 1) L2 regularization on leaf weights.
Alpha (default: 0) L1 regularization on leaf weights.
Inference Settings
No dedicated inference-time settings.