Documentation (English)

Decision Tree

Single interpretable tree-based classifier

A Decision Tree builds a flowchart-like structure by repeatedly splitting data on the most informative feature. It is the most interpretable tree model but prone to overfitting without depth constraints.

When to use:

  • When full interpretability and explainability are required
  • Quick baseline model before trying ensembles
  • Rule extraction for business logic
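The rule-extraction use case above can be sketched with a small example. This is a hypothetical illustration assuming a scikit-learn backend, which this documentation does not name; the dataset and parameter values are chosen only for demonstration.

```python
# Hypothetical sketch: train a shallow tree and export its splits as rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# A shallow tree keeps the extracted rules short and readable.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text renders the learned splits as nested if/else rules,
# which can be reviewed as plain business logic.
rules = export_text(clf, feature_names=load_iris().feature_names)
print(rules)
```

The printed rules mirror the flowchart-like structure described above: each line is one split on a single feature, ending in a leaf with a class decision.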

Input: Tabular data with the feature columns defined during training
Output: Predicted class label and class probabilities

Model Settings (set during training, used at inference)

Max Depth (default: null — unlimited) Maximum tree depth. Constrain this to prevent overfitting — values of 3–10 are typical for production models.

Min Samples Split (default: 2) Minimum samples required to split a node. Higher values create simpler trees.

Min Samples Leaf (default: 1) Minimum samples in a leaf node. Increasing this smooths the model.

Criterion (default: gini) Impurity measure. gini or entropy for classification.

Max Features (default: null — all features) Number of features to consider per split.

Class Weight (default: null) Set to balanced for imbalanced datasets.
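Taken together, the settings above might be configured as follows. The parameter names follow scikit-learn's `DecisionTreeClassifier` API, which is an assumption on our part — the training backend is not named in this document — and the values shown are illustrative, not recommendations.

```python
# Hypothetical sketch of the training settings listed above,
# assuming a scikit-learn-compatible backend.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Imbalanced toy dataset to motivate class_weight="balanced".
X, y = make_classification(
    n_samples=200, n_features=8, weights=[0.9, 0.1], random_state=0
)

clf = DecisionTreeClassifier(
    max_depth=5,              # Max Depth: constrain to prevent overfitting
    min_samples_split=10,     # Min Samples Split: higher -> simpler tree
    min_samples_leaf=5,       # Min Samples Leaf: higher -> smoother model
    criterion="entropy",      # Criterion: "gini" or "entropy"
    max_features=None,        # Max Features: null/None -> all features
    class_weight="balanced",  # Class Weight: for imbalanced datasets
    random_state=0,
)
clf.fit(X, y)
```

With `max_depth=5` the fitted tree is guaranteed to be at most five levels deep, regardless of how the other settings interact.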

Inference Settings

No dedicated inference-time settings. Each input row follows the learned decision path.
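The inference step can be sketched as follows, again assuming a scikit-learn-style model (an assumption; the serving backend is not specified here). Each row is routed down the learned decision path to a single leaf, which yields both the label and the class probabilities promised in the Output description above.

```python
# Hypothetical inference sketch, assuming a scikit-learn-style classifier.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

# One input row with the same feature columns used during training:
row = X[:1]
label = clf.predict(row)        # predicted class label
proba = clf.predict_proba(row)  # class probabilities, one column per class
```

Note that a decision tree's probabilities are the class frequencies in the reached leaf, so they always sum to 1 but can be coarse for very shallow trees.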


