Documentation (English)

PCA

Linear projection onto principal components for dimensionality reduction

Principal Component Analysis (PCA) projects data onto the directions of maximum variance learned during training. It is the standard first choice for dimensionality reduction, visualization, and noise removal.

When to use:

  • Reducing high-dimensional features before feeding into another model
  • Visualizing high-dimensional data in 2D or 3D
  • Removing noise by discarding low-variance components

Input: Tabular data with the feature columns defined during training
Output: Projected coordinates in the reduced-dimensional space (n_components columns)
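A minimal sketch of this input/output shape, assuming a scikit-learn-style API (sklearn.decomposition.PCA; the document does not name the underlying library):

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical tabular input: 100 samples with 10 feature columns
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))

# Keep the top-2 variance directions (N Components = 2)
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)  # (100, 2): one column per kept component
```

The output has exactly n_components columns regardless of how many feature columns the input had.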

Model Settings (set during training, used at inference)

N Components (default: 2) Number of principal components to keep. The trained eigenvectors define the projection applied at inference.

SVD Solver (default: auto) Algorithm used to compute the decomposition. auto selects a solver based on the data size; randomized is fast for large datasets; full computes an exact SVD.

Whiten (default: false) If true, components are scaled to unit variance. Useful when the downstream model benefits from uncorrelated, equal-variance features.
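The whitening effect can be checked directly. A sketch, again assuming a scikit-learn-style PCA (an assumption, not the documented backend): with whiten enabled, each output column has unit sample variance even when the input features have very different scales.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Two correlated features with very different variances
base = rng.normal(size=(200, 1))
X = np.hstack([base * 10.0, base * 0.1 + rng.normal(size=(200, 1))])

pca = PCA(n_components=2, whiten=True)
X_white = pca.fit_transform(X)

# Each whitened component has unit sample variance (ddof=1)
print(np.var(X_white, axis=0, ddof=1))
```

Without whitening, the first component would dominate; with it, downstream models see equal-variance, uncorrelated features.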

Inference Settings

No dedicated inference-time settings. New data is projected using the eigenvectors learned during training.
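The inference step amounts to centering new data with the training mean and projecting it onto the learned eigenvectors. A sketch under the same scikit-learn assumption as above:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X_train = rng.normal(size=(50, 5))
X_new = rng.normal(size=(3, 5))   # unseen data at inference time

pca = PCA(n_components=2).fit(X_train)

# transform() reuses the training-time projection: subtract the
# training mean, then project onto the stored eigenvectors.
projected = pca.transform(X_new)
manual = (X_new - pca.mean_) @ pca.components_.T

print(np.allclose(projected, manual))  # True
```

No refitting happens at inference; the mean and components are fixed artifacts of training.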

