FastForestRegressor Class

Machine Learning Fast Forest

Inheritance
nimbusml.internal.core.ensemble._fastforestregressor.FastForestRegressor
nimbusml.base_predictor.BasePredictor
sklearn.base.RegressorMixin
FastForestRegressor

Constructor

FastForestRegressor(number_of_trees=100, number_of_leaves=20, minimum_example_count_per_leaf=10, normalize='Auto', caching='Auto', shuffle_labels=False, number_of_quantile_samples=100, parallel_trainer=None, number_of_threads=None, random_state=123, feature_selection_seed=123, entropy_coefficient=0.0, histogram_pool_size=-1, disk_transpose=None, feature_flocks=True, categorical_split=False, maximum_categorical_group_count_per_node=64, maximum_categorical_split_point_count=64, minimum_example_fraction_for_categorical_split=0.001, minimum_examples_for_categorical_split=100, bias=0.0, bundling='None', maximum_bin_count_per_feature=255, sparsify_threshold=0.7, first_use_penalty=0.0, feature_reuse_penalty=0.0, gain_conf_level=0.0, softmax_temperature=0.0, execution_time=False, feature_fraction=0.7, bagging_size=1, bagging_example_fraction=0.7, feature_fraction_per_split=0.7, smoothing=0.0, allow_empty_trees=True, feature_compression_level=1, compress_ensemble=False, test_frequency=2147483647, feature=None, group_id=None, label=None, weight=None, **params)

Parameters

feature

see Columns.

group_id

see Columns.

label

see Columns.

weight

see Columns.

number_of_trees

Specifies the total number of decision trees to create in the ensemble. By creating more decision trees, you can potentially get better coverage, but the training time increases.

number_of_leaves

The maximum number of leaves (terminal nodes) that can be created in any tree. Higher values potentially increase the size of the tree and improve precision, but risk overfitting and require longer training times.

minimum_example_count_per_leaf

Minimum number of training instances required to form a leaf. That is, the minimal number of documents allowed in a leaf of a regression tree, out of the sub-sampled data. A 'split' means that features in each level of the tree (node) are randomly divided.
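
A minimal construction sketch for these tree-shape settings (the values below are illustrative, not recommended defaults): more trees and leaves tend to improve accuracy at the cost of training time and overfitting risk, while a larger minimum leaf size acts as regularization.

   from nimbusml.ensemble import FastForestRegressor

   # Illustrative hyperparameter values only.
   ffr = FastForestRegressor(
       number_of_trees=500,                # more trees: better coverage, slower training
       number_of_leaves=40,                # more leaves: finer splits, higher overfitting risk
       minimum_example_count_per_leaf=25)  # larger leaves: stronger regularization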

normalize

If Auto, the choice to normalize depends on the preference declared by the algorithm. This is the default choice. If No, no normalization is performed. If Yes, normalization is always performed. If Warn, a warning message is displayed when normalization is needed by the algorithm, but normalization is not performed. If normalization is performed, a MaxMin normalizer is used. This normalizer preserves sparsity by mapping zero to zero.

caching

Whether trainer should cache input training data.

shuffle_labels

Shuffle the labels on every iteration. Probably useful only when using this tree as a tree-leaf featurizer for multiclass classification.

number_of_quantile_samples

Number of labels to be sampled from each leaf to make the distribution.

parallel_trainer

Allows choosing a parallel FastTree learning algorithm.

number_of_threads

The number of threads to use.

random_state

The seed of the random number generator.

feature_selection_seed

The seed of the active feature selection.

entropy_coefficient

The entropy (regularization) coefficient between 0 and 1.

histogram_pool_size

The number of histograms in the pool (between 2 and number_of_leaves).

disk_transpose

Whether to utilize the disk or the data's native transposition facilities (where applicable) when performing the transpose.

feature_flocks

Whether to collectivize features during dataset preparation to speed up training.

categorical_split

Whether to do split based on multiple categorical feature values.

maximum_categorical_group_count_per_node

Maximum categorical split groups to consider when splitting on a categorical feature. Split groups are a collection of split points. This is used to reduce overfitting when there are many categorical features.

maximum_categorical_split_point_count

Maximum categorical split points to consider when splitting on a categorical feature.

minimum_example_fraction_for_categorical_split

Minimum categorical example percentage in a bin to consider for a split.

minimum_examples_for_categorical_split

Minimum categorical example count in a bin to consider for a split.
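
A hedged configuration sketch for categorical splitting (the values are illustrative only): enabling categorical_split lets the learner split on groups of categorical values, while the remaining settings bound how many groups and split points are considered and how much evidence a bin needs before it can participate in a split.

   from nimbusml.ensemble import FastForestRegressor

   # Illustrative values; tune these for your data.
   ffr = FastForestRegressor(
       categorical_split=True,
       maximum_categorical_group_count_per_node=32,
       maximum_categorical_split_point_count=32,
       minimum_example_fraction_for_categorical_split=0.01,
       minimum_examples_for_categorical_split=200)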

bias

Bias for calculating gradient for each feature bin for a categorical feature.

bundling

Bundle low-population bins. Bundle.None (0): no bundling; Bundle.AggregateLowPopulation (1): bundle low-population bins together; Bundle.Adjacent (2): bundle adjacent low-population bins.

maximum_bin_count_per_feature

Maximum number of distinct values (bins) per feature.

sparsify_threshold

Sparsity level needed to use sparse feature representation.

first_use_penalty

The feature first use penalty coefficient. This is a form of regularization that incurs a penalty for using a new feature when creating the tree. Increase this value to create trees that don't use many features.

feature_reuse_penalty

The feature re-use penalty (regularization) coefficient.

gain_conf_level

Tree fitting gain confidence requirement (should be in the range [0,1) ).

softmax_temperature

The temperature of the randomized softmax distribution for choosing the feature.

execution_time

Print execution time breakdown to stdout.

feature_fraction

The fraction of features (chosen randomly) to use on each iteration.

bagging_size

Number of trees in each bag (0 for disabling bagging).

bagging_example_fraction

Fraction of training examples used in each bag.

feature_fraction_per_split

The fraction of features (chosen randomly) to use on each split.
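
The randomization settings above combine as in the following sketch (values are illustrative): each bag draws a fraction of the training examples, each iteration considers a random fraction of the features, and each split considers a random fraction of those. Fixing random_state makes the sampling reproducible.

   from nimbusml.ensemble import FastForestRegressor

   # Illustrative randomization settings.
   ffr = FastForestRegressor(
       bagging_size=1,                   # one tree per bag; 0 disables bagging
       bagging_example_fraction=0.7,     # each bag sees 70% of the training examples
       feature_fraction=0.7,             # 70% of features considered per iteration
       feature_fraction_per_split=0.7,   # 70% of those features considered at each split
       random_state=123)                 # fix the seed for reproducibility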

smoothing

Smoothing parameter for tree regularization.

allow_empty_trees

When a root split is impossible, allow training to proceed.

feature_compression_level

The level of feature compression to use.

compress_ensemble

Compress the tree Ensemble.

test_frequency

Calculate metric values for train/valid/test every k rounds.

params

Additional arguments sent to the compute engine.

Examples


   ###############################################################################
   # FastForestRegressor
   from nimbusml import Pipeline, FileDataStream
   from nimbusml.datasets import get_dataset
   from nimbusml.ensemble import FastForestRegressor
   from nimbusml.feature_extraction.categorical import OneHotVectorizer

   # data input (as a FileDataStream)
   path = get_dataset('infert').as_filepath()
   data = FileDataStream.read_csv(path)
   print(data.head())
   #   age  case education  induced  parity  ... row_num  spontaneous  ...
   # 0   26     1    0-5yrs        1       6 ...       1            2  ...
   # 1   42     1    0-5yrs        1       1 ...       2            0  ...
   # 2   39     1    0-5yrs        2       6 ...       3            0  ...
   # 3   34     1    0-5yrs        2       4 ...       4            0  ...
   # 4   35     1   6-11yrs        1       3 ...       5            1  ...
   # define the training pipeline
   pipeline = Pipeline([
       OneHotVectorizer(columns={'edu': 'education'}),
       FastForestRegressor(feature=['parity', 'edu'], label='age')
   ])

   # train, predict, and evaluate
   metrics, predictions = pipeline.fit(data).test(data, output_scores=True)

   # print predictions
   print(predictions.head())
   #       Score
   # 0  33.039101
   # 1  32.743298
   # 2  33.039101
   # 3  32.778347
   # 4  32.413570

   # print evaluation metrics
   print(metrics)
   #    L1(avg)  L2(avg)  RMS(avg)  Loss-fn(avg)  R Squared
   # 0  4.160395  24.0247    4.9015       24.0247   0.125348

Remarks

Decision trees are non-parametric models that perform a sequence of simple tests on inputs. This decision procedure maps each input to an output found in the training dataset whose inputs were similar to the instance being processed. A decision is made at each node of the binary tree data structure based on a measure of similarity that maps each instance recursively through the branches of the tree until the appropriate leaf node is reached and the output decision is returned.

Decision trees have several advantages:

  • They are efficient in both computation and memory usage during training and prediction.

  • They can represent non-linear decision boundaries.

  • They perform integrated feature selection and classification.

  • They are resilient in the presence of noisy features.

Fast forest regression is a random forest and quantile regression forest implementation using the regression tree learner in FastTreesRegressor. The model consists of an ensemble of decision trees. Each tree in a decision forest outputs a Gaussian distribution by way of prediction. An aggregation is performed over the ensemble of trees to find a Gaussian distribution closest to the combined distribution for all trees in the model.

This decision forest regressor consists of an ensemble of decision trees. Generally, ensemble models provide better coverage and accuracy than single decision trees.
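
As a complement to the pipeline example above, the following minimal sketch trains the ensemble directly on an in-memory pandas DataFrame, assuming the scikit-learn-style fit/predict interface that nimbusml predictors expose; the toy data and hyperparameter values are made up for illustration.

   import pandas as pd
   from nimbusml.ensemble import FastForestRegressor

   # Toy data: predict y from two numeric features (values invented for illustration).
   X = pd.DataFrame({'x1': [0.1, 0.4, 0.5, 0.9, 1.2, 1.5],
                     'x2': [1.0, 0.8, 0.7, 0.3, 0.2, 0.1]})
   y = pd.Series([1.1, 1.7, 2.0, 2.9, 3.4, 3.8], name='y')

   # Each tree is trained on a sample of the data; predictions aggregate the per-tree outputs.
   model = FastForestRegressor(number_of_trees=50, number_of_leaves=4,
                               minimum_example_count_per_leaf=1)
   model.fit(X, y)
   print(model.predict(X))   # per-row scores from the aggregated ensemble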

Reference

Wikipedia: Random forest

Quantile regression forest

From Stumps to Trees to Forests

Methods

get_params

Get the parameters for this operator.

get_params(deep=False)

Parameters

deep

default value: False
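
For example, a hedged usage sketch (the parameter value is illustrative), assuming get_params returns the constructor parameters as a dict, as with scikit-learn estimators:

   from nimbusml.ensemble import FastForestRegressor

   ffr = FastForestRegressor(number_of_trees=200)
   params = ffr.get_params()           # dict of the operator's parameters
   print(params['number_of_trees'])    # 200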