org.apache.spark.ml.regression
Construct a GBTRegressionModel
Decision trees in the ensemble.
Weights for the decision trees in the ensemble.
An alias for getOrDefault().
If false, the algorithm will pass trees to executors to match instances with nodes. If true, the algorithm will cache node IDs for each instance. Caching can speed up training of deeper trees. Users can set how often the cache should be checkpointed, or disable it, by setting checkpointInterval. (default = false)
Param for the checkpoint interval (>= 1), or -1 to disable checkpointing. E.g. 10 means that the cache will get checkpointed every 10 iterations. Note: this setting will be ignored if the checkpoint directory is not set in the SparkContext.
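For illustration only (a sketch assuming a SparkSession named spark and the corresponding setters on the GBTRegressor estimator), these two params are typically configured before fitting:

    import org.apache.spark.ml.regression.GBTRegressor

    // A checkpoint directory must be set on the SparkContext for
    // checkpointInterval to take effect; the path here is hypothetical.
    spark.sparkContext.setCheckpointDir("/tmp/spark-checkpoints")

    val gbt = new GBTRegressor()
      .setCacheNodeIds(true)      // cache node IDs for each instance
      .setCheckpointInterval(10)  // checkpoint the cache every 10 iterations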
Clears the user-supplied value for the input param.
Creates a copy of this instance with the same UID and some extra params. Subclasses should implement this method and set the return type properly. See defaultCopy().
Copies param values from this instance to another instance for params shared by them. This handles default Params and explicitly set Params separately. Default Params are copied from and to defaultParamMap, and explicitly set Params are copied from and to paramMap. Warning: This implicitly assumes that this Params instance and the target instance share the same set of default Params.
the target instance, which should work with the same set of default Params as this source instance
extra params to be copied to the target's paramMap
the target instance with param values copied
Default implementation of copy with extra params. It tries to create a new instance with the same UID. Then it copies the embedded and extra parameters over and returns the new instance.
Method to compute error or loss for every iteration of gradient boosting.
Dataset for validation.
The loss function used to compute error. Supported options: squared, absolute
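A minimal sketch, assuming a fitted GBTRegressionModel named model and a validation DataFrame named validation with the expected label and features columns:

    // One loss value per tree in the ensemble.
    val lossPerIteration: Array[Double] = model.evaluateEachIteration(validation, "squared")

    // For example, find the number of trees that minimized the validation loss.
    val bestNumTrees = lossPerIteration.zipWithIndex.minBy(_._1)._2 + 1
    println(s"Lowest validation loss reached with $bestNumTrees trees")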
Explains a param.
input param, must belong to this instance.
a string that contains the input param name, doc, and optionally its default value and the user-supplied value
Explains all params of this instance. See explainParam().
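For example, assuming a fitted model named model:

    // Print every param with its doc, default value, and user-supplied value.
    println(model.explainParams())

    // Explain a single param.
    println(model.explainParam(model.maxDepth))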
extractParamMap with no extra values.
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values less than user-supplied values less than extra.
Estimate of the importance of each feature. Each feature's importance is the average of its importance across all trees in the ensemble. The importance vector is normalized to sum to 1. This method is suggested by Hastie et al. (Hastie, Tibshirani, Friedman. "The Elements of Statistical Learning, 2nd Edition." 2001.) and follows the implementation from scikit-learn.
DecisionTreeRegressionModel.featureImportances
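An illustrative sketch, assuming a fitted model named model, that ranks feature indices by importance:

    // featureImportances is a Vector whose i-th entry is the normalized
    // importance of feature i (entries sum to 1).
    model.featureImportances.toArray
      .zipWithIndex
      .sortBy { case (importance, _) => -importance }
      .take(10)
      .foreach { case (importance, index) => println(f"feature $index%-5d $importance%.4f") }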
The number of features to consider for splits at each tree node. Supported options: "auto", "all", "onethird", "sqrt", "log2", "n".
Param for features column name.
Returns the SQL DataType corresponding to the FeaturesType type parameter. This is used by validateAndTransformSchema(). This workaround is needed since SQL has different APIs for Scala and Java. The default value is VectorUDT, but it may be overridden if FeaturesType is not Vector.
Optionally returns the user-supplied value of a param.
Gets the default value of a parameter.
Number of trees in the ensemble.
Gets the value of a param in the embedded param map or its default value. Throws an exception if neither is set.
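For example, assuming a fitted model named model:

    // Returns the user-supplied value if one was set, otherwise the default;
    // throws an exception if the param has neither.
    val depth: Int = model.getOrDefault(model.maxDepth)
    val step: Double = model.getOrDefault(model.stepSize)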
Gets a param by its name.
Tests whether the input param has a default value set.
Tests whether this instance contains a param with a given name.
Indicates whether this Model has a corresponding parent.
Criterion used for information gain calculation (case-insensitive). Supported: "variance". (default = variance)
Checks whether a param is explicitly set or has a default value.
Checks whether a param is explicitly set.
Param for label column name.
Loss function which GBT tries to minimize (case-insensitive). Supported: "squared" (L2) and "absolute" (L1). (default = squared)
Maximum number of bins used for discretizing continuous features and for choosing how to split on features at each node. More bins give higher granularity. Must be >= 2 and >= number of categories in any categorical feature. (default = 32)
Maximum depth of the tree (>= 0). E.g., depth 0 means 1 leaf node; depth 1 means 1 internal node + 2 leaf nodes. (default = 5)
Param for maximum number of iterations (>= 0).
Maximum memory in MB allocated to histogram aggregation. If too small, then 1 node will be split per iteration, and its aggregates may exceed this size. (default = 256 MB)
Minimum information gain for a split to be considered at a tree node. Should be >= 0.0. (default = 0.0)
Minimum number of instances each child must have after split. If a split causes the left or right child to have fewer than minInstancesPerNode, the split will be discarded as invalid. Should be >= 1. (default = 1)
Returns the number of features the model was trained on. If unknown, returns -1.
Number of trees in the ensemble.
Returns all params sorted by their names. The default implementation uses Java reflection to list all public methods that have no arguments and return Param. Note: Developers should not use this method in a constructor because we cannot guarantee that this variable gets initialized before other params.
The parent estimator that produced this model. For ensembles' component Models, this value can be null.
Predict label for the given features. This method is used to implement transform() and output predictionCol.
Param for prediction column name.
Saves this ML instance to the input path, a shortcut of write.save(path).
Param for random seed.
Sets a parameter in the embedded param map.
Sets a parameter (by name) in the embedded param map.
Sets a parameter in the embedded param map.
Sets default values for a list of params. Note: Java developers should use the single-parameter setDefault. Annotating this with varargs can cause compilation failures due to a Scala compiler bug. See SPARK-9268.
a list of param pairs that specify params and their default values to set respectively. Make sure that the params are initialized before this method gets called.
Sets a default value for a param.
param to set the default value. Make sure that this param is initialized before this method gets called.
the default value
Sets the parent of this model (Java API).
Param for step size (a.k.a. learning rate) in interval (0, 1] for shrinking the contribution of each estimator. (default = 0.1)
Fraction of the training data used for learning each decision tree, in range (0, 1]. (default = 1.0)
Full description of model
Summary of the model
Total number of nodes, summed over all trees in the ensemble.
Transforms dataset by reading from featuresCol, calling predict, and storing the predictions as a new column predictionCol.
input dataset
transformed dataset with predictionCol of type Double
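A minimal usage sketch, assuming a fitted model named model and a DataFrame testData that contains the features column:

    // Appends predictionCol (type Double) to the input dataset.
    val predictions = model.transform(testData)
    predictions.select("features", "prediction").show(5)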
Transforms the dataset with provided parameter map as additional parameters.
input dataset
additional parameters, overwrite embedded params
transformed dataset
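For example (the column name below is hypothetical), the prediction column can be overridden for a single call:

    import org.apache.spark.ml.param.ParamMap

    // The extra ParamMap overrides the embedded params for this call only.
    val predictions = model.transform(testData, ParamMap(model.predictionCol -> "gbt_prediction"))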
Transforms the dataset with optional parameters
input dataset
the first param pair, overwrite embedded params
other param pairs, overwrite embedded params
transformed dataset
:: DeveloperApi ::
Check transform validity and derive the output schema from the input schema. We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by Param.validate(). A typical implementation should first conduct verification on schema change and parameter validity, including complex parameter interaction checks.
:: DeveloperApi ::
Derives the output schema from the input schema and parameters, optionally with logging. This should be optimistic. If it is unclear whether the schema will be valid, then it should be assumed valid until proven otherwise.
Weights for each tree, zippable with trees
Trees in this ensemble. Warning: These have null parent Estimators.
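A sketch of inspecting the ensemble, assuming a fitted model named model:

    // trees and treeWeights line up index by index.
    model.trees.zip(model.treeWeights).zipWithIndex.foreach {
      case ((tree, weight), i) =>
        println(s"tree $i: weight = $weight, nodes = ${tree.numNodes}, depth = ${tree.depth}")
    }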
An immutable unique ID for the object and its derivatives.
Validates and transforms the input schema with the provided param map.
input schema
whether this is in fitting
SQL DataType for FeaturesType. E.g., VectorUDT for vector features.
output schema
Param for name of the column that indicates whether each row is for training or for validation. False indicates training; true indicates validation.
Threshold for stopping early when fit with validation is used. (This parameter is ignored when fit without validation is used.) The decision to stop early is based on the following logic: if the current loss on the validation set is greater than 0.01, the diff of validation error is compared to a relative tolerance of validationTol * (current loss on the validation set); if the current loss on the validation set is less than or equal to 0.01, the diff of validation error is compared to an absolute tolerance of validationTol * 0.01.
validationIndicatorCol
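The tolerance rule can be expressed as the following sketch (illustrative only, not the internal implementation; names are hypothetical):

    // Returns true when the improvement in validation loss falls below the
    // tolerance, i.e. when fitting with validation would stop early.
    def belowTolerance(previousLoss: Double, currentLoss: Double, validationTol: Double): Boolean = {
      val tolerance =
        if (currentLoss > 0.01) validationTol * currentLoss // relative tolerance
        else validationTol * 0.01                           // absolute tolerance
      (previousLoss - currentLoss) < tolerance
    }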
Returns an MLWriter instance for this ML instance.
(Since version 2.1.0) This method is deprecated and will be removed in 3.0.0.
A list of (hyper-)parameter keys this algorithm can take. Users can set and get the parameter values through setters and getters, respectively.
A list of advanced, expert-only (hyper-)parameter keys this algorithm can take. Users can set and get the parameter values through setters and getters, respectively.
Gradient-Boosted Trees (GBTs) model for regression. It supports both continuous and categorical features.
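A minimal end-to-end sketch, assuming a DataFrame training with "label" and "features" columns (features as an ML Vector):

    import org.apache.spark.ml.regression.{GBTRegressionModel, GBTRegressor}

    val gbt = new GBTRegressor()
      .setLabelCol("label")
      .setFeaturesCol("features")
      .setMaxIter(20)    // number of boosting iterations (trees)
      .setMaxDepth(5)
      .setStepSize(0.1)  // learning rate

    val model: GBTRegressionModel = gbt.fit(training)

    println(s"Trained ${model.getNumTrees} trees with ${model.totalNumNodes} nodes in total")
    println(model.toDebugString) // full description of the model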