public final class RandomForestRegressionModel extends PredictionModel<Vector,RandomForestRegressionModel> implements scala.Serializable
Random Forest model for regression.
It supports both continuous and categorical features.
param: _trees Decision trees in the ensemble.
param: numFeatures Number of features used by this model.

| Modifier and Type | Method and Description |
|---|---|
| RandomForestRegressionModel | copy(ParamMap extra) Creates a copy of this instance with the same UID and some extra params. |
| Vector | featureImportances() Estimate of the importance of each feature. |
| Param<java.lang.String> | featuresCol() Param for features column name. |
| static RandomForestRegressionModel | fromOld(RandomForestModel oldModel, RandomForestRegressor parent, scala.collection.immutable.Map<java.lang.Object,java.lang.Object> categoricalFeatures, int numFeatures) (private[ml]) Convert a model from the old API. |
| java.lang.String | getFeaturesCol() |
| java.lang.String | getLabelCol() |
| java.lang.String | getPredictionCol() |
| Param<java.lang.String> | labelCol() Param for label column name. |
| int | numFeatures() Returns the number of features the model was trained on. |
| protected double | predict(Vector features) Predict label for the given features. |
| Param<java.lang.String> | predictionCol() Param for prediction column name. |
| java.lang.String | toString() |
| protected DataFrame | transformImpl(DataFrame dataset) |
| org.apache.spark.ml.tree.DecisionTreeModel[] | trees() |
| double[] | treeWeights() |
| java.lang.String | uid() An immutable unique ID for the object and its derivatives. |
| StructType | validateAndTransformSchema(StructType schema, boolean fitting, DataType featuresDataType) Validates and transforms the input schema with the provided param map. |
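For orientation, a minimal usage sketch in Java (not part of this class's API surface): it assumes Spark 1.x-style DataFrames named trainingData and testData with a Vector "features" column and a double "label" column; the variable names, column names, and parameter values are illustrative only.

```java
import org.apache.spark.ml.regression.RandomForestRegressor;
import org.apache.spark.ml.regression.RandomForestRegressionModel;
import org.apache.spark.sql.DataFrame;

// Configure the estimator; fit() produces a RandomForestRegressionModel.
RandomForestRegressor rf = new RandomForestRegressor()
    .setFeaturesCol("features")   // assumed feature column name
    .setLabelCol("label")         // assumed label column name
    .setNumTrees(20)
    .setMaxDepth(5);

RandomForestRegressionModel model = rf.fit(trainingData);

// Score new data; transform() appends the prediction column.
DataFrame predictions = model.transform(testData);
predictions.select("prediction", "label", "features").show(5);
```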
Methods inherited from class PredictionModel: featuresDataType, setFeaturesCol, setPredictionCol, transform, transformSchema
Methods inherited from class Transformer: transform, transform, transform
Methods inherited from class PipelineStage: transformSchema
Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
Methods inherited from interface Params: clear, copyValues, defaultCopy, defaultParamMap, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, paramMap, params, set, set, set, setDefault, setDefault, shouldOwn, validateParams
Methods inherited from interface Logging: initializeIfNecessary, initializeLogging, isTraceEnabled, log_, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning

public static RandomForestRegressionModel fromOld(RandomForestModel oldModel, RandomForestRegressor parent, scala.collection.immutable.Map<java.lang.Object,java.lang.Object> categoricalFeatures, int numFeatures)
(private[ml]) Convert a model from the old API.
public java.lang.String uid()
An immutable unique ID for the object and its derivatives.
Specified by: uid in interface Identifiable

public int numFeatures()
Returns the number of features the model was trained on.
Overrides: numFeatures in class PredictionModel<Vector,RandomForestRegressionModel>

public org.apache.spark.ml.tree.DecisionTreeModel[] trees()
public double[] treeWeights()
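trees() and treeWeights() expose the fitted ensemble. A small inspection sketch, assuming a fitted model named model as in the earlier example; the printed summary is illustrative, and the note about uniform weights is an assumption about how a plain random forest weights its trees rather than something stated on this page.

```java
// Inspect the fitted ensemble.
org.apache.spark.ml.tree.DecisionTreeModel[] trees = model.trees();
double[] weights = model.treeWeights();

System.out.println("Ensemble size: " + trees.length);
for (int i = 0; i < trees.length; i++) {
    // For a plain random forest the weights are expected to be uniform (assumption).
    System.out.println("tree " + i + ": weight = " + weights[i]);
}
```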
protected DataFrame transformImpl(DataFrame dataset)
Overrides: transformImpl in class PredictionModel<Vector,RandomForestRegressionModel>

protected double predict(Vector features)
Predict label for the given features. This internal method is used to implement transform() and output predictionCol.
Specified by: predict in class PredictionModel<Vector,RandomForestRegressionModel>
Parameters: features - (undocumented)

public RandomForestRegressionModel copy(ParamMap extra)
Creates a copy of this instance with the same UID and some extra params.
Specified by: copy in interface Params
Specified by: copy in class Model<RandomForestRegressionModel>
Parameters: extra - (undocumented)
See Also: defaultCopy()
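To make "some extra params" concrete, a sketch of copying a fitted model with an overriding ParamMap; the column name below is illustrative.

```java
import org.apache.spark.ml.param.ParamMap;

// Build an overriding ParamMap and copy the model with it.
ParamMap extra = ParamMap.empty().put(model.predictionCol().w("rf_prediction"));
RandomForestRegressionModel copied = model.copy(extra);
// The copy keeps the same uid(); only the extra params differ.
```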
public java.lang.String toString()
Specified by: toString in interface Identifiable
Overrides: toString in class java.lang.Object

public Vector featureImportances()
Estimate of the importance of each feature.
This generalizes the idea of "Gini" importance to other losses, following the explanation of Gini importance from "Random Forests" documentation by Leo Breiman and Adele Cutler, and following the implementation from scikit-learn.
This feature importance is calculated as follows:
- Average over trees:
  - importance(feature j) = sum (over nodes which split on feature j) of the gain, where the gain is scaled by the number of instances passing through the node
  - Normalize the importances for the tree based on the total number of training instances used to build the tree.
- Normalize the feature importance vector to sum to 1.
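A brief sketch of reading the resulting vector, assuming a fitted model named model; feature indices are printed because this page does not associate names with indices.

```java
// featureImportances() returns one non-negative value per feature, summing to 1.
double[] importances = model.featureImportances().toArray();
for (int j = 0; j < importances.length; j++) {
    System.out.println("feature " + j + ": importance = " + importances[j]);
}
```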
public StructType validateAndTransformSchema(StructType schema, boolean fitting, DataType featuresDataType)
Parameters:
schema - input schema
fitting - whether this is in fitting
featuresDataType - SQL DataType for FeaturesType. E.g., VectorUDT for vector features.

public Param<java.lang.String> labelCol()
public java.lang.String getLabelCol()
public Param<java.lang.String> featuresCol()
public java.lang.String getFeaturesCol()
public Param<java.lang.String> predictionCol()
public java.lang.String getPredictionCol()
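As a closing sketch of the column Params above: the get*Col() methods read the current values, and the set*Col() setters inherited from PredictionModel (see the inherited-methods list) change them. The column name below is illustrative, and the quoted defaults are the usual Spark ML defaults rather than something stated on this page.

```java
// Rename the prediction output column on the fitted model, then read the column Params back.
model.setPredictionCol("rf_prediction");       // setter inherited from PredictionModel
System.out.println(model.getPredictionCol());  // "rf_prediction"
System.out.println(model.getFeaturesCol());    // "features" by default (assumption)
System.out.println(model.getLabelCol());       // "label" by default (assumption)
```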