public class RankingMetrics<T> extends Object implements Logging, scala.Serializable
Java users should use RankingMetrics$.of to create a RankingMetrics instance.
param: predictionAndLabels an RDD of (predicted ranking, ground truth set) pairs.
| Constructor and Description |
| --- |
| RankingMetrics(RDD<scala.Tuple2<Object,Object>> predictionAndLabels, scala.reflect.ClassTag<T> evidence$1) |
| Modifier and Type | Method and Description |
| --- | --- |
| double | meanAveragePrecision() Returns the mean average precision (MAP) of all the queries. |
| double | meanAveragePrecisionAt(int k) Returns the mean average precision (MAP) at ranking position k of all the queries. |
| double | ndcgAt(int k) Compute the average NDCG value of all the queries, truncated at ranking position k. |
| static <E,T extends Iterable<E>> RankingMetrics<E> | of(JavaRDD<scala.Tuple2<T,T>> predictionAndLabels) Creates a RankingMetrics instance (for Java users). |
| double | precisionAt(int k) Compute the average precision of all the queries, truncated at ranking position k. |
| double | recallAt(int k) Compute the average recall of all the queries, truncated at ranking position k. |
Methods inherited from class java.lang.Object: equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface org.apache.spark.internal.Logging: initializeForcefully, initializeLogging, initializeLogIfNecessary, initializeLogIfNecessary, isTraceEnabled, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning
public static <E,T extends Iterable<E>> RankingMetrics<E> of(JavaRDD<scala.Tuple2<T,T>> predictionAndLabels)

Creates a RankingMetrics instance (for Java users).

Parameters:
predictionAndLabels - a JavaRDD of (predicted ranking, ground truth set) pairs
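A minimal Java sketch of the factory method and the metric calls, assuming a local SparkContext; the integer document ids and query data below are made up for illustration:

```java
import java.util.Arrays;
import java.util.List;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.mllib.evaluation.RankingMetrics;

import scala.Tuple2;

public class RankingMetricsSketch {
  public static void main(String[] args) {
    JavaSparkContext sc = new JavaSparkContext("local", "RankingMetricsSketch");

    // Each pair is (predicted ranking, ground truth set); both sides are
    // Iterables over the same item type (here, Integer document ids).
    List<Tuple2<List<Integer>, List<Integer>>> data = Arrays.asList(
        new Tuple2<>(Arrays.asList(1, 6, 2, 7, 8), Arrays.asList(1, 2, 3, 4, 5)),
        new Tuple2<>(Arrays.asList(4, 1, 5, 6, 2), Arrays.asList(1, 2, 3)));

    JavaRDD<Tuple2<List<Integer>, List<Integer>>> predictionAndLabels =
        sc.parallelize(data);

    // Java users create the instance via the static factory method.
    RankingMetrics<Integer> metrics = RankingMetrics.of(predictionAndLabels);

    System.out.println("Precision@5 = " + metrics.precisionAt(5));
    System.out.println("MAP         = " + metrics.meanAveragePrecision());
    System.out.println("NDCG@5      = " + metrics.ndcgAt(5));
    System.out.println("Recall@5    = " + metrics.recallAt(5));

    sc.stop();
  }
}
```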
public double precisionAt(int k)

Compute the average precision of all the queries, truncated at ranking position k.

If, for a query, the ranking algorithm returns n (n < k) results, the precision value will be computed as #(relevant items retrieved) / k. This formula also applies when the size of the ground truth set is less than k.

If a query has an empty ground truth set, zero will be used as precision together with a log warning.

See the following paper for details: IR evaluation methods for retrieving highly relevant documents. K. Jarvelin and J. Kekalainen

Parameters:
k - the position at which to compute the truncated precision; must be positive
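As a worked illustration (a hypothetical query, not taken from the Javadoc): with k = 5, ground truth set {a, b, c}, and predicted ranking [a, d, b, e, f], two of the top five predictions are relevant, so:

```latex
% Hypothetical query: ground truth = {a, b, c}, prediction = [a, d, b, e, f], k = 5.
% Relevant items retrieved in the top 5: a and b.
\mathrm{precision@5}
  = \frac{\#(\text{relevant items retrieved})}{k}
  = \frac{2}{5} = 0.4
```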
public double meanAveragePrecision()

Returns the mean average precision (MAP) of all the queries. If a query has an empty ground truth set, its average precision will be zero and a log warning is generated.

public double meanAveragePrecisionAt(int k)

Returns the mean average precision (MAP) at ranking position k of all the queries.
Parameters:
k - the position at which to compute the truncated precision; must be positive
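For reference, a sketch of the standard IR formulation of average precision and MAP (the usual textbook definition, not copied from this page): for a query with ground truth set D and a predicted ranking of length n, let rel(i) = 1 when the i-th predicted item is in D and 0 otherwise. Then:

```latex
\mathrm{AP} = \frac{1}{|D|} \sum_{i=1}^{n} \mathrm{rel}(i) \cdot \mathrm{precision@}i,
\qquad
\mathrm{MAP} = \frac{1}{Q} \sum_{q=1}^{Q} \mathrm{AP}_q
```

meanAveragePrecisionAt(k) is the same quantity with the ranking truncated at position k.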
public double ndcgAt(int k)

Compute the average NDCG value of all the queries, truncated at ranking position k.

If a query has an empty ground truth set, zero will be used as NDCG together with a log warning.

See the following paper for details: IR evaluation methods for retrieving highly relevant documents. K. Jarvelin and J. Kekalainen

Parameters:
k - the position at which to compute the truncated NDCG; must be positive
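A sketch of the standard binary-relevance formulation behind this metric, assuming rel(i) in {0, 1} (the base of the logarithm cancels in the ratio, and IDCG@k is the DCG of an ideal ranking that lists the ground-truth items first):

```latex
\mathrm{DCG@}k = \sum_{i=1}^{k} \frac{2^{\mathrm{rel}(i)} - 1}{\log_2(i + 1)},
\qquad
\mathrm{NDCG@}k = \frac{\mathrm{DCG@}k}{\mathrm{IDCG@}k}
```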
public double recallAt(int k)

Compute the average recall of all the queries, truncated at ranking position k.

If, for a query, the ranking algorithm returns n results, the recall value will be computed as #(relevant items retrieved) / #(ground truth set). This formula also applies when the size of the ground truth set is less than k.

If a query has an empty ground truth set, zero will be used as recall together with a log warning.

See the following paper for details: IR evaluation methods for retrieving highly relevant documents. K. Jarvelin and J. Kekalainen

Parameters:
k - the position at which to compute the truncated recall; must be positive
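Continuing the hypothetical query used for precisionAt above: with ground truth set {a, b, c}, predicted ranking [a, d, b, e, f], and k = 5, two of the three ground-truth items appear in the top five results:

```latex
% Hypothetical query: ground truth = {a, b, c}, prediction = [a, d, b, e, f], k = 5.
% Ground-truth items retrieved: a and b.
\mathrm{recall@5}
  = \frac{\#(\text{relevant items retrieved})}{\#(\text{ground truth set})}
  = \frac{2}{3} \approx 0.67
```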