an RDD of (predicted ranking, ground truth set) pairs.
Returns the mean average precision (MAP) of all the queries. If a query has an empty ground truth set, its average precision is zero and a log warning is generated.
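As a minimal sketch, assuming this is Spark MLlib's RankingMetrics, an active SparkContext named sc, and made-up item IDs:

{{{
import org.apache.spark.mllib.evaluation.RankingMetrics

// Each pair is (predicted ranking, ground truth set) for one query.
val predictionAndLabels = sc.parallelize(Seq(
  (Array(1, 2, 3, 4, 5), Array(1, 2, 5)), // query 1: items 1, 2 and 5 are relevant
  (Array(6, 7, 8, 9), Array(7, 10))       // query 2: item 10 was never retrieved
))

val metrics = new RankingMetrics(predictionAndLabels)
println(metrics.meanAveragePrecision)
}}}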
Compute the average NDCG value of all the queries, truncated at ranking position k. The discounted cumulative gain at position k is computed as

  DCG@k = \sum_{i=1}^{k} (2^{rel_i} - 1) / \log(i + 1),

where rel_i is the relevance of the i-th item. The NDCG is obtained by dividing this DCG value by the ideal DCG computed on the ground truth set. In the current implementation, the relevance value is binary.
If a query has an empty ground truth set, zero is used as its NDCG and a log warning is generated.
See the following paper for details:
IR evaluation methods for retrieving highly relevant documents. K. Jarvelin and J. Kekalainen
the position at which to compute the truncated NDCG; must be positive
the average NDCG at the first k ranking positions
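A sketch of ndcgAt under the same assumptions (Spark MLlib, an active SparkContext sc, made-up data), with the binary-relevance terms of the formula above written out by hand:

{{{
import org.apache.spark.mllib.evaluation.RankingMetrics

val metrics = new RankingMetrics(sc.parallelize(Seq(
  (Array(1, 2, 3), Array(1, 3)) // items 1 and 3 are relevant
)))

// Binary relevance: hits at positions 1 and 3 give DCG@3 = 1/log(2) + 1/log(4);
// the ideal ranking puts both relevant items first, giving 1/log(2) + 1/log(3).
println(metrics.ndcgAt(3)) // DCG@3 divided by the ideal DCG
}}}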
Compute the average precision of all the queries, truncated at ranking position k.
If the ranking algorithm returns n (n < k) results for a query, the precision value is computed as #(relevant items retrieved) / k. This formula also applies when the size of the ground truth set is less than k.
If a query has an empty ground truth set, zero is used as its precision and a log warning is generated.
See the following paper for details:
IR evaluation methods for retrieving highly relevant documents. K. Jarvelin and J. Kekalainen
the position at which to compute the truncated precision; must be positive
the average precision at the first k ranking positions
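A sketch of precisionAt under the same assumptions; both calls can be checked against the formula above:

{{{
import org.apache.spark.mllib.evaluation.RankingMetrics

val metrics = new RankingMetrics(sc.parallelize(Seq(
  (Array(1, 2, 3, 4, 5), Array(1, 2, 5)) // items 1, 2 and 5 are relevant
)))

// Top-3 predictions are [1, 2, 3]; two are relevant, so precision at 3 is 2.0 / 3.
println(metrics.precisionAt(3))

// Only 5 results were returned but the divisor stays k: all 3 relevant items are
// retrieved, so precision at 10 is 3.0 / 10.
println(metrics.precisionAt(10))
}}}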
::Experimental:: Evaluator for ranking algorithms.
Java users should use RankingMetrics$.of to create a RankingMetrics instance.