public class DoubleRDDFunctions extends Object implements Logging, scala.Serializable
Extra functions available on RDDs of Doubles, through an implicit conversion. Import
org.apache.spark.SparkContext._
at the top of your program to use these functions.

| Constructor and Description |
|---|
| DoubleRDDFunctions(RDD<Object> self) |
| Modifier and Type | Method and Description |
|---|---|
| long[] | histogram(double[] buckets, boolean evenBuckets) Compute a histogram using the provided buckets. |
| scala.Tuple2<double[],long[]> | histogram(int bucketCount) Compute a histogram of the data using bucketCount number of buckets evenly spaced between the minimum and maximum of the RDD. |
| double | mean() Compute the mean of this RDD's elements. |
| PartialResult<BoundedDouble> | meanApprox(long timeout, double confidence) :: Experimental :: Approximate operation to return the mean within a timeout. |
| double | sampleStdev() Compute the sample standard deviation of this RDD's elements (which corrects for bias in estimating the standard deviation by dividing by N-1 instead of N). |
| double | sampleVariance() Compute the sample variance of this RDD's elements (which corrects for bias in estimating the variance by dividing by N-1 instead of N). |
| StatCounter | stats() Return a StatCounter object that captures the mean, variance and count of the RDD's elements in one operation. |
| double | stdev() Compute the standard deviation of this RDD's elements. |
| double | sum() Add up the elements in this RDD. |
| PartialResult<BoundedDouble> | sumApprox(long timeout, double confidence) :: Experimental :: Approximate operation to return the sum within a timeout. |
| double | variance() Compute the variance of this RDD's elements. |
Methods inherited from class java.lang.Object: equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface org.apache.spark.Logging: initializeIfNecessary, initializeLogging, isTraceEnabled, log_, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning
public DoubleRDDFunctions(RDD<Object> self)
public double sum()
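As a minimal sketch (the SparkContext setup and the data below are assumed, not part of this API page), these functions are normally reached through the implicit conversion recommended above rather than by calling the constructor directly:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._            // brings the RDD[Double] implicit conversion into scope
import org.apache.spark.rdd.DoubleRDDFunctions

object SumExample {
  def main(args: Array[String]): Unit = {
    // Assumed local setup; in a real job the SparkContext typically already exists.
    val sc = new SparkContext(new SparkConf().setAppName("sum-example").setMaster("local[*]"))

    val data = sc.parallelize(Seq(1.0, 2.0, 3.0, 4.0))

    // sum() is provided by DoubleRDDFunctions via the implicit conversion.
    println(data.sum())                            // 10.0

    // Equivalent explicit wrapping using the constructor shown above.
    println(new DoubleRDDFunctions(data).sum())    // 10.0

    sc.stop()
  }
}
```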
public StatCounter stats()
Return a StatCounter object that captures the mean, variance and count of the RDD's elements in one operation.

public double mean()
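A minimal sketch of stats() versus the individual actions, assuming an existing SparkContext sc and the implicit conversion in scope (the data is illustrative):

```scala
val xs = sc.parallelize(Seq(1.0, 2.0, 3.0, 4.0, 5.0))

// One pass over the data: count, mean and variance (plus sum, min, max) in a single StatCounter.
val st = xs.stats()
println(s"count=${st.count} mean=${st.mean} variance=${st.variance}")

// Each of these is a separate action over the RDD.
println(xs.mean())      // 3.0
println(xs.variance())  // 2.0 (population variance)
```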
public double variance()
public double stdev()
public double sampleStdev()
public double sampleVariance()
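The only difference between the plain and the sample variants is the divisor (N versus N-1); a short sketch under the same assumptions as the earlier examples:

```scala
val ys = sc.parallelize(Seq(1.0, 2.0, 3.0, 4.0, 5.0))  // N = 5, mean = 3.0, squared deviations sum to 10.0

// Population statistics: divide by N.
println(ys.variance())        // 10.0 / 5 = 2.0
println(ys.stdev())           // sqrt(2.0), about 1.414

// Sample statistics: divide by N - 1 to correct the bias.
println(ys.sampleVariance())  // 10.0 / 4 = 2.5
println(ys.sampleStdev())     // sqrt(2.5), about 1.581
```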
public PartialResult<BoundedDouble> meanApprox(long timeout, double confidence)
public PartialResult<BoundedDouble> sumApprox(long timeout, double confidence)
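A hedged sketch of the approximate operations, again assuming sc and the implicit conversion; the timeout and confidence values are arbitrary, and the accessors shown (PartialResult.getFinalValue, BoundedDouble.mean/low/high/confidence) come from org.apache.spark.partial:

```scala
val zs = sc.parallelize(1 to 1000000).map(_.toDouble)

// Ask for the mean, waiting at most 1000 ms, with a 95% confidence level.
val approxMean = zs.meanApprox(timeout = 1000L, confidence = 0.95)

// getFinalValue() blocks until the timeout (or job completion) and returns a BoundedDouble
// holding the estimate and its confidence bounds.
val bound = approxMean.getFinalValue()
println(s"mean=${bound.mean} in [${bound.low}, ${bound.high}] at confidence ${bound.confidence}")

// sumApprox() follows the same pattern.
println(zs.sumApprox(1000L, 0.95).getFinalValue())
```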
public scala.Tuple2<double[],long[]> histogram(int bucketCount)
public long[] histogram(double[] buckets, boolean evenBuckets)
Note: if your histogram is evenly spaced (e.g. [0, 10, 20, 30]), this can be switched from an O(log n) insertion to an O(1) insertion per element (where n = # buckets) by setting evenBuckets to true. The buckets must be sorted and must not contain any duplicates, and the buckets array must have at least two elements. All NaN entries are treated the same: if you have a NaN bucket, it must be the maximum value of the last position, and all NaN entries will be counted in that bucket.
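A short sketch of the two histogram overloads, under the same assumptions as the earlier examples (the data and bucket boundaries are illustrative):

```scala
val values = sc.parallelize(Seq(1.0, 5.0, 11.0, 12.0, 25.0, 29.0))

// Evenly spaced buckets derived from the min and max: returns (bucket boundaries, counts).
val (boundaries, counts) = values.histogram(3)
println(boundaries.mkString(", "))   // 1.0, 10.33..., 19.66..., 29.0
println(counts.mkString(", "))       // 2, 2, 2

// Caller-provided buckets [0, 10), [10, 20), [20, 30]; evenBuckets = true enables the O(1) lookup path.
val counted = values.histogram(Array(0.0, 10.0, 20.0, 30.0), evenBuckets = true)
println(counted.mkString(", "))      // 2, 2, 2
```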