Interface | Description
---|---
`Encoder<T>` | :: Experimental :: Used to convert a JVM object of type `T` to and from the internal Spark SQL representation.
`Row` | Represents one row of output from a relational operator.
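To make the two interfaces concrete, the sketch below builds a `Row` by hand and obtains `Encoder` instances from the `Encoders` factory listed in the class table. It assumes only a Spark SQL dependency on the classpath; the `Person` case class is illustrative.

```scala
import org.apache.spark.sql.{Encoder, Encoders, Row}

// A Row is an ordered sequence of fields, accessed by ordinal.
val row: Row = Row("Alice", 30)
val name = row.getString(0) // field 0 as a String
val age  = row.getInt(1)    // field 1 as an Int

// Encoders convert JVM objects to and from Spark's internal format.
// The Encoders factory covers primitives and case classes (products),
// so an Encoder rarely needs to be implemented by hand.
case class Person(name: String, age: Int) // illustrative type
val intEncoder: Encoder[Int]       = Encoders.scalaInt
val personEncoder: Encoder[Person] = Encoders.product[Person]
```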
Class | Description
---|---
`Column` | A column that will be computed based on the data in a `DataFrame`.
`ColumnName` | A convenient class used for constructing schemas.
`DataFrameNaFunctions` | Functionality for working with missing data in `DataFrame`s.
`DataFrameReader` | Interface used to load a `Dataset` from external storage systems (e.g. file systems, key-value stores).
`DataFrameStatFunctions` | Statistic functions for `DataFrame`s.
`DataFrameWriter<T>` | Interface used to write a `Dataset` to external storage systems (e.g. file systems, key-value stores).
`Dataset<T>` | A strongly typed collection of domain-specific objects that can be transformed in parallel using functional or relational operations.
`DatasetHolder<T>` | A container for a `Dataset`, used for implicit conversions in Scala.
`Encoders` | :: Experimental :: Methods for creating an `Encoder`.
`ExperimentalMethods` | :: Experimental :: Holder for experimental methods for the bravest.
`ForeachWriter<T>` | :: Experimental :: A class to consume data generated by a `StreamingQuery`.
`functions` | Functions available for `DataFrame` operations.
`KeyValueGroupedDataset<K,V>` | :: Experimental :: A `Dataset` that has been logically grouped by a user-specified grouping key.
`RelationalGroupedDataset` | A set of methods for aggregations on a `DataFrame`, created by `Dataset.groupBy`.
`RelationalGroupedDataset.CubeType$` | Indicates a CUBE grouping.
`RelationalGroupedDataset.GroupByType$` | Indicates a GROUP BY grouping.
`RelationalGroupedDataset.PivotType$` | Indicates a PIVOT grouping.
`RelationalGroupedDataset.RollupType$` | Indicates a ROLLUP grouping.
`RowFactory` | A factory class used to construct `Row` objects.
`RuntimeConfig` | Runtime configuration interface for Spark.
`SparkSession` | The entry point to programming Spark with the Dataset and DataFrame API.
`SparkSession.Builder` | Builder for `SparkSession`.
`SQLContext` | The entry point for working with structured data (rows and columns) in Spark 1.x.
`SQLImplicits` | A collection of implicit methods for converting common Scala objects into `Dataset`s.
`TypedColumn<T,U>` | A `Column` where an `Encoder` has been given for the expected input and return type.
`UDFRegistration` | Functions for registering user-defined functions.
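Several of the classes above meet in a few lines of code: `SparkSession.Builder` creates the session, `SQLImplicits` (imported via `spark.implicits._`) turns local collections into `Dataset`s, and `groupBy` yields a `RelationalGroupedDataset`. A minimal local sketch, assuming a Spark runtime; the app name, data, and `Person` type are illustrative:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.avg

case class Person(name: String, age: Int) // illustrative type

// SparkSession.Builder: getOrCreate reuses a running session if any.
val spark = SparkSession.builder()
  .appName("example")   // illustrative app name
  .master("local[*]")   // run locally on all cores
  .getOrCreate()

import spark.implicits._ // SQLImplicits: enables toDS(), $"col", etc.

val people = Seq(Person("Alice", 30), Person("Bob", 25)).toDS()

// Relational (untyped) and functional (typed) operations compose:
val avgAge = people.groupBy($"name").agg(avg($"age")) // RelationalGroupedDataset
val adults = people.filter(_.age > 26)                // typed Dataset[Person]

spark.stop()
```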
Enum | Description
---|---
`SaveMode` | Used to specify the expected behavior of saving a DataFrame to a data source.
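The sketch below shows where `SaveMode` plugs in: it is passed to `DataFrameWriter.mode` to declare what should happen when the target already exists (`ErrorIfExists` is the default). The session setup and output path are illustrative, and a Spark runtime is assumed.

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder()
  .appName("savemode-example") // illustrative
  .master("local[*]")
  .getOrCreate()
import spark.implicits._

val df = Seq(("Alice", 30), ("Bob", 25)).toDF("name", "age")

df.write
  .mode(SaveMode.Overwrite) // replace any existing data at the target
  .parquet("/tmp/people")   // illustrative path

spark.stop()
```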
Exception | Description
---|---
`AnalysisException` | Thrown when a query fails to analyze, usually because the query itself is invalid.
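Because `AnalysisException` is raised when the query plan is checked rather than when a job runs, it can be caught eagerly. A sketch assuming a Spark runtime; the session setup and column names are illustrative:

```scala
import org.apache.spark.sql.{AnalysisException, SparkSession}

val spark = SparkSession.builder()
  .appName("analysis-example") // illustrative
  .master("local[*]")
  .getOrCreate()
import spark.implicits._

val df = Seq(("Alice", 30)).toDF("name", "age")

try {
  // Referencing a non-existent column fails at analysis time,
  // before any Spark job is launched.
  df.select("no_such_column").show()
} catch {
  case e: AnalysisException => println(s"invalid query: ${e.getMessage}")
}

spark.stop()
```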