| Interface | Description |
|---|---|
| ContinuousQuery | :: Experimental :: A handle to a query that is executing continuously in the background as new data arrives. |
| Encoder<T> | :: Experimental :: Used to convert a JVM object of type T to and from the internal Spark SQL representation. |
| Row | Represents one row of output from a relational operator. |
| Trigger | :: Experimental :: Used to indicate how often results should be produced by a ContinuousQuery. |
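
As a rough illustration of how the Encoder and Row interfaces above relate, here is a minimal Scala sketch; the Person case class and its values are assumptions made for the example, not part of the API.

```scala
import org.apache.spark.sql.{Encoder, Encoders, Row}

// Hypothetical domain type used only for this sketch.
case class Person(name: String, age: Long)

// An Encoder maps a JVM type to and from Spark SQL's internal format;
// Encoders.product derives one for a case class.
val personEncoder: Encoder[Person] = Encoders.product[Person]

// A Row is the untyped counterpart: one row of relational output,
// read positionally (or by field name when a schema is attached).
val row: Row = Row("Alice", 30L)
val name: String = row.getString(0)
val age: Long = row.getLong(1)
```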
| Class | Description |
|---|---|
| Column | :: Experimental :: A column that will be computed based on the data in a DataFrame. |
| ColumnName | :: Experimental :: A convenience class used for constructing schemas. |
| ContinuousQueryManager | :: Experimental :: A class to manage all the ContinuousQueries active on a SparkSession. |
| DataFrameNaFunctions | :: Experimental :: Functionality for working with missing data in DataFrames. |
| DataFrameReader | Interface used to load a Dataset from external storage systems (e.g. file systems, key-value stores). |
| DataFrameStatFunctions | :: Experimental :: Statistic functions for DataFrames. |
| DataFrameWriter | Interface used to write a Dataset to external storage systems (e.g. file systems, key-value stores). |
| Dataset<T> | A Dataset is a strongly typed collection of domain-specific objects that can be transformed in parallel using functional or relational operations. |
| DatasetHolder<T> | A container for a Dataset, used for implicit conversions in Scala. |
| Encoders | :: Experimental :: Methods for creating an Encoder. |
| ExperimentalMethods | :: Experimental :: Holder for experimental methods for the bravest. |
| functions | :: Experimental :: Functions available for DataFrame. |
| KeyValueGroupedDataset<K,V> | :: Experimental :: A Dataset that has been logically grouped by a user-specified grouping key. |
| ProcessingTime | :: Experimental :: A trigger that runs a query periodically based on the processing time. |
| RelationalGroupedDataset | A set of methods for aggregations on a DataFrame, created by Dataset.groupBy. |
| RelationalGroupedDataset.CubeType$ | Indicates a CUBE grouping. |
| RelationalGroupedDataset.GroupByType$ | Indicates a GROUP BY grouping. |
| RelationalGroupedDataset.PivotType$ | |
| RelationalGroupedDataset.RollupType$ | Indicates a ROLLUP grouping. |
| RowFactory | A factory class used to construct Row objects. |
| RuntimeConfig | Runtime configuration interface for Spark. |
| SinkStatus | :: Experimental :: Status and metrics of a streaming Sink. |
| SourceStatus | :: Experimental :: Status and metrics of a streaming Source. |
| SparkSession | The entry point to programming Spark with the Dataset and DataFrame API. |
| SparkSession.Builder | Builder for SparkSession. |
| SQLContext | The entry point for working with structured data (rows and columns) in Spark 1.x. |
| SQLImplicits | A collection of implicit methods for converting common Scala objects into DataFrames. |
| TypedColumn<T,U> | |
| UDFRegistration | Functions for registering user-defined functions. |
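
A minimal sketch of how the central classes above (SparkSession, DataFrameReader, Dataset, Column, RelationalGroupedDataset, DataFrameWriter) are typically combined in Scala; the application name, file paths, and column names are assumptions made for the example.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.avg

// SparkSession.Builder constructs the single entry point.
val spark = SparkSession.builder()
  .appName("sql-package-sketch")   // assumed application name
  .master("local[*]")
  .getOrCreate()

// SQLImplicits: enables $"col" syntax and Scala-object conversions.
import spark.implicits._

// DataFrameReader loads a Dataset from external storage (path is illustrative).
val people = spark.read.json("/tmp/people.json")

// Column expressions, RelationalGroupedDataset (via groupBy), and functions.
val byCity = people
  .filter($"age" > 21)
  .groupBy($"city")
  .agg(avg($"age").as("avg_age"))

// DataFrameWriter persists the result (path is illustrative).
byCity.write.parquet("/tmp/by_city")

spark.stop()
```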
| Enum | Description |
|---|---|
| SaveMode | SaveMode is used to specify the expected behavior of saving a DataFrame to a data source. |
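
A brief sketch of passing SaveMode to a DataFrameWriter, reusing the SparkSession from the sketch above; the output path is illustrative.

```scala
import org.apache.spark.sql.SaveMode

// Overwrite replaces any existing data at the target path; the other
// modes are Append, ErrorIfExists (the default) and Ignore.
spark.range(10).write
  .mode(SaveMode.Overwrite)
  .parquet("/tmp/save_mode_demo")
```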
| Exception | Description |
|---|---|
| AnalysisException | :: DeveloperApi :: Thrown when a query fails to analyze, usually because the query itself is invalid. |
| ContinuousQueryException | :: Experimental :: Exception that stopped a ContinuousQuery. |
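
A short sketch of catching AnalysisException when a query references a column that does not exist; it reuses the SparkSession and people Dataset from the earlier sketch, and the view and column names are made up for illustration.

```scala
import org.apache.spark.sql.AnalysisException

people.createOrReplaceTempView("people")

try {
  // Fails analysis because the referenced column does not exist.
  spark.sql("SELECT no_such_column FROM people").show()
} catch {
  case e: AnalysisException =>
    println(s"Query failed to analyze: ${e.getMessage}")
}
```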