FileConfig
Summary
Configuration used to reference a file or directory (S3, etc.)
The FileConfig class is used to create a reference to a file or directory of files in S3, GCS, HDFS, or DBFS.
The schema of the data source is inferred from the underlying file(s). It can also be modified using the post_processor parameter.
This class is used as an input to a BatchSource's batch_config parameter.
Declaring this configuration class alone will not register a Data Source. Instead, declare it as part of a BatchSource that takes this configuration class instance as a parameter.
Example
from tecton import FileConfig, BatchSource

def convert_temperature(df):
    from pyspark.sql.functions import udf, col
    from pyspark.sql.types import DoubleType

    # Convert the incoming PySpark DataFrame's temperature column from Celsius to Fahrenheit
    udf_convert = udf(lambda x: x * 1.8 + 32.0, DoubleType())
    converted_df = df.withColumn("Fahrenheit", udf_convert(col("Temperature"))).drop("Temperature")
    return converted_df

# Declare a FileConfig, which can be used as a parameter to a `BatchSource`
ad_impressions_file_ds = FileConfig(
    uri="s3://tecton.ai.public/data/ad_impressions_sample.parquet",
    file_format="parquet",
    timestamp_field="timestamp",
    post_processor=convert_temperature,
)

# This FileConfig can then be included as a parameter in a BatchSource declaration.
# For example:
ad_impressions_batch = BatchSource(name="ad_impressions_batch", batch_config=ad_impressions_file_ds)
If your files are partitioned, simply provide the path to the root folder. For
example: uri = "s3://<bucket-name>/<root-folder>/"
Tecton will use Spark partition discovery to find all partitions and infer the schema.
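For instance, a partitioned dataset might be laid out as sketched below. The bucket name, folder names, and partition column are illustrative placeholders, not values from this guide:

from tecton import FileConfig

# Hypothetical layout of a partitioned Parquet dataset:
#   s3://my-bucket/ad_impressions/ds=2023-01-01/part-00000.parquet
#   s3://my-bucket/ad_impressions/ds=2023-01-02/part-00000.parquet
# Pointing uri at the root folder lets Spark partition discovery find every
# partition and include the partition column (ds) in the inferred schema.
partitioned_impressions_ds = FileConfig(
    uri="s3://my-bucket/ad_impressions/",
    file_format="parquet",
    timestamp_field="timestamp",
)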
When reading a highly-partitioned file, Tecton recommends setting the schema_uri parameter to speed up schema inference. For more details, review our documentation.
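Continuing the sketch above (paths remain hypothetical), schema_uri points at a single file or subpath under uri, so the schema can be inferred from that one file instead of scanning every partition:

from tecton import FileConfig

# Hypothetical sketch: schema_uri references one file under the dataset's root
# folder; Tecton infers the schema from that file rather than from all partitions.
partitioned_impressions_ds = FileConfig(
    uri="s3://my-bucket/ad_impressions/",
    schema_uri="s3://my-bucket/ad_impressions/ds=2023-01-01/part-00000.parquet",
    file_format="parquet",
    timestamp_field="timestamp",
)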