Pyspark-config
Pyspark-Config is a Python module for data processing in PySpark by means of a configuration file, making it possible to build distributed data pipelines with configurable inputs, transformations, and outputs.
Getting Started
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
Installation
To install the current release (Ubuntu and Windows):
$ pip install pyspark_config
Dependencies
Python (>= 3.6)
Pyspark (>= 2.4.5)
PyYaml (>= 5.3.1)
Dataclasses (>= 0.0.0; backport needed only on Python 3.6)
Example
Given the YAML configuration file '../example.yaml':
input:
  sources:
    - type: 'Parquet'
      label: 'parquet'
      parquet_path: '../table.parquet'

transformations:
  - type: "Select"
    cols: ['A', 'B']
  - type: "Concatenate"
    cols: ['A', 'B']
    name: 'Concatenation_AB'
    delimiter: "-"

output:
  - type: 'Parquet'
    name: "example"
    path: "../outputs"
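For a quick end-to-end test, a small input table matching this configuration can be written to '../table.parquet' with plain PySpark first. The rows below are purely illustrative; any DataFrame with columns 'A' and 'B' will do:

# Illustrative only: create a small Parquet table with the columns
# 'A' and 'B' at the path the configuration expects.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("make-example-input").getOrCreate()
df = spark.createDataFrame(
    [("a1", "b1"), ("a2", "b2")],  # sample rows
    ["A", "B"],
)
df.write.mode("overwrite").parquet("../table.parquet")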
With the input source saved in '../table.parquet', the following code can then be applied:
from pyspark_config import Config
from pyspark_config.transformations.transformations import *
from pyspark_config.output import *
from pyspark_config.input import *

# Load the pipeline definition from the YAML file and run it end to end
config_path = "../example.yaml"
configuration = Config()
configuration.load(config_path)
configuration.apply()
The output will then be saved in '../outputs/example.parquet'.
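For reference, the configured pipeline corresponds roughly to the following hand-written PySpark code. This is a sketch of the steps the configuration describes, not the module's internal implementation:

# Rough equivalent of the configured pipeline (assumption: the plan
# pyspark-config builds internally may differ in detail).
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("example").getOrCreate()

df = spark.read.parquet("../table.parquet")    # input: Parquet source
df = df.select("A", "B")                       # Select transformation
df = df.withColumn(                            # Concatenate transformation
    "Concatenation_AB", F.concat_ws("-", "A", "B")
)
df.write.parquet("../outputs/example.parquet") # output: Parquet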
Changelog
See the changelog for a history of notable changes to pyspark-config.
License
This project is distributed under the 3-Clause BSD license; see the LICENSE.md file for details.