Poniard

Introduction

A poniard /ˈpɒnjərd/ or poignard (Fr.) is a long, lightweight
thrusting knife (Wikipedia).

Poniard is a scikit-learn companion library that streamlines the process
of fitting different machine learning models and comparing them.
It can be used to provide quick answers to questions like these:

* What is the reasonable range of scores for this task?
* Is a simple and explainable linear model enough, or should I work with forests and gradient boosters?
* Are the features good enough as is, or should I work on feature engineering?
* How much can hyperparameter tuning improve metrics?
* Do I need to work on a custom preprocessing strategy?
This is not meant to be an end-to-end solution; you should definitely keep working on your models after you are done with Poniard.
The core functionality has been tested to work on Python 3.7 through
3.10 on Linux systems, and from 3.8 to 3.10 on macOS.
Installation
Stable version:
pip install poniard

Dev version with the most up-to-date changes:
pip install git+https://github.com/rxavier/poniard.git@develop#egg=poniard

Documentation
Check the full Quarto docs,
including guides and API reference.
Usage/features
Basics
The API was designed with tabular tasks in mind, but it should also work with time series tasks provided an appropriate cross validation strategy is used (don’t shuffle!), as sketched below.
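For example, a time series setup could look like this. This is a minimal sketch; it assumes the estimator’s cv parameter accepts a scikit-learn splitter, so check the API reference.

from sklearn.model_selection import TimeSeriesSplit

from poniard import PoniardRegressor

# TimeSeriesSplit preserves temporal order: each fold trains on the past
# and validates on the future, with no shuffling.
pnd = PoniardRegressor(cv=TimeSeriesSplit(n_splits=5), random_state=0)
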
The usual Poniard flow is:

1. Define some estimators.
2. Define some metrics.
3. Define a cross validation strategy.
4. Fit everything.
5. Print the results.
Poniard provides sane defaults for 1, 2 and 3, so in most cases you can
just do…
from poniard import PoniardRegressor
from sklearn.datasets import load_diabetes

X, y = load_diabetes(return_X_y=True, as_frame=True)
pnd = PoniardRegressor(random_state=0)
pnd.setup(X, y)
pnd.fit()

Setup info

Target
Type: continuous
Shape: (442,)
Unique values: 214

Metrics
Main metric: neg_mean_squared_error

Feature type inference
Minimum unique values to consider a number-like feature numeric: 44
Minimum unique values to consider a categorical feature high cardinality: 20

Inferred feature types:

  numeric  categorical_high  categorical_low  datetime
0     age                                sex
1     bmi
2      bp
3      s1
4      s2
5      s3
6      s4
7      s5
8      s6

PoniardRegressor(random_state=0)

… and get a nice table showing the average of each metric across all folds for every model, including fit and score times (thanks, scikit-learn cross_validate!).
pnd.get_results()





                               test_neg_mean_squared_error  test_neg_mean_absolute_percentage_error  test_neg_median_absolute_error   test_r2  fit_time  score_time
LinearRegression                              -2977.598515                                 -0.396566                      -39.009146  0.489155  0.005265    0.001960
ElasticNet                                    -3159.017211                                 -0.422912                      -42.619546  0.460740  0.003509    0.001755
RandomForestRegressor                         -3431.823331                                 -0.419956                      -42.203000  0.414595  0.101435    0.004821
HistGradientBoostingRegressor                 -3544.069433                                 -0.407417                      -40.396390  0.391633  0.334695    0.009266
KNeighborsRegressor                           -3615.195398                                 -0.418674                      -38.980000  0.379625  0.003038    0.002083
XGBRegressor                                  -3923.488860                                 -0.426471                      -39.031309  0.329961  0.055696    0.002855
LinearSVR                                     -4268.314411                                 -0.374296                      -43.388592  0.271443  0.003470    0.001721
DummyRegressor                                -5934.577616                                 -0.621540                      -61.775921 -0.000797  0.003010    0.001627
DecisionTreeRegressor                         -6728.423034                                 -0.591906                      -59.700000 -0.145460  0.004179    0.001667

Alternatively, you can also get a nice plot of your different metrics by
using the PoniardBaseEstimator.plot.metrics method.
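Following the example above, that is simply:

pnd.plot.metrics()
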
Type inference
Poniard uses some basic heuristics to infer the data types.
Float and integer columns are treated as numeric if their number of unique values is greater than the categorical_threshold parameter.
String/object/categorical columns are assumed to be categorical.
Datetime features are processed separately with a custom encoder.
For categorical features, high and low cardinality is defined by the
cardinality_threshold parameter. Only low cardinality categorical
features are one-hot encoded.
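Both thresholds can be adjusted. A minimal sketch, assuming both are constructor parameters (the values below are hypothetical):

# Hypothetical thresholds: number-like columns with more than 10 unique
# values are numeric; categoricals with 30+ unique values are high cardinality.
pnd = PoniardRegressor(categorical_threshold=10, cardinality_threshold=30)
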
Ensembles
Poniard makes it easy to combine various estimators in stacking or voting ensembles. The base estimators can be selected according to their performance (top-n) or chosen by name.
Poniard also reports how similar the predictions of the estimators are,
so ensembles with different base estimators can be built. A basic
correlation table of the cross-validated predictions is built for
regression tasks, while Cramér’s
V is used for
classification.
By default, this similarity is computed on prediction errors instead of the actual predictions; this helps in building ensembles of well-scoring estimators with uncorrelated errors, which in principle should lead to a “wisdom of crowds” kind of situation.
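A minimal sketch of this workflow; the build_ensemble and get_predictions_similarity method names come from the API reference, and their exact signatures should be treated as assumptions:

# Stack the 3 best-performing estimators into a new ensemble,
# then re-fit so the ensemble is scored alongside the rest.
pnd.build_ensemble(method="stacking", top_n=3)
pnd.fit()

# Inspect how similar the cross-validated prediction errors are.
pnd.get_predictions_similarity()
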
Hyperparameter optimization
The
PoniardBaseEstimator.tune_estimator
method can be used to optimize the hyperparameters of a given estimator,
either by passing a grid of parameters or using the inbuilt ones
available for default estimators. The tuned estimator will be added to
the list of estimators and will be scored the next time
PoniardBaseEstimator.fit
is called.
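For example, using the inbuilt grids (a sketch; the exact signature may differ):

# Tune one of the default estimators with its inbuilt parameter grid.
pnd.tune_estimator("RandomForestRegressor")
pnd.fit()  # the tuned estimator is scored on this fit
pnd.get_results()
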
Plotting
The plot accessor provides several plotting methods based on the
attached Poniard estimator instance. These Plotly plots are based on a
default template, but can be modified by passing a different
PoniardPlotFactory
to the Poniard plot_options argument.
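A minimal sketch; the import path and the template parameter are assumptions, so check the PoniardPlotFactory docs:

from poniard.plot import PoniardPlotFactory

# Hypothetical: use a different Plotly template for all plots.
plot_factory = PoniardPlotFactory(template="plotly_dark")
pnd = PoniardRegressor(plot_options=plot_factory)
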
Plugin system
The plugins argument in Poniard estimators takes a plugin or list of
plugins that subclass
BasePlugin.
These plugins have access to the Poniard estimator instance and hook
onto different sections of the process, for example, on setup start, on
fit end, on remove estimator, etc.
This makes it easy for third parties to extend Poniard’s functionality.
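A bare-bones custom plugin could look like this. The hook method names are assumptions inferred from the description above, and the import path may differ; check BasePlugin in the API reference:

from poniard.plugins import BasePlugin

class LoggingPlugin(BasePlugin):
    """Toy plugin that prints a message at two points of the process."""

    def on_setup_start(self):
        print("Setup is starting")

    def on_fit_end(self):
        print("Fitting finished")

pnd = PoniardRegressor(plugins=[LoggingPlugin()])
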
Two plugins are baked into Poniard:

1. Weights and Biases: logs your data and plots, runs wandb scikit-learn analyses, saves model artifacts, etc.
2. Pandas Profiling: generates an HTML report of the features and target. If the Weights and Biases plugin is present, this report is also logged to the wandb run.
The requirements for these plugins are not included in the base Poniard
dependencies, so you can safely ignore them if you don’t intend to use
them.
Design philosophy
Not another dependency
We try very hard to avoid cluttering the environment with stuff you
won’t use outside of this library. Poniard’s dependencies are:

scikit-learn (duh)
pandas
XGBoost
Plotly
tqdm
That’s it!

Apart from tqdm and possibly Plotly, all dependencies would most likely have been installed anyway, so Poniard’s added footprint should be small.
We don’t do that here (AutoML)
Poniard tries not to take control away from the user. As such, it is not
designed to perform 2 hours of feature engineering and selection, try
every model under the sun together with endless ensembles and select the
top performing model according to some metric.
Instead, it strives to abstract away some of the boilerplate code needed
to fit and compare a number of models and allows the user to decide what
to do with the results.
Poniard can be your first stab at a prediction problem, but it
definitely shouldn’t be your last one.
Opinionated with a few exceptions
While some parameters can be modified to control how variable type
inference and preprocessing are performed, the API is designed to
prevent parameter proliferation.
Cross validate all the things
Everything in Poniard is run with cross validation by default, and in
fact no relevant functionality can be used without cross validation.
Use baselines
A dummy estimator is always included in model comparisons so you can
gauge whether your model is better than a dumb strategy.
Fast TTFM (time to first model)
Preprocessing tries to ensure that your models run successfully without
significant data munging. By default, Poniard imputes missing data and
one-hot encodes or target encodes (depending on cardinality) inferred
categorical variables, which in most cases is enough for scikit-learn
algorithms to fit without complaints. Additionally, it scales numeric
data and drops features with a single unique value.
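In scikit-learn terms, the default preprocessing behaves roughly like the following sketch. This is an illustration, not Poniard’s actual internal pipeline; the column names come from the diabetes example above:

from sklearn.compose import ColumnTransformer
from sklearn.feature_selection import VarianceThreshold
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Numeric features: impute missing values, then scale.
numeric = make_pipeline(SimpleImputer(strategy="mean"), StandardScaler())

# Low-cardinality categoricals: impute, then one-hot encode.
# (High-cardinality categoricals would be target encoded instead.)
categorical_low = make_pipeline(
    SimpleImputer(strategy="most_frequent"),
    OneHotEncoder(handle_unknown="ignore"),
)

preprocessor = make_pipeline(
    ColumnTransformer(
        [
            ("numeric", numeric, ["age", "bmi", "bp"]),
            ("categorical_low", categorical_low, ["sex"]),
        ]
    ),
    VarianceThreshold(threshold=0.0),  # drops features with a single unique value
)
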
Similar projects
Poniard is not a groundbreaking idea, and a number of libraries follow a
similar approach.
ATOM is perhaps the most similar
library to Poniard, albeit with a different approach to the API.
LazyPredict is
similar in that it runs multiple estimators and provides results for
various metrics. Unlike Poniard, by default it tries most scikit-learn
estimators, and is not based on cross validation.
PyCaret is a whole other beast
that includes model explainability, deployment, plotting, NLP, anomaly
detection, etc., which leads to a list of dependencies several times
larger than Poniard’s, and a more complicated API.
