pyglmnet 1.1

A Python implementation of elastic-net regularized generalized linear models

[Documentation (stable version)] [Documentation (development version)]
Generalized linear models (GLMs) are well-established tools for regression and classification and are widely applied across the sciences, economics, business, and finance. They are uniquely identifiable due to their convex loss and easy to interpret due to their point-wise non-linearities and well-defined noise models.

In the era of exploratory data analyses with a large number of predictor variables, it is important to regularize. Regularization prevents overfitting by penalizing the negative log likelihood and can be used to articulate prior knowledge about the parameters in a structured form.
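
In the notation of Friedman, Hastie, & Tibshirani (2010), cited below, this amounts to the standard elastic-net penalized objective, sketched here in its textbook form (lambda is the regularization strength and alpha the elastic-net mixing parameter; this is the standard formulation, not a literal transcription of pyglmnet's internals):

\min_{\beta_0, \beta} \; -\frac{1}{N} \sum_{i=1}^{N} \log p\left(y_i \mid \beta_0 + \beta^\top x_i\right) + \lambda \left[ \alpha \lVert \beta \rVert_1 + \frac{1 - \alpha}{2} \lVert \beta \rVert_2^2 \right]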
Despite the attractiveness of regularized GLMs, the available tools in the Python data science ecosystem are highly fragmented. More specifically,

- statsmodels provides a wide range of link functions but no regularization.
- scikit-learn provides elastic-net regularization but only for linear models.
- lightning provides elastic-net and group lasso regularization, but only for linear and logistic regression.

Pyglmnet is a response to this fragmentation. It runs on Python 3.5+, and here are some of the highlights.

- Pyglmnet provides a wide range of noise models (and paired canonical link functions): 'gaussian', 'binomial', 'probit', 'gamma', 'poisson', and 'softplus'.
- It supports a wide range of regularizers: ridge, lasso, elastic net, group lasso, and Tikhonov regularization (see the first sketch after this list).
- Pyglmnet's API is designed to be compatible with scikit-learn, so you can deploy pipeline tools such as GridSearchCV() and cross_val_score() (see the second sketch after this list).
- We follow the same approach and notations as in Friedman, J., Hastie, T., & Tibshirani, R. (2010) and the accompanying widely popular glmnet R package.
- We have implemented a cyclical coordinate descent optimizer with Newton update, active sets, update caching, and warm restarts. This optimization approach is identical to the one used in the glmnet R package (a simplified sketch of the core update follows this list).
- A number of Python wrappers exist for the R glmnet package, but in contrast to these, pyglmnet is a pure Python implementation. Therefore, it is easy to modify and introduce additional noise models and regularizers in the future.
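
To make the regularization options concrete, here is a minimal sketch of how ridge, lasso, and elastic net map onto the GLM constructor, assuming the glmnet convention that alpha=1.0 gives the lasso, alpha=0.0 gives ridge, and intermediate values give the elastic net; reg_lambda scales the overall penalty. The specific values below are arbitrary.

from pyglmnet import GLM

# alpha mixes the L1 and L2 penalties (glmnet convention, assumed here);
# reg_lambda is the overall regularization strength. Values are illustrative.
ridge = GLM(distr='gaussian', alpha=0.0, reg_lambda=0.1)
lasso = GLM(distr='gaussian', alpha=1.0, reg_lambda=0.1)
enet = GLM(distr='gaussian', alpha=0.5, reg_lambda=0.1)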
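
And here is a minimal sketch of the scikit-learn interoperability mentioned above: dropping GLM into GridSearchCV. The toy data, grid values, and the choice of pseudo_R2 as the score metric (so that higher is better for the search) are our assumptions, not prescriptions from the pyglmnet docs.

import numpy as np
from sklearn.model_selection import GridSearchCV
from pyglmnet import GLM

# toy Poisson-distributed data (shapes and values are arbitrary)
X = np.random.normal(0.0, 1.0, [200, 10])
y = np.random.poisson(1.0, 200)

# grid over the elastic-net mixing parameter and regularization strength
param_grid = {'alpha': [0.2, 0.5, 0.8], 'reg_lambda': [0.01, 0.1]}

# pseudo_R2 so that a larger score is better during the grid search
glm = GLM(distr='poisson', score_metric='pseudo_R2')
search = GridSearchCV(glm, param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)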
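
Finally, to illustrate the optimizer bullet, below is a self-contained sketch of cyclical coordinate descent with soft-thresholding for the simplest (Gaussian) elastic-net case, following the coordinate update in Friedman et al. (2010). It illustrates the style of update only; pyglmnet's actual optimizer adds the Newton update, active sets, update caching, and warm restarts listed above, and all names here are ours.

import numpy as np

def soft_threshold(z, threshold):
    # soft-thresholding operator used in the lasso coordinate update
    return np.sign(z) * np.maximum(np.abs(z) - threshold, 0.0)

def enet_coordinate_descent(X, y, reg_lambda, alpha, n_iter=100):
    # minimizes (1 / 2n) * ||y - X beta||^2
    #           + reg_lambda * (alpha * ||beta||_1
    #                           + (1 - alpha) / 2 * ||beta||_2^2)
    n_samples, n_features = X.shape
    beta = np.zeros(n_features)
    # precompute per-feature curvature terms
    col_sq = (X ** 2).sum(axis=0) / n_samples
    for _ in range(n_iter):
        for j in range(n_features):
            # partial residual with feature j's contribution removed
            r_j = y - X @ beta + X[:, j] * beta[j]
            z_j = X[:, j] @ r_j / n_samples
            # soft-threshold the univariate fit, then shrink by the ridge term
            beta[j] = soft_threshold(z_j, reg_lambda * alpha) / (
                col_sq[j] + reg_lambda * (1.0 - alpha))
    return beta

For non-Gaussian noise models, the same inner loop runs on a working residual obtained from a quadratic (Newton) approximation of the log likelihood at each pass.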



Installation
Install the stable PyPI version with pip:
$ pip install pyglmnet
For the bleeding edge development version, install directly from GitHub:
$ pip install https://api.github.com/repos/glm-tools/pyglmnet/zipball/master


Getting Started
Here is an example of how to use the GLM estimator.
import numpy as np
from pyglmnet import GLM, simulate_glm

n_samples, n_features = 1000, 100
distr = 'poisson'

# sample a sparse coefficient vector: most entries are zeroed out
beta0 = np.random.rand()
beta = np.random.random(n_features)
beta[beta < 0.9] = 0

# simulate training and test data from the model
Xtrain = np.random.normal(0.0, 1.0, [n_samples, n_features])
ytrain = simulate_glm(distr, beta0, beta, Xtrain)
Xtest = np.random.normal(0.0, 1.0, [n_samples, n_features])
ytest = simulate_glm(distr, beta0, beta, Xtest)

# create an instance of the GLM class
glm = GLM(distr=distr, score_metric='deviance')

# fit the model on the training data
glm.fit(Xtrain, ytrain)

# predict using the fitted model on the test data
yhat = glm.predict(Xtest)

# score the model on the test data
deviance = glm.score(Xtest, ytest)
More pyglmnet examples and use cases can be found in the documentation.


Tutorial
Here is an extensive
tutorial on GLMs,
optimization and pseudo-code.
Here are
slides from a
talk at PyData Chicago
2016,
corresponding tutorial
notebooks and a
video.


How to contribute?
We welcome pull requests. Please see our developer documentation page for more details.


Acknowledgments

- Konrad Kording for funding and support
- Sara Solla for masterful GLM lectures



License
MIT License Copyright (c) 2016-2019 Pavan Ramkumar
