ADGT 0.0.2

Creator: bradpython12


ADGT (Attribution Draws Ground Truth) is a model interpretability and understanding library for PyTorch.
It contains general-purpose implementations of Saliency, InputXGradient, Deconv, LRP, Guided_BackProp, GradCAM, SmoothGrad, DeepLIFT, IntegratedGradients, RectGrad, FullGrad, CAMERAS, GIG, and others for PyTorch models, giving users a quick and simple start with state-of-the-art modified-BP attribution methods.
ADGT is currently in beta and under active development!
Installation
Installation Requirements

Python >= 3.6
PyTorch >= 1.2
captum

Installing the latest release
You can install ADGT from a copy of this repository with
python setup.py install

or you can install ADGT from PyPI with pip:
pip install ADGT

Getting Started
With just three lines of code, you can use ADGT to interpret why the target model makes a decision on input images.
import ADGT

adgt = ADGT.ADGT(use_cuda=True, name='ImageNet')

attribution = adgt.pure_explain(img, model, method, pth)

Note that img is the input image (a PyTorch tensor), model is the target model (a PyTorch model), and method is the name of an attribution method (from the algorithms listed below). pth is the save path: visualizations of the explanation results (see the demo dir) are exported to this directory, or skipped entirely if pth is None. The returned attribution is the attribution map (a PyTorch tensor).
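Before calling pure_explain, the input must be a 4-D float tensor. A minimal sketch of preparing one is below; it assumes ADGT expects inputs normalized the same way as torchvision's pretrained ImageNet models, which is an assumption, not something the library documents here:

```python
import torch

# Standard ImageNet normalization constants (assumption: ADGT expects inputs
# normalized the same way as torchvision's pretrained models).
MEAN = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
STD = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

def prepare(img):
    """Turn an HxWx3 uint8 image tensor into a normalized 1x3xHxW float tensor."""
    t = torch.as_tensor(img, dtype=torch.float32) / 255.0  # scale to [0, 1]
    t = t.permute(2, 0, 1).unsqueeze(0)                    # HWC -> 1xCxHxW
    return (t - MEAN) / STD

# Random pixel data stands in for a real image here.
img = prepare(torch.randint(0, 256, (224, 224, 3)))
print(tuple(img.shape))  # -> (1, 3, 224, 224)
```

The resulting img can then be passed to pure_explain as shown above.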
References of Algorithms

Saliency: Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, K. Simonyan et al. 2014
InputXGradient: Not Just a Black Box: Learning Important Features Through Propagating Activation Differences, Avanti Shrikumar et al. 2016
Deconv: Visualizing and Understanding Convolutional Networks, Matthew D Zeiler et al. 2014
Guided_Backprop: Striving for Simplicity: The All Convolutional Net, Jost Tobias Springenberg et al. 2015
LRP: On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation, Sebastian Bach et al. 2015
GradCAM: Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization, Ramprasaath R. Selvaraju et al. 2017
SmoothGrad: SmoothGrad: removing noise by adding noise, Daniel Smilkov et al. 2017
DeepLift: Learning Important Features Through Propagating Activation Differences, Avanti Shrikumar et al. 2017 and Towards better understanding of gradient-based attribution methods for deep neural networks, Marco Ancona et al. 2018
IntegratedGradients: Axiomatic Attribution for Deep Networks, Mukund Sundararajan et al. 2017 and Did the Model Understand the Question?, Pramod K. Mudrakarta, et al. 2018
RectGrad: Why are Saliency Maps Noisy? Cause of and Solution to Noisy Saliency Maps, Beomsu Kim et al. 2019
FullGrad: Full-Gradient Representation for Neural Network Visualization, Suraj Srinivas et al. 2019
GIG: Guided Integrated Gradients: an Adaptive Path Method for Removing Noise, Andrei Kapishnikov et al. 2021
CAMERAS: CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency, Mohammad A. A. K. Jalwana et al. 2021
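For reference, the first algorithm in the list, Saliency (Simonyan et al. 2014), can be reproduced in a few lines of plain PyTorch. This is a minimal sketch with a tiny stand-in model, not ADGT's implementation:

```python
import torch
import torch.nn as nn

# A tiny stand-in classifier (illustrative only; any PyTorch model works).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
model.eval()

# Input must track gradients so we can differentiate w.r.t. the pixels.
img = torch.rand(1, 3, 8, 8, requires_grad=True)

# Saliency: the attribution is the absolute value of the gradient of the
# top class score with respect to the input pixels.
score = model(img)[0].max()
score.backward()
attribution = img.grad.abs()  # same shape as img: (1, 3, 8, 8)
```

ADGT's pure_explain wraps this kind of computation behind a single call selected by the method name.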

License
ADGT is BSD licensed, as found in the LICENSE file.
