opennn-pytorch 1.1.3

Creator: railscoder56


Open Neural Networks library for image classification.






Table of contents

Quick start
Warnings
Encoders
Decoders
Pretrained
Pretrained configs issues
Datasets
Losses
Metrics
Optimizers
Schedulers
Examples
Wandb

Quick start
1. Direct install.
1.1 Install torch with CUDA support.
pip install -U torch --extra-index-url https://download.pytorch.org/whl/cu113

1.2 Install opennn_pytorch.
pip install -U opennn_pytorch

2. Dockerfile.
cd docker/
docker build -t opennn:latest .

Warnings

CUDA is supported only on NVIDIA graphics cards.
The AlexNet decoder doesn't support the BCE loss family.
Some dataset/encoder/decoder/loss combinations give poor results; try other combinations.
The custom cross-entropy loss only supports predictions of shape (n, c) with labels of shape (n).
Not all options in transform.yaml and config.yaml are required.
The mean and std values from the datasets section must be used in transform.yaml, for example [mean=[0.2859], std=[0.3530]] -> normalize: [[0.2859], [0.3530]] (see the fragment below).
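
For example, the FASHION-MNIST statistics from the datasets table map to a transform.yaml entry like this (a minimal fragment; any other keys are omitted here):

# transform.yaml fragment: normalize takes [means, stds] from the datasets table
normalize: [[0.2859], [0.3530]]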

Encoders

LeNet [paper] [code]
AlexNet [paper] [code]
GoogleNet [paper] [code]
ResNet18 [paper] [code]
ResNet34 [paper] [code]
ResNet50 [paper] [code]
ResNet101 [paper] [code]
ResNet152 [paper] [code]
MobileNet [paper] [code]
VGG-11 [paper] [code]
VGG-16 [paper] [code]
VGG-19 [paper] [code]

Decoders

LeNet [paper] [code]
AlexNet [paper] [code]
Linear [docs] [code]

Pretrained

LeNet

| Encoder | Decoder | Dataset       | Weights    | Configs           | Logs     |
| ------- | ------- | ------------- | ---------- | ----------------- | -------- |
| LeNet   | LeNet   | MNIST         | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| LeNet   | LeNet   | FASHION-MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| LeNet   | Linear  | MNIST         | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| LeNet   | Linear  | FASHION-MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| LeNet   | AlexNet | MNIST         | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| LeNet   | AlexNet | FASHION-MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |



AlexNet

| Encoder | Decoder | Dataset       | Weights    | Configs           | Logs     |
| ------- | ------- | ------------- | ---------- | ----------------- | -------- |
| AlexNet | LeNet   | MNIST         | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| AlexNet | LeNet   | FASHION-MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| AlexNet | Linear  | MNIST         | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| AlexNet | Linear  | FASHION-MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| AlexNet | AlexNet | MNIST         | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| AlexNet | AlexNet | FASHION-MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |



GoogleNet

| Encoder   | Decoder | Dataset       | Weights    | Configs           | Logs     |
| --------- | ------- | ------------- | ---------- | ----------------- | -------- |
| GoogleNet | Linear  | MNIST         | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| GoogleNet | Linear  | FASHION-MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |



ResNet

| Encoder   | Decoder | Dataset | Weights    | Configs           | Logs     |
| --------- | ------- | ------- | ---------- | ----------------- | -------- |
| ResNet18  | Linear  | MNIST   | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| ResNet34  | Linear  | MNIST   | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| ResNet50  | Linear  | MNIST   | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| ResNet101 | Linear  | MNIST   | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| ResNet152 | Linear  | MNIST   | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |



MobileNet

| Encoder   | Decoder | Dataset | Weights    | Configs           | Logs     |
| --------- | ------- | ------- | ---------- | ----------------- | -------- |
| MobileNet | Linear  | MNIST   | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |



VGG

| Encoder | Decoder | Dataset | Weights    | Configs           | Logs     |
| ------- | ------- | ------- | ---------- | ----------------- | -------- |
| VGG-11  | Linear  | MNIST   | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| VGG-16  | Linear  | MNIST   | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| VGG-19  | Linear  | MNIST   | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |



Pretrained configs issues
The config file format has changed, check the configs folder!

Configs must now include a test_part value; (train_part + valid_part + test_part) may be less than 1.0.
You can now add a wandb section to the config for logging to Weights & Biases.
The project was fully restructured into branches.

Datasets

MNIST [files] [code] [classes=10] [mean=[0.1307], std=[0.3081]]
FASHION-MNIST [files] [code] [classes=10] [mean=[0.2859], std=[0.3530]]
CIFAR-10 [files] [code] [classes=10] [mean=[0.491, 0.482, 0.446], std=[0.247, 0.243, 0.261]]
CIFAR-100 [files] [code] [classes=100] [mean=[0.5071, 0.4867, 0.4408], std=[0.2675, 0.2565, 0.2761]]
GTSRB [files] [code] [classes=43] [mean=unknown, std=unknown]
CUSTOM [docs] [code] [example] [classes=nc] [mean=unknown, std=unknown]

Losses

Cross-Entropy [pytorch, custom] [docs] [code]
Binary-Cross-Entropy [pytorch, custom] [docs] [code]
Binary-Cross-Entropy-With-Logits [pytorch, custom] [docs] [code]
Mean-Squared-Error [pytorch, custom] [docs] [code]
Mean-Absolute-Error [pytorch, custom] [docs] [code]

Metrics

Accuracy [sklearn] [docs] [code]
Precision [sklearn] [docs] [code]
Recall [sklearn] [docs] [code]
F1 [sklearn] [docs] [code]

Optimizers

Adam [pytorch] [docs] [code]
AdamW [pytorch] [docs] [code]
Adamax [pytorch] [docs] [code]
RAdam [pytorch] [docs] [code]
NAdam [pytorch] [docs] [code]

Schedulers

StepLR [pytorch] [docs] [code]
MultiStepLR [pytorch] [docs] [code]
PolynomialLRDecay [custom] [docs] [code]

Examples

1. Run from yaml config.

from opennn_pytorch import run

config = 'path to yaml config' # check configs folder
run(config)


2. Get encoder and decoder.

from opennn_pytorch.encoders import get_encoder
from opennn_pytorch.decoders import get_decoder

encoder_name = 'resnet18'
decoder_name = 'alexnet'
decoder_mode = 'decoder'
input_channels = 1
number_classes = 10
device = 'cuda'

encoder = get_encoder(encoder_name,
                      input_channels).to(device)

model = get_decoder(decoder_name,
                    encoder,
                    number_classes,
                    decoder_mode,
                    device).to(device)
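
A quick forward pass can confirm the pair is wired correctly. This is a minimal sketch: it assumes the decoder returns class scores of shape (batch, number_classes) for 28x28 single-channel inputs, as with MNIST.

import torch

# Sketch: run a dummy batch through encoder+decoder and inspect the output.
# The 28x28 input size matches MNIST; adjust for other datasets.
dummy = torch.randn(4, input_channels, 28, 28).to(device)
with torch.no_grad():
    out = model(dummy)
print(out.shape)  # expected: torch.Size([4, 10])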

3.1 Get dataset.
import opennn_pytorch
from opennn_pytorch.datasets import get_dataset
from torchvision import transforms

transform_config = 'path to transform yaml config'
dataset_name = 'mnist'
datafiles = None
train_part = 0.7
valid_part = 0.2
test_part = 0.05

transform_lst = opennn_pytorch.transforms_lst(transform_config)
transform = transforms.Compose(transform_lst)

train_data, valid_data, test_data = get_dataset(dataset_name,
                                                train_part,
                                                valid_part,
                                                test_part,
                                                transform,
                                                datafiles)

3.2 Get custom dataset.
import opennn_pytorch
from opennn_pytorch.datasets import get_dataset
from torchvision import transforms

transform_config = 'path to transform yaml config'
dataset_name = 'custom'
images = 'path to folder with images'
annotation = 'path to annotation yaml file with image: class structure'
datafiles = (images, annotation)
train_part = 0.7
valid_part = 0.2
test_part = 0.05

transform_lst = opennn_pytorch.transforms_lst(transform_config)
transform = transforms.Compose(transform_lst)

train_data, valid_data, test_data = get_dataset(dataset_name,
                                                train_part,
                                                valid_part,
                                                test_part,
                                                transform,
                                                datafiles)
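
The annotation file maps each image name to its class. A minimal sketch of what such a file could look like (the file names and labels here are hypothetical):

# annotation.yaml (hypothetical example): image -> class
img_0001.png: 0
img_0002.png: 3
img_0003.png: 7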


4. Get optimizer.

from opennn_pytorch.optimizers import get_optimizer

optim_name = 'adam'
lr = 1e-3
betas = (0.9, 0.999)
eps = 1e-8
weight_decay = 1e-6

optimizer = get_optimizer(optim_name,
                          model,
                          lr=lr,
                          betas=betas,
                          eps=eps,
                          weight_decay=weight_decay)
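
For comparison, the same hyperparameters passed directly to plain PyTorch would look like this (a sketch, assuming get_optimizer forwards these keyword arguments to torch.optim.Adam):

import torch

# Equivalent plain-PyTorch call with the same hyperparameters.
optimizer = torch.optim.Adam(model.parameters(),
                             lr=1e-3,
                             betas=(0.9, 0.999),
                             eps=1e-8,
                             weight_decay=1e-6)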


5. Get scheduler.

from opennn_pytorch.schedulers import get_scheduler

scheduler_name = 'steplr'
step = 10
gamma = 0.5
milestones = None
max_decay_steps = None
end_lr = None
power = None

scheduler = get_scheduler(scheduler_name,
                          optimizer,
                          step=step,
                          gamma=gamma,
                          milestones=milestones,
                          max_decay_steps=max_decay_steps,
                          end_lr=end_lr,
                          power=power)
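
Again for comparison, 'steplr' with these settings corresponds to the standard PyTorch scheduler (a sketch assuming get_scheduler wraps torch.optim.lr_scheduler.StepLR; the None-valued arguments apply only to the other scheduler types):

import torch

# Equivalent plain-PyTorch scheduler: halve the learning rate every 10 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                            step_size=10,
                                            gamma=0.5)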


6. Get loss function.

from opennn_pytorch.losses import get_loss

loss_name = 'custom_mse'
loss_fn, one_hot = get_loss(loss_name)
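
The returned one_hot flag tells you whether labels must be one-hot encoded before being passed to the loss, since MSE-style losses compare full score vectors. A minimal sketch of how you might apply it (the shapes follow the custom cross-entropy warning above):

import torch
import torch.nn.functional as F

# Sketch: encode integer labels as one-hot vectors when the loss requires it.
labels = torch.tensor([3, 1, 7])  # shape (n,), integer class labels
if one_hot:
    targets = F.one_hot(labels, num_classes=10).float()  # shape (n, c)
else:
    targets = labels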


7. Get metrics functions.

from opennn_pytorch.metrics import get_metrics

metrics_names = ['accuracy', 'precision', 'recall', 'f1_score']
number_classes = 10
metrics_fn = get_metrics(metrics_names,
                         nc=number_classes)


8. Train/Test.

import os
import random

import torch

from opennn_pytorch.algo import train, test, prediction

algorithm = 'train'
batch_size = 16
class_names = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
number_classes = 10
save_every = 5
epochs = 20
checkpoints = 'path to checkpoints folder'
logs = 'path to logs folder'
pred = True  # generate predictions for random test samples after testing
wandb_flag = True
wandb_metrics = ['accuracy', 'f1_score']

train_dataloader = torch.utils.data.DataLoader(train_data,
                                               batch_size=batch_size,
                                               shuffle=True)

valid_dataloader = torch.utils.data.DataLoader(valid_data,
                                               batch_size=batch_size,
                                               shuffle=False)

test_dataloader = torch.utils.data.DataLoader(test_data,
                                              batch_size=1,
                                              shuffle=False)

if algorithm == 'train':
    train(train_dataloader,
          valid_dataloader,
          model,
          optimizer,
          scheduler,
          loss_fn,
          metrics_fn,
          epochs,
          checkpoints,
          logs,
          device,
          save_every,
          one_hot,
          number_classes,
          wandb_flag,
          wandb_metrics)
elif algorithm == 'test':
    test_logs = test(test_dataloader,
                     model,
                     loss_fn,
                     metrics_fn,
                     logs,
                     device,
                     one_hot,
                     number_classes,
                     wandb_flag,
                     wandb_metrics)
    if pred:
        indices = random.sample(range(0, len(test_data)), 10)
        os.mkdir(test_logs + '/prediction', 0o777)
        names_dct = {i: class_names[i] for i in range(number_classes)}
        for i in range(10):
            prediction(test_data,
                       model,
                       device,
                       names_dct,
                       test_logs + f'/prediction/{i}',
                       indices[i])

Wandb
Wandb is a very powerful logging tool: you can log metrics, hyperparameters, model hooks, etc.
wandb login
<your api token from wandb.ai>
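
After logging in, metrics can be sent to Weights & Biases. A minimal standalone sketch (the project name is hypothetical; opennn_pytorch handles this for you when wandb is enabled in the config):

import wandb

# Sketch: start a run and log one metric by hand.
wandb.init(project='opennn-pytorch-demo')  # hypothetical project name
wandb.log({'accuracy': 0.97, 'epoch': 1})
wandb.finish()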



Citation
Project citation.
License
The project is distributed under the MIT License.
