hidet 0.4.1

Hidet: An Open-Source Deep Learning Compiler
Documentation | Research Paper | Releases | Contributing


Hidet is an open-source deep learning compiler written in Python.
It supports end-to-end compilation of DNN models from PyTorch and ONNX to efficient CUDA kernels.
A series of graph-level and operator-level optimizations is applied to improve performance.
Currently, Hidet focuses on optimizing inference workloads on NVIDIA GPUs, and requires:

Linux OS
CUDA Toolkit 11.6+
Python 3.8+

Getting Started
Installation
pip install hidet

You can also try the nightly build version or build from source.
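To verify the installation, you can run a small operator directly through Hidet's tensor API (a minimal sketch, assuming the hidet.randn and hidet.ops helpers described in the documentation):

import hidet

# Minimal sanity check: create a random tensor on the GPU and apply an operator
a = hidet.randn([2, 3], device='cuda')
b = hidet.ops.relu(a)
print(b)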
Usage
Optimize a PyTorch model through Hidet (requires PyTorch 2.0):
import torch

# Define the PyTorch model
model = torch.hub.load('pytorch/vision:v0.6.0', 'resnet18', pretrained=True).cuda().eval()
x = torch.rand(1, 3, 224, 224).cuda()

# Compile the model through Hidet
# Optional: set optimization options (see our documentation for more details)
# import hidet
# hidet.torch.dynamo_config.search_space(2) # tune each tunable operator
model_opt = torch.compile(model, backend='hidet')

# Run the optimized model
y = model_opt(x)
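Continuing the example above, you can confirm that the optimized model agrees with the original and measure its latency (a minimal sketch using standard PyTorch utilities; the tolerances, warm-up, and iteration counts are illustrative):

import torch

# Check that the optimized model is numerically close to the original
torch.testing.assert_close(model(x), model_opt(x), rtol=1e-2, atol=1e-2)

# Warm up so compilation and tuning time is excluded from the measurement
for _ in range(10):
    model_opt(x)

# Time 100 iterations with CUDA events
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
start.record()
for _ in range(100):
    model_opt(x)
end.record()
torch.cuda.synchronize()
print(f'average latency: {start.elapsed_time(end) / 100:.3f} ms')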

See the following tutorials to learn about other usages:

Quick Start
Optimize PyTorch models
Optimize ONNX models (see the sketch after this list)
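For ONNX models, the flow is similar: load the model through Hidet's ONNX frontend, trace it with a symbolic input, and optimize the resulting graph. A minimal sketch, assuming the frontend and tracing APIs described in the documentation ('resnet18.onnx' is a placeholder path):

import hidet

# Load the ONNX model through Hidet's ONNX frontend
# ('resnet18.onnx' is a placeholder path for illustration)
model = hidet.graph.frontend.from_onnx('resnet18.onnx')

# Trace the model with a symbolic input to obtain a computation graph
data = hidet.symbol(shape=[1, 3, 224, 224], dtype='float32', device='cuda')
output = model(data)
graph = hidet.trace_from(output, inputs=[data])

# Apply Hidet's graph-level optimizations; see the ONNX tutorial for how to
# run and benchmark the optimized graph
graph_opt = hidet.graph.optimize(graph)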

Publication
Hidet originates from the following research work:

Hidet: Task-Mapping Programming Paradigm for Deep Learning Tensor Programs
Yaoyao Ding, Cody Hao Yu, Bojian Zheng, Yizhi Liu, Yida Wang, and Gennady Pekhimenko.
ASPLOS '23

If you use Hidet in your research, please cite our paper.
Development
Hidet is currently under active development by a team at CentML Inc.
Contributing
We welcome contributions from the community. Please see the contribution guide for more details.
License
Hidet is released under the Apache 2.0 license.
