Repository Summary
Description | TensorRT installation and conversion from PyTorch models |
Checkout URI | https://github.com/sithu31296/pytorch-onnx-trt.git |
VCS Type | git |
VCS Version | master |
Last Updated | 2020-09-14 |
Dev Status | UNKNOWN |
Released | UNRELEASED |
Tags | No category tags. |
Packages
Name | Version |
---|---|
efficientdet | 0.0.0 |
yolov4 | 0.0.0 |
yolov5 | 0.0.0 |
README
TensorRT Conversion
PyTorch -> ONNX -> TensorRT
This repo includes an installation guide for TensorRT, instructions for converting PyTorch models to the ONNX format, and examples of running inference with the TensorRT Python API.
The following table shows the speed-up obtained by running YOLOv5 with TensorRT instead of PyTorch.
Device / Env | PyTorch FP16 (FPS) | TensorRT FP16 (FPS) |
---|---|---|
RTX 2060 | 60-61 | 96-97 |
Jetson Xavier | 17-18 | 38-39 |
Notes: The model used for the comparison is YOLOv5-L with an input size of 352x416.
An example conversion of a YOLOv5 PyTorch model to TensorRT is described in the examples folder.
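For orientation, the PyTorch-to-ONNX step is typically done with torch.onnx.export, and the resulting ONNX file is then parsed into a TensorRT engine. The examples in this repo use the TensorRT Python API for that second step; as a rough alternative sketch, TensorRT's bundled trtexec tool can build an FP16 engine directly from an ONNX file (the file names below are placeholders):

```bash
# Build an FP16 TensorRT engine from an ONNX model (placeholder file names).
# --explicitBatch is required for ONNX models with TensorRT 7.
trtexec --onnx=yolov5l.onnx \
        --explicitBatch \
        --fp16 \
        --saveEngine=yolov5l.engine
```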
Installation
The recommended CUDA version is:
- CUDA 10.2 + cuDNN 7.6
Tested environments:
- CUDA 10.2 + cuDNN 7.6
- TensorRT 7.0.0.11
- ONNX 1.7
- ONNXRuntime 1.3
- Protobuf >= 3.12.3
- CMake 3.15.2 / 3.17.3
- PyTorch 1.5 + CUDA 10.2
Protobuf
The ONNX_TENSORRT package only supports Protobuf >= 3.12.3, so you need to build the latest version from source.
To build protobuf from source, the following tools are needed:
sudo apt install autoconf automake libtool curl make g++ unzip
Clone the protobuf repository, making sure to also clone the submodules, and generate the configure script:
git clone --recursive https://github.com/protocolbuffers/protobuf.git
cd protobuf
./autogen.sh
./configure --prefix=/usr
make -j$(nproc)
sudo make install
sudo ldconfig # refresh shared library cache
Verify the installation:
protoc --version
You should see the installed libprotoc version.
NVIDIA Driver
First, detect your graphics card model and the recommended driver:
ubuntu-drivers devices
If you don't find your desired driver version, you can enable the NVIDIA beta driver repository:
sudo add-apt-repository ppa:graphics-drivers/ppa
Then install the desired driver version using:
sudo apt install nvidia-driver-440
sudo reboot
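After rebooting, you can check that the driver is loaded correctly, for example with nvidia-smi:

```bash
# Prints the installed driver version and the detected GPUs if the driver is loaded
nvidia-smi
```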
CUDA
Go to the CUDA Toolkit archive, choose your desired CUDA version, and pick an installation method.
Below is a sample installation of CUDA 10.2 using the local deb installer.
```bash
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin
sudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget http://developer.download.nvidia.com/compute/cuda/10.2/Prod/local_installers/cuda-repo-ubuntu1804-10-2-local-10.2.89-440.33.01_1.0-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1804-10-2-local-10.2.89-440.33.01_1.0-1_amd64.deb
sudo apt-key add /var/cuda-repo-10-2-local-10.2.89-440.33.01/7fa2af80.pub
sudo apt-get update
sudo apt-get -y install cuda
```
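After installation, the CUDA toolkit typically lives under /usr/local/cuda-10.2 and is not on the default search paths. A minimal post-install sketch, assuming that default location:

```bash
# Add CUDA 10.2 to the current shell's paths (append these lines to ~/.bashrc to persist them)
export PATH=/usr/local/cuda-10.2/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64:$LD_LIBRARY_PATH

# Verify that the toolkit is visible
nvcc --version
```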