MMDet to TensorRT

Note

The main branch supports model conversion for MMDetection>=3.0. If you want to convert models with an older MMDetection, please switch to the corresponding branch.

News

  • 2024.02: Support MMDetection>=3.0

Introduction

This project aims to support end-to-end deployment of MMDetection models with TensorRT.

Mask support is experimental.

Features:

  • fp16
  • int8 (experimental)
  • batched input
  • dynamic input shape
  • combination of different modules
  • DeepStream

Requirements

  • install MMDetection:

    pip install openmim
    mim install mmdet==3.3.0
  • install torch2trt_dynamic:

    git clone https://github.com/grimoire/torch2trt_dynamic.git torch2trt_dynamic
    cd torch2trt_dynamic
    pip install -e .
  • install amirstan_plugin:

    • Install TensorRT (follow the official NVIDIA TensorRT installation guide)

    • clone repo and build plugin

      git clone --depth=1 https://github.com/grimoire/amirstan_plugin.git
      cd amirstan_plugin
      git submodule update --init --progress --depth=1
      mkdir build
      cd build
      cmake -DTENSORRT_DIR=${TENSORRT_DIR} ..
      make -j10

      Note

      Don't forget to set the environment variable (e.g. in ~/.bashrc):

      export AMIRSTAN_LIBRARY_PATH=${amirstan_plugin_root}/build/lib
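
To verify that the plugin can be found and loaded before running a conversion, here is a minimal sanity check with ctypes (the library name libamirstan_plugin.so is an assumption; check your build/lib directory if it differs):

import ctypes
import os

# load the plugin shared library from the path set above;
# 'libamirstan_plugin.so' is the assumed build artifact name
lib_dir = os.environ['AMIRSTAN_LIBRARY_PATH']
ctypes.CDLL(os.path.join(lib_dir, 'libamirstan_plugin.so'))
print('amirstan plugin loaded')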

Installation

Host

git clone https://github.com/grimoire/mmdetection-to-tensorrt.git
cd mmdetection-to-tensorrt
pip install -e .

Docker

Build the docker image:

sudo docker build -t mmdet2trt_docker:v1.0 docker/

Run (this will show the help for the CLI entrypoint):

sudo docker run --gpus all -it --rm -v ${your_data_path}:${bind_path} mmdet2trt_docker:v1.0

Or, if you want to open a terminal inside the container:

sudo docker run --gpus all -it --rm -v ${your_data_path}:${bind_path} --entrypoint bash mmdet2trt_docker:v1.0

Example conversion:

sudo docker run --gpus all -it --rm -v ${your_data_path}:${bind_path} mmdet2trt_docker:v1.0 ${bind_path}/config.py ${bind_path}/checkpoint.pth ${bind_path}/output.trt

Usage

Create a TensorRT model from an MMDetection model. Details can be found in getting_started.md.

CLI

# conversion might take a few minutes.
mmdet2trt ${CONFIG_PATH} ${CHECKPOINT_PATH} ${OUTPUT_PATH}

Run mmdet2trt -h for help on optional arguments.

Python

import torch

from mmdet2trt import mmdet2trt

# dynamic input shape ranges (NCHW): min / optimal / max
shape_ranges = dict(
    x=dict(
        min=[1, 3, 320, 320],
        opt=[1, 3, 800, 1344],
        max=[1, 3, 1344, 1344],
    )
)
trt_model = mmdet2trt(cfg_path,
                      weight_path,
                      shape_ranges=shape_ranges,
                      fp16_mode=True)

# save converted model
torch.save(trt_model.state_dict(), save_model_path)

# save engine if you want to use it in c++ api
with open(save_engine_path, mode='wb') as f:
    f.write(trt_model.state_dict()['engine'])
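
To reload the converted model later without redoing the conversion, a minimal sketch, assuming torch2trt_dynamic keeps the usual torch2trt TRTModule pattern:

import torch
from torch2trt_dynamic import TRTModule  # assumed import path, mirroring torch2trt

# restore the serialized engine into a callable module
trt_model = TRTModule()
trt_model.load_state_dict(torch.load(save_model_path))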

Note

The input of the engine is the tensor produced by preprocessing. The outputs of the engine are num_dets, bboxes, scores, and class_ids. If you enable the enable_mask flag, there is an additional mask output. Note that the bboxes output of the engine is not divided by scale_factor.
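
As a concrete illustration of the raw engine outputs, a hypothetical post-processing sketch (input_tensor and scale_factor are placeholders coming from your own preprocessing step):

# run the engine on a preprocessed batch; outputs follow the order above
num_dets, bboxes, scores, class_ids = trt_model(input_tensor)

# keep only the valid detections of the first image in the batch
n = int(num_dets[0])
bboxes = bboxes[0, :n]

# map boxes back to original image coordinates; scale_factor is the
# (w_scale, h_scale, w_scale, h_scale) applied during preprocessing
bboxes = bboxes / bboxes.new_tensor(scale_factor)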

How to perform inference with the converted model:

from mmdet.apis import inference_detector
from mmdet2trt.apis import create_wrap_detector

# create wrap detector
trt_detector = create_wrap_detector(trt_model, cfg_path, device_id)

# results share the same format as MMDetection
result = inference_detector(trt_detector, image_path)
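
Since the result shares MMDetection's format, under MMDetection 3.x it is a DetDataSample whose predictions live in result.pred_instances. A short usage sketch (the 0.3 threshold is an arbitrary example):

# filter predictions by confidence and read out boxes and labels
instances = result.pred_instances
keep = instances.scores > 0.3  # hypothetical score threshold
print(instances.bboxes[keep], instances.labels[keep])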

Try the demo in demo/inference.py, or demo/cpp if you want to run inference with the C++ API.

Read getting_started.md for more details.

How does it work?

Most other projects take the PyTorch => ONNX => TensorRT route; this repo converts PyTorch => TensorRT directly, avoiding the unnecessary ONNX intermediate representation. Read how-does-it-work for details.

Support Model/Module

Note

Some models have only been tested on MMDet<3.0. If you find a model that fails to convert, please report it in an issue.

  • Faster R-CNN
  • Cascade R-CNN
  • Double-Head R-CNN
  • Group Normalization
  • Weight Standardization
  • DCN
  • SSD
  • RetinaNet
  • Libra R-CNN
  • FCOS
  • Fovea
  • CARAFE
  • FreeAnchor
  • RepPoints
  • NAS-FPN
  • ATSS
  • PAFPN
  • FSAF
  • GCNet
  • Guided Anchoring
  • Generalized Attention
  • Dynamic R-CNN
  • Hybrid Task Cascade
  • DetectoRS
  • Side-Aware Boundary Localization
  • YOLOv3
  • PAA
  • CornerNet (WIP)
  • Generalized Focal Loss
  • Grid RCNN
  • VFNet
  • GROIE
  • Mask R-CNN (experimental)
  • Cascade Mask R-CNN (experimental)
  • Cascade RPN
  • DETR
  • YOLOX

Tested on:

  • torch=2.2.0
  • tensorrt=8.6.1
  • mmdetection=3.3.0
  • cuda=11.7

FAQ

Read this page if you run into any problems.

License

This project is released under the Apache 2.0 license.