- `common`: Some common code dependencies and utilities
- `source`: Source code of the standalone program
  - `main.cpp`: Program main entrance, where parameters are configured
  - `SampleYolo.hpp`: YOLOv4 inference class definition file
  - `SampleYolo.cpp`: YOLOv4 inference class functions definition file
  - `onnx_add_nms_plugin.py`: Python script to add the BatchedNMSPlugin node into the ONNX model
  - `generate_coco_image_list.py`: Python script to get the list of image names from an MS COCO annotation or information file
- `data`: This directory saves:
  - `yolov4.onnx`: the ONNX model (user generated)
  - `yolov4.engine`: the TensorRT engine model (generated by this program)
  - `demo.jpg`: the demo image (already exists)
  - `demo_out.jpg`: detection output of the demo image (already exists, but is renewed by the program)
  - `names.txt`: MS COCO dataset label names (has to be downloaded or generated via the COCO API)
  - `categories.txt`: MS COCO dataset categories, where IDs and names are separated by `"\t"` (has to be generated via the COCO API)
  - `val2017.txt`: MS COCO validation set image list (has to be generated from the corresponding COCO annotation file)
  - `testdev2017.txt`: MS COCO test set image list (has to be generated from the corresponding COCO annotation file)
  - `coco_result.json`: MS COCO detection output (generated by this program)
2.1 Download TensorRT (7.1 or higher; you can skip this step if TensorRT 7.1 is already installed)
- Download TensorRT from the NVIDIA developer page: https://developer.nvidia.com/nvidia-tensorrt-7x-download
- Install the deb file or unpack the tar file.
- Refer to the README files in https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/tree/master/TRT-OSS
  - Go to https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/tree/master/TRT-OSS/Jetson if you are working on a Jetson platform
  - Go to https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/tree/master/TRT-OSS/x86 if you are working on an x86 platform
- Follow the guidance in the README to clone the repository https://github.com/NVIDIA/TensorRT and build `libnvinfer_plugin.so.7.x.x`
- Rename `<TensorRT_dir>/lib/libnvinfer_plugin.so.7.x.x` to `<TensorRT_dir>/lib/libnvinfer_plugin.so.7.x.x.back`
- Copy `<TensorRT_OSS_dir>/build/out/libnvinfer_plugin.so.7.x.x` into `<TensorRT7.1_GA_dir>/lib`
- Here is one of the YOLOv4 PyTorch repositories, https://github.com/Tianxiaomo/pytorch-YOLOv4, that can guide you through generating an ONNX model of YOLOv4. You can convert the pretrained DarkNet model into ONNX directly, or you can 1) convert the DarkNet model into PyTorch, 2) train the PyTorch model on your own dataset, and 3) then convert it into ONNX.
- Other well-known YOLOv4 PyTorch repositories for reference:
Step 2: Add the BatchedNMSPlugin node into the YOLOv4 ONNX model (CSPDarknet-53 CNN + YOLO header CNN + YOLO layers + BatchedNMSPlugin)

How can I add the `BatchedNMSPlugin` node into the ONNX model?
- Open `source/onnx_add_nms_plugin.py`
- Update the attribute values to suit your model (`topK` is the number of boxes fed into the NMS step, and `keepTopK` is the total number of boxes kept per image after NMS). Example:

  ```python
  attrs["shareLocation"] = 1
  attrs["backgroundLabelId"] = -1
  attrs["numClasses"] = 80
  attrs["topK"] = topK          # from program arguments
  attrs["keepTopK"] = keepTopK  # from program arguments
  attrs["scoreThreshold"] = 0.3
  attrs["iouThreshold"] = 0.6
  attrs["isNormalized"] = 1
  attrs["clipBoxes"] = 1
  ```
- Copy `onnx_add_nms_plugin.py` into `<TensorRT_OSS_dir>/tools/onnx-graphsurgeon`
- Go to `<TensorRT_OSS_dir>/tools/onnx-graphsurgeon` and execute `onnx_add_nms_plugin.py` (a sketch of what the script does follows the commands):

  ```bash
  cd <TensorRT_OSS_dir>/tools/onnx-graphsurgeon
  python onnx_add_nms_plugin.py -f <yolov4_onnx_file> -t <topk_value> -k <keep_topk_value>
  ```
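For reference, the script's job is to append a `BatchedNMS_TRT` node (the graph node name used by the BatchedNMSPlugin) to the network outputs using ONNX GraphSurgeon. The sketch below is a simplified, hypothetical outline of that process, not the actual script: the tensor names, shapes, and the assumption that the graph ends in one boxes tensor and one scores tensor are illustrative only.

```python
import numpy as np
import onnx
import onnx_graphsurgeon as gs

def add_batched_nms(onnx_path, out_path, topK=2000, keepTopK=1000):
    graph = gs.import_onnx(onnx.load(onnx_path))

    # Assumed: the existing graph outputs are [boxes, scores]
    boxes, scores = graph.outputs

    attrs = {
        "shareLocation": 1, "backgroundLabelId": -1, "numClasses": 80,
        "topK": topK, "keepTopK": keepTopK,
        "scoreThreshold": 0.3, "iouThreshold": 0.6,
        "isNormalized": 1, "clipBoxes": 1,
    }

    # The four tensors BatchedNMS_TRT produces (batch size 1 assumed)
    outputs = [
        gs.Variable("num_detections", np.int32, (1, 1)),
        gs.Variable("nmsed_boxes", np.float32, (1, keepTopK, 4)),
        gs.Variable("nmsed_scores", np.float32, (1, keepTopK)),
        gs.Variable("nmsed_classes", np.float32, (1, keepTopK)),
    ]

    nms = gs.Node(op="BatchedNMS_TRT", attrs=attrs,
                  inputs=[boxes, scores], outputs=outputs)
    graph.nodes.append(nms)
    graph.outputs = outputs
    graph.cleanup().toposort()
    onnx.save(gs.export_onnx(graph), out_path)
```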
- This YOLOv4 standalone sample depends on the same common includes as the other TensorRT C++ samples.
- Option 1: Add a link to `<where_tensorRT_is_installed>/TensorRT-7.1.x.x/samples/common` in `tensorrt_yolov4`:

  ```bash
  cd <dir_on_your_machine>/yolov4_sample/tensorrt_yolov4
  ln -s <where_tensorRT_is_installed>/TensorRT-7.1.x.x/samples/common common
  ```

- Option 2: Simply copy the common includes into `tensorrt_yolov4`:

  ```bash
  cd <dir_on_your_machine>/yolov4_sample/tensorrt_yolov4
  cp -r <where_tensorRT_is_installed>/TensorRT-7.1.x.x/samples/common ./
  ```
- Note: This program depends on OpenCV. Please check that the OpenCV includes exist in `/usr/include/opencv` and that OpenCV libraries such as `-lopencv_core` and `-lopencv_imgproc` are installed.
- Follow the README and documentation of https://github.com/opencv/opencv to install OpenCV if the corresponding includes and libraries do not exist.
```bash
cd <dir_on_your_machine>/yolov4_sample/tensorrt_yolov4/source
make clean
make -j<num_processors>
```
- Step 1: Use a text editor to open `main.cpp` in `<dir_on_your_machine>/yolov4_sample/tensorrt_yolov4/source`
- Step 2: Go to where the function `initializeSampleParams()` is defined
- Step 3: You will find some basic configurations in `initializeSampleParams()`, like the following:
```cpp
// This argument is for int8 calibration
// Int8 calibration is not available yet
// You have to prepare samples for int8 calibration by yourself
params.nbCalBatches = 80;

// The engine file to generate or to load
// If the engine file does not exist, this program tries to load the ONNX
//     file and convert it into an engine
// If the engine file exists, this program loads it directly
params.engingFileName = "../data/yolov4.engine";

// The ONNX file to load
params.onnxFileName = "../data/yolov4.onnx";

// Input tensor name of the ONNX file & engine file
params.inputTensorNames.push_back("input");

// Old batch configuration; it is zero if the explicitBatch flag is true for the TensorRT engine
// May be deprecated in the future
params.batchSize = 0;

// Number of classes (usually 80, but can be other values)
params.outputClsSize = 80;

// topK parameter of BatchedNMSPlugin
params.topK = 2000;

// keepTopK parameter of BatchedNMSPlugin
params.keepTopK = 1000;

// Batch size; you can change this to other values if needed
params.explicitBatchSize = 1;

params.inputImageName = "../data/demo.jpg";
params.cocoClassNamesFileName = "../data/coco.names";
params.cocoClassIDFileName = "../data/categories.txt";

// Index of the DLA core to use; -1 if there is no DLA core
params.dlaCore = -1;
```
- Step 4: Copy and rename the ONNX file (with the `BatchedNMSPlugin` node included) to the location defined by `initializeSampleParams()`
- This program will automatically convert the ONNX model into an engine if the engine file does not exist (a rough Python sketch of this conversion follows the commands below).
- Commands:
  - To generate an engine in fp32 mode: `../bin/yolov4`
  - To generate an engine in fp16 mode: `../bin/yolov4 --fp16`
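The conversion itself is implemented in the C++ sample (see `SampleYolo.cpp`). For readers who want to pre-build the engine from Python instead, a rough, hypothetical equivalent using the TensorRT 7 Python API could look like this; `build_engine` and its arguments are illustrative names, and `init_libnvinfer_plugins` is needed so the parser can resolve the `BatchedNMSPlugin` node:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(TRT_LOGGER, '')  # registers BatchedNMSPlugin

def build_engine(onnx_path, engine_path, fp16=False):
    # The sample uses an explicit-batch network, so mirror that here
    flag = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    with trt.Builder(TRT_LOGGER) as builder, \
            builder.create_network(flag) as network, \
            trt.OnnxParser(network, TRT_LOGGER) as parser:
        with open(onnx_path, 'rb') as f:
            if not parser.parse(f.read()):
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                return
        config = builder.create_builder_config()
        config.max_workspace_size = 1 << 30  # 1 GiB scratch space
        if fp16:
            config.set_flag(trt.BuilderFlag.FP16)
        engine = builder.build_engine(network, config)
        with open(engine_path, 'wb') as f:
            f.write(engine.serialize())

build_engine('../data/yolov4.onnx', '../data/yolov4.engine', fp16=False)
```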
- Command: `../bin/yolov4 --demo`
- This program will feed the demo image into the YOLOv4 engine and write the detection output as an image.
- Please make sure `params.demo = 1` if you want to run this program in demo mode:

  ```cpp
  // Configurations to run a demo image
  params.demo = 1;
  params.outputImageName = "../data/demo_out.jpg";
  ```
- Command: `../bin/yolov4 --speed`
- This program will repeatedly feed the demo image into the engine and accumulate the time consumed by each iteration.
- Please make sure `params.speedTest = 1` if you want to run this program in speed-test mode:

  ```cpp
  // Configurations to run the speed test
  params.speedTest = 1;
  params.speedTestItrs = 1000;
  ```
- Command: `../bin/yolov4 --coco`
- The corresponding configuration in `initializeSampleParams()` would look like this:

  ```cpp
  // Configurations of the test on the COCO dataset
  params.cocoTest = 1;
  params.cocoClassNamesFileName = "../data/coco.names";
  params.cocoClassIDFileName = "../data/categories.txt";
  params.cocoImageListFileName = "../data/val2017.txt";
  params.cocoTestResultFileName = "../data/coco_result.json";
  params.cocoImageDir = "../data/val2017";
  ```

Note: The COCO dataset is just an example; you can use your own validation or test set to validate a YOLOv4 model trained on your own training set. The resulting `coco_result.json` can be scored as shown below.
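Once `coco_result.json` has been produced, it can be scored against the ground-truth annotations with the COCO API. A minimal sketch, assuming `pycocotools` is installed and the paths are adjusted to your setup:

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Ground truth and detection results (paths are examples)
coco_gt = COCO('annotations/instances_val2017.json')
coco_dt = coco_gt.loadRes('../data/coco_result.json')

# Evaluate bounding-box detections and print the standard AP/AR summary
coco_eval = COCOeval(coco_gt, coco_dt, 'bbox')
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()
```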
- Step 1: Download the MS COCO images and annotations from https://cocodataset.org/#download
  - Images for validation: http://images.cocodataset.org/zips/val2017.zip
  - Annotations for training and validation: http://images.cocodataset.org/annotations/annotations_trainval2017.zip
  - Images for test: http://images.cocodataset.org/zips/test2017.zip
  - Image info for test: http://images.cocodataset.org/annotations/image_info_test2017.zip
- Step 2: Clone the COCO API repository from https://github.com/cocodataset/cocoapi and use the COCO API to generate `categories.txt`
  - The format of `categories.txt` must follow this rule: IDs and names are separated by `"\t"`, for example:

    ```
    1	person
    2	bicycle
    3	car
    4	motorcycle
    5	airplane
    ```

  - Here is a COCO API example that can help you distill the categories from the COCO dataset (see `cocoapi/PythonAPI/pycocoDemo.ipynb` in https://github.com/cocodataset/cocoapi for more details):

    ```python
    # display COCO categories and supercategories
    cats = coco.loadCats(coco.getCatIds())
    nms = [cat['name'] for cat in cats]
    print('COCO categories: \n{}\n'.format(' '.join(nms)))
    ```
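  - The notebook snippet above only prints the category names; to write `categories.txt` in the required `<id>\t<name>` format, a small helper along these lines would do (a minimal sketch, assuming `pycocotools` is installed; the annotation file path is an example):

    ```python
    from pycocotools.coco import COCO

    coco = COCO('annotations/instances_val2017.json')  # example path
    with open('categories.txt', 'w') as f:
        for cat in coco.loadCats(coco.getCatIds()):
            # one "<id>\t<name>" line per category, e.g. "1\tperson"
            f.write('{}\t{}\n'.format(cat['id'], cat['name']))
    ```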
- Step 3: Generate the image list files using the Python script `generate_coco_image_list.py`:

  ```bash
  python generate_coco_image_list.py <json file of image annotations> <image list text>
  ```

  - For example, to generate the validation image list, the command would be:

    ```bash
    python generate_coco_image_list.py instances_val2017.json val2017.txt
    ```

  - For example, to generate the test-dev image list, the command would be:

    ```bash
    python generate_coco_image_list.py image_info_test-dev2017.json testdev2017.txt
    ```
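  - Internally, the script only needs to walk the `images` array of the annotation or info JSON. A rough sketch of what `generate_coco_image_list.py` boils down to (simplified; refer to the actual script in `source` for the authoritative version):

    ```python
    import json
    import sys

    def main(annotation_file, list_file):
        # A standard COCO JSON keeps one entry per image under "images"
        with open(annotation_file) as f:
            info = json.load(f)
        with open(list_file, 'w') as f:
            for image in info['images']:
                f.write(image['file_name'] + '\n')

    if __name__ == '__main__':
        main(sys.argv[1], sys.argv[2])
    ```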
- This program will read image names from the list file, whose path should be the same as `params.cocoImageListFileName`, and then feed the images located in `params.cocoImageDir` to the YOLOv4 engine
- Please make sure `params.cocoTest = 1` and that the images exist in `params.cocoImageDir`, e.g. with the quick check below
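A hypothetical sanity check (the file names and paths mirror the defaults shown earlier) that the image list and the image directory are consistent before launching the COCO test:

```python
import os

image_dir = '../data/val2017'      # must match params.cocoImageDir
list_file = '../data/val2017.txt'  # must match params.cocoImageListFileName

with open(list_file) as f:
    names = [line.strip() for line in f if line.strip()]

# Report any listed image that is not present on disk
missing = [n for n in names if not os.path.isfile(os.path.join(image_dir, n))]
print('{} of {} listed images are missing from {}'.format(
    len(missing), len(names), image_dir))
```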