Figure 1: FFNET OVERVIEW.
Project page | Paper
FFNET: VEHICLE-INFRASTRUCTURE COOPERATIVE 3D OBJECT DETECTION VIA FEATURE FLOW PREDICTION.
Haibao Yu, Yingjuan Tang, Enze Xie, Jilei Mao, Jirui Yuan, Ping Luo, and Zaiqing Nie
Under review as a conference paper.
This repository contains the official PyTorch implementation of the training & evaluation code and the pretrained models for FFNET.
FFNET is a simple, efficient, and powerful method for VIC3D (vehicle-infrastructure cooperative 3D) object detection, as shown in Figure 1.
We use MMDetection3D v0.17.1 as the codebase.
We evaluate all the models with OpenDAIRV2X.
For more information about installing mmdet3d, please refer to the guidelines in MMDetection3D v0.17.1. For more information about installing OpenDAIRV2X, please refer to the guidelines in OpenDAIRV2X.
Other requirements:
pip install --upgrade git+https://github.com/klintan/pypcd.git
An example (works for me): CUDA 11.1 and PyTorch 1.9.0
pip install torchvision==0.10.0
pip install mmcv-full==1.3.14
pip install mmdet==2.14.0
pip install mmsegmentation==0.14.1
cd FFNET-VIC3D && pip install -e . --user
We train and evaluate the models on the DAIR-V2X dataset. For downloading the DAIR-V2X dataset, please refer to the guidelines in DAIR-V2X. After downloading the dataset, we should preprocess it following the guidelines in data_preprocess. We provide the preprocessed example data example-cooperative-vehicle-infrastructure; you can download it and decompress it under './data/dair-v2x'.
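After decompressing, a quick stdlib-only check can confirm the expected folders are present under './data/dair-v2x'. This is an illustrative sketch: the folder name `cooperative-vehicle-infrastructure` below is an assumption based on the example data's name, not a guaranteed layout.

```python
import os
import tempfile

def missing_entries(root, required):
    """Return the required sub-paths that do not exist under root."""
    return [p for p in required if not os.path.exists(os.path.join(root, p))]

# Demo with a temporary directory standing in for ./data/dair-v2x
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "cooperative-vehicle-infrastructure"))
    print(missing_entries(root, ["cooperative-vehicle-infrastructure",
                                 "not-downloaded-yet"]))
    # ['not-downloaded-yet']
```

Anything the function returns still needs to be downloaded or decompressed before preprocessing.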
Download the trained weights (FFNET Trained Checkpoint | FFNET without prediction | FFNET-V2 without prediction).
Please refer to OpenDAIRV2X for evaluating FFNet with OpenDAIRV2X.
Example: evaluate FFNET on DAIR-V2X-C-Example with 100 ms latency:
# modify the DATA to point to DAIR-V2X-C-Example in script ${OpenDAIRV2X_root}/v2x/scripts/lidar_feature_flow.sh
# bash scripts/lidar_feature_flow.sh [YOUR_CUDA_DEVICE] [YOUR_FFNET_WORKDIR] [DELAY_K]
cd ${OpenDAIRV2X_root}/v2x
bash scripts/lidar_feature_flow.sh 0 /home/yuhaibao/FFNet-VIC3D 1
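The relation between DELAY_K and the simulated latency is worth spelling out: in the example above, DELAY_K=1 corresponds to 100 ms latency, which suggests each step of k delays the infrastructure feature by one ~100 ms frame interval. A minimal sketch under that assumption (the 100 ms-per-step interval is inferred from the example command, not stated explicitly):

```python
FRAME_INTERVAL_MS = 100  # assumption: one DELAY_K step = one ~100 ms frame interval

def simulated_latency_ms(delay_k: int) -> int:
    """Latency injected when the infrastructure feature is delayed by delay_k frames."""
    return delay_k * FRAME_INTERVAL_MS

print(simulated_latency_ms(1))  # 100, matching the example command above
print(simulated_latency_ms(2))  # 200
```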
First, train the basemodel on DAIR-V2X without latency:
# Single-gpu training
CUDA_VISIBLE_DEVICES=$1 python tools/train.py ffnet_work_dir/config_basemodel.py
Second, put the trained basemodel into the folder ffnet_work_dir/pretrained-checkpoints.
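Before launching the second stage, a tiny helper can verify that a checkpoint actually landed in the expected folder. The directory name comes from the step above; the `.pth` suffix is an assumption based on common PyTorch checkpoint conventions.

```python
import os

CKPT_DIR = "ffnet_work_dir/pretrained-checkpoints"

def list_checkpoints(ckpt_dir):
    """Return sorted .pth files in ckpt_dir, or [] if the folder is missing or empty."""
    if not os.path.isdir(ckpt_dir):
        return []
    return sorted(f for f in os.listdir(ckpt_dir) if f.endswith(".pth"))

# Empty until the trained basemodel has been copied in
print(list_checkpoints(CKPT_DIR))
```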
Third, train FFNET on DAIR-V2X with latency:
# Single-gpu training
CUDA_VISIBLE_DEVICES=$1 python tools/train.py ffnet_work_dir/config_ffnet.py
@inproceedings{yu2023ffnet,
title={Vehicle-Infrastructure Cooperative 3D Object Detection via Feature Flow Prediction},
author={Yu, Haibao and Tang, Yingjuan and Xie, Enze and Mao, Jilei and Yuan, Jirui and Luo, Ping and Nie, Zaiqing},
booktitle={Under Review},
year={2023}
}