This project is an implementation of the paper YUDO: YOLO for Uniform Directed Object Detection. The codebase is an adaptation of the popular YOLOv7 model, tailored to detecting directed objects of uniform dimensions.
docker build --rm --no-cache -t yudo:version_1 -f Dockerfile .
docker run --gpus device=0 --rm --shm-size=1G -ti -v {YOUR CODE PATH}:/yudo --name yudo yudo:version_1
Additionally, install:
pip install 'git+https://github.com/facebookresearch/detectron2.git'
The dataset used in this project is obtained from the Honeybee Segmentation and Tracking Datasets. Cropping the images and converting the labels to the YOLO format can be done with the gen_yolo_anns.py
script.
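Since every object has the same size, a label only needs to encode position and orientation. The sketch below illustrates one plausible way such an annotation could be mapped to a YOLO-style label line; the fixed box size, the angle normalization, and the function itself are illustrative assumptions, not the exact format produced by gen_yolo_anns.py.

```python
import math

# Assumed crop size, matching the training command (--img-size 512 512).
IMG_W, IMG_H = 512, 512
# Assumed uniform object size in pixels (hypothetical value).
BOX_W, BOX_H = 32, 32

def to_yolo_line(cls_id: int, cx: float, cy: float, angle_rad: float) -> str:
    """Return 'cls cx cy w h angle' with center and size normalized to [0, 1]
    and the angle normalized to [0, 1) over a full turn."""
    angle_norm = (angle_rad % (2 * math.pi)) / (2 * math.pi)
    return (f"{cls_id} {cx / IMG_W:.6f} {cy / IMG_H:.6f} "
            f"{BOX_W / IMG_W:.6f} {BOX_H / IMG_H:.6f} {angle_norm:.6f}")

# Object at (256, 128) pixels, facing "up" (pi/2 radians):
print(to_yolo_line(0, 256.0, 128.0, math.pi / 2))
# → 0 0.500000 0.250000 0.062500 0.062500 0.250000
```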
A training command example:
python train.py \
--epochs 200 \
--workers 4 \
--device 0 \
--batch-size 2 \
--data data/data.yaml \
--img-size 512 512 \
--cfg cfg/training/yolov7-tiny.yaml \
--weights 'yolov7-tiny.pt' \
--name model_001 \
--hyp data/hyp.scratch.yaml \
--image-weights \
--exist-ok \
--adam
If you find this project useful, please consider citing the paper:
@article{nedeljkovic2023yudo,
title={YUDO: YOLO for Uniform Directed Object Detection},
author={Nedeljkovi{\'c}, {\DJ}or{\dj}e},
journal={arXiv preprint arXiv:2308.04542},
year={2023}
}