This package is an improved version of my previous work depth_yolo. Several users of depth_yolo reported problems:
- depth_yolo relies on a Kinect V2 RGB-D camera, which is outdated. Few people use such cameras nowadays.
- depth_yolo uses darknet_ros, which is based on YOLOv3. It is 2024 now, and many more advanced object detection algorithms have been released since.
I developed depth_yolo when I was a sophomore, originally intending to build a central-point grasping demo. But frankly speaking, central-point grasping is outdated (in both performance and novelty). If you are also trying to achieve central-point grasping, I'd recommend using a more advanced grasping algorithm such as AnyGrasp (GraspNet).

To address these problems, this package will:
- Remove the hardware dependency and provide a demo in Gazebo so that everyone can try it.
- Use a more advanced object detection method. I'm planning to try SAM (Segment Anything Model); a rough usage sketch follows this list.
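
For reference, below is a minimal sketch of SAM's automatic mask generation using the official `segment_anything` package. This is not yet wired into this repository; the model type and checkpoint filename are assumptions that you would adapt to whichever SAM checkpoint you download.

```python
# Minimal SAM mask-generation sketch (assumed model type and checkpoint path).
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# "vit_b" and the checkpoint filename are assumptions; use whichever SAM
# checkpoint you have downloaded from the official release.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

image = cv2.cvtColor(cv2.imread("example_rgb.png"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # each mask is a dict with "segmentation", "bbox", "area", ...
print("SAM produced %d masks" % len(masks))
```

Each returned mask could then be combined with the aligned depth image to extract a per-object point cloud, which is the direction this package is heading.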
TODO
```bash
roslaunch rgbd_align gazebo.launch
```
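
Once the Gazebo demo is up, a small listener like the sketch below can be used to check the synchronized RGB and depth streams. The topic names here are assumptions; run `rostopic list` to see what the simulated camera actually publishes.

```python
#!/usr/bin/env python
# Sketch of consuming the aligned RGB-D stream from the Gazebo demo.
# Topic names are assumptions; check `rostopic list` after launching.
import rospy
import message_filters
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def callback(rgb_msg, depth_msg):
    # Convert ROS images to OpenCV/NumPy arrays.
    rgb = bridge.imgmsg_to_cv2(rgb_msg, desired_encoding="bgr8")
    depth = bridge.imgmsg_to_cv2(depth_msg, desired_encoding="passthrough")
    rospy.loginfo_throttle(1.0, "RGB %s, depth %s", rgb.shape, depth.shape)

rospy.init_node("rgbd_listener")
rgb_sub = message_filters.Subscriber("/camera/color/image_raw", Image)
depth_sub = message_filters.Subscriber("/camera/aligned_depth_to_color/image_raw", Image)
sync = message_filters.ApproximateTimeSynchronizer([rgb_sub, depth_sub], queue_size=10, slop=0.1)
sync.registerCallback(callback)
rospy.spin()
```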