A simulation of a bot in Gazebo and RViz that creates a map of its environment and navigates through it while avoiding obstacles.
-
The aim of the project is to design a bot that can map and navigate the environment provided to it while simultaneously following an object and avoiding obstacles.
-
ROS was used to make the bot model functional, Gazebo and RViz were used for simulation, and Python for scripting.
|--📁urdf
| |-- 📄differential_bot.xacro # xacro file containing all necessary information to define the bot model
| |-- 📄differential_bot.gazebo # gazebo file with all necessary sensor plugins
|
|--📁meshes
| |--📄hokuyo.dae # mesh for hokuyo lidar
| |--📄kinect.dae # mesh for kinect sensor
|
|--📁rviz
| |--📄differential_bot.rviz # rviz config file for bot launch
|
|--📁world
| |--📄iscas_museum.sdf # bot environment file
|
|--📁launch
| |--📄world.launch # world launch file to launch bot with environment
| |--📄amcl.launch # launch file for localizing the bot in the map using AMCL
| |--📄robot_description.launch # launches the bot xacro along with necessary nodes
| |--📄navigation_stack.launch # launch file for autonomous navigation of the bot
|
|--📁Maps
| |--📄iscas_museum.pgm # portable gray map (.pgm) of the environment
| |--📄iscas_museum.yaml # environment map yaml
|
|--📁config
| |--📄global_costmap_params.yaml # global costmap parameters
| |--📄local_costmap_params.yaml # local costmap parameters
| |--📄base_local_planner_params.yaml # local planner parameters
| |--📄costmap_common_params.yaml # parameters shared by both costmaps
|
|--📁Assets
| |--Slam-CV-Navigation.pdf
|
|--📄CMakeLists.txt
|--📗package.xml
-
Clone the repo
git clone https://github.com/notad22/SLAM-OpenCV-Navigation.git
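The repo is a catkin package, so it should live in your workspace's src folder. A standard build after cloning (assuming the ~/catkin_ws layout used throughout this README) is:
cd ~/catkin_ws
catkin_make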
-
Install the dependencies
sudo apt-get install ros-noetic-cv-bridge
sudo apt-get install ros-noetic-navfn
sudo apt-get install ros-noetic-amcl
sudo apt-get install ros-noetic-gmapping
sudo apt-get install ros-noetic-map-server
sudo apt-get install ros-noetic-move-base
sudo apt-get install ros-noetic-tf
-
After cloning and building the repo, source the following:
source /opt/ros/noetic/setup.bash
source ~/catkin_ws/devel/setup.bash
(Alternatively, you can add these commands to your ~/.bashrc so they run in every new terminal.)
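For example, to append them to ~/.bashrc (a minimal sketch using the paths above):
echo "source /opt/ros/noetic/setup.bash" >> ~/.bashrc
echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc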
-
To launch the bot, navigate to the launch folder:
cd ~/catkin_ws/src/slam_simulations/launch
-
Launch the world.launch file first to bring up the bot with its sensors and its environment.
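Following the roslaunch pattern used below, this should be:
roslaunch slam_simulations world.launch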
-
Open a new terminal tab or window and run the source commands mentioned above.
-
Launch the launch file that matches your use case:
roslaunch slam_simulations file_name.launch
-
gmapping.launch - uses laser scans and odometry to build a 2D occupancy grid map of the robot's surroundings.
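Once the map looks complete in RViz, it can be saved with map_server's map_saver tool; the output path here is an assumption based on the Maps folder above:
rosrun map_server map_saver -f ~/catkin_ws/src/slam_simulations/Maps/iscas_museum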
-
amcl.launch - localizes the bot within a previously built map using Adaptive Monte Carlo Localization (a suitable controller can be used here to drive the bot around).
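One common choice of controller, assuming the teleop_twist_keyboard package is installed, is:
rosrun teleop_twist_keyboard teleop_twist_keyboard.py
-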
navigation_stack.launch - used to autonomously navigate the bot in the environment.
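Goals can be set interactively with RViz's 2D Nav Goal tool; as a programmatic alternative, here is a minimal Python sketch that sends one goal to move_base via actionlib (the node name and coordinates are illustrative assumptions, not taken from this repo):
#!/usr/bin/env python3
# Minimal sketch: send one navigation goal to move_base via actionlib.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_goal_sketch')  # hypothetical node name
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.0      # example goal coordinates
goal.target_pose.pose.orientation.w = 1.0   # face forward
client.send_goal(goal)
client.wait_for_result()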
-
darknet_ros.launch - Darknet is an open-source neural network framework that implements the YOLO algorithm. This launch file detects objects in the surroundings of the robot and identifies them.
You can visit the darknet_ros repo (https://github.com/leggedrobotics/darknet_ros) and follow the given steps to set up YOLO. Clone the repo using:
git clone --recursive https://github.com/leggedrobotics/darknet_ros.git
Make sure to clone it recursively.
After cloning it, navigate to the package's launch directory:
cd ~/catkin_ws/src/darknet_ros/darknet_ros/launch
Launch the launch file:
roslaunch darknet_ros darknet_ros.launch
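To confirm detections are being published, you can echo darknet_ros's default bounding-box topic:
rostopic echo /darknet_ros/bounding_boxes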
-
To run the object following script:
rosrun slam_simulations obj_following.py
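obj_following.py's internals aren't shown in this README; the sketch below only illustrates the general idea of following a detection, assuming darknet_ros's /darknet_ros/bounding_boxes topic, a /cmd_vel drive topic, and a 640-pixel-wide camera image (all assumptions):
#!/usr/bin/env python3
# Hedged sketch of the object-following idea: steer toward the first detected box.
import rospy
from geometry_msgs.msg import Twist
from darknet_ros_msgs.msg import BoundingBoxes

IMAGE_WIDTH = 640  # assumed camera resolution

def callback(msg):
    cmd = Twist()
    if msg.bounding_boxes:
        box = msg.bounding_boxes[0]
        center = (box.xmin + box.xmax) / 2.0
        # turn toward the box center and creep forward
        cmd.angular.z = -0.002 * (center - IMAGE_WIDTH / 2.0)
        cmd.linear.x = 0.2
    pub.publish(cmd)

rospy.init_node('obj_following_sketch')
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
rospy.Subscriber('/darknet_ros/bounding_boxes', BoundingBoxes, callback)
rospy.spin()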
- The project started off by learning ROS commands and its file structure, and how to create a launch file.
- A bot model was then built, to which links, joints, and sensors were added for use in Gazebo.
- We then used the readings from our Hokuyo LiDAR to create a map of our surroundings using the gmapping package in ROS.
- Deep learning modules for object detection were studied, along with filters to localize the bot in its environment using Adaptive Monte Carlo Localization.
- YOLO was then used to detect objects in the bot's environment; the open-source Darknet framework was used for this.
Demo video: Navigation.mp4
- To improve the bot's object-following capabilities.
- To merge tracking with SLAM for a more useful combined output.
- To implement the bot on real hardware.
- Clone the darknet_ros repository recursively to avoid possible CMake errors.
- For better navigation, the YAML parameters in the config folder can be tuned (see the sketch below).
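As a hedged illustration of the kind of knobs involved (values are placeholders, not this project's tuned config), base_local_planner_params.yaml typically contains entries like:
TrajectoryPlannerROS:
  max_vel_x: 0.5
  min_vel_x: 0.1
  acc_lim_x: 2.5
  acc_lim_theta: 3.2
  sim_time: 1.5
  meter_scoring: true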
- SRA VJTI Eklavya 2022
A special thanks to our mentors for this project:
- Linear Algebra
- Deep Learning Specialisation
- Notes of Linear Algebra and DL
- Object Detection in ROS
- Playlist for Mobile Robotics
The license used for this project.