
🚙 SLAM-OpenCV-Navigation 🚗

A simulation of a bot in Gazebo and RViz that creates a map of its environment and navigates through it while avoiding obstacles.

Table of Contents

  • About
  • Tech Stack and Tools
  • File Structure
  • Getting Started
  • Usage
  • Project Methodology
  • Results and Demo
  • Future Work
  • Troubleshooting
  • Contributors
  • Acknowledgements
  • License

About

  • The aim of the project is to design a bot that can map and navigate the environment provided to it while simultaneously following an object and avoiding obstacles.

  • ROS was used to make the model of the bot functional, Gazebo and RViz were used for simulation, and Python for scripting.

Tech Stack and Tools

  • ROS Noetic
  • OpenCV
  • Gazebo
  • RViz
  • Python

File Structure

|--📁urdf
|    |--   📄differential_bot.xacro            # xacro file defining the bot's links, joints and sensors
|    |--   📄differential_bot.gazebo           # gazebo file with all necessary sensor plugins
|
|--📁meshes
|    |--📄hokuyo.dae                           # mesh for hokuyo lidar
|    |--📄kinect.dae                           # mesh for kinect sensor
|
|--📁rviz
|    |--📄differential_bot.rviz                # rviz config file for bot launch
|  
|--📁world
|    |--📄iscas_museum.sdf                      # bot environment file
|
|--📁launch
|    |--📄world.launch                          # world launch file to launch bot with environment
|    |--📄amcl.launch                           # launch file used to localize the bot in the mapped environment
|    |--📄robot_description.launch              # launches the bot xacro along with necessary nodes
|    |--📄navigation_stack.launch               # launch file for autonomous navigation of the bot
|
|--📁Maps
|    |--📄iscas_museum.pgm                      # portable grey map (PGM) of the environment
|    |--📄iscas_museum.yaml                     # environment map yaml
|
|--📁config
|    |--📄global_costmap_params.yaml            # global and local costmap parameter files 
|    |--📄local_costmap_params.yaml
|    |--📄base_local_planner_params.yaml
|    |--📄costmap_common_params.yaml
|
|--📁Assets
|    |--Slam-CV-Navigation.pdf
|
|--📄CMakeLists.txt
|--📗package.xml

Flowchart

Getting Started

Prerequisites

  1. ROS Noetic

  2. Gazebo Version: 11.0.0

  3. RViz

  4. Ubuntu 20.04 (or its flavours)

Installation

  1. Clone the repo
    git clone https://github.com/notad22/SLAM-OpenCV-Navigation.git

  2. Install the dependencies
    sudo apt-get install ros-noetic-cv-bridge
    sudo apt-get install ros-noetic-navfn
    sudo apt-get install ros-noetic-amcl
    sudo apt-get install ros-noetic-gmapping
    sudo apt-get install ros-noetic-map-server
    sudo apt-get install ros-noetic-move-base
    sudo apt-get install ros-noetic-tf

Usage

  1. After cloning the repo, source the following

    source /opt/ros/noetic/setup.bash

    source ~/catkin_ws/devel/setup.bash

(Alternative: You can add these commands to your bashrc.)

  2. For launching the bot, navigate to the launch folder

    cd ~/catkin_ws/src/slam_simulations/launch
  3. Launch the world.launch file first to bring up the bot with its sensors and its environment

    roslaunch slam_simulations world.launch
  4. Open a new terminal tab or window and source the above-mentioned setup files

  5. Launch the launch file based on your use case:

    roslaunch slam_simulations file_name.launch
  • gmapping.launch - uses laser readings and pose data to create a 2D occupancy grid map of the robot's surroundings.

  • amcl.launch - localizes the bot in the saved map using adaptive Monte Carlo localization.
    (a suitable controller can be used here)

  • navigation_stack.launch - used to autonomously navigate the bot in the environment.
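Goals sent to the navigation stack carry their orientation as a quaternion rather than a plain heading. As a minimal sketch, the conversion from a desired planar heading (yaw, in radians) to the quaternion fields a goal message expects looks like this; the function name is illustrative and not part of this repo:

```python
import math

def yaw_to_quaternion(yaw):
    """Convert a planar heading (radians) to (x, y, z, w) quaternion fields.

    For rotation about the z-axis only, the x and y components are always 0,
    so this is the reduced form of the general Euler-to-quaternion conversion.
    """
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

# Facing straight ahead (yaw = 0) gives the identity quaternion:
# yaw_to_quaternion(0.0) == (0.0, 0.0, 0.0, 1.0)
```

A goal's orientation.z and orientation.w fields would be filled from the last two values of the returned tuple.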

  6. darknet_ros.launch - Darknet is an open-source neural network framework which implements the YOLO algorithm. This launch file detects objects in the robot's surroundings and identifies them.

    You can visit the repo below and follow the given steps for implementation of YOLO.

    darknet_ros

    You can clone the above repo using:

    git clone --recursive https://github.com/leggedrobotics/darknet_ros.git

    Make sure to clone it recursively. After cloning it, navigate to the package directory:

    cd ~/catkin_ws/src/darknet_ros/darknet_ros/launch

    Launch the launch file:

    roslaunch darknet_ros darknet_ros.launch
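darknet_ros publishes its detections as a list of bounding boxes (class label, confidence, and pixel corners). A small sketch of picking the most confident detection from such a list; the tuple layout below is an illustrative stand-in for the actual BoundingBox message fields, not the message itself:

```python
# Each detection: (class_name, probability, xmin, ymin, xmax, ymax).
# This tuple layout stands in for the darknet_ros BoundingBox message fields.

def best_detection(boxes, min_prob=0.5):
    """Return the most confident detection above min_prob, or None."""
    candidates = [b for b in boxes if b[1] >= min_prob]
    return max(candidates, key=lambda b: b[1]) if candidates else None

boxes = [("person", 0.92, 120, 40, 220, 300),
         ("chair", 0.61, 300, 150, 380, 260),
         ("dog", 0.30, 10, 10, 50, 50)]
# best_detection(boxes) -> ("person", 0.92, 120, 40, 220, 300)
```

A subscriber callback could apply this to each incoming detection message before deciding which object to track.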

  7. To run the object following script

    rosrun slam_simulations obj_following.py
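Object-following nodes such as obj_following.py typically steer with a simple proportional controller on the tracked object's horizontal offset in the camera image. The sketch below shows that control law in isolation; the gain, image width, and function name are illustrative assumptions, not values taken from this repo:

```python
def steering_command(box_center_x, image_width=640, gain=0.005):
    """Angular velocity that turns the bot toward the tracked object.

    Positive means turn left, negative means turn right; in a real node
    this value would be published on /cmd_vel as Twist.angular.z.
    The gain and image width here are illustrative assumptions.
    """
    error = (image_width / 2.0) - box_center_x  # pixels off-centre
    return gain * error

# Object dead ahead (centre of a 640-px image) -> no turn needed:
# steering_command(320) == 0.0
```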

Project Methodology

  • The project started with learning ROS commands and its file structure, and learning how to create a launch file.
  • A model bot was then built, to which links, joints and sensors were added for use in Gazebo.
  • The sensor readings from the Hokuyo lidar were then used to create a map of the surroundings with the gmapping package in ROS.
  • Various deep learning modules for object detection, and filters to localize the bot in its environment using adaptive Monte Carlo localization, were studied.
  • YOLO was then used to detect objects in the bot's environment, using the Darknet open-source framework.

Results and Demo

  • Gmapping demo
  • Navigation.mp4 (navigation demo video)
  • Object detection demo

Future Work

  • To improve the bot's object following capabilities.
  • To merge tracking with SLAM for a more robust output.
  • To implement the bot on real hardware.

Troubleshooting

  • Clone the darknet_ros repository recursively to avoid possible CMake errors.
  • For better navigation, the YAML parameter files in the config folder can be tuned.

Contributors

Acknowledgements

A special thanks to our mentors for this project:

Resources

License

The license used for this project.

Built as part of the Eklavya Project.
