Project: Rough terrain navigation using Deep Reinforcement Learning Part 5 - SLAM and Occupancy Mapping

I’ve been taking a bit of a break from the ArduPilot side of the project to focus on some of the computer vision work required for the DRL algorithm. This is a complete SLAM operation using the RealSense D435i depth camera and T265 tracking camera, along with the RTAB-Map and OctoMap ROS packages, to create a 3D point-cloud map which can then be used to generate an occupancy (level) map. This level map is what the DRL algorithm will use to determine the state space of the environment and to trial future actions.
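To give an idea of how the occupancy map could feed the DRL state space, here's a minimal sketch of turning a flattened ROS-style occupancy grid (row-major cells, 0–100 occupancy with -1 for unknown) into a 2D array an agent could consume. The function name and the 0/0.5/1 encoding are my own illustrative choices, not anything from a specific package:

```python
import numpy as np

def grid_to_state(data, width, height, unknown=-1):
    """Convert a flattened occupancy grid (row-major, values 0-100,
    -1 for unknown) into a 2D float array for a DRL state input.
    Free cells -> 0.0, fully occupied -> 1.0, unknown -> 0.5."""
    grid = np.asarray(data, dtype=float).reshape(height, width)
    return np.where(grid == unknown, 0.5, grid / 100.0)

# Toy 2x3 grid: top row is free, occupied, unknown
data = [0, 100, -1,
        0,  50, 100]
state = grid_to_state(data, width=3, height=2)
print(state)
```

The same idea extends to a stack of 2D levels if the map keeps multiple height slices rather than a single projection.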

It’s turned out pretty well so far: only a little noise and some duplicated objects still need to be tidied up, but otherwise I’m happy enough with it to start testing some simulated states and actions for the start of the algorithm. The simulation will most likely use ROS commands to move, while the real rover will use the companion computer to command attitude targets over MAVLink.
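For the real rover side, commanding attitude targets over MAVLink could look something like the sketch below using pymavlink's SET_ATTITUDE_TARGET message. The connection string, rates and type mask are placeholders for the actual rover setup, not my final configuration:

```python
import math

def euler_to_quaternion(roll, pitch, yaw):
    """Convert roll/pitch/yaw (radians) to the [w, x, y, z]
    quaternion that SET_ATTITUDE_TARGET expects."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return [cr * cp * cy + sr * sp * sy,
            sr * cp * cy - cr * sp * sy,
            cr * sp * cy + sr * cp * sy,
            cr * cp * sy - sr * sp * cy]

def send_attitude_target(yaw, thrust, port="udpout:127.0.0.1:14550"):
    """Sketch of sending an attitude target with pymavlink.
    The port string is a placeholder for the companion computer link."""
    from pymavlink import mavutil  # local import so the helper above stands alone
    master = mavutil.mavlink_connection(port)
    master.mav.set_attitude_target_send(
        0,                          # time_boot_ms
        master.target_system,
        master.target_component,
        0b00000111,                 # ignore body rates; use attitude + thrust
        euler_to_quaternion(0.0, 0.0, yaw),
        0.0, 0.0, 0.0,              # roll/pitch/yaw rates (ignored per mask)
        thrust)
```

The DRL action output would then map to a small set of yaw/thrust targets sent at a fixed rate.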

It was a little hard to fit all the visuals into one window, but the obstacle course is almost identical to the one used in part 3: Project: Rough terrain navigation using Deep Reinforcement Learning Part 3 - RC test run.

If anyone has any questions feel free to ask.



More information would greatly help make this a more interesting blog, for example:

  • Picture of the actual configuration
  • What OS is running, and which versions of ROS, OpenCV and other dependencies
  • How you have installed Rtabmap and Octomap ROS packages
  • How you have configured the parameters
  • What difficulties you have experienced and methods to alleviate
  • What is the throughput of the system, and will it qualify for navigation (>15 fps)?
  • If not, how will you work around it?

Keep on the good work

I wonder what you mean by “occupancy (level) map”? I guess this is something different than the ROS OccupancyGrid? (a 2D projection of your 3D octomap)

So that’s a big rover! What does it look like? I also built a big one and put a snow blade on its front this winter.


@ppoirier I might do a full config spec blog post when I have everything completely done but I have Ubuntu 20.04 on the companion computer (Intel NUC) running ROS Noetic with the following packages:

All have installation instructions on their GitHub/ROS wiki pages.

Still playing with some of the noise parameters to get rtabmap to classify objects better for its loop closure; otherwise almost all other parameters are on their defaults. The main issue I’m facing is that the camera will sometimes disconnect and cause problems with localisation and/or loop closure, so I’ve enabled some reconnection and reset parameters to help with this. I’ve also made sure the physical connections are solid and the proper cables are being used.

The point cloud from the depth camera is generated at 30 fps, but the cloud map is only updated at 1 fps. This should still be fine for navigation, as the robot will be moving quite slowly (perhaps <0.5 m/s) and the cameras have enough range to fully view the upcoming patch of terrain before the robot has even reached it. If you look at the video, you can see the next set of bricks/pallets is pretty well defined in the occupancy grid before the front wheels hit and the robot starts to climb.
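The arithmetic behind that claim is simple enough to check. The speed and update rate are from above; the usable depth range is my own conservative ballpark for the D435i, not a measured figure:

```python
# Sanity check on the 1 fps map update rate.
map_rate_hz = 1.0    # cloud map update rate (from the post)
max_speed = 0.5      # m/s, planned rover speed (from the post)
depth_range = 3.0    # m, assumed conservative useful D435i depth range

# Metres travelled between consecutive map updates
distance_per_update = max_speed / map_rate_hz

# Number of map updates between first seeing terrain and reaching it
updates_before_reach = depth_range / distance_per_update

print(distance_per_update)   # 0.5 m per update
print(updates_before_reach)  # 6.0 updates before reaching terrain seen at 3 m
```

So even at the slow 1 fps map rate, each patch of terrain appears in the map several updates before the wheels reach it, which matches what the video shows.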

This is the camera setup:

@soldierofhell sorry, yes, it is the ColorOccupancyGrid display from the octomap_rviz_plugins ROS package; I’ve just been calling it a few different names.