GSOC 2022: Rover AutoDocking conclusion

Hi everyone! This summer I got the opportunity to do a project with ArduPilot under the Google Summer of Code programme. I am writing this blog post to share my experience and accomplishments during this time.
First of all, I would like to begin by thanking my mentors @rmackay9 and @peterbarker for all their support and guidance. Thank you for always being helpful and kind; much of the credit for the successful completion of this project goes to my mentors. Next, I would like to thank Rishabh for his valuable suggestions and feedback on my work. His expertise in the libraries related to the project helped me a lot throughout. Finally, I would like to thank the whole ArduPilot community for all their support.

Project summary
The project was to implement a new control mode (named DOCK mode) in ArduRover which automatically maneuvers the vehicle towards a visual docking target. Currently, we have added support for sending information about the docking target from a camera attached to a companion computer. The companion computer detects the ArUco marker/AprilTag on the docking station and calculates the distance and angle to the target. It then sends a MAVLink message containing this target information to the autopilot. These messages are handled by the AC_PrecLand library (the same library we use to perform precision landing on Copter), which estimates the position of the docking target w.r.t. the EKF origin. This location is then used by our algorithm in DOCK mode to maneuver the vehicle towards the dock.
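In practice, the companion computer typically detects the marker with a library such as OpenCV's ArUco module and forwards the result to the autopilot as a MAVLink LANDING_TARGET message. The core conversion from the marker's pixel position to the angular offsets the autopilot expects can be sketched like this (function name, resolution and field-of-view values are illustrative, not taken from the project code):

```python
import math

def target_angles(px, py, res=(640, 480),
                  fov=(math.radians(60), math.radians(45))):
    """Convert the pixel coordinates of a detected marker centre into
    angular offsets (radians) from the camera's optical axis.

    res: camera resolution (width, height) in pixels.
    fov: horizontal and vertical field of view in radians.
    The returned angles are what would be packed into the angle_x /
    angle_y fields of a LANDING_TARGET message.
    """
    angle_x = (px - res[0] / 2.0) * fov[0] / res[0]
    angle_y = (py - res[1] / 2.0) * fov[1] / res[1]
    return angle_x, angle_y
```

A marker at the image centre yields zero offsets, while a marker at the right edge yields an offset of half the horizontal field of view.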
I was able to complete the project within the stipulated time and successfully tested it on my skid-steering rover. I would like to thank the funding team for providing the funds required to build the rover. I will now give a brief overview of the work done during the project.

Improved precision landing target simulation
Getting the simulation right for a feature is a really important task. It was needed for writing the autotest for this feature, so that we will know if it ever gets broken in the future. Before this improvement, we could only simulate a precision landing target which could be seen by a copter. After the changes, we can now simulate a precision landing target in SITL with various orientations, such that it can be seen by a rover approaching it from different directions.
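The idea of an oriented target can be illustrated with a simple visibility test: the target has a normal vector (pointing straight up for a copter-visible target, or horizontally for a rover-visible one), and a vehicle only "sees" it from within a cone around that normal. This is a simplified sketch of the concept, not the actual SITL code; the names and the cone half-angle are made up for illustration:

```python
import math

def target_visible(target_pos, target_normal, vehicle_pos,
                   half_angle_deg=60.0):
    """Return True if the vehicle lies within the target's visibility cone.

    target_normal: unit vector pointing out of the face of the target.
    half_angle_deg: half-angle of the cone within which the marker can
    be detected by the camera.
    """
    to_vehicle = [v - t for v, t in zip(vehicle_pos, target_pos)]
    dist = math.sqrt(sum(c * c for c in to_vehicle))
    if dist == 0:
        return False
    cos_angle = sum(n * c for n, c in zip(target_normal, to_vehicle)) / dist
    return cos_angle >= math.cos(math.radians(half_angle_deg))
```

A rover approaching a horizontally-facing target head-on passes this test, while a vehicle behind or beside the target does not.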

Reusing precision landing library on rover
We already have a very nice and powerful library which detects and estimates the position of the target for precision landing on copters. Since autonomous docking and precision landing are very similar features, we always wanted to reuse the precision landing library to avoid duplicating code and keep things simple. We were able to use the same library on Rover after making a few changes to the code dealing with target vector construction, and the results were great.

Adding the dock mode
The next task was to write the new DOCK mode. We retrieve the target vector from the precision landing library and feed it into the rover's position controller to get the desired speed and turn rate, which are then fed to the steering and speed controllers to produce the final outputs. This task was pretty simple and we got an initial prototype of the DOCK mode working in very little time.
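The control flow described above can be pictured as a simple proportional controller: steer towards the bearing of the target and slow down as the distance shrinks. This is a hypothetical illustration, not ArduPilot's actual position controller; the gains and limits are made up:

```python
import math

def dock_control(target_x, target_y, kp_turn=1.0, kp_speed=0.5,
                 max_speed=2.0):
    """Compute a desired speed (m/s) and turn rate (rad/s) from the
    target position expressed in the vehicle's body frame
    (x forward, y right, in metres).
    """
    distance = math.hypot(target_x, target_y)
    bearing = math.atan2(target_y, target_x)    # angle to target
    turn_rate = kp_turn * bearing               # turn towards the target
    speed = min(kp_speed * distance, max_speed) # slow down as we close in
    return speed, turn_rate
```

With the target straight ahead the turn rate is zero; a target to the right produces a positive (rightward) turn rate.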

Developing simulation environment in Gazebo
Testing is an important part of any project. After getting the initial prototype of the DOCK mode working, I went on to set up a simulation environment in Gazebo. I designed a docking station in Blender and placed it in a Gazebo world, then simulated a rover in that environment and tried docking it to the virtual docking station. We were able to dock the vehicle at the docking target successfully. Here is a video of one of the Gazebo tests we did.

Heading correction
During the Gazebo tests, we faced a problem: the vehicle would not dock head-on when DOCK mode was engaged with the vehicle at an angle to the docking station rather than directly in front of it. Estimating the docking station's slant from the camera information alone is difficult (the values depend on factors such as the distance to the docking target, camera resolution, etc.), so we developed a heading correction algorithm which corrects the heading of the vehicle such that it always approaches the docking station from a certain desired direction. A detailed explanation of the heading correction algorithm can be found in this blog post.
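One simple way to picture the idea: rather than steering straight at the dock, the vehicle first aims at a point on the desired approach axis, a short distance in front of the dock, so that the final leg is flown head-on. The sketch below uses hypothetical names and a made-up standoff distance; the actual algorithm is described in the linked blog post:

```python
import math

def approach_waypoint(dock_x, dock_y, approach_heading, standoff=5.0):
    """Compute an intermediate waypoint `standoff` metres short of the
    dock, placed along the desired approach axis.

    approach_heading: the direction of travel (radians) the vehicle
    should have when it arrives at the dock. Steering to this waypoint
    first lines the vehicle up before the final head-on approach.
    """
    wp_x = dock_x - standoff * math.cos(approach_heading)
    wp_y = dock_y - standoff * math.sin(approach_heading)
    return wp_x, wp_y
```

For a dock at (10, 0) with a desired approach heading of 0 (travelling along +x), the vehicle would first head to (5, 0) and then drive straight into the dock.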

Final testing on real vehicle
After completing the coding part within the planned time, I started preparing for the real vehicle tests. I had never built a rover before and faced a few problems while setting it up. Thanks to ArduPilot's detailed wiki and Randy's videos on tuning a rover, I was able to resolve the issues. The test results on the real vehicle were awesome.
Here is a video of one of the tests I did on my rover.

Wiki
The work on the wiki is almost complete. The instructions on how to set up the vehicle to use the DOCK mode will be available once the related PR is merged.

Pull Requests
The following pull requests were created as a part of this project, and I'm pleased to say that most of them have been merged into ArduPilot's main repository.

Link to my previous blog posts

GSoC 2022: Rover AutoDocking
GSoC 2022: Rover/Boats AutoDocking progress update


@shiv-tyagi congratulations! You have handled all my precision landing questions very nicely and the project has come along very well. Well done!


Hi, this is very interesting. I am a beginner. Is there a step by step guideline for us to learn please?
Thank you.

We have documented the instructions to set up and use this mode here - Dock Mode — Rover documentation

Hi @shiv-tyagi, could you please post a link to your repos for the Gazebo docking example (models and apriltag detection). I was going to look at upgrading your example to Gazebo Garden / ROS2 and hoped to use your working version as a starting point. I thought we could start with a rover and then move onto a boat / ocean version. The VRX project have some dock models that could have an apriltag placed on them, and we can add environment disturbances such as wind and waves to see how the docking algorithm handles that.

Hey @rhys! Sorry for the late reply. I got busy with some really important stuff lately.

I did all of my testing in Gazebo using SwiftGust's plugin as a base. I made some changes to the plugin to suit my needs. I also used GStreamer to get the camera feed into a Python script and do the processing there (I don't know if it is the best way to do so, but it got things working for me back then). Here is a link to my repository.

Thanks @shiv-tyagi, that’s what I was looking for. Should be able to use ros_gz to extract the image and process in a ros2 Python node. I’ll post an update when I get it running.
