GSoC 2019: Integration of ArduPilot and VIO tracking camera for GPS-less localization and navigation

1. Introduction

Navigation through unfamiliar spaces while avoiding obstacles is of utmost importance for any autonomous robot. In many applications for ground and aerial robots, there is a growing need for accurate navigation and localization in areas where GPS is not accessible. One technology with the capability to provide accurate, robust GPS-less localization and navigation is visual inertial odometry (VIO), which combines information from a camera and an inertial sensor to tell the drone where it is in relation to its environment.

In this proposal, I would like to contribute new functionality to the ArduPilot codebase to better utilize the Intel® RealSense™ Tracking Camera T265 for accurate localization and navigation, freeing up resources on the companion computer for other high-level tasks. I will also provide documentation with step-by-step hardware and software integration procedures for real-life experiments, so that anybody can follow along and even more amazing applications can be developed in the future.

2. New features for the codebase

Below is the list of new features that I wish to contribute to the ArduPilot codebase in this project.

A. Add/improve support for ArduPilot to incorporate VIO data for external navigation (ROS and non-ROS).

Most open-source VIO algorithms are implemented in ROS (Robot Operating System), a standard framework for robotics research in academia. However, ROS can present a steep learning curve before any real tests can be done. In this project, a general framework will be implemented, in both ROS and non-ROS environments, that lets ArduPilot incorporate VIO data for precision localization and navigation.
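To give a concrete picture of the non-ROS side, here is a minimal sketch (not the project's final code) of how a companion computer could stream an external pose to ArduPilot over MAVLink using pymavlink; the connection string is a placeholder for your actual serial or UDP link:

```python
import time
from pymavlink import mavutil

# Placeholder connection string; adjust for your serial/UDP link.
master = mavutil.mavlink_connection('udpin:0.0.0.0:14550')
master.wait_heartbeat()

def send_vision_position(x, y, z, roll, pitch, yaw):
    """Send one pose sample (NED frame, metres and radians) to the FCU."""
    usec = int(time.time() * 1e6)  # timestamp in microseconds
    master.mav.vision_position_estimate_send(usec, x, y, z, roll, pitch, yaw)

# Stream a dummy static pose at ~30 Hz (replace with real VIO output).
while True:
    send_vision_position(0.0, 0.0, -1.0, 0.0, 0.0, 0.0)
    time.sleep(1.0 / 30.0)
```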

Moreover, most of the current support in ArduPilot is implemented in EKF2. Adding support for EKF3 would be of great benefit for future development. The key idea for this part is to port the same methods and interfaces currently available in EKF2 to EKF3 before adding new features to both EKFs (if it ain’t broke, don’t fix it).

B. Add support for different VIO camera orientation setups.

VIO algorithms can only achieve robust performance if the camera can see a surface with as many distinctive features as possible. For that reason, different operating scenarios will only permit certain camera orientations on the UAV frame. For example, a UAV flying indoors at low altitude will likely need a forward-facing camera, while flying outdoors at high altitude will only allow the camera to face downward.

With different mounting angles, the data from the VIO camera will require a transformation to align its frame with the vehicle's before the position data can be used by ArduPilot. A frame-alignment method and associated parametrization will be developed, thus simplifying the setup process and making the system more flexible.
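As an illustration of the idea, here is a sketch of such an alignment for a hypothetical downfacing mount, using scipy for rotation composition. The fixed rotation below is illustrative only; the actual value depends on how the camera is mounted and on the frame conventions involved:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Hypothetical fixed rotation from the camera's pose frame to the
# vehicle's body frame, for a camera facing straight down.
CAM_TO_BODY = R.from_euler('xyz', [180, 0, 0], degrees=True)

def camera_pose_to_body(position_cam, orientation_cam):
    """Rotate a camera-frame pose into the vehicle body frame.

    position_cam: (3,) position in the camera frame, metres.
    orientation_cam: scipy Rotation for the camera-frame attitude.
    """
    position_body = CAM_TO_BODY.apply(position_cam)
    orientation_body = CAM_TO_BODY * orientation_cam
    return position_body, orientation_body

# Example with a dummy pose:
pos, rot = camera_pose_to_body(np.array([1.0, 0.5, -0.2]),
                               R.from_euler('xyz', [0, 0, 30], degrees=True))
print(pos, rot.as_euler('xyz', degrees=True))
```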

C. Hardware + software setup guide for a quadcopter platform.

A step-by-step guide would help everyone to be able to replicate and validate the code quickly. The setup guide will include:

  • Hardware setup: quadcopter platform with ArduPilot + VIO tracking camera (Intel Realsense T265) + companion computer.

  • Software setup: ROS and non-ROS environments, RealSense SDK (on the companion computer) + ArduPilot firmware.

  • Parameter configuration: on the companion computer and ArduPilot to send and receive VISION_POSITION_ESTIMATE messages and apply the camera pose transformation (a parameter-setting sketch follows this list).

  • Ground testing: how to monitor and plot data messages on GCS to ensure that everything is running according to plan.

  • Flight test: explore the operational envelope in different modes (Loiter, Guided, Auto) and environments.

  • Troubleshooting guide.
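For the parameter-configuration step mentioned above, a sketch like the following could set the relevant values from the companion computer instead of a GCS. The parameter names are the EKF2-era ones from the ArduPilot non-GPS navigation wiki; always check the wiki for the values appropriate to your firmware version:

```python
from pymavlink import mavutil

master = mavutil.mavlink_connection('udpin:0.0.0.0:14550')  # placeholder link
master.wait_heartbeat()

params = {
    'AHRS_EKF_TYPE': 2,   # use EKF2
    'EK2_GPS_TYPE': 3,    # inhibit GPS use, rely on external navigation data
    'GPS_TYPE': 0,        # no GPS attached
    'COMPASS_USE': 0,     # rely on the camera's yaw instead of the compass
}

for name, value in params.items():
    master.mav.param_set_send(
        master.target_system, master.target_component,
        name.encode('utf-8'), float(value),
        mavutil.mavlink.MAV_PARAM_TYPE_REAL32)
```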

D. Experiments and flight test results with quadcopter.

Besides validating stability in real flights (by visually observing the quad), data logs will also be analyzed to verify the following (a small log-parsing sketch follows this list):

  • EKF2 and EKF3 outputs, states, and status data.

  • Fusion with complementary state estimates such as Optical Flow.

  • VIO accuracy of z estimates in relation to the RangeFinder.

  • VIO operation under different lighting conditions and vibration levels.
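As one example of the planned analysis, the z-accuracy check could replay a telemetry log and compare the camera's height estimate against the rangefinder. This is a sketch assuming the standard MAVLink message definitions; 'flight.tlog' is a placeholder file name:

```python
from pymavlink import mavutil

log = mavutil.mavlink_connection('flight.tlog')

vision_z, range_z = [], []
while True:
    msg = log.recv_match(type=['VISION_POSITION_ESTIMATE', 'DISTANCE_SENSOR'])
    if msg is None:
        break  # end of log
    if msg.get_type() == 'VISION_POSITION_ESTIMATE':
        vision_z.append((msg.usec, -msg.z))  # NED: up is -z, metres
    else:
        range_z.append((msg.time_boot_ms, msg.current_distance / 100.0))  # cm -> m

print('%d vision samples, %d rangefinder samples' % (len(vision_z), len(range_z)))
```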

E. Add documentation for simple setup and testing.

The documents and reports that will be created along the way will be added to the wiki, specifically:

  • Blog posts on https://discuss.ardupilot.org/: reporting progress as well as receiving feedback from the community to adjust and improve.

  • New wiki page “VIO for Non-GPS Navigation” (similar to Optitrack) which will include: hardware setup, software setup, configuration, ground testing, real flight.

3. Expecting things to go wrong

With a nice tool like the T265, it might be tempting to try out all sorts of things. However, always keep in mind that just like any other equipment, the T265 has its own limits and we should learn about them, anticipate when things can go wrong and prevent them from happening in the first place.

  • The T265’s underlying algorithm is susceptible to vibration (as is the case for most VIO/SLAM algorithms), which can lead to divergence or complete loss of position tracking. Providing sufficient damping for the camera is therefore crucial to achieve robust and reliable performance. Additionally, adding wheel odometry or other sensors (optical flow, LiDAR, etc.) can help the T265 work more reliably.

  • At longer distances, the output scale is reported to be off by 20–30% of the actual scale.

  • With its limited memory, the T265 can store a maximum map size of “roughly a couple of rooms/house scale”, depending on how complicated the environment is (source).

References

I am certainly not the only one interested in this kind of project. There has been a great deal of prior work in this area, and my proposal is made possible only thanks to those awesome contributions.

For this project, I will have the privilege of working with @ppoirier and @rmackay9. Hopefully, at the end of this summer we will see the RealSense T265 tracking camera become a sort of plug-and-play “sensor” that anybody can get up and running on their vehicle. All of the progress will be documented in the coming blog posts, so stay tuned! As usual, your input is welcome and I look forward to hearing from you all.


Progress Update

As part of my journey through the 2019 ArduPilot GSoC project, a series of lab works will be introduced in the form of blog posts. These labs serve not only as milestones for the project but also as a step-by-step guideline for anyone who wishes to learn how to use the power of computer vision for autonomous robots. The labs include:

  1. Lab 1: Indoor non-GPS flight using AprilTags (ROS-based)
  2. Lab 2: Getting started with the Intel Realsense T265 on Raspberry Pi using librealsense and ROS.
  3. Lab 3: Indoor non-GPS flight using Intel T265 (ROS-based)
  4. Lab 4: Autonomous indoor non-GPS flight using Intel T265 (ROS-based).
  5. Lab 5: MAVLink bridge between Intel T265 and ArduPilot (non-ROS).
  6. Lab 6: Calibration and camera orientation for vision positioning with Intel T265.

Lab 1
In lab 1, we will apply the awesome work that has been done by @anbello @ppoirier @chobitsfan in a new scenario: using AprilTag, a fiducial marker system that can provide highly accurate 3D position data, to facilitate indoor non-GPS vision-based positioning.


Lab 2
Lab 2 will be our first step in the process of integrating the T265 camera with ArduPilot in ROS. Instructions will be laid out on how to install librealsense and its ROS wrapper on a Raspberry Pi 3 Model B, along with examples and demos to verify that everything is working in the end.
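Once the install finishes, a quick sanity check like the sketch below (assuming pyrealsense2 was built with its Python bindings) confirms the Pi can see the camera:

```python
import pyrealsense2 as rs

ctx = rs.context()
devices = ctx.query_devices()
if devices.size() == 0:
    print('No RealSense device found - check the USB connection.')
for dev in devices:
    # Print name and serial number of each connected RealSense device.
    print(dev.get_info(rs.camera_info.name),
          dev.get_info(rs.camera_info.serial_number))
```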


Lab 3
Continuing from the previous lab, this time we will finish up the installation and have the vehicle up and flying (finally). We will also dive a little into what happens behind the scenes, to understand what needs to be done and why, in order to make changes and further improvements in the future.


Lab 4
In this lab, we will perform simple autonomous tests in an indoor environment using Mission Planner and ROS scripts. Additionally, experiments will be carried out under several challenging scenarios to verify the tracking performance of the Intel RealSense T265.
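For reference, one such autonomous step could look like the sketch below, assuming the vehicle is already armed and in GUIDED mode: a single position target is commanded in the local NED frame over MAVLink (again with a placeholder connection string):

```python
from pymavlink import mavutil

master = mavutil.mavlink_connection('udpin:0.0.0.0:14550')  # placeholder link
master.wait_heartbeat()

# Use only the position fields; the type_mask bits below tell the FCU to
# ignore the velocity, acceleration and yaw setpoints.
master.mav.set_position_target_local_ned_send(
    0,                                   # time_boot_ms (not used)
    master.target_system, master.target_component,
    mavutil.mavlink.MAV_FRAME_LOCAL_NED,
    0b0000111111111000,                  # only x, y, z enabled
    1.0, 0.0, -1.0,                      # x, y, z in metres (z is down)
    0, 0, 0,                             # vx, vy, vz (ignored)
    0, 0, 0,                             # afx, afy, afz (ignored)
    0, 0)                                # yaw, yaw_rate (ignored)
```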


Lab 5
This time we will step out of the ROS framework. Instead, the Python wrapper for the RealSense T265 (pyrealsense2) will be utilized to capture 6-DoF pose and tracking confidence data from the device, send them to the FCU through MAVLink, and even automatically set the EKF home position.
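A condensed sketch of that idea follows (the full script lives in the lab post itself). The serial device and baud rate are placeholders, and a real implementation must rotate the T265 frame into NED before sending, as discussed in the camera orientation section above:

```python
import time
import pyrealsense2 as rs
from pymavlink import mavutil

master = mavutil.mavlink_connection('/dev/ttyUSB0', baud=921600)  # placeholder
master.wait_heartbeat()

# Configure the T265 pose stream.
pipe = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.pose)
pipe.start(cfg)

try:
    while True:
        frames = pipe.wait_for_frames()
        pose = frames.get_pose_frame()
        if pose:
            d = pose.get_pose_data()
            # NOTE: this sketch sends raw camera-frame values only to show
            # the message plumbing; rotate into NED and convert d.rotation
            # to roll/pitch/yaw in a real implementation.
            master.mav.vision_position_estimate_send(
                int(time.time() * 1e6),
                d.translation.x, d.translation.y, d.translation.z,
                0.0, 0.0, 0.0)
        time.sleep(1.0 / 30.0)
finally:
    pipe.stop()
```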

Lab 6
This blog is a direct next step from part 4. We will further enhance the performance of our non-ROS system by taking into account other factors of a robotic platform that are oftentimes ignored.


Conclusion

GSoC is now over and here is what has been accomplished during this period:

I want to thank everyone involved with this GSoC, and hopefully this new type of hardware will open the door for further experimentation and successful VIO/SLAM system operations.


Thanks for this well detailed blog.

Thien will update the progress on this blog on a weekly basis, with detailed lab reports and code status.
I invite anyone interested to join us in this “next level” of autonomous flight.


Thank you @ppoirier! I will try my best!

I am interested in this project. What is needed and how to start?

Hello, the goal is to make the RealSense T265 work with an RPI3 as a companion computer.

Thanks for the info. So both of these are hardware pieces: an integrated camera module from Intel and a Raspberry Pi 3. Is this something I need to purchase, or can I get them for free?

As for software/firmware, where can I get the source code, if available? What is the platform for code development?

As you can read above:

Bear in mind that we are at the beginning of the project, so the puzzle pieces will be added progressively.