Integration of ArduPilot and VIO tracking camera (Part 5): Camera Position Offsets Compensation, Scale Calibration and Compass North Alignment (Beta)

Introduction

Continuing our ongoing series of labs, we have demonstrated how to let ArduPilot make full use of the Intel Realsense T265, a new off-the-shelf VIO tracking camera that can provide accurate position feedback in GPS-denied environments, with and without the use of ROS.

This blog post follows directly from part 4. We will further enhance the performance of our non-ROS system by taking into account other factors of a robotic platform that are oftentimes ignored. Specifically, we will look at:

  • Camera position offset compensation
  • Scale calibration
  • Compass north alignment (beta)

Let’s dive in.

1. Prerequisite

If you have just started with this series, check out part 1 for a detailed discussion on hardware requirements, then follow the installation process until librealsense is verified to be working.

In summary, basic requirements for this lab include:

  • An onboard computer capable of polling pose data from the T265 over a USB2 port. A Raspberry Pi 3 Model B has proven sufficient for our labs.
  • A working installation of librealsense and pyrealsense2.
  • The newest version of t265_to_mavlink.py, with the script verified to be working as established in part 4.
  • This lab is about improving the performance of the system introduced in part 4, so it’s best if you have already completed some handheld / flight tests and have a good sense of the before-after comparison throughout the process.

2. Camera Position Offset Compensation

What is the problem?

If the camera is positioned far away from the center of the vehicle (especially on large frames), the pose data from the tracking camera might not entirely reflect the actual movement of the vehicle. Most noticeably, when the frame performs a pure rotation (no translation at all), the feedback becomes a combination of rotation and translation along a curve, which is reasonable since that is how the camera actually moves. Conversely, on small frames this effect is negligible.

To fix that, we need to take a step back and see how all the coordinate systems are related:

What we actually want is the pose data of IMU body frame {B} (also called IMU frame or body frame). What we have from the tracking camera, after all of the transformations performed in part 4, is pose data of camera frame {C}. Up until now, we have treated the camera frame {C} and IMU body frame {B} as identical, and that’s where the problem lies.

General solution

Suppose the pose measurement provided by the tracking camera at any given time is Pc, and the camera-IMU relative transformation is H_BC (transforming {C} into {B}); then the desired measurement in the IMU body frame Pb is:

Pb = H_BC * Pc

with H_BC being a homogeneous transformation matrix, complete with rotation and translation parts.
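To make this concrete, here is a minimal numpy sketch of applying such a homogeneous transformation to a position measurement. The rotation and offset values are illustrative only, and this is not the actual script code:

import numpy as np

# H_BC: a 4x4 homogeneous transform with a rotation part R and a
# translation part d (example values only)
R = np.eye(3)                        # no extra rotation needed in our case
d = np.array([0.10, 0.0, -0.05])     # example offsets d_x, d_y, d_z (m)

H_BC = np.eye(4)
H_BC[0:3, 0:3] = R
H_BC[0:3, 3] = d

# A position measurement in the camera frame {C}, in homogeneous form
P_c = np.array([1.0, 0.5, -0.2, 1.0])

# The same measurement expressed in the IMU body frame {B}: Pb = H_BC * Pc
P_b = H_BC @ P_c
print(P_b[0:3])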

Solution in our case

Figuring out all of the elements of H_BC can be quite tricky and complicated. Thankfully, in our case, the rotation portion has already been taken care of in part 4 of this series. That is to say, {C} and {B} are ensured to have their axes always pointing in the same directions, so no further rotation is required to align them. All we need to do now is find the camera position offsets d_x, d_y, d_z.

  1. Measuring camera position offsets: the x, y and z distance offsets (in meters) from the IMU or the center of rotation/gravity of the vehicle, defined the same way as in ArduPilot’s wiki page Sensor Position Offset Compensation:
    (Figure: sensor position offsets along the X, Y and Z axes, from the ArduPilot wiki.)
The sensor’s position offsets are specified as 3 values (X, Y and Z) which are distances in meters from the IMU (which can be assumed to be in the middle of the flight controller board) or the vehicle’s center of gravity.

* X : distance forward of the IMU or center of gravity. Positive values are towards the front of the vehicle, negative values are towards the back.
* Y : distance to the right of the IMU or center of gravity. Positive values are towards the right side of the vehicle, negative values are towards the left.
* Z : distance below the IMU or center of gravity. Positive values are lower, negative values are higher.

  2. Modify the script: within t265_to_mavlink.py:

    • Change the offset values body_offset_x, body_offset_y and body_offset_z accordingly.
    • Enable offset compensation: body_offset_enabled = 1.
  3. Testing: the next time you run the script, notice the difference during pure rotation movements. Other than that, the system should behave the same.

# Navigate to and run the script
cd /path/to/the/script
python3 t265_to_mavlink.py

Note: similar to the note on the Sensor Position Offsets wiki page: in most vehicles which have all their sensors (camera and IMU in this case) within 15 cm of each other, it is unlikely that providing the offsets will result in a noticeable performance improvement.
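For those curious about what the compensation does inside the loop, below is a rough sketch of the idea, assuming the camera and body axes are already aligned (as ensured in part 4). The offset values mirror body_offset_x/y/z, but the actual script implements this with homogeneous matrices, so treat this only as an illustration:

import numpy as np

# Camera position offsets relative to the IMU/CG, in meters, following the
# wiki convention above (X forward, Y right, Z down). Values are examples.
body_offset = np.array([0.10, 0.0, -0.05])   # body_offset_x, _y, _z

def compensate_camera_offset(cam_position, cam_rotation):
    # cam_position: camera position in the reference frame (3-vector)
    # cam_rotation: camera's current 3x3 rotation matrix in the reference frame
    # The camera sits at +body_offset relative to the body, so the body origin
    # is at -body_offset in the (aligned) camera axes; rotate that vector into
    # the reference frame and add it to the camera position.
    return cam_position + cam_rotation @ (-body_offset)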

3. Scale calibration

What is the problem?

At longer distances, the output scale is in some cases reported to be off by 20-30% of the actual scale. Several GitHub issues have been raised about this problem.

Solution

We need to find a scale factor that can up/downscale the output position to the true value. To achieve that, we will go with the simple solution of measuring the actual displacement of the camera, then dividing it by the estimated displacement received from the tracking camera.

Once the scale factor is determined, the position output from the camera will be re-scaled by this factor. The scale factor can then be saved in the script for subsequent runs.
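As a quick worked example with made-up numbers, the calculation and the re-scaling look like this:

# Hypothetical numbers from one handheld test
actual_displacement = 2.00      # meters, measured with a tape measure
estimated_displacement = 1.60   # meters, reported by the tracking camera

# New scale factor = true distance / estimated distance
scale_factor = actual_displacement / estimated_displacement   # -> 1.25

# Every position output from the camera is then re-scaled before being
# sent to ArduPilot
def rescale_position(x, y, z, scale_factor):
    return x * scale_factor, y * scale_factor, z * scale_factor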

Procedure:

Note: In the steps below, I recommend doing the tests in the same environment and on the same trajectory (for example, walking a 2 m x 2 m square every time). If the environment changes to a new location or the frame configuration changes, it’s best to first restore the scale factor to its default value (1.0) and calibrate again if the scale is still incorrect.

  1. Do at least a few tests to determine whether the scale changes arbitrarily between runs or stays the same (however incorrect). The behavior will determine the action in the last step.

  2. Run the script with scale calibration option enabled:
    python3 t265_to_mavlink.py --scale_calib_enable true

  3. Perform handheld tests and calculate the new scale based on the actual displacement and the position feedback data. The actual displacement can also come from other sensors’ readings (a rangefinder, for example), provided they are accurate enough.

  4. Input the new scale: by default, the scale factor is 1.0. The scale should be input as a floating point number, e.g. 1.1 is a valid value. At any time, type a new scale into the terminal and press Enter. The new scale takes effect immediately. Note that the scale should only be modified when the vehicle is not moving, otherwise the local position (the quadcopter icon on the map) might diverge.

  5. Observe the changes on Mission Planner’s map. Below are some walking tests by @ppoirier on the same trajectory, with two different scale factors.

  6. Flight tests: Once you have obtained a good scale, flight tests can be carried out like normal.

  7. Save or discard the new scale value: if the original scale is incorrect but consistent between runs, modify the script and change the default scale_factor. If the scale changes arbitrarily between runs, you might have to perform this calibration again next time.

4. Compass north alignment (BETA)

Note 1: Of the new features introduced in this post, compass north alignment is verified to work with the internal compass, with and without optical flow. However, more evaluation is needed in cases where multiple data sources are used at the same time (multiple compasses, GPS, optical flow etc.). Beta testers are therefore much appreciated.

Note 2: In outdoor testing, optical flow can really help to improve stability, especially against wind.

What is the problem?

If you wish to have the heading of the vehicle aligned with the real world’s north, i.e. 0 degrees always means facing magnetic north, then this section is for you.

General solution

The main implementation idea comes from this post, adapted to our robotic application, workflow and other modules.

Solution in our case

  • Assumptions:

    • Suppose that the Realsense T265 and the compass are rigidly attached to the frame, meaning the translation and rotation offsets won’t change over time. Furthermore, the compass’s heading and the pose data (already transformed into the NED frame) are assumed to be aligned with the vehicle’s forward direction.
    • Compass data is available through the MAVLink ATTITUDE message.
  • Procedure:

    • Compass yaw data is updated in the background by listening to the MAVLink ATTITUDE message.
    • In the main loop, we capture the T265’s latest raw pose and run the other submodules, then multiply by the latest compass transformation matrix, which only takes the rotation into account. The translation offsets do not contribute to this solution and are ignored (see the sketch at the end of this section).
    • The final pose with rotation aligned to magnetic north will be used.
  • ArduPilot parameters:

# Enable the compass and use at least one of them
COMPASS_ENABLE = 1
COMPASS_USE = 1

Optical flow can also be enabled. The use of other systems (multiple compasses, GPS etc.) requires more beta testing.

  • Enable compass north alignment in the script:
# Navigate to the script and set compass_enabled = 1
# (enable using yaw from the compass to align north; zero degrees is facing north)
nano /path/to/t265_to_mavlink.py

# Run the script
python3 /path/to/t265_to_mavlink.py
  • Verify the new heading: the vehicle’s heading shown on the map should now match its actual heading in the real world. Other than that, everything should work as before.

    (Same vehicle position and heading in real world. Initialization without compass on the left, with compass on the right)

  • Ground test, handheld test and flight test: This feature is still in beta since consistent performance across systems is still an open question.

    • Ground test: Once the system starts running, leave the vehicle on the ground and check whether the final heading drifts. It should stay aligned with north even after vision position data starts streaming. If it drifts, that’s a bad sign for flight tests: check that the compass data is stable and not jumping around, then retry.

    • Handheld test: move the vehicle in a specific pattern (square, circle etc.) and verify that the trajectory on the map follows it. For example, if you moved in a cross pattern, you should see a cross and not a cloverleaf.

    • Flight test: If all is well, you can go ahead with flight tests. Note that things can still go wrong at any time, so proceed with caution and always be ready to regain control if the vehicle starts to move erratically.
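As referenced in the procedure above, here is a minimal sketch of the yaw-alignment idea: the compass yaw (in radians, read for example with pymavlink via msg = conn.recv_match(type='ATTITUDE') and msg.yaw) is applied as a pure rotation about the down axis of the NED frame, with no translation. The function name and usage are illustrative, not the actual script’s:

import numpy as np

def align_with_compass_north(position_ned, compass_yaw):
    # Rotate a NED position about the down (Z) axis by the compass yaw so that
    # zero heading corresponds to magnetic north. Only the rotation part is
    # applied; translation offsets are ignored, as described above.
    c, s = np.cos(compass_yaw), np.sin(compass_yaw)
    R_z = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return R_z @ np.asarray(position_ned)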

Here are some outdoor tests by @ppoirier, with all of the offset, scale and compass settings introduced in this blog post.

[Development Update]

Voice and message notification for confidence level with Mission Planner

With the latest version of the script t265_to_mavlink.py, the confidence level will be displayed on Mission Planner’s message panel and HUD, as well as announced by speech.

  • Notification will be sent only when the system starts and when the confidence level changes to a new state, for example from Medium to High.
  • If there are some messages constantly displayed on the HUD, you might not be able to see the confidence level notification.
  • To enable speech in Mission Planner: Config/Tuning > Planner > Speech > tick “enable speech”.
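For those wondering how the notification reaches Mission Planner: it is sent as a MAVLink STATUSTEXT message from the companion computer. Below is a rough pymavlink sketch; the connection string and message text are illustrative, not necessarily what the actual script uses:

from pymavlink import mavutil

# Illustrative connection; the real script reuses its existing MAVLink link
conn = mavutil.mavlink_connection('udpout:127.0.0.1:14550', source_system=1)

def notify_confidence(conn, confidence_text):
    # Send a STATUSTEXT; Mission Planner shows it in the Messages panel and
    # on the HUD, and speaks it if speech is enabled
    conn.mav.statustext_send(
        mavutil.mavlink.MAV_SEVERITY_NOTICE,   # adjust severity if not shown
        ('Tracking confidence: ' + confidence_text).encode('utf-8'),
    )

notify_confidence(conn, 'High')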

5. Open issues and further development

As a relatively new kind of device, it’s not surprising that there are multiple open questions regarding the performance and features of the Intel Realsense T265. Good sources of new information and updates are the librealsense and realsense-ros repos, where interesting info can be found in a number of open issues.

All in all, check the repos regularly if you are developing applications using this device, or just want to learn and explore as much as possible.

6. Conclusion and next steps

In this blog, we have taken the next steps to improve and enhance the performance of our system, which incorporates a VIO tracking camera with ArduPilot using Python and without ROS. Specifically, we have added support for:

  • camera position offset compensation,
  • scale calibration and correction, with a step-by-step guide,
  • a working (beta) method to align the heading of the system with the real world’s north.

We have now completed the sequence of weekly labs for our ArduPilot GSoC 2019 project. But we still have even more exciting developments on the way, so stay tuned! Here are a few Github issues to give you a glimpse of what’s next: #11812, #10374 and #11671.

If you are interested in contributing, or just keen on discussing the new developments, feel free to join ArduPilot’s many Gitter channels: ArduPilot and VisionProjects.

Hope this helps. Happy flying!


Thanks again for this awesome series of labs on the T265.

This constitutes the foundation of a specific wiki that will be released later this summer.
More than a monograph on a specific subject, this series includes a great introduction to ROS, a good explanation of the transformation library, and an experimentation methodology that we can apply at a broader scale.

So I tell you ‘‘Au Revoir’’ as you start diving into code, and hopefully you will bring back some enhancements on how this 6 DoF state estimator can be optimally fused and analyzed within the different EKFs.


These are my notes on the state of experimentation so far:

  • We are still at an early beta stage; this is a highly experimental system

  • You cannot “Plug & Go” at the moment; you need constant preflight verification:
    1- Make sure the system kicks in (GPS Glitch) and that vision is good (Ctrl-F / MAVLink Inspector / check vision confidence level)
    2- The compass is correctly aligned with the system
    3- Walk the bird in a square or cross-like pattern, making sure it is tracking straight and not ballooning or drawing cloverleaf patterns, which are signs of misalignment and a certain failure in flight, as the EKF will toilet-bowl or, worse, act like a Psychotic Wasp (copyright @fnoop)

  • So far all my tests are forward facing, as I try to get the system super stable before experimenting above 3 meters

  • Most of my tests are guided or manual takeoff in Loiter followed by Circle at different speeds and diameters.
    Here is an example of an outdoor flight under a fully cloudy sky: guided takeoff, then initiate Circle (remember to set throttle) with a 1-meter diameter at 2 meters altitude.

  • Direct and indirect sunlight is bad for the T265. We need more experimentation with exposure control and sun filtering (basically installing polarized sunglasses…)
  • More data is required from the EKF; we need better integration with vision_position and the associated EKF1, EKF2, EKF3 and EKF4 log messages

I have successfully flown my drone with the T265 providing the vision position estimate. However, I am hoping to combine the precision of the T265 with the accuracy of Apriltag precision landing. Any thoughts on detecting Apriltags with the T265?
I successfully ran the Librealsense example here: https://github.com/IntelRealSense/librealsense/tree/development/examples/pose-apriltag

How would I get this stream to run through the realsense ros wrapper?
https://github.com/IntelRealSense/realsense-ros

I can see two ways to combine the T265 with precision landing using Apriltags, but of course these are just my two cents:

  • non-ROS: combine the t265_to_mavlink.py and pose-apriltag examples, then add new functions to send the landing target to ArduPilot (see the sketch below). Everything can be done in one script or split into multiple scripts.
  • ROS-based: as of this writing the realsense-ros wrapper does not contain the pose-apriltag example, so you can either port the functions of pose-apriltag into the wrapper, or feed the image streams to the apriltag-ros wrapper and modify it to send the necessary messages to ArduPilot.
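For the non-ROS option, the detection would typically be reported to ArduPilot with the MAVLink LANDING_TARGET message. Here is a rough pymavlink sketch; the connection string is illustrative, and the angles and distance are placeholders you would compute from the pose-apriltag detection:

import time
from pymavlink import mavutil

conn = mavutil.mavlink_connection('/dev/ttyUSB0', baud=921600)  # illustrative

def send_landing_target(conn, angle_x, angle_y, distance_m):
    # angle_x, angle_y: angular offsets (radians) of the tag from the camera
    # center; distance_m: distance to the tag in meters
    conn.mav.landing_target_send(
        int(time.time() * 1e6),               # time_usec
        0,                                     # target_num
        mavutil.mavlink.MAV_FRAME_BODY_NED,    # frame
        angle_x, angle_y,                      # offsets from image center (rad)
        distance_m,                            # distance to target (m)
        0.0, 0.0,                              # size_x, size_y (unused here)
    )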

Ok thanks for your input. I will look into both solutions.

Hi there,
Great post. I’ve been following and trying to adapt it to my system, with an up-facing camera.
I’m currently trying to apply the displacement to the pose in ROS. Any thoughts on how I could do that in the existing code?
Thanks

Thanks @dookei for your interest.

On the ROS side of the code, you can apply the body frame displacement in this part, before the transformation of the frames takes place.

However, I should note that there are currently ongoing developments in the ArduPilot codebase that will turn the camera-body translation/rotation into user-defined parameters, along with a host of new features, so no further modification on the companion computer side would be required. If you are interested, take a look at these PRs: https://github.com/ArduPilot/ardupilot/pull/13982 and https://github.com/ArduPilot/ardupilot/pull/14056 and https://github.com/ArduPilot/ardupilot/pull/14103.

Nevertheless, I also intend to bring the features in this blog post to the ROS side, so let me know if you want to try it out.

Hi @LuckyBird, thank you for the fast reply.
I am actually using PX4 at the moment, but your code worked with it too after some minor modifications.
For my project I am currently using several D435 cameras for the depth cloud input to ROS and the T265 for pose estimation. I ended up creating more reference frames for the several cameras and other relevant components of the drone.
Yes, it would be great to try it out. I have other things going on too that I need to develop and try.

Many thanks to @LuckyBird et al for all your efforts. I am currently running Rover’s 4.0 beta with an old Pixhawk next to an RPi 4 running APSync. I tweaked one of the .py files so I can see my rover in my backyard. As a real beginner, that’s progress! :wink: It does track but occasionally burps on simple missions. I have so many questions, but let’s start with a fun one: does the Rover firmware have the same T265 capabilities as the Copter versions y’all have been using? I’m running the latest MP beta and am unable to see the confidence level on my HUD.

Thanks

Bill

Hi @RoboBill, many thanks for your interest in the project and for starting the discussion.

From the firmware point of view, the input and processing parts are the same: we have tracking data coming from the camera, which is fed into the EKF of your choosing, with the final local position as output.

Most of the tests are done with Copter, so I cannot say for sure how it performs on Rover, but if anything goes wrong you can raise an issue on the ArduPilot/ardupilot GitHub repo or here and we can work it out.

I see, here’s a step-by-step check:

  • Make sure the confidence update thread is enabled: enable_update_tracking_confidence_to_gcs = True.
  • If it is enabled, you should at least see the tracking confidence updated on the terminal every 1 second (or the period that you set).
  • On MP, you can check on the Messages tab to see if the message is being received. Additionally, you can open the MAVLink inspector window (Ctrl+F → Mavlink inspector) and find the message STATUSTEXT.
  • If the message is being received but not displayed, you might need to play around with the MAV_SEVERITY level (try 0 to 6).

I am curious about this bit. Did you have to change something to make it work (i.e. fundamental), or is it something for your use case (i.e. specific)?

Hi @LuckyBird,

OOPS! Sorry for not specifying the file. It is the t265_to_mavlink.py file, at the home_lat and home_lon lines. It took me a few minutes to realize you multiplied the latitude and longitude by 10^7.

As for enable_update_tracking_confidence_to_gcs = True, I was unable to find that line in my version of the t265_to_mavlink.py file. After pasting in text from your file, I’m unable to see my rover. Darn.

I assume there are other changes that have been made since that image was posted. How do I update them?

I also realize this is all BETA… so it may or may not work today… but it’s all fun (except when my 100 lb rover runs into my shop :wink:).

Thanks

Bill

Hi @RoboBill, if you follow the instructions in this blog from the beginning, you can download the file again, apply the necessary changes for your system, and test it again.
There are also other options, such as this cpp program where the parameters are separated from the code, so you can update the system much faster without modifying anything (just copy-paste the param files that you used previously).