Integration of ArduPilot and VIO tracking camera (Part 4): non-ROS bridge to MAVLink in Python

Introduction

In this part of the ongoing labs, we will demonstrate how to make a MAVLink bridge between the Intel Realsense T265 and ArduPilot in Python, without the use of ROS. The choice of Python is strictly optional, and you can use any other wrappers supported by librealsense.

Although this blog (non-ROS-based) shares the same structure as part 2 (ROS-based), the contents of the two are vastly different:

  1. Prerequisite
  2. System overview
  3. ArduPilot parameters
  4. Python packages installation
  5. Workflow process
  6. Ground test and flight test
  7. Conclusions and next steps

1. Prerequisite

Please refer to part 1 for a detailed discussion of the hardware requirements, then follow the installation process up to and including librealsense. Since we are not using ROS, realsense-ros is not required for this blog.

In a nutshell, what we need on our vehicle is:

  • An onboard computer capable of polling pose data from the T265, i.e. one with a USB2 (or faster) port. We will be using a Raspberry Pi 3 Model B for this experiment.
  • A working installation of librealsense.
  • A forward-facing Realsense T265 camera. Different camera orientations can also be used with the correct modification of the transformation matrices. We shall take a closer look in the next section.

2. System overview

From a physical standpoint, here’s how we connect everything:

  • Physical connection:

The physical connection diagram and testing procedure are about all that are similar between this blog and part 2 (ROS based). From this point onward we are dipping into uncharted waters.

  • Data flow:

  • T265 pose data: streaming at 200Hz, each sample contains:
    – 6-DOF pose: position vector [xyz] + orientation quaternion [xyzw]
    – Confidence level: [high / medium / low / failed]
    – Linear velocity and acceleration vectors
    – Angular velocity and acceleration vectors
  • T265 confidence level: the confidence level (also called pose data confidence or tracking confidence) takes the values {0x0 - Failed, 0x1 - Low, 0x2 - Medium, 0x3 - High}. The value corresponds to the local tracking quality of the device; for most applications you should trust the full 6-DOF pose only in high confidence, while lower-confidence poses can still be used if you only need the rotation (3-DOF) (source). The confidence level depends on various parameters such as the lighting of the scene, the number of features, etc. It is advised that the T265 reach high confidence in order to obtain high-quality tracking (source).
  • FCU, VISION_POSITION_ESTIMATE message: accepts position and Euler angles (roll, pitch, yaw). Used to transfer external navigation data to ArduPilot.
  • FCU, VISION_POSITION_DELTA message: has a field named confidence, a normalised confidence value from 0 to 100 (%). Used as a dummy message to send the confidence level to the GCS.
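For reference, here is a minimal sketch of how the pose stream and tracking confidence can be polled with pyrealsense2 (the same library the main script uses); it is only a skeleton of the real polling loop:

import pyrealsense2 as rs

# Configure the T265 to stream pose data only (no images needed)
pipe = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.pose)
pipe.start(cfg)

try:
    while True:
        frames = pipe.wait_for_frames()
        pose = frames.get_pose_frame()
        if pose:
            data = pose.get_pose_data()
            # Position vector [xyz] and orientation quaternion [xyzw]
            print("Position:", data.translation, "Rotation:", data.rotation)
            # Confidence level: 0 = Failed, 1 = Low, 2 = Medium, 3 = High
            print("Tracker confidence:", data.tracker_confidence)
finally:
    pipe.stop()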

We are interested in using the 6-DOF pose data (position + orientation) for localization and in viewing the confidence level obtained from the T265 camera. For that to happen, we need a script that does the following:

  • 6-DOF pose data: after getting a new sample from the T265, we perform a frame transformation to align the internal frame of the T265 with the NED convention, then send the pose to ArduPilot through the MAVLink message VISION_POSITION_ESTIMATE. Since the FCU might get “frozen” with too much incoming data (remember that the pose data arrives at 200Hz), the message needs to be sent at a predetermined frequency that the FCU is capable of handling but that is also not too slow relative to the speed of the vehicle, i.e. 10-30Hz. A minimal sketch of this sending side is given after the notes below.

  • Confidence level: embedded within the data coming from the T265. I believe it would be of immense benefit to users if the level of tracking confidence could be viewed directly from the GCS. For lack of a dedicated message, we will utilize a message that is not being used by the FCU and that contains a field as close as possible to what we are sending. VISION_POSITION_DELTA with its confidence field seems like a good candidate for the job. Since we do not need to update the confidence as often as the pose data, it should be updated at a slower frequency, i.e. 1Hz.

Note 1: Related to the VISION_POSITION_DELTA message are the VISO_ params, consumed in AP_VisualOdom.cpp.
Note 2: the confidence level is not supported in realsense-ros until this PR is merged.
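To make the sending side concrete, below is a minimal, hedged sketch using dronekit (one of the packages installed in section 4). The connection string is a placeholder, the pose values are assumed to come from the frame transformation described next, and the exact message signatures can vary slightly between pymavlink versions:

import time
from dronekit import connect

# Placeholder connection settings; the real script exposes these as input arguments
vehicle = connect('/dev/ttyUSB0', wait_ready=False, baud=921600, source_system=1)

def send_vision_position_message(x, y, z, roll, pitch, yaw):
    # x, y, z in meters and roll, pitch, yaw in radians, all in the NED frame
    msg = vehicle.message_factory.vision_position_estimate_encode(
        int(time.time() * 1e6),   # us: timestamp
        x, y, z,                  # position
        roll, pitch, yaw)         # orientation
    vehicle.send_mavlink(msg)
    vehicle.flush()

# Called at a fixed rate (e.g. 30Hz) with the latest pose sample, never at the raw 200Hz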

I hope everything is clear up till now, because here come the confusing parts.

  • Frame coordinates: First up, the T265 documentation on the master branch of librealsense is behind the one on the development branch, so you should look for information mainly on the development branch.

Within the T265 device itself there are multiple frame coordinates for different sensors. Specifically, we have: the pose stream, the gyro and accelerometer (IMU), and the fisheye cameras (source), as can be seen in this picture:

Note: realsense-viewer seems to display data in each sensor’s own coordinate frame, and the documentation does not reflect that very well for now.

What we are looking for is the frame coordinate of pose data. Taken from the wiki (dev branch):

T265 Pose orientation


To aid AR/VR integration, the T265 tracking device uses the de facto VR framework standard coordinate system instead of the SDK standard:

1. Positive X direction is towards right imager
2. Positive Y direction is upwards toward the top of the device
3. Positive Z direction is inwards toward the back of the device

The center of tracking corresponds to the center location between the right and left monochrome imagers on the PCB.

When T265 tracking starts, an origin coordinate system is created and RealSense SDK provides T265 poses relative to it. Origin's Y axis is always aligned with gravity and points to the sky. Origin's X and Z axes are not globally set, but determined when tracking starts depending on the initial orientation of the T265 device. The origin coordinate system is always right-handed.

And from this thread:

[The origin coordinate system] is deterministic, though, so if you start tracking with T265 at the same position and orientation, your origin will be defined the same always. In the specific case in which you start tracking with the fisheye sensor looking forward, origin will be defined the same as the T265 pose stream (X pointing right, Y up, Z backward).

In summary:

  • There are multiple frame coordinates on the T265. Suffice it to say, not everything is well documented.
  • The T265’s pose data is measured in an origin frame.
  • The origin frame always has its Y-axis pointing upwards, but X and Z depend on the camera’s initial position + orientation; the same initial pose should produce the same origin frame every time the system runs.

Note: for downfacing configuration, there seems to be a problem with inconsistent yaw angle between runs. The temporary fix is to tilt the camera a little (not facing flat down) when the system starts.

  • Frame transformation: Suppose that we have the T265 fixed on the frame in some orientation, whether forward-facing or downfacing, and we can verify that the origin frame is the same every run. To send the pose data to the FCU through MAVLink, the data needs to follow the NED convention through the following transformations:

What we have is (2), which is the data coming from the T265. In order to obtain (4), which is the pose that can be sent to the FCU, we need to define (1) and (3), then multiply them in the right order: (4) = (1).(2).(3). For the case of a forward-facing camera, as depicted in the picture, (1) and (3) are simply the inverse of one another (a numpy sketch follows the references below).

For a more in-depth explanation, you can go through the following references:

  • Euler angle rotation: Understanding Euler angles

  • In the picture, (1), (2), (3), (4) are homogeneous transformation matrices: http://planning.cs.uiuc.edu/node111.html, each consisting of a rotation and a displacement (translation) part. For (1) and (3), the displacement components are zero.
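As an illustration, here is a sketch of this chain with numpy and the transformations package. The two fixed matrices follow the forward-facing defaults of t265_to_mavlink.py (double-check against the script for your camera orientation); the function turns one raw T265 sample, (2), into the NED pose, (4):

import numpy as np
import transformations as tf

# (1) From the NED reference frame (aeroRef) to the T265 origin frame,
#     for a forward-facing camera with the USB port to the right
H_aeroRef_T265Ref = np.array([[0, 0, -1, 0],
                              [1, 0,  0, 0],
                              [0, -1, 0, 0],
                              [0, 0,  0, 1]])
# (3) From the T265 body frame to the vehicle body frame;
#     for the forward-facing case this is simply the inverse of (1)
H_T265body_aeroBody = np.linalg.inv(H_aeroRef_T265Ref)

def t265_pose_to_ned(translation, rotation_xyzw):
    # Build (2): the homogeneous pose of the T265 body in the T265 origin frame
    w, x, y, z = rotation_xyzw[3], rotation_xyzw[0], rotation_xyzw[1], rotation_xyzw[2]
    H_T265Ref_T265body = tf.quaternion_matrix([w, x, y, z])   # rotation part
    H_T265Ref_T265body[0][3] = translation[0]                 # displacement part
    H_T265Ref_T265body[1][3] = translation[1]
    H_T265Ref_T265body[2][3] = translation[2]
    # (4) = (1).(2).(3)
    H_aeroRef_aeroBody = H_aeroRef_T265Ref.dot(H_T265Ref_T265body.dot(H_T265body_aeroBody))
    roll, pitch, yaw = tf.euler_from_matrix(H_aeroRef_aeroBody, 'sxyz')
    return (H_aeroRef_aeroBody[0][3], H_aeroRef_aeroBody[1][3], H_aeroRef_aeroBody[2][3],
            roll, pitch, yaw)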

  • Putting it all together: This script t265_to_mavlink.py, which is a part of the vision_to_mavros package, will do the following tasks:

    • Capture the 6-DOF pose and confidence level data from librealsense.
    • Convert the pose data from the T265 coordinate system into the NED coordinate system.
    • Encode the aligned pose data into the VISION_POSITION_ESTIMATE message and send it to the FCU at a predetermined frequency.
    • Map the confidence level data from the T265 from [0, 1, 2, 3] to [0, 33.3, 66.6, 100].
    • Encode the remapped confidence data into the VISION_POSITION_DELTA message and send it to the FCU at a predetermined frequency (a sketch of this is given below).
    • Automatically set EKF home: the script listens for the GPS Glitch and GPS Glitch Cleared messages from ArduPilot, which indicate that the EKF is receiving external navigation data and that the EKF origin can now be set, then sends the commands to set EKF home and origin. This will be further explained in the next section.
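A rough sketch of the confidence remapping and the fixed-rate scheduling, using dronekit and APScheduler (both installed in the next section), is given below; data stands for the latest pyrealsense2 pose sample and is assumed to be updated elsewhere, and the connection settings are placeholders:

from apscheduler.schedulers.background import BackgroundScheduler
from dronekit import connect

vehicle = connect('/dev/ttyUSB0', wait_ready=False, baud=921600)

confidence_msg_hz = 1    # confidence messages at 1Hz
data = None              # latest pose sample from pyrealsense2, updated by the polling loop

def send_confidence_level_dbg_message():
    if data is None:
        return
    # Map tracker_confidence {0, 1, 2, 3} to approximately {0, 33.3, 66.6, 100} percent
    confidence_pct = float(data.tracker_confidence * 100 / 3)
    # VISION_POSITION_DELTA is used here purely as a carrier for the confidence field
    msg = vehicle.message_factory.vision_position_delta_encode(
        0, 0,              # time_usec, time_delta_usec (unused)
        [0.0, 0.0, 0.0],   # angle_delta (unused)
        [0.0, 0.0, 0.0],   # position_delta (unused)
        confidence_pct)
    vehicle.send_mavlink(msg)
    vehicle.flush()

# Each sender runs in the background at its own rate; the pose sender from the
# earlier sketch is scheduled the same way, just at vision_msg_hz instead
sched = BackgroundScheduler()
sched.add_job(send_confidence_level_dbg_message, 'interval', seconds=1.0 / confidence_msg_hz)
sched.start()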

Explanation completed. Let’s go flying.

3. ArduPilot parameters

  • FW version: tested with ArduCopter 3.7-dev. The latest stable version can be downloaded from Mission Planner or here.

  • Which EKF is supported? As of this writing, only EKF2 will accept external navigation data.

  • Params:

AHRS_EKF_TYPE = 2
EK2_ENABLE = 1
EK3_ENABLE = 0
EK2_GPS_TYPE = 3
EK2_POSNE_M_NSE = 0.1
EK2_VELD_M_NSE = 0.1
EK2_VELNE_M_NSE = 0.1
BRD_RTC_TYPES = 2
GPS_TYPE = 0
COMPASS_USE = 0
COMPASS_USE2 = 0
COMPASS_USE3 = 0
SERIAL5_BAUD = 921              (the serial port used to connect to companion computer)
SERIAL5_PROTOCOL = 1
SYSID_MYGCS = 1                 (to accept control from the companion computer)

4. Python packages installation

Installation:

# pip install may require sudo, proceed accordingly
pip3 install pyrealsense2
pip3 install transformations
pip3 install dronekit
pip3 install apscheduler
# Install serial packages if necessary
sudo pip3 install pyserial

# Navigate to the location of the scripts
cd ~/path/to/the/script/

# Download the scripts if you haven’t already:
wget https://raw.githubusercontent.com/hoangthien94/vision_to_mavros/master/scripts/t265_test_streams.py
wget https://raw.githubusercontent.com/hoangthien94/vision_to_mavros/master/scripts/t265_to_mavlink.py

chmod +x t265_test_streams.py
chmod +x t265_to_mavlink.py
  • Update the PYTHONPATH environment variable to add the path to the pyrealsense library. Alternatively, copy the build output (librealsense2.so and pyrealsense2.so in ~/librealsense/build/) next to the script.
export PYTHONPATH=$PYTHONPATH:/usr/local/lib
  • Run a test script to verify python3, librealsense, pyrealsense and connection with the T265.
# Run the test script
python3 t265_test_streams.py
# You should see a short stream of pose data coming from the T265 on the terminal
  • Modify the transformation matrices: The script works out of the box for a forward-facing camera; for other configurations, change the two transformation matrices corresponding to (1) and (3) above: H_aeroRef_T265Ref and H_T265body_aeroBody.
  • Modify the parameters of t265_to_mavlink.py for your system configuration; they can also be passed as input arguments from the command line, specifically:

#######################################
# Parameters
#######################################

# Default configurations for connection to the FCU
connection_string_default = '/dev/ttyUSB0'
connection_baudrate_default = 921600

# Default frequency for pose and confidence messages
vision_msg_hz_default = 30
confidence_msg_hz_default = 1

  • Run the main script:
# For serial connection: set udev.rules in order to get the USB available; allow permission to serial
sudo chmod 666 /dev/ttyUSB0

# When everything is working and all defaults are set:
python3 t265_to_mavlink.py

# View input arguments:
python3 t265_to_mavlink.py --help

# Run the script with input arguments, for example:
python3 t265_to_mavlink.py --connect /dev/ttyACM0 --baudrate 57600 --vision_msg_hz 20 --confidence_msg_hz 0.5

Verify that ArduPilot is receiving pose data

Check that ArduPilot is receiving position data by viewing the topic VISION_POSITION_ESTIMATE on GCS. For Mission Planner, press Ctrl+F and click on “MAVLink Inspector”, you should be able to see data coming in under Vehicle 1.

Note on viewing VISION_POSITION_ESTIMATE topic on QGC: check this thread if you have some problems with receiving this topic on QGC.

Verify that ArduPilot is receiving confidence level data

View the topic VISION_POSITION_DELTA on GCS, field confidence should show the remapped value of confidence level.

Voice and message notification on Mission Planner for tracking confidence level

Changes in the tracking confidence level can also be announced on Mission Planner’s message panel, on the HUD and by speech. These notifications pop up when the system starts and whenever the confidence level changes to a new state, for example from Medium to High.

  • To enable speech in Mission Planner: Tab Config/Tuning > Planner > Speech > tick enable speech.
  • If there are some messages constantly displayed on the HUD, you might not be able to see / hear the confidence level notification.
  • If telemetry is slow, notifications might be dropped. You can still see the latest message in the MAVLink Inspector under the STATUSTEXT message (a sketch of how the script sends these notifications is given below).
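These notifications are just STATUSTEXT messages sent from the companion computer; below is a minimal, hedged sketch of how such a message can be sent with dronekit (the exact wording used by the real script may differ):

from dronekit import connect
from pymavlink import mavutil

vehicle = connect('/dev/ttyUSB0', wait_ready=False, baud=921600)

def send_msg_to_gcs(text):
    # Mission Planner shows STATUSTEXT on the message panel / HUD and, if enabled, speaks it
    msg = vehicle.message_factory.statustext_encode(
        mavutil.mavlink.MAV_SEVERITY_INFO,
        ('T265: ' + text).encode())
    vehicle.send_mavlink(msg)
    vehicle.flush()

# Example: announce a change in tracking confidence
send_msg_to_gcs('Tracking confidence: High')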

5. Workflow process

The tasks that need to be done are the same as in part 2. However, this time around the process is simplified by letting things run automatically in code.

Short description:

  1. After boot, log in to the companion computer, navigate to the script and run it: python3 t265_to_mavlink.py
  2. Wait until the quadcopter icon appears on the map of Mission Planner.
  3. All done. Now you can carry on with the flying.
  4. If the external navigation data is lost for any reason (tracking lost, script interrupted, etc.), reboot the FCU/system and go back to step 2/1.

Long description:

Let’s start at the top:

  • After power on (red block), you can run the script: python3 t265_to_mavlink.py, assuming all the default values are correct.

  • Once the FCU starts receiving the VISION_POSITION_ESTIMATE message, you will see the “GPS Glitch” and “GPS Glitch cleared” messages confirming that the external localization data is being recognized by the system.

  • Incoming data will not be fused unless the EKF knows where it is in the world (home). The script will attempt to send the SET_GPS_GLOBAL_ORIGIN and SET_HOME_POSITION MAVLink messages (a hedged sketch is given after this list). You should then see the quadcopter icon appear on the map. The location of home is defined here and you can change it to anywhere, as long as the values are non-zero. If the quadcopter icon does not appear, you can also set EKF home directly from Mission Planner and soft-reboot the FCU.

  • You now need to zoom in to the maximum in order to see the actual movement.

  • At this point, you can carry on with ground test, flight test, autonomous test, etc.

  • Along the way, if the VISION_POSITION_ESTIMATE data is lost for whatever reason, the quadcopter icon will disappear. As far as I know, you cannot re-set EKF home, so the only option is to reboot the FCU, either through Mission Planner (Ctrl-F > Reboot Pixhawk) or by power cycling the system.
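For reference, here is a hedged sketch of how the EKF origin and home can be set with dronekit. The lat/lon/alt values are arbitrary placeholders (they only need to be non-zero), and the exact field lists of these messages can differ slightly between pymavlink/dronekit versions:

# Arbitrary placeholder location (scaled integers: degE7 for lat/lon, mm for alt); must be non-zero
home_lat = -353632621
home_lon = 1491652374
home_alt = 584000

def set_default_global_origin(vehicle):
    # Tell the EKF where its local origin sits in the world
    msg = vehicle.message_factory.set_gps_global_origin_encode(
        int(vehicle._master.source_system),   # target_system
        home_lat, home_lon, home_alt)
    vehicle.send_mavlink(msg)
    vehicle.flush()

def set_default_home_position(vehicle):
    # Local position and approach vector are left at benign defaults
    q = [1, 0, 0, 0]   # identity quaternion (w, x, y, z)
    msg = vehicle.message_factory.set_home_position_encode(
        int(vehicle._master.source_system),   # target_system
        home_lat, home_lon, home_alt,
        0, 0, 0,                              # x, y, z
        q,
        0, 0, 1)                              # approach_x, approach_y, approach_z
    vehicle.send_mavlink(msg)
    vehicle.flush()

# In the script these are triggered once the "GPS Glitch" / "GPS Glitch cleared"
# STATUSTEXT pair is heard from the FCU, via a dronekit message listener.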

Now we can move on to the actual experiments.

6. Ground test and flight test

Ground test:

  • Hold the vehicle up, move around and observe the trajectory of the vehicle on Mission Planner.
  • Do a back and forth, left and right, square or any pattern you like. The trajectory of the vehicle on the map should reflect the movement accordingly, without too much distortion or overshoot.
  • If you are flying in a confined environment, it might be best to walk around the safety perimeter of the flying area, view the trajectory on the map, and then remember not to fly or perform missions beyond that perimeter.
  • View the confidence level and verify tracking performance. As suggested here, for most applications you should trust the full 6dof pose only in high confidence. If you only need the rotation (3dof), lower confidence poses can be used.

Flight tests:

  • Confirm that position feedback is running ok before switching to Loiter mode.
  • Verify tracking performance (confidence, scale, oscillation etc.).
  • Look out for the safety boundary in your environment.

Note that in the video, the scene has very few features (mostly black background and glass), but the tracking and localization quality is still quite accurate for most of my experiments.

[Optional] Get the script self-start as part of a service

If you are going to do a lot of testing (especially for beta testers), we can go one step further and let the script start automatically at boot time. Huge thanks to @ppoirier for your suggestion.

  • Make sure you are using the latest version of t265_to_mavlink.py, which includes the following changes:
    – Added this to the beginning to make the script self-executable: #!/usr/bin/env python3
    – Added env variable so it can find pyrealsense2:
# Set the path for IDLE
import sys
sys.path.append("/usr/local/lib/")
  • Download the following shell script, or create a shell file t265.sh:
cd /path/to/scripts/
wget https://raw.githubusercontent.com/hoangthien94/vision_to_mavros/master/scripts/t265.sh
  • Modify the path to t265_to_mavlink.py in the shell file t265.sh, then make it executable:
nano /path/to/t265.sh

# In t265.sh, change the path to t265_to_mavlink.py, in my case:
# /home/ubuntu/catkin_ws/src/vision_to_mavros/scripts/t265_to_mavlink.py

chmod +x /path/to/t265.sh
  • Test that the shell script can run:
./path/to/t265.sh
# the script t265_to_mavlink.py should run as normal
  • In the steps below, we will use systemd to turn it into a service. Depending on your system, use any other method that works for your case.
  • Let’s create a file called /etc/systemd/system/t265.service :
[Unit]
Description=Realsense T265 Service
After=multi-user.target
StartLimitIntervalSec=0
Conflicts=

[Service]
User=ubuntu
EnvironmentFile=
ExecStartPre=
ExecStart=/home/ubuntu/catkin_ws/src/vision_to_mavros/scripts/t265.sh

Restart=on-failure
RestartSec=1

[Install]
WantedBy=multi-user.target
  • Make the following changes to t265.service above:
    – Set your actual username after User=
    – Set the proper path to your t265.sh in ExecStart=

  • That’s it. We can now start the service:

systemctl start t265
  • Run top to see if the service is running. Use Mission Planner to verify data (VISION_POSITION_ESTIMATE and VISION_POSITION_DELTA messages) is coming through.

  • Finally, automatically get it to start on boot:

systemctl enable t265

7. Conclusion and next steps

In this blog, we have discussed the underlying principles of how to incorporate a VIO tracking camera with ArduPilot using Python and without ROS. After installing the necessary packages and configuring the FCU params, ArduPilot can integrate the tracking data and perform precise navigation in a GPS-less environment. The pose confidence level is also available for viewing directly on the GCS, to quickly analyse the performance of the tracking camera. Thanks to the new simplified operating procedure, experiments can now be carried out at a much faster rate.

With this, we have completed the sequence of weekly labs for my ArduPilot GSoC 2019 project. From this point onwards, more work will be done on the code implementation side. As such, the schedule for the next lab will depend on different factors (devs & PR approval). However, I believe the completed labs will be a strong foundation for anyone who is interested in beta testing.

Hope this helps. Happy flying!


CALLING ALL TESTERS !!!
This blog provides us with all we need to test and experiment with Visual Inertial Odometry using the T265.

Thanks to Thien for this awesome series of Labs :clap::clap::clap:


@LuckyBird congrats, as always a really good job.
Have you noticed any relevant difference in flight tests between this implementation and the ROS-based one?

If the confidence on the pose estimation is good you can try to lower these params (down to 0.01):
EK2_POSNE_M_NSE = 0.1
EK2_VELD_M_NSE = 0.1
EK2_VELNE_M_NSE = 0.1

Another thing to try is to increase the rate limit for external navigation fusion (near 14Hz now).

From what I know, to halve that value (70ms) we have to double the relevant buffer here:


we can substitute OBS_BUFFER_LENGTH with 2*OBS_BUFFER_LENGTH.

But I don’t know if those are the only mods needed to double the rate limit of ext nav data fusion; @rmackay9, @ppoirier and @tridge could help us here.


I loaded the software and scripts as per the instructions (minor issues - typos that will be corrected in the text above), and the system worked as expected.

The first outdoor test was conducted in the shade, facing the side of a building, and the results were generally OK. I will continue testing and tuning to get my setup rock solid in Loiter before switching to Guided. The confidence level was steady at 100% throughout the test.

This is my test “mule”: Flamewheel 450 + CUBE + Jetson NANO:


Little add-on: CONFIDENCE LED
Connected to the NANO via GPIO + FET driver.

The LED turns on when the level is HIGH, giving visual feedback without having to look at the GCS. Basically it writes to GPIO using the user-space mapping: /sys/class/gpio.
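As a rough illustration of the idea (not @ppoirier's exact code), driving such an LED from Python through the sysfs interface could look like the sketch below; the pin number is an assumption, and on newer kernels a library such as Jetson.GPIO may be preferable to the deprecated sysfs interface:

# Illustrative sketch only: drive a confidence LED via /sys/class/gpio
GPIO_PIN = "216"   # arbitrary placeholder; adjust to your board and wiring

def setup_led():
    with open("/sys/class/gpio/export", "w") as f:
        f.write(GPIO_PIN)
    with open("/sys/class/gpio/gpio%s/direction" % GPIO_PIN, "w") as f:
        f.write("out")

def update_confidence_led(tracker_confidence):
    # Turn the LED on only when tracking confidence is High (value 3)
    with open("/sys/class/gpio/gpio%s/value" % GPIO_PIN, "w") as f:
        f.write("1" if tracker_confidence == 3 else "0")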


Thanks to Thien. I followed your blog and tested it, and it works very well. I tested flying in Guided, Auto and Smart RTL, but my copter’s Loiter accuracy is still not very good: it drifts a little, and it can hardly be used outdoors. So I tried to point the T265 toward the ground and modify the matrix. When I start it, I let the T265 point forward, but the location information of the copter is wrong. Are there any problems? Does pointing the T265 downward really improve things?
For the downward case I only modified
H_aeroRef_T265Ref = np.array([[0,1, 0,0],[1,0,0,0],[0,0,-1,0],[0,0,0,1]])

Hi @zleo, if the T265 is facing down with the USB port pointing to the right, the matrices should be as follows:

H_aeroRef_T265Ref = np.array([[0,1, 0,0],[1,0,0,0],[0,0,-1,0],[0,0,0,1]]), 
H_T265body_aeroBody = np.linalg.inv(np.array([[0,0,-1,0],[1,0,0,0],[0,-1,0,0],[0,0,0,1]]))

I have added this configuration to the script’s example, you can also use the latest one.

Additionally, note that the yaw angle can be off by a bit if the camera is completely flat when the script starts / the ROS node launches, so tilt the camera up a bit at the beginning (in my experience, tilting the nose up provides the most consistent yaw).

@LuckyBird Thanks a lot. Following your instructions, I modified the matrices like this:
H_aeroRef_T265Ref = np.array([[0,1, 0,0],[1,0,0,0],[0,0,-1,0],[0,0,0,1]])
H_T265body_aeroBody = np.linalg.inv(np.array([[0,0,-1,0],[1,0,0,0],[0,-1,0,0],[0,0,0,1]]))
or like this:
H_aeroRef_T265Ref = np.array([[0,1, 0,0],[1,0,0,0],[0,0,-1,0],[0,0,0,1]])
H_T265body_aeroBody = np.linalg.inv(H_aeroRef_T265Ref)
and tilted the copter forward before starting the script, but the tracking path in Mission Planner is still incorrect.
Is there anything else to pay attention to?

Just for confirmation, is your T265 facing down, with the USB port to the right of the vehicle (with respect to the forward-moving direction)? If it is, then the first set of matrices should be correct.

I usually tilt the copter’s nose up like below, to have a consistent yaw:

When the quad icon appears on the Mission Planner map, does it face to the north or in some random direction? Could you provide more details on how the path is incorrect?

The T265 is facing down and the quad icon points in a random direction.

In two restarts, the tracking trajectory was to take the drone forward for 5 meters and then come back. I tilt the copter’s nose up 45 degrees until a confidence prompt appears.

That sounds like the problem with down-facing cases, as described in this issue. I would suggest you experiment with this a bit:

  • Test with various inclination angles (from small to large, like 10, 20, 30 then 45 degrees) at launch and see if there are any differences. For me, small angles like 10 or 20 degrees are enough.

  • You can also add a debug line into the code, just like this one, and see whether the output data, specifically the yaw angle, is the same between runs with the same starting angle (a small example is sketched after this list).

  • If you can run ROS (see part 2 in particular), you will have more tools to debug the system, like RViz. realsense-ros uses librealsense so this issue still applies.
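For instance, a small self-contained helper like the one below (assuming the same numpy/transformations setup as t265_to_mavlink.py) can print the NED pose of each sample so that yaw consistency between runs can be compared by eye:

import numpy as np
import transformations as tf

def debug_print_pose(H_aeroRef_aeroBody):
    # H_aeroRef_aeroBody is the 4x4 homogeneous NED pose computed by the script
    x, y, z = H_aeroRef_aeroBody[0][3], H_aeroRef_aeroBody[1][3], H_aeroRef_aeroBody[2][3]
    roll, pitch, yaw = np.degrees(tf.euler_from_matrix(H_aeroRef_aeroBody, 'sxyz'))
    print("DEBUG: x=%.2f y=%.2f z=%.2f | roll=%.1f pitch=%.1f yaw=%.1f (deg)" %
          (x, y, z, roll, pitch, yaw))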

Thanks. I found that the yaw rotates clockwise after the run; the tracking path problem should be related to this. I’ll add some debugging information and report any problems to you.

Hey Thien,

As I’m up to this stage, I thought I’d reply to your post here.

Again, thanks for the quick reply - everything seems to be working after copying the .so files over.

I’ll keep going and let you know how I go!

Thanks mate,

Hugh

Just a quick update here
As part of the next lab (Lab 6: Calibration and camera orientation for vision positioning with Intel T265), @LuckyBird is presently working on both the Python scripting and on the code (EKF, time synchronisation and other libraries). As we wrote above, this next step could take a few weeks, but in the meantime we would like to provide everyone with minimal support for beta tests.

There are a couple of known issues with the T265 regarding orientation, initialization and other intrinsic parameters that have been raised on Intel’s github. It would be preferable to get these fixed in the firmware instead of building workarounds ;-)… but we are aware that they might not get resolved during the GSoC period.

So I asked Thien if he could make a “quick fix” to get the T265 used as a standalone positioning system (without the use of optical flow) aided with compass fusion. This requires a method to align the internal references to magnetic north so that we don’t get random initial vectors at startup and the associated “toilet bowl effect”. We will keep you updated on this.

Thien,

Yesterday, I finally got the whole system working and tested it on a ground rover.

Firstly, congratulations; the results were nothing short of spectacular. In 6 years of testing non-GPS localisation systems, I’ve never seen anything that performed as well as your system did.

Today I’m going to strap the system to a quadrotor and see how it performs in the air.

I’ll let you know how it goes!

Thanks for your reliable support,

Hugh


Glad it works well for you! I’m really looking forward to hearing about your results and experience with both ground and air performance.

Hello, I tried t265_to_mavlink.py and it works: I receive the VISION_POSITION_ESTIMATE message in Mission Planner, but the quadcopter does not appear on the map. I think ArduPilot doesn’t send the GPS Glitch and GPS Glitch Cleared messages. What could be the reason for that? I followed the same ArduCopter parameters (only I use SERIAL2 at baud 115). Is it about flight modes? Do you use Guided mode?

The GPS Glitch and GPS Glitch Cleared messages are sometimes not sent by the FCU, even though the EKF still works fine.

You can always set EKF home directly from Mission Planner’s map (right click > Set Home Here > Set EKF Origin Here).

If that also does not work, you need to reboot the flight controller, either through Mission Planner or power cycle the vehicle.

I removed the IF condition (so it does not check for the GPS Glitch and GPS Glitch Cleared messages) before sending the SET_GPS_GLOBAL_ORIGIN message, but nothing changed.
I also set EKF home directly from Mission Planner; that doesn’t work either, there is no drone icon.
What is strange is that I sometimes see the message “Need 3D Fix” even though I disabled GPS!

Can you check if param AHRS_GPS_USE = 0 or 1? I believe this param also needs to be 0 for GPS to be disabled completely, which might be your issue.