Integration of ArduPilot and VIO tracking camera (Part 1): Getting started with the Intel Realsense T265 on Raspberry Pi 3B

Introduction

The Intel RealSense Tracking Camera T265 is a type of smart camera that uses proprietary V-SLAM (Visual-Inertial Simultaneous Localization and Mapping) technology to combine data from cameras and Inertial Measurement Units (IMU) to track the camera’s position in unknown spaces where GPS might not be available. The software uses all of this data to construct and continually update a map of the environment and the location of the device with high accuracy.

Besides the small form factor, the T265’s selling point is that all of the complicated V-SLAM algorithms and software run directly on the device itself, which greatly reduces the computational resources the companion computer would otherwise need to run VIO/V-SLAM algorithms, as in these ROVIO projects (part 1 and part 2) by @ppoirier.

As part of my ongoing series on incorporating the T265 with ArduPilot, in this blog we will start by installing all the packages necessary to use the T265 with the Raspberry Pi 3B, specifically librealsense and realsense-ros.

System requirements

The T265 is supported via librealsense on Windows and Linux. Depending on what you need from the T265, the companion computer requires either USB2 or USB3:

  • For pose (x y z position and orientation) data: Any board with USB2 should be sufficient.
  • For fisheye image streams: USB3 is required.

For the localization and navigation use case, we need to capture pose data and send it to the flight controller. A system consisting of an RPi 3B connected to the T265 over USB2 should be good to go, since no images are needed by our application (the RPi 3 does not have USB3 anyway).

Install librealsense

Before we begin, it is worth pointing out that the installation process for librealsense varies widely depending on the architecture of your companion computer. The content of this blog works for an RPi running Ubuntu. For instructions on other systems, refer to the librealsense documentation page.

Since no Debian packages are available for the RPi, librealsense must be built from source.

  1. Install OS (if you have not done so): Ubuntu MATE 16.04 LTS
  2. Increase swap size: the RPi does not have enough RAM to compile the SDK, hence the swap size needs to be increased. A swap size of 2048 (2 GB) seems to do the trick, but you can try other values. Here are the commands that I used, taken from this page.

# Toggle swap off
sudo dphys-swapfile swapoff

# Edit the config file and increase the swap size to 2048
# by editing the variable CONF_SWAPSIZE=2048
sudo nano /etc/dphys-swapfile

# Toggle swap back on
sudo dphys-swapfile swapon

# Reboot the Raspberry Pi
sudo reboot

# After the reboot, check that the swap size has changed
free
# Should show something like Swap: 2097148

  3. Clone the librealsense repo and compile the SDK:

# Update system
sudo apt update
sudo apt upgrade -y

# Install dependencies
sudo apt install git libssl-dev libusb-1.0-0-dev pkg-config -y
sudo apt install cmake python3-dev raspberrypi-kernel-headers -y

# Clone the repository under home
cd ~
git clone https://github.com/IntelRealSense/librealsense.git
cd librealsense

# Install udev rules
sudo cp config/99-realsense-libusb.rules /etc/udev/rules.d/
sudo udevadm control --reload-rules && udevadm trigger

# Create the destination directory
mkdir build
cd build

# Remove leftover files if this is not your first run
xargs sudo rm < install_manifest.txt
rm CMakeCache.txt

export CC=/usr/bin/gcc-6
export CXX=/usr/bin/g++-6
cmake -D CMAKE_BUILD_TYPE="Release" \
      -D FORCE_LIBUVC=ON \
      -D BUILD_PYTHON_BINDINGS=ON \
      -D BUILD_EXAMPLES=ON ..
make -j4
sudo make install
sudo ldconfig

# Finally, reboot the Pi
sudo reboot
  4. Test that librealsense is installed correctly:

The easiest way is to plug in the T265 and play with the SDK examples/tools.

  • If you have a monitor plugged in, you can open the Intel RealSense Viewer by typing realsense-viewer. If the T265 is connected, the device will be available on the left panel. Click on the slider to start the device and switch to 3D view. Move the T265 around and you should see its trajectory.

Note 1: The RPi cannot handle the image streams over USB2. Trying to view the fisheye images in the 2D view, in rviz, or in rqt_image_view might crash the application.

Note 2: Not all USB cables are created equal. If you use a different USB cable than the one that came with the T265, check that the RPi / computer can recognize the device with that cable. Open the terminal and type lsusb or rs-enumerate-devices to see if the device is recognized.

Note 3: In realsense-viewer, the T265’s tracking confidence is shown in different colors: “low” (red trace in 3D view), “medium” (yellow) or “high” (green). Press the “i” button on the 2D/3D view to display the T265’s info (including translation data).

You can also try the following tools and demos (just type the name in the terminal):

  • rs-pose - A basic pose retrieval example
  • rs-pose-predict - Demonstrates pose prediction using current system time and the callback API
  • rs-capture - 2D Visualization.
  • rs-enumerate-devices - List the IMU and tracking profiles (FPS rates and formats).
  • rs-data-collect - Store and serialize IMU and tracking (pose) data in an Excel-friendly CSV format. The tool uses the low-level sensor API to minimize software-imposed latencies. Useful for performance profiling.
  5. Test the pyrealsense2 Python wrapper:

The SDK provides a Python wrapper named pyrealsense2, which is essential for our future work. The compiled library is located in the build folder: ~/librealsense/build/wrappers/python.

  • Update the PYTHONPATH environment variable to add the path to the pyrealsense library: export PYTHONPATH=$PYTHONPATH:/usr/local/lib
  • Alternatively, copy the build output (librealsense2.so and pyrealsense2.so in ~/librealsense/build/) next to your script.
  • The basic examples provided by Intel can be found in the folder ~/librealsense/wrappers/python/examples. Run them with python3.
export PYTHONPATH=$PYTHONPATH:/usr/local/lib
cd ~/librealsense/wrappers/python/examples
python3 t265_example.py

You should see a stream of data coming from the T265.
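
If you want a self-contained starting point for your own scripts, below is a minimal sketch of reading pose data with pyrealsense2, essentially what t265_example.py does (the loop length here is arbitrary):

import pyrealsense2 as rs

# Build a pipeline that streams only pose data from the T265
pipe = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.pose)
pipe.start(cfg)

try:
    for _ in range(200):
        frames = pipe.wait_for_frames()
        pose = frames.get_pose_frame()
        if pose:
            data = pose.get_pose_data()
            # Translation is in meters, in the T265's own coordinate frame
            print("Frame #{}: x={:.3f} y={:.3f} z={:.3f}".format(
                pose.frame_number,
                data.translation.x, data.translation.y, data.translation.z))
finally:
    pipe.stop()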

Install realsense-ros

If you intend to use the T265 with ROS, you should install realsense-ros after librealsense. Note that the realsense-ros version needs to match the installed librealsense release, so every time you update one, the other should be updated as well.

  1. Installation

The installation steps are straightforward and you can follow the instructions on the official repo.

  2. Usage on the RPi

Since the RPi cannot handle the image streams, do not open any image viewer in rviz or rqt_image_view; the system may crash on launch.

  • On the RPi: start the camera node with roslaunch realsense2_camera rs_t265.launch. This will stream all camera sensors and publish the data on the appropriate ROS topics:
/camera/odom/sample
/camera/accel/sample
/camera/gyro/sample
/camera/fisheye1/image_raw (not viewable on RPi)
/camera/fisheye2/image_raw (not viewable on RPi)
  • If the RPi is not connected to a display, view the data on an Ubuntu PC with: export ROS_MASTER_URI=http://<rpi-ip>:11311 && rviz
  • You can now view the /camera/odom/sample and /tf data in rviz. Move the camera around and verify that the camera is indeed tracking its trajectory.
  • In another terminal on the RPi: run top to view the current CPU usage.
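
As a quick sanity check without rviz, a minimal rospy node like the sketch below can print the incoming odometry (this is my own illustration, not part of realsense-ros; only the topic name comes from above):

#!/usr/bin/env python
import rospy
from nav_msgs.msg import Odometry

def odom_callback(msg):
    # Print the position part of the odometry message
    p = msg.pose.pose.position
    rospy.loginfo("x=%.3f y=%.3f z=%.3f", p.x, p.y, p.z)

rospy.init_node("t265_odom_listener")
rospy.Subscriber("/camera/odom/sample", Odometry, odom_callback)
rospy.spin()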

Done. Can we start flying now?

Not quite. There are a few more steps that need to be done:

  1. Install MAVROS
  2. Connect RPi to ArduPilot with MAVROS
  3. Convert the coordinate system of the T265 to MAVROS’ convention.
  4. Convert the data topic from the T265 (we will use /tf, but you can also use /camera/odom/sample) to /mavros/vision_position/pose.
  5. Limit the publishing rate of the pose data from 200Hz to something slower, e.g. 30Hz.

Steps 1 and 2 are straightforward and you can follow the wiki links to complete them. Steps 3, 4 and 5 will need more than a little explanation, and will be discussed in the next blog.

Why do we need these extra steps?

Steps 3 and 4 are necessary since we are working with ROS, hence data topics and frame coordinates need to match between nodes (in this case, the realsense-ros node and the mavros node).

Why do we need to limit the T265 data rate in step 5? As it turns out, the 200Hz pose stream from the T265 is too fast for the FCU to handle, especially on older hardware versions. In my experiments with fmuv2 hardware, the moment you start sending pose data 200 times per second, the FCU gets flooded and “freezes”, with no more data coming through the telemetry channel. Slowing down the pose data is therefore necessary. Furthermore, we don’t really need 200Hz of data: 10-15Hz localization data is the bare minimum required for testing, and 30Hz should be more than enough for most cases (relative to how fast the vehicle is moving).
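
To make step 5 concrete, here is a rough sketch of a relay node that subscribes to the 200Hz odometry and republishes it as PoseStamped at roughly 30Hz. The topic names follow this post; the node and variable names are my own, and the frame conversion of step 3 is deliberately left out here (it is covered in the next blog):

#!/usr/bin/env python
import rospy
from nav_msgs.msg import Odometry
from geometry_msgs.msg import PoseStamped

PERIOD = 1.0 / 30.0  # target output rate of ~30Hz
last_pub = rospy.Time(0)

def odom_callback(msg):
    global last_pub
    now = rospy.Time.now()
    # Drop messages that arrive faster than the target period
    if (now - last_pub).to_sec() < PERIOD:
        return
    last_pub = now
    out = PoseStamped()
    out.header = msg.header
    out.pose = msg.pose.pose  # note: no frame conversion here (that is step 3)
    pose_pub.publish(out)

rospy.init_node("t265_pose_throttle")
pose_pub = rospy.Publisher("/mavros/vision_position/pose", PoseStamped, queue_size=1)
rospy.Subscriber("/camera/odom/sample", Odometry, odom_callback)
rospy.spin()

Alternatively, the stock topic_tools throttle node can reduce the rate without any custom code.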

To elaborate on that last step: if you have worked with VIO/VI-SLAM algorithms before, you would know that there is always a trade-off between accuracy and efficiency that you have to tune for your specific use case. That is to say, unless you put an Intel NUC i7 CPU onboard the UAV, you will have to tweak the parameters related to computational cost to get borderline usable data (10-15Hz), at the expense of less accurate localization. With the T265, we have the opposite problem: accurate data can be obtained using one of the most DIY-friendly and cheapest platforms out there, the Raspberry Pi 3B, and even then it is still too fast. As @ppoirier put it, what we have here is “a beautiful problem”.

Conclusion and next steps

In this blog we have set up the RPi 3B to gather pose data from the Intel Realsense T265 tracking camera, with examples and tests for each step along the way.

Next time, we will take a deeper look at the above-mentioned extra steps, why we need to convert the frames between realsense-ros and MAVROS and what would happen if we don’t, and of course, start flying (after some successful ground test, that is).


This is a deep and complete installation WIKI that we will certainly add as a reference.

Having a working system on an RPi and entry-level FC is great; working around limitations and inconsistencies and having it documented is really, really great!


Really well done article @LuckyBird, it is evident that you do this stuff with passion and competence and that you did all this with the right scientific approach.


Thank you @anbello for your kind words!
I just hope others might find this helpful and we can see more state-of-the-art technology like this getting used in useful applications by folks in the community, not just on paper and “demo” videos.


Great work!!!
Up to what altitude outdoors do you think the system could be used for navigation in place of a flow camera?

Do you think an RPi 3B can act as a companion computer for telemetry and run the T265 at the same time, or will it need a separate dedicated RPi? The companion is running Raspbian and I see you use Ubuntu; I wonder if this could run on Raspbian too.

This is an interesting question and one with no definite answer that I know of. Certainly, when tracking data is accurate, the T265 can be used to provide positional feedback. Below are a few outdoor tests using T265, taken from this GitHub thread:

  • Longer T265 recording including several flights of stairs and a daylight outdoor section (video).

However, there are a number of factors that can affect the T265 performance:

  • Environment: The tracking confidence level depends on various parameters such as the lighting of the scene, the number of features, etc. An outdoor environment might not have many features, so tracking may not be reliable, which can lead to the next problem:

  • At longer distances, the output scale is reported to be off by 20-30% of the actual scale.

  • With its limited memory, the T265 can store a maximum map size of roughly “a couple of rooms/house scale”, depending on how complicated the environment is (source).

A simple test if you have problems with tracking quality: in realsense-viewer, you have the option to display the T265’s info (including translation data) by pressing the “i” button on the 2D/3D view.

Perform a handheld test in the operating environment and observe whether the T265’s tracking confidence stays “medium” (yellow trace in 3D view) or “high” (green). This might give you some hints on where the problem lies (not enough distinct features in the environment, too much vibration on the vehicle, etc.).
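
If you prefer to log the confidence without a monitor, a small pyrealsense2 sketch like the one below (my own, reading the numeric tracker_confidence field from the pose data) can run headless during the handheld test:

import pyrealsense2 as rs

# Map the numeric confidence reported by the T265 to labels
LABELS = {0: "Failed", 1: "Low", 2: "Medium", 3: "High"}

pipe = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.pose)
pipe.start(cfg)
try:
    while True:
        frames = pipe.wait_for_frames()
        pose = frames.get_pose_frame()
        if pose:
            data = pose.get_pose_data()
            print("tracking confidence:", LABELS.get(data.tracker_confidence, "?"))
finally:
    pipe.stop()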

I think it’s definitely possible. Since the T265’s software is running on the device itself, your RPi is free to do everything else.

The main differences are the steps to install librealsense. I believe for Raspbian you can follow this guide.

Thank you for your answers, I’ll do some testing once all the instructions are out. From your comments I believe it is better suited for indoor flying.

@ppoirier @LuckyBird I moved the topic to the blog category for a larger audience, as the progress shown is very nice!


Hello Thien,

Thanks so much for this awesome walk-through, I’m really keen to test it out. I happen to have all the parts necessary to follow your labs, so I thought I’d give it a shot.

I ended up having an issue installing the raspberrypi-kernel-headers package; everything prior worked fine. I’ve attached a screenshot of the error message. Any help would be much appreciated!

Thanks,

Hugh

Hi @hugh, which kernel version are you using? uname -r should show you the version. You may need to upgrade to a newer kernel version.
Have you tried compiling the SDK to see if there are any errors?

Hey Thien, thanks for the quick reply.

currently using 4.15.0-1032-raspi2

Going to try compiling the SDK now, I’ll edit the post and let you know how it goes.

thanks!

hugh

Hey Thien,

Tried compiling librealsense on both a 3 and a 3B+, on both 16.04 and 18.04, with no success.

Both times it failed at 53%, stating “makefile:129 recipe for target all failed”.

thanks,

Hugh

Hi @hugh, I am using 4.19.20. I would suggest updating the kernel via the official instructions and trying the installation steps before recompiling librealsense again.

Awesome, thanks Thien,

I’ll give it a shot and let you know how it goes.

Hugh

Thien,

Success!

Finally compiled librealsense! I would note that a couple of other packages needed to be installed, as per the Intel RealSense installation instructions for Linux on their GitHub repo.

Currently going through your instructions for the Python MAVLink bridge, I’ll let you know how it goes.

Quick question: is there any reason you chose to use a 3B instead of a 3B+? Also, I just ordered a couple of Pi 4’s in 2 and 4 GB; would they work too?

Thanks mate,

Hugh

Hi @hugh, glad it worked out for you.

One of the goals of our project is to demonstrate that the system can be implemented on even a not-so-powerful computer board, hence the RPi was selected. I used the 3B instead of the 3B+ since it was what I had available.

Any board with USB 2.0 ports should be enough for this project, hence the Pi 4 should work with no issues. Furthermore, the Pi 4 has USB 3.0, so you can also stream images from the T265, which is not possible on the Pi 3.

Awesome,

I figured I’d start with the Python MAVLink bridge, as I’m not too comfortable with ROS; however, I cannot get pyrealsense2 to install.

The command “sudo pip3 install pyrealsense2” continuously returns “no matching distribution found for pyrealsense2”.

any ideas?

Thanks,

Hugh

If the libraries are not available for your system, I can see a few workarounds; you can try them one by one:

  • Copy the built libraries librealsense2.so and pyrealsense2.so from ~/librealsense/build/ and place them next to the script that you want to run.

  • Install the Python 2 version: sudo pip install pyrealsense2. In that case, you might need to change the scripts to fit the Python 2 version of the APIs.
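
Another option (my own suggestion, assuming the build completed and sudo make install placed the bindings under /usr/local/lib) is to extend the module search path inside the script itself:

import sys

# Where `sudo make install` put the compiled pyrealsense2 bindings
sys.path.append("/usr/local/lib")

import pyrealsense2 as rs
print("pyrealsense2 loaded from:", rs.__file__)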

Hello, I need help with steps 3, 4 and 5. I work with ArduCopter and MAVLink but I have not worked much with ROS and MAVROS. I have an Odroid XU4 and a Pixhawk; I installed the library and the T265 camera works.
What I understood is that I must take the camera position (translation, rotation) from the /tf topic and transmit it with MAVROS as a VISION_POSITION_ESTIMATE [#102] message at 20 Hz. Is that right?

@Tarek-H You are right, but with one missing part: you also need to perform a frame transformation to account for a different camera orientation (if not using the default forward-facing one) and to align the 0-degree heading (otherwise you will see the vehicle facing east at startup).
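
To illustrate the idea for the default forward-facing orientation, here is a rough sketch only (the helper name is mine; the RealSense pose frame is +X right, +Y up, +Z backward):

def t265_to_body_frame(t):
    """Map a T265 translation (x right, y up, z backward) to a
    forward-left-up body frame, assuming a forward-facing camera."""
    # forward = -z_cam, left = -x_cam, up = y_cam
    return (-t.z, -t.x, t.y)

Other mounting orientations and the heading alignment need additional rotations on top of this, which is exactly what the next parts cover.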

We also have a non-ROS implementation in Python in part 4, so you can jump to that first to get a system up and running quickly, and maybe come back when you feel adventurous with ROS.