- A new wiki page for the Realsense Depth Camera summarizing the relevant blog posts.
- APSync instructions for the UP2 board.
Following the previous blog posts, ArduPilot now has at its disposal:
- The RealSense T265 tracking camera to obtain GPS-less navigation, and
- The RealSense D435/D415 depth camera to enable obstacle avoidance in a variety of ways.
In this blog post, we will combine and enhance the usage of these two cameras, leveraging our previous projects as well as recent advancements in ArduPilot's codebase.
Due to limited access, the experiments in this blog post are constrained to a few small areas (an indoor room and a garage), whereas outdoor scenarios would be the most demanding tests and would yield the most valuable results. The system will continue to be validated in the future when the situation becomes more appropriate.
All testers are welcome
While a low-cost RPi3/4 is sufficient for the T265 camera (simple setup instructions for the RPi4 can be found here), for the D415/D435 cameras I highly recommend using an x86-based CPU to ensure compatibility with Intel's libraries and hardware.
The UP Squared SoC board is what I am using. With boards of other architectures (RPi, Jetson, etc.), you will likely run into trouble installing and running the code.
OS: Ubuntu 18.04 - highly recommended.
Python 3.6 or above, which is also the standard for Ubuntu 18.04. Check with `$ python3 -V`; you should see the version, something like `Python 3.6.9`.
- Installation of supporting packages:

```
# Only necessary if you installed the minimal version of Ubuntu
sudo apt install python3-opencv

# For T265 and D4xx script
pip3 install pyrealsense2
pip3 install transformations
pip3 install dronekit
pip3 install apscheduler
pip3 install pyserial # For serial connection
pip3 install opencv-python

# For using both scripts at the same time
pip3 install dronekit-sitl -UI
pip3 install MAVProxy
```
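As a quick sanity check, you can verify that all the Python packages import cleanly (note that pyserial imports as `serial` and opencv-python as `cv2`):

```
python3 -c "import pyrealsense2, transformations, dronekit, apscheduler, serial, cv2"
```

If the command prints nothing, all the packages are in place.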
- Download the following scripts or clone the ROS package to keep track of all the updates:

```
# Clone the package in the ROS workspace, or any location if you don't use ROS
cd ~/catkin_ws/src
git clone https://github.com/hoangthien94/vision_to_mavros.git
cd vision_to_mavros/script
chmod +x t265_to_mavlink.py
chmod +x d4xx_to_mavlink.py
chmod +x mavlink_control.py
chmod +x rs_to_mavlink.py
chmod +x opencv_depth_filtering.py
```
To avoid repetition, please refer to the following resources for the usage of each camera.
T265 tracking camera:
D435/D415 depth camera:
- This blog post for the current instructions and discussion. Wiki page to come.
3. Enhancements for tracking and depth camera
The following sections assume the basics are established, meaning you can successfully use each camera on its own; we will focus only on how to improve the cameras' performance. It is important to keep in mind that each camera has its own set of problems, and troubleshooting will be easier if you are familiar with them separately.
Make sure you are using the latest firmware (at least 4.0) and scripts, otherwise the features below might not be available.
Using the script `t265_to_mavlink.py`, the recent enhancements include:
Various types of information are now available from the T265's raw data, most notably `VISION_POSITION_DELTA`, `VISION_SPEED_ESTIMATE` and more. While the basic `VISION_POSITION_ESTIMATE` message is sufficient for precise non-GPS navigation, the new delta and speed messages allow more flexibility for the users.
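For reference, sending these messages with pymavlink looks roughly like the minimal sketch below; the connection string and velocity values are placeholders, and the full implementation is in `t265_to_mavlink.py`:

```python
# Minimal sketch: send a VISION_SPEED_ESTIMATE message with pymavlink.
# Connection string and velocities are placeholders; see t265_to_mavlink.py
# for the actual implementation.
import time
from pymavlink import mavutil

master = mavutil.mavlink_connection('/dev/ttyUSB0', baud=921600)
master.wait_heartbeat()

master.mav.vision_speed_estimate_send(
    int(time.time() * 1e6),  # usec: timestamp in microseconds
    0.1, 0.0, 0.0            # vx, vy, vz in m/s (NED frame)
)
```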
The maximum data rate for the messages can now be up to 50Hz (as opposed to 15Hz previously):
- Set the max data rate in the script here. At first, use the USB cable to connect the FCU and GCS to make sure the data is being received at the intended frequency.
- Check again using your system's telemetry, which will likely show a lower/unstable data rate than the setting because of limited bandwidth. The other messages might also be affected (no data appears on Mission Planner).
- You can use the `SERIALx_OPTIONS` bitmask to disable forwarding of all MAVLink messages from the T265 to the GCS, preserving bandwidth for the other messages (supported with this PR); see the example after this list.
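For illustration, the relevant parameters could be set from a MAVProxy console as below; the serial port number depends on which port your companion computer is attached to, and the bitmask value should be double-checked against the parameter documentation for your firmware version:

```
# Assuming the companion computer is on SERIAL2 (TELEM2)
param set SERIAL2_BAUD 921       # 921600 baud to support the higher data rate
param set SERIAL2_OPTIONS 1024   # bit 10: "Don't forward mavlink to/from"
param fetch SERIAL2_OPTIONS      # confirm the new value
```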
Using the VISO_ advanced parameters can improve performance if your system has (see the example after this list):
- Large position offsets between the T265 camera and the FCU (with `VISO_POS_X/Y/Z`). Instructions for these params can be found in this wiki. The values do not have to be super precise, but if you would like to methodically calibrate the offsets, I recommend using the Kalibr toolbox.
- Noticeable time delay in the T265's data received by the FCU (with `VISO_DELAY_MS`). Here's an example of a delay of about 10ms (compare the peaks of the T265's roll, `VISP.Roll`, and the internal IMU's roll, `AHRS2.Roll`):
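As a concrete, purely illustrative example, for a camera mounted 10cm forward of and 5cm below the FCU, with the ~10ms delay shown above, the parameters could be set as follows:

```
param set VISO_POS_X 0.10    # camera 10 cm forward of the FCU (m)
param set VISO_POS_Y 0.0     # no lateral offset
param set VISO_POS_Z 0.05    # 5 cm below the FCU (body frame, positive down, m)
param set VISO_DELAY_MS 10   # measured delay between T265 data and the IMU (ms)
```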
However, there are still multiple known issues reported by users regarding the T265. More developments are underway, but at the moment it's best to keep these in mind before/while doing any tests:
- The position data sometimes does not have the correct scale (i.e. move 1m, but 0.8m is reported). The temporary solution is to use `VISO_SCALE` on AP's side (or `scale_factor` on the script's side); see the example after this list. Unfortunately, the issue is not consistent, so you need to check this on every flight.
- The position data can sometimes be unreliable (divergence - values go to infinity - or jumps - instant switches between values), which seems to happen most often during the takeoff and landing phases. Using multiple data types, adding optical flow and a rangefinder, increasing the noise level, etc. can help, but nonetheless always be ready to take back control of the vehicle.
- General issues with the T265 (hardware: USB cable connection, USB port power; software: position drift, unstable velocity, etc.). You might want to take a quick look at these known issues before testing.
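Regarding the scale issue, the arithmetic is simple: if a real 1m motion is reported as 0.8m, the reported position needs to be scaled by 1/0.8 = 1.25. The value below is only an example, and it is worth verifying the direction of the correction on your own setup:

```
# On AP's side (GCS/MAVProxy console):
param set VISO_SCALE 1.25
```

The equivalent on the script's side is the `scale_factor` variable in `t265_to_mavlink.py`.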
Tuning for better outdoor performance:
The default settings work well indoors. However, for outdoor experiments I recommend tuning the system until you get reliable obstacle detection before any actual flight tests.
- Using the `opencv_depth_filtering.py` script, you can experiment with different settings for the filters (a sketch of how such filters are applied follows this list).
- Run the script: `python3 opencv_depth_filtering.py`
- Once satisfied with what you see, close this script, change the default filtering values in the main `d4xx_to_mavlink.py` script, then run that script to verify the new settings.
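For context, the filters themselves come from librealsense; a minimal sketch of applying them to a live depth stream with `pyrealsense2` might look like this (the filter selection and settings here are examples only, the actual defaults are in the scripts):

```python
# Minimal sketch: apply librealsense post-processing filters to depth frames.
# Filter selection and settings are examples; see the scripts for the defaults.
import pyrealsense2 as rs

pipe = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipe.start(cfg)

decimation = rs.decimation_filter()  # downsample to reduce noise and load
spatial = rs.spatial_filter()        # edge-preserving smoothing
temporal = rs.temporal_filter()      # smooth using previous frames

try:
    while True:
        frames = pipe.wait_for_frames()
        depth = frames.get_depth_frame()
        if not depth:
            continue
        # Apply the filters in sequence; the order matters
        filtered = decimation.process(depth)
        filtered = spatial.process(filtered)
        filtered = temporal.process(filtered)
finally:
    pipe.stop()
```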
In the `d4xx_to_mavlink.py` script, enable the debugging option (`debug_enable_default = 1`) to see both the input (left) and output (right) images. The upper right corner shows the current processing speed (fps). Run the script: `python3 d4xx_to_mavlink.py`
In Mission Planner, open the Proximity View (`Ctrl+F` > `Proximity`). You should see three arcs indicating the distance to the obstacles in front of the camera. Use the keys (+/-/[/]) to change the sizes and improve the visualization if necessary.
Update: In MP version 1.3.73 onwards, the Proximity View has been updated and would look like this:
Note: the Proximity View shows the distance to the nearest individual point within each 45-degree arc, but only along the obstacle detection line shown in the output image. As such, what appears on the Proximity View might sometimes seem incorrect, but that is simply because we don't see everything that ArduPilot is actually perceiving.
Improving the obstacle detection line
As stated above, the horizontal line in the output image (right) indicates the line along which we find the distances to the obstacles in front of the camera. Below are some improvements based on actual usage of the system:
When the vehicle is on the ground, it is possible that a large portion of the depth image will see the ground. In such cases, within the `d4xx_to_mavlink.py` script, reduce the `obstacle_line_height_ratio` parameter (closer to zero) to move the obstacle detection line up.
Additionally, you can increase the thickness of the obstacle line with `obstacle_line_thickness_pixel`. At the time of this writing, the idea is to process the group of pixels within the boundary defined by this parameter and use the lowest value as the distance to the obstacle. This may change in the future if a better scheme is found.
Lastly, the obstacle line is now kept "fixed" when the vehicle pitches up and down, by compensating for the current pitch angle of the vehicle, which is provided by the FCU.
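A rough numpy sketch of these two ideas is shown below; the variable names and the exact pitch compensation are illustrative and differ from the actual `d4xx_to_mavlink.py` implementation:

```python
# Illustrative sketch of the obstacle detection line described above; names
# and details differ from the actual d4xx_to_mavlink.py implementation.
import numpy as np

def obstacle_distances(depth_m, height_ratio, thickness_px, pitch_rad, vfov_rad):
    """depth_m: 2D depth image in meters, with 0 marking invalid pixels."""
    rows, _ = depth_m.shape

    # Nominal obstacle line: 0 = top of the image, 1 = bottom
    center_row = int(rows * height_ratio)

    # Pitch compensation: when the vehicle pitches up, the camera looks up,
    # so shift the line down by a proportional number of pixels to keep it
    # "fixed" in the world (small-angle approximation).
    center_row += int(pitch_rad / vfov_rad * rows)
    center_row = int(np.clip(center_row, thickness_px // 2,
                             rows - 1 - thickness_px // 2))

    # Take a band of thickness_px rows around the line and use the smallest
    # valid depth in each column as the distance to the obstacle.
    band = depth_m[center_row - thickness_px // 2:
                   center_row + thickness_px // 2 + 1, :]
    band = np.where(band > 0, band, np.inf)  # mask out invalid pixels
    return band.min(axis=0)                  # per-column obstacle distance
```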
4. Combining the tracking and depth camera
We provide several scripts to simplify the process of using the tracking and depth cameras simultaneously, be it with ArduPilot or any other system (SLAM/navigation). The following scripts can be found in the same `vision_to_mavros` package cloned above:
- `rs_list_info.py`: automatically shows the serial numbers and available data streams of all connected RealSense cameras.
- `calibrate_extrinsics.py`: finds the extrinsics of the two cameras, given that a calibration target (chessboard grid) is present in both cameras' FOV. The details of the calibration process are outside the scope of this blog, but for anyone interested, all the main steps can be found within the script.
- `rs_to_mavlink.py`: uses MAVProxy to create a software "bridge" between the scripts and the actual hardware serial connection. This way, the T265 and D435 scripts can be used simultaneously as well as separately, without any modifications to either.
- Set the connection parameters in the script, such as `connection_in_baud`, to match the actual hardware connection.
- Run with: `python3 rs_to_mavlink.py`. The behaviour (as well as any problems) should be the same as if each camera were being used normally. Conceptually, the bridge works like the MAVProxy invocation sketched below.
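For reference, the bridge is conceptually equivalent to running MAVProxy with the hardware port as master and one local UDP output per script; the device and ports below are placeholders:

```
# Conceptual equivalent of the bridge (device and ports are placeholders)
mavproxy.py --master=/dev/ttyUSB0,921600 \
            --out=udp:127.0.0.1:14540 \
            --out=udp:127.0.0.1:14550
```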
5. Example of autonomous flight in GUIDED mode
Finally, here is a simple example of autonomous flight with accurate GPS-less navigation (T265 tracking camera) and obstacle avoidance (D4xx camera).
- In the `rs_to_mavlink.py` script, find and uncomment `thread4`, which will add the `mavlink_control.py` script to the software stack.
The purpose of the `mavlink_control.py` script is to control the vehicle to follow a pre-defined pattern at the flick of an RC switch (the activation switch).
- Within the `mavlink_control.py` script, set `rc_control_channel` to any channel that you want to use to activate the autonomous flight (default is 6). The activation threshold, as well as the logic, might need to be changed depending on how your RC is configured.
- The autonomous flight will start when the flight mode is switched to GUIDED AND the activation switch is on. The procedure is as follows: take off and fly normally; switch to GUIDED; then flick the activation switch on (a trimmed-down sketch of this logic follows after this list).
- The autonomous flight is as follows:
- The vehicle will first attempt to yaw to the zero heading. If all compasses are disabled/not used, the zero heading is relative to the starting orientation. If a compass is used, the zero heading is magnetic north.
- The vehicle will fly in a 2m x 2m square pattern, changing its heading as it goes.
- Depending on the parameters, you will see the avoidance behaviour if there are any obstacles along the path.
- If your testing environment does not permit a full square pattern, simply comment out/modify the script as needed to accommodate your conditions.
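To make the activation and movement logic concrete, here is a trimmed-down sketch with DroneKit; the connection string, threshold and setpoint are placeholders, and the real pattern lives in `mavlink_control.py`:

```python
# Trimmed-down sketch of the activation check and one position command.
# Connection string, threshold and setpoint are placeholders; see
# mavlink_control.py for the actual square-pattern implementation.
import time
from dronekit import connect
from pymavlink import mavutil

rc_control_channel = 6   # activation switch channel (script default)
rc_threshold = 1800      # PWM above this counts as "switch on" (example)

vehicle = connect('udp:127.0.0.1:14550', wait_ready=True)

# Wait for GUIDED mode AND the activation switch
while True:
    pwm = vehicle.channels.get(str(rc_control_channel))
    if vehicle.mode.name == 'GUIDED' and pwm is not None and pwm > rc_threshold:
        break
    time.sleep(0.5)

# Command one corner of a 2m x 2m square: 2m north, 0m east, 1m up
# (NED frame: z is negative upward); only the position fields are used
msg = vehicle.message_factory.set_position_target_local_ned_encode(
    0, 0, 0,                              # time_boot_ms, target sys/comp
    mavutil.mavlink.MAV_FRAME_LOCAL_NED,
    0b0000111111111000,                   # type_mask: position only
    2.0, 0.0, -1.0,                       # x, y, z (m)
    0, 0, 0, 0, 0, 0, 0, 0)               # velocities, accels, yaw (ignored)
vehicle.send_mavlink(msg)
```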
Below is a short video summarizing the above steps and how a working system should behave:
6. Conclusion and next steps
With both the tracking camera and depth camera at our disposal, the next step would be to use them in a more elaborate manner. Specifically, there are a number of recent open-source systems utilizing both cameras for various high-level tasks such as mapping, path planning and 3D avoidance. Our next target is to adapt a simple, easy-to-replicate system to work with ArduPilot, enabling the pursuit of even more interesting projects in the future.
- List of GSoC 2020 projects and my project.
- Recommendation to improve D4xx depth data for drone: web article, paper, github.
- List of post-processing depth filters: link.
- Avoidance Experiments with the POC and Benewake TFMINI
- Avoidance experiments with the new POC and Benewake TFmini-S