1. Introduction
This is the second half of a two-part blog post. See part 1 if you are interested in a Python implementation of the same project that does not rely on the Robot Operating System (ROS).
Synopsis
Utilizing the Intel RealSense Tracking Camera T265 for multiple purposes:
- Stable, accurate GPS-less flight with its standalone 6-DoF pose data (200 Hz): the first and foremost usage of the VIO tracking camera, which was realized in this GSoC 2019 project. Wikis are available for non-ROS (Python) and ROS (C++) support.
- Precision landing: without (part 1) and with (part 2, this blog post) ROS.
In the sections below I will first briefly outline the system, then give a step-by-step guide to get it working along with the actual experimental process, and finally discuss the many implementation choices. Note that details of some parts will be omitted if they are already covered in the linked wikis or previous blog posts.
Let’s dive in.
2. Prerequisites
Hardware
- Vehicle: A copter vehicle to your liking. If you are new to the DIY world, the Copter wiki is your one-stop shop for all the basics that you need to learn.
- Onboard computer: a Linux system with USB3 port(s) is required. The Up Board is what I am using, but the Jetson Nano or RPi 4 are also suitable alternatives for a low-budget setup.
- Sensor: the Realsense T265 camera in downfacing configuration (USB port pointing to the right of the vehicle). Different camera orientations require some parameter modifications.
- Landing target: one AprilTag fiducial marker, printed, measured and fixed to the landing surface.
Software
- Follow the ROS-based wiki for non-GPS navigation with a VIO camera until you have a system capable of hovering in Loiter mode while running ROS. For troubleshooting installation or real-flight issues, please post questions on the ArduPilot forum or comment on this blog.
- Before you move on, make sure that you have a system with:
  - `librealsense`, `pyrealsense` and `realsense-ros` working.
  - MAVROS connection between the companion computer and the ArduPilot FCU.
  - Ability to attain stable flight in Loiter mode (highly recommended).
Prepare the landing target
- First, select (or create) the tag that you want to use as a landing target. Say you decide to use tag id `00` of the `36h11` type. Print it, then measure and note down the actual tag size (the black part of the tag), which should be 0.144 m if printed on A4 paper.
- Fix the tag to a landing pad or to the ground, and make sure that it won't get blown away when the copter is descending.

The pdf for all tags of the `36h11` family can be found here. Other tag options can be found here.
3. How does it work?
How to perform accurate localization and navigation in ROS?
You can take a look at our past GSoC 2019 project for a complete explanation of the VIO integration part.
How to perform precision landing in ROS?
In a nutshell, the process includes:
- Obtaining the raw images from the downfacing camera.
- Applying the computer vision algorithm of choice to acquire the target.
- Performing the requisite calculations to obtain the right kind of data for the flight stack in use, then sending it to the FCU.
- Experimenting to figure out the limits of the system, which include things like the min-max target detection range for a given target size, min-max data rate for a stable landing and how it affects other parts of the system, tuning parameters to achieve desired results, etc.
The main purpose of this blog post is to demonstrate how to realize steps #1-#3 for one specific setup: the Intel Realsense T265 as the sensor of choice, AprilTag as the target detection algorithm, and ArduPilot as the flight stack.
More specifically, below are the main components of our ROS-based precision landing system given the aforementioned setup:
| ID | Launch command | Description |
|---|---|---|
| 1 | `roslaunch realsense2_camera rs_t265.launch` | The camera driver node `realsense-ros`, which publishes all raw data, including images in the topics `/camera/fisheye1/image_raw` and `/camera/fisheye2/image_raw`. |
| 2 | `roslaunch vision_to_mavros t265_fisheye_undistort.launch` | A node that subscribes to the raw stereo images, undistorts and rectifies them, then publishes the rectified images as well as the corresponding `camera_info` topics. |
| 3 | `roslaunch apriltag_ros continuous_detection.launch` | ROS wrapper node for the AprilTag visual fiducial marker system, which takes in a rectified image and the corresponding `camera_info` topic to detect and calculate the tag's pose relative to the camera frame, given accurate configuration in the `tags.yaml` file. |
| 4 | `roslaunch vision_to_mavros t265_downfacing_tf_to_mavros.launch` | 1. Transforms 6-DoF pose data from the T265 into `VISION_POSITION_ESTIMATE` data for MAVROS. 2. Transforms the landing tag's pose in the camera frame (output from `apriltag_ros`) into `LANDING_TARGET` data for MAVROS. |
| 5 | `roslaunch mavros apm.launch` | MAVROS node for communication between ROS and ArduPilot. |
4. Installation
A quick clarification before we start:
- `apriltag_ros` (and `mavros`, if built from source) is built with `catkin build`, whereas `vision_to_mavros` (and `realsense-ros`, if built from source) is built with `catkin_make`.
- Thus, there are two separate catkin workspaces in the sections below: `catkin_ws` for `catkin_make`, and `catkin_ws_build` for `catkin build`.
Install AprilTag 3 library and ROS wrapper
- Note that `apriltag_ros` requires a `catkin build` workspace, which must be separate from the `catkin_make` workspace used for the `realsense-ros` (if built from source) and `vision_to_mavros` packages.
- The commands below are adapted from the `apriltag_ros` repo. Always check the repo for the most up-to-date instructions. Currently, ROS `Kinetic` and `Melodic` are supported.
```bash
export ROS_DISTRO=kinetic               # Set this to your distro, e.g. kinetic or melodic
source /opt/ros/$ROS_DISTRO/setup.bash  # Source your ROS distro
mkdir -p ~/catkin_ws_build/src          # Make a new workspace; change the name if you have a catkin_ws folder created with catkin_make
cd ~/catkin_ws_build/src                # Navigate to the source space
git clone https://github.com/AprilRobotics/apriltag.git      # Clone AprilTag library
git clone https://github.com/AprilRobotics/apriltag_ros.git  # Clone AprilTag ROS wrapper
cd ~/catkin_ws_build                    # Navigate to the workspace
rosdep install --from-paths src --ignore-src -r -y  # Install any missing packages
catkin build                            # Build all packages in the workspace (catkin_make_isolated will work also)
source devel/setup.bash                 # Source the workspace so ROS can find its nodes
```
- The default installation will place headers in `/usr/local/include` and the shared library in `/usr/local/lib`. The Python wrapper will also be installed if python3 is installed.
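You can sanity-check the build with standard ROS tooling; a minimal sketch (the path in the comment simply assumes the workspace name used above):

```bash
# After sourcing devel/setup.bash, ROS should be able to locate the package.
rospack find apriltag_ros   # e.g. ~/catkin_ws_build/src/apriltag_ros/apriltag_ros
# List the arguments accepted by the detection launch file.
roslaunch apriltag_ros continuous_detection.launch --ros-args
```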
Install `vision_to_mavros` package
```bash
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
git clone https://github.com/hoangthien94/vision_to_mavros.git
cd ..
catkin_make
echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc  # Add source command to .bashrc, if you have not done so
source ~/.bashrc
```
Install `mavros` and establish connection between companion computer and ArduPilot
Follow the wiki page. It is simplest to include all of your changes in `apm.launch`; otherwise, remember to append the exact params (such as `fcu_url`) to all subsequent MAVROS launch commands.
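For example, if you prefer not to edit `apm.launch`, the param can be overridden on the command line. The serial device and baud rate below are placeholders for your own setup:

```bash
# Hypothetical example -- substitute your actual serial device and baud rate.
roslaunch mavros apm.launch fcu_url:=/dev/ttyUSB0:921600
```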
5. Modification of parameters
After all packages are installed, certain parameters need to be changed for things to work properly.
Change camera calibration file for `vision_to_mavros` package according to your specific device
These parameters are required to perform image rectification with the Kannala-Brandt distortion model, which is implemented in OpenCV as the “fisheye” module.
- Obtain the real params: with the T265 plugged into a computer with `librealsense` installed, run `rs-enumerate-devices -c`, which will print a host of information under the heading `Stream Profiles supported by Tracking Module`. Example output of the command can be seen here.
- Then, modify the params `K1`, `D1`, `K2`, `D2`, `R`, `T` in the calibration file `catkin_ws/src/vision_to_mavros/cfg/t265.yaml` as follows:
| Params in `t265.yaml` | Description | Output obtained from `rs-enumerate-devices -c` |
|---|---|---|
| `K1` | Intrinsic matrix of left camera | Intrinsic of “Fisheye 1”: `PPX`, `PPY`, `Fx`, `Fy` |
| `K2` | Intrinsic matrix of right camera | Intrinsic of “Fisheye 2”: `PPX`, `PPY`, `Fx`, `Fy` |
| `D1` | Distortion coefficients of left camera | Intrinsic of “Fisheye 1”: first 4 numbers of `Coeffs` |
| `D2` | Distortion coefficients of right camera | Intrinsic of “Fisheye 2”: first 4 numbers of `Coeffs` |
| `R` | Rotation matrix from left camera to right camera | Extrinsic from “Fisheye 1” to “Fisheye 2”: `Rotation Matrix` |
| `T` | Translation vector from left camera to right camera | Extrinsic from “Fisheye 1” to “Fisheye 2”: `Translation Vector` |
with the camera matrix K as:

| `Fx` | 0 | `PPX` |
|:---:|:---:|:---:|
| 0 | `Fy` | `PPY` |
| 0 | 0 | 1 |
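To make the mapping concrete, here is a hypothetical sketch of what the entries might look like once filled in. The numbers are placeholders in the right ballpark for a T265, not values to copy, and the exact file syntax is defined by the existing `cfg/t265.yaml` in the `vision_to_mavros` repo (which may use the OpenCV FileStorage flavor of YAML rather than the plain form shown here):

```yaml
# Hypothetical, placeholder values only -- read the real numbers off your own
# `rs-enumerate-devices -c` output and follow the syntax of the shipped t265.yaml.
K1: [286.0,   0.0, 425.0,    # Fx,  0, PPX  (left camera intrinsics)
       0.0, 286.0, 400.0,    #  0, Fy, PPY
       0.0,   0.0,   1.0]
D1: [-0.006, 0.042, -0.040, 0.007]   # first 4 Coeffs of "Fisheye 1"
K2: [286.0,   0.0, 425.0,
       0.0, 286.0, 400.0,
       0.0,   0.0,   1.0]
D2: [-0.007, 0.044, -0.041, 0.007]   # first 4 Coeffs of "Fisheye 2"
R:  [1.0, 0.0, 0.0,                  # rotation from "Fisheye 1" to "Fisheye 2"
     0.0, 1.0, 0.0,
     0.0, 0.0, 1.0]
T:  [-0.064, 0.0, 0.0]               # translation (m); the T265 stereo baseline is ~64 mm
```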
Change configuration of `apriltag_ros` package according to the landing tag
- Find the file `apriltag_ros/apriltag_ros/config/tags.yaml` and add a standalone tag with the actual `id` and `size`, and set its `name` to `landing_target`. For example, with the A4 tag above we would have:

  ```yaml
  standalone_tags: [ {id: 0, size: 0.144, name: landing_target} ]
  ```

  The `id` and `size` are required by the AprilTag algorithm, and the `name` is the `frame_id` under which the tag's pose in the camera frame will be published on the `/tf` ROS topic.
- Next, modify the `camera_name`, `camera_frame` and `image_topic` params in the launch file `apriltag_ros/apriltag_ros/launch/continuous_detection.launch` as follows:

  ```xml
  <arg name="camera_name" default="/camera/fisheye2/rect" />
  <arg name="camera_frame" default="camera_fisheye2_optical_frame" />
  <arg name="image_topic" default="image" />
  ```

  Where do these values come from? Note that the outputs from the `t265_fisheye_undistort` node include:
  - Rectified image topics: `/camera/fisheye1/rect/image` (left) and `/camera/fisheye2/rect/image` (right).
  - Camera info topics: `/camera/fisheye1/rect/camera_info` (left) and `/camera/fisheye2/rect/camera_info` (right).

  Since `apriltag_ros` subscribes to `$(arg camera_name)/$(arg image_topic)`, the image topic's name is split into two parts, and since we wish to use the right camera (`fisheye2`), we end up with the params above (`/camera/fisheye2/rect` + `/image` = `/camera/fisheye2/rect/image`).
- `camera_frame` can be anything, but the camera frame name published by `realsense-ros` is `camera_fisheye2_optical_frame` (the default value for the right camera), so we also use it for consistency. This might come in handy for calculations involving `/tf` as well as for visualization in RViz.

  Note: if you wish to use the left camera, simply find all instances of `fisheye2` and change them into `fisheye1` in 2 launch files: `continuous_detection.launch` and `t265_downfacing_tf_to_mavros.launch`.
. -
Finally, make sure:
precland_target_frame_id
int265_downfacing_tf_to_mavros.launch
must matches the tag’s name inapriltag_ros/apriltag_ros/config/tags.yaml
, andprecland_camera_frame_id
int265_downfacing_tf_to_mavros.launch
must matchescamera_frame
inapriltag_ros/apriltag_ros/launch/continuous_detection.launch
.
Change MAVROS setting to allow relaying `LANDING_TARGET` messages
Navigate to the mavros launch folder, edit the `apm_config.yaml` file and change `listen_lt` from `false` to `true`:
```bash
roscd mavros/launch
sudo nano apm_config.yaml
# Find landing_target, set listen_lt: true
```
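In my copy of `apm_config.yaml` this param lives under the `landing_target` plugin section; the snippet below is a minimal sketch of that nesting, with the section's other fields omitted (check your own file):

```yaml
# Minimal sketch -- only the changed field is shown here.
landing_target:
  listen_lt: true
```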
ArduPilot params
- The params required to facilitate VIO flight are documented in this wiki.
- Set the following parameters through Mission Planner (or other GCS) to enable the precision landing feature and then reboot the flight controller.
```
PLND_ENABLED = 1
PLND_TYPE    = 1
```
- Set up the vehicle with `LAND` as one of its flight modes.
- Other `PLND_*` params can be set according to your preferences; see here for the full list.
Once all of the params are set correctly, it's time for some ground tests.
6. Ground test
Let's take the system apart and test the components one by one before any real flights. You might want to run the following commands one at a time, making sure each produces the right result before moving on:
- `roslaunch realsense2_camera rs_t265.launch`: all the data from the tracking camera should be coming through, including the raw fisheye images.
- `roslaunch vision_to_mavros t265_fisheye_undistort.launch`: view the rectified image `/camera/fisheye2/rect/image` with `rqt_image_view` or RViz. Additionally, check that `/camera/fisheye2/rect/camera_info` is being published.
- `roslaunch apriltag_ros continuous_detection.launch`: view the detection image `tag_detections_image` with `rqt_image_view` or RViz. Move the copter so that the camera points at the landing tag; `/tag_detections` and `/tf` should then publish the detected tag's pose in the camera's frame.
- `roslaunch vision_to_mavros t265_downfacing_tf_to_mavros.launch`: the ROS topics `/mavros/vision_pose/pose` and `/mavros/landing_target/raw` should be populated. The latter only has data coming in when the landing target is in the field of view of the camera (and distinguishable, i.e. not too far away or too close).
- `roslaunch mavros apm.launch`: there should be a connection with the FCU, with messages popping up on the terminal.
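At each step, generic ROS introspection commands are handy for checking that the expected topics are alive. A minimal sketch, using the topic names configured above:

```bash
# Quick sanity checks while the corresponding nodes are running.
rostopic hz /camera/fisheye2/image_raw        # raw images from the T265 driver
rostopic hz /camera/fisheye2/rect/image       # rectified images from the undistort node
rostopic echo -n 1 /tag_detections            # one detection message, if the tag is in view
rostopic hz /mavros/landing_target/raw        # LANDING_TARGET stream heading to the FCU
rostopic echo -n 1 /mavros/vision_pose/pose   # current vision pose sent to the FCU
```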
After launching all 5 nodes, hold the vehicle above the landing tag so that the camera can see the tag clearly. Then, check on the GCS:
Verify that ArduPilot is receiving `LANDING_TARGET` data
- Whenever the landing tag is in the camera's view and MAVROS is running correctly, view the MAVLink messages on the GCS (Mission Planner: `Ctrl+F` > `MAVLink Inspector`; QGC: `Widgets` > `MAVLink Inspector`). `LANDING_TARGET` should have updated values in the `angle_x`, `angle_y` and `distance` fields (the rest are mostly ignored).
- Move the camera around and check that the values in all xyz directions are correct. In this project's default configuration (USB port to the right), the camera's x-axis points to the right, the y-axis points to the back, and the z-axis points downward (to the ground).
Verify that ArduPilot is receiving `VISION_POSE` data
- Similarly, check that the `VISION_POSITION_ESTIMATE` message is coming through to the flight controller.
Improve detection rate of AprilTag if it is too low
Ideally, the `LANDING_TARGET` message should arrive at 10 Hz or above for a smooth landing. If the output rate of AprilTag is lower than 10 Hz (check the rate in ROS with `rostopic hz /tag_detections`, or in the GCS over a stable telemetry or USB connection), you can improve the detection rate by checking out the corresponding ROS wiki and adjusting the settings in `settings.yaml`, particularly the params `tag_threads` and `tag_decimate`.
7. Flight test
Once everything is confirmed to work properly, all of the individual launch files are grouped in `t265_precland.launch`. Thus, for the flight tests the only command needed is:

```bash
roslaunch vision_to_mavros t265_precland.launch
```
Thereafter, we can simply follow the ArduPilot precision landing with IR-Lock wiki for instructions on how to actuate the landing in-flight. Below is a short summary:
- For the downfacing configuration of the Realsense T265, there is a known problem with inconsistent yaw angles between runs. The temporary fix is to tilt the nose up a little when the nodes start, so that the camera is not facing flat down. Once the FCU starts receiving vision messages, the vehicle can be put flat on the ground.
- Take off and hover above the target.
- When the target is detected (see the section above on verifying that ArduPilot is receiving `LANDING_TARGET` data), you can switch the vehicle to `LAND`.
- If everything is working properly, the copter should move towards the tag and then land on it.
- As usual, be prepared to retake control if there are sudden unexpected movements (i.e. change the mode back to Stabilize, AltHold or Loiter).
- If all is well, your flights should look like the successful tests below:
8. Discussion and alternative approaches
For this blog post, almost every step has alternatives that should be considered for your particular situation.
Fiducial marker system
There are many other options out there, such as Aruco, which is already supported in this wiki. Note that Aruco is now GPL licensed while AprilTag uses a BSD license. The latest version of AprilTag (3) also supports new and flexible layouts (circle, nested, etc.). You can find some comparisons in this paper.
Camera orientation
- It is implicitly assumed that the camera's x-axis is aligned with the vehicle's y-axis (to the right), while the camera's y-axis is aligned with the vehicle's -x-axis (to the back). This is the same as pitching the camera down 90 degrees from facing forward.
- If your camera is oriented differently, then a transformation is mandatory. For reference, here are the T265's coordinates (note the fisheye's axes), whereas the FCU's local frame is (X Forward, Y Right, Z Down).
Image rectification for AprilTag detection
- The AprilTag algorithm assumes rectified image and `camera_info` topics as input. Since the Kannala-Brandt distortion model (the so-called `fisheye` model in OpenCV) employed by the T265 is not yet supported in the `image_proc` pipeline, we need to perform the rectification some other way.
- In this project, a new ROS node (`t265_fisheye_undistort_node`) was created, taking inspiration from the sample script t265_stereo.py but written in C++ with ROS. Additionally, the left and right images are synced with an ApproximateTime policy to produce time-synchronized rectified stereo images, which might be useful for other purposes such as generating disparity maps, at the cost of a slightly reduced image rate compared to the raw data.
- I would highly recommend checking out the `image_undistort` repo, which supports a number of other less popular distortion models and a bunch of cool computer vision functions, though not the ones we specifically need for this project.
How to run MAVROS landing target plugin?
There seems to be no documentation anywhere that I could find on how to use the plugin correctly. Hence, to the source code!
Below is a brief rundown from my understanding of the code.
- First, take a look at the main source code for the `landing_target` plugin and the definition of the `LandingTarget` message.
- All input params for this plugin can be set in this section of `apm_config.yaml` before launching MAVROS.
- The data input for the `landing_target` plugin can come in 1 of 3 forms, as seen in this part:
  1. Through listening to a `/tf` transform. Can be enabled by setting `listen_tf` to `true`.
  2. Through subscribing to `/mavros/landing_target/pose` messages (of type `PoseStamped`). Can be enabled by setting both `listen_tf` and `listen_lt` to `false`.
  3. Through subscribing to `/mavros/landing_target/raw` messages (of type `LandingTarget`). Can be enabled by setting `listen_tf` to `false` and `listen_lt` to `true`.
- You can choose whichever input method works for you. However, note that for options 1 and 2 (as of this writing) there are several ENU-NED frame transformation functions implemented in the code that one needs to decipher in order to get the correct output from a (presumably well-understood) input. Generally, follow the callback function for your type of input, which will lead to this function, to figure out the dataflow, frame conversions, related parameters and what goes where.
- In this project, option 3 was selected because it provides the most control over the final MAVLink messages to be sent. Looky here. If you are working with PX4, however, the results might not be the same (see the point below).
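When wiring up option 3, it helps to inspect the exact `LandingTarget` message definition and the live topic. Standard ROS introspection commands are enough for this:

```bash
# Inspect the message fields expected on /mavros/landing_target/raw.
rosmsg show mavros_msgs/LandingTarget
# Confirm who publishes and subscribes once everything is running.
rostopic info /mavros/landing_target/raw
```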
How to populate the `LANDING_TARGET` data messages?
- If the landing tag is detected in the image, precision landing can be performed following MAVLink’s landing target protocol.
- As of this writing, different flight stacks (PX4 and AP) support different kinds of input; see the MAVLink wiki for details. For ArduPilot, only the `LANDING_TARGET` fields relative to the captured image are supported (hence the ignored fields in the `LANDING_TARGET` message).
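For intuition, here is my own sketch of the geometry (not an official formula from the protocol docs): if the tag's position in the camera frame is $(x, y, z)$, with $z$ pointing out along the optical axis, then the image-relative fields are approximately

$$\mathrm{angle\_x} = \arctan\frac{x}{z}, \qquad \mathrm{angle\_y} = \arctan\frac{y}{z}, \qquad \mathrm{distance} = \sqrt{x^{2} + y^{2} + z^{2}}$$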
9. Useful links
- A quick overview of AprilTag.
- A very helpful explanation of the related OpenCV concepts: stereo calibration and image rectification.
- A lot of useful info regarding precision landing is gathered in this discussion: How to use Precision Landing feature with companion computer?.
- More tuning instructions related to precision landing can be found in the wiki: Precision Landing and Loiter — Copter documentation.