Experiment with Visual Odometry - ROVIO

Chapter one: Building and Testing a Monocular Visual Odometry System

New features are being added to the flight controller for localisation and positioning using on-board camera systems, via the VISION_POSITION_ESTIMATE MAVLink message. This opens a new field of experimentation for implementing NON-GPS autonomous flights.

I propose a Monocular Visual Odometry System that can be built using off-the-shelf components and open-source software developed by the ETH Zurich Autonomous Systems Lab. The software runs on an Intel Pentium based Companion Computer running Ubuntu 16.04 and ROS Kinetic. The VIO system controls a Linux-based flight controller, the BBBMINI, installed on a 450-size quadcopter.

Visual Inertial Odometry
Using a camera system and an Inertial Measurement Unit (IMU), we can estimate a 6 DoF (Degrees of Freedom) state corresponding to 3D position (x, y, z) and 3-axis rotation (roll, pitch, yaw), relative to a fixed reference W (World/Map/Home).

source: https://www.researchgate.net

This state estimate is then transmitted to the flight controller using the following MAVLink message (see Messages (common) · MAVLink Developer Guide), which corresponds to the actual vehicle state in space:

VISION_POSITION_ESTIMATE ( #102 )

Global position/attitude estimate from a vision source.

| Field Name | Type | Units | Description |
|------------|----------|-------|-------------------|
| usec | uint64_t | us | Timestamp |
| x | float | m | Global X position |
| y | float | m | Global Y position |
| z | float | m | Global Z position |
| roll | float | rad | Roll angle |
| pitch | float | rad | Pitch angle |
| yaw | float | rad | Yaw angle |
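
With MAVROS you don't have to craft this message by hand: the vision_pose plugin converts a geometry_msgs/PoseStamped published on /mavros/vision_pose/pose into VISION_POSITION_ESTIMATE. A minimal relay node could look like the sketch below; it assumes ROVIO's /rovio/odometry output and leaves the ENU/NED frame conversion to MAVROS:

```cpp
// Sketch: relay ROVIO odometry to the MAVROS vision pose topic.
// Assumes ROVIO publishes nav_msgs/Odometry on /rovio/odometry; the
// MAVROS vision_pose plugin then emits VISION_POSITION_ESTIMATE.
#include <ros/ros.h>
#include <nav_msgs/Odometry.h>
#include <geometry_msgs/PoseStamped.h>

ros::Publisher vision_pub;

void odomCallback(const nav_msgs::Odometry::ConstPtr& odom) {
  geometry_msgs::PoseStamped pose;
  pose.header = odom->header;   // keep the ROVIO timestamp
  pose.pose = odom->pose.pose;  // position + attitude only
  vision_pub.publish(pose);
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "rovio_to_mavros");
  ros::NodeHandle nh;
  vision_pub = nh.advertise<geometry_msgs::PoseStamped>(
      "/mavros/vision_pose/pose", 10);
  ros::Subscriber sub = nh.subscribe("/rovio/odometry", 10, odomCallback);
  ros::spin();
  return 0;
}
```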

The system can be broken down into three major components:

A) The Visual Inertial sensor: a global shutter USB camera with an MPU9250 IMU connected to an Arduino
B) The state estimator: ROVIO (Robust Visual Inertial Odometry), which runs on the Companion Computer
C) The Flight Controller: on this system I am using a BeagleBone Black with a DIY sensor cape, the BBBMINI

A) The Visual Inertial sensor

Inspired by this excellent blog, ROS camera and IMU synchronization | Work-is-Playing, I decided to build my own sensor. Searching on eBay for global shutter cameras, I ordered two uEye 1220LE cameras at a fraction of the retail price and replaced the lens with a 1/2.5" 1.7 mm 5 MP M12 IR-blocked wide-angle FPV camera lens. After some experimentation, I was able to run it as a calibrated ROS camera.

Camera and IMU synchronization
The next step is building the IMU sensor device, based on an MPU9250 connected to an Arduino Pro Mini (5V ATMEGA328P) over the I2C bus. I made a simple "through-hole" board, soldered the components and connectors, mounted the board on the back of the camera using nylon standoffs, and inserted the IMU module into its connector so it sits on top of the camera. The Arduino's connections are: Serial to the CC, SDA/SCL and INTERRUPT_PIN to/from the MPU9250, and TRIGGER_PIN to the camera shutter.

The IMU free-runs with its own internal DMP processor, which generates stable, interrupt-driven updates of the gyro and accelerometer readings at 200 Hz. The Arduino reads the IMU and, after a predefined number of interrupts, triggers the camera shutter (via the trigger line) to capture a new image. This way we have a continuous 200 Hz flow of IMU data with 20 Hz image capture that is hardware-synchronized with the IMU. I made some modifications to the ROS node so it is more stable and less prone to drift; look for patrick-mods in mpu6050_serial_to_imu_node.cpp. I haven't renamed mpu6050 to mpu9250, as the DMP code and the pins are the same between models.
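
The trigger logic on the Arduino boils down to a counter on the DMP interrupt. Below is a simplified sketch of the idea, not the actual firmware: pin numbers are assumptions and the DMP FIFO read (handled by the i2cdevlib code in the real node) is elided.

```cpp
// Simplified sketch of the 200 Hz IMU / 20 Hz shutter synchronization.
// INTERRUPT_PIN and TRIGGER_PIN are placeholders for the actual wiring.
const int INTERRUPT_PIN = 2;  // INT line from the MPU9250 (DMP data ready)
const int TRIGGER_PIN   = 9;  // trigger line to the camera shutter input

volatile bool imuReady = false;
uint8_t frameDivider = 0;

void dmpDataReady() { imuReady = true; }

void setup() {
  Serial.begin(115200);  // IMU samples stream to the Companion Computer
  pinMode(TRIGGER_PIN, OUTPUT);
  pinMode(INTERRUPT_PIN, INPUT);
  attachInterrupt(digitalPinToInterrupt(INTERRUPT_PIN), dmpDataReady, RISING);
  // ... MPU9250/DMP initialization via i2cdevlib goes here ...
}

void loop() {
  if (!imuReady) return;
  imuReady = false;
  // ... read the gyro/acc packet from the DMP FIFO and send it over Serial ...
  if (++frameDivider >= 10) {        // every 10th 200 Hz sample -> 20 Hz
    frameDivider = 0;
    digitalWrite(TRIGGER_PIN, HIGH); // short pulse starts an exposure
    delayMicroseconds(100);
    digitalWrite(TRIGGER_PIN, LOW);
  }
}
```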

B) The state estimator : ROVIO

Robust Visual Inertial Odometry (ROVIO) is a state estimator based on an extended Kalman filter (EKF) which introduced several novelties. In addition to FAST corner features, whose 3D positions are parameterized with robot-centric bearing vectors and distances, multilevel patches are extracted from the image stream around these features. The patch features are tracked, warped based on IMU-predicted motion, and the photometric errors are used as innovation terms in the update step. ROVIO was developed as a monocular VIO pipeline and is available as an open-source software package.

This system requires a powerful Companion Computer in order to process and output the state estimate in real time with a good refresh rate and minimal delay. I installed an AAEON PICO-APL3 with a Pentium N4200 and 4 GB RAM, loaded with Ubuntu 16.04 and ROS Kinetic. PICO-APL3 | PICO-ITX Board with Intel® Pentium® N4200/Celeron® N3350 Processor - AAEON

When building the code according to the instructions on GitHub using catkin tools, you must reduce the number of concurrent jobs and the memory usage so the build does not hang after exhausting all resources. Here is the command:
catkin build rovio -j 1 --mem-limit 50% --cmake-args -DCMAKE_BUILD_TYPE=Release

Tuning computational costs
The following parameters in rovio_node.cpp can be adapted to reduce computational cost (see the sketch below):
- Reduce the feature count to 12
- Reduce the patch size to 4
- If no external pose measurements are used, nPose should definitely be set to 0
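
In the ROVIO sources these are compile-time constants near the top of rovio_node.cpp; with the reductions above they would read roughly as follows (names from the ROVIO repository, values are the tuned ones):

```cpp
// Filter dimensions in rovio_node.cpp, tuned for a weak CPU:
static constexpr int nMax_ = 12;      // maximal number of tracked features
static constexpr int patchSize_ = 4;  // edge length of the multilevel patches
static constexpr int nPose_ = 0;      // no external pose measurements
```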

In rovio.info (sketched below):
- Reduce the number of processed image levels by bringing startLevel and endLevel closer together, e.g. 2 and 1
- Set alignMaxUniSample to 0 so that only one sample is evaluated when searching for patches
- Optionally disable patch warping if your application does not involve very fast motions
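
A sketch of the corresponding rovio.info entries, with section and key names as I remember them from the shipped sample file (check your own copy):

```
ImgUpdate
{
    startLevel 2          ; highest image pyramid level that gets processed
    endLevel 1            ; lowest image pyramid level that gets processed
    alignMaxUniSample 0   ; evaluate only one sample when searching patches
    doPatchWarping false  ; optionally disable warping for slow motions
}
```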

Calibration
Camera IMU calibration · ethz-asl/kalibr Wiki · GitHub. Take a look at the video first, as it explains pretty well how this calibration works.

This is quite tricky and requires a lot of experimentation; this is where most experimenters get stuck, and it takes a lot of determination to succeed. The most difficult step is correctly estimating the IMU parameters, as there are no published references for the MPU9250, so I had to extract the correct numbers from a graph generated by an Allan Variance test: GitHub - GAVLab/allan_variance: Allan variance approach for characterizing inertial signals.

Once the calibration is complete, the values are exported into the Camera Calibration section of the rovio.info file. The extended Kalman filter (EKF) parameters have to be tweaked as well, particularly the Prediction Noise values. Since we use a consumer-grade IMU, we must increase the values of vel, acb, gyb, and att; during tests, we increased them separately by factors of 10 until we had good results, meaning a stable estimate that does not drift or diverge.
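
For reference, the prediction noise lives in the Prediction section of rovio.info. The layout below is a sketch from my notes and the values are illustrative starting points only; scale yours up by factors of 10 as described above:

```
Prediction
{
    PredictionNoise
    {
        vel_0 4e-6    ; velocity prediction noise (x; _1/_2 for y/z)
        acb_0 1e-8    ; accelerometer bias prediction noise
        gyb_0 3.8e-7  ; gyroscope bias prediction noise
        att_0 7.6e-7  ; attitude prediction noise
    }
}
```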

C) The Flight Controller

Launching the estimator
The system starts in sequence:
1- The Camera - roslaunch ueye_cam cam0_ext_400.launch
2- The IMU - roslaunch mpu6050_serial_to_imu demo.launch
3- The Estimator - roslaunch rovio rovio_ueye.launch

During development I launched them as three separate launch files, making it easier to debug and optimize. In the end I merged them into a single launch file: roslaunch rovio rovio_full.launch
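
The merged file is essentially a wrapper around the three individual launches. A sketch (the include paths are assumptions based on the package names above):

```xml
<!-- rovio_full.launch: camera + IMU + estimator in one roslaunch -->
<launch>
  <include file="$(find ueye_cam)/launch/cam0_ext_400.launch" />
  <include file="$(find mpu6050_serial_to_imu)/launch/demo.launch" />
  <include file="$(find rovio)/launch/rovio_ueye.launch" />
</launch>
```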

MAVROS LAUNCH
Last, we launch MAVROS with the apm profile, talking to the flight controller (BBBMINI) over Ethernet:
roslaunch mavros apm.launch fcu_url:=udp://0.0.0.0:14551@

Specific configuration
We must make sure that VISION is not blacklisted in apm_pluginlists.yaml, and we disable TIMESYNC in apm_config.yaml (timesync_rate: 0.0, i.e. the feature is disabled). Depending on the system configuration, we might also have to increase the stream rates:
rosrun mavros mavsys rate --all 100
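
Concretely, the two files end up looking roughly like this (the exact plugin name differs between MAVROS versions, vision_pose or vision_pose_estimate, so treat this as a sketch):

```yaml
# apm_pluginlists.yaml: the vision plugin must NOT be listed here
plugin_blacklist:
# - vision_pose_estimate    # remove or comment out if present

# apm_config.yaml: disable TIMESYNC
timesync_rate: 0.0          # TIMESYNC rate in Hertz (feature disabled if 0.0)
```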

ARDUPILOT LAUNCH
This is how ArduCopter is launched on the BBBMINI:
sudo ./arducopter -C udp:192.168.8.34:14550 -D udp:10.0.0.2:14551

You set Serial1 (-C) and Serial2 (-D) to MAVLink2 (speed is irrelevant over IP). The connection to the GCS runs over a WiFi AP on 192.168.8.xx:14550, and the connection to the CC over a standard private network in the 10.0.0.x range.
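
On the ArduPilot side this corresponds to the serial protocol parameters, set from your GCS (a sketch; the value 2 selects MAVLink2):

```
SERIAL1_PROTOCOL = 2    # MAVLink2 on the -C port (GCS over the WiFi AP)
SERIAL2_PROTOCOL = 2    # MAVLink2 on the -D port (Companion Computer)
```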

BENCH TESTING

Calibration and optimization are much easier (and much safer) when working with SITL.
The picture above is pretty self-explanatory: we can simulate relative displacement by moving the camera. I am using the QGC ANALYZE widget to check signal quality, gain, and stability.

This is the end of "Chapter One" of this experiment.
The next phases will cover the details of on-board integration and testing: the static test, walk test, initial flight test, and autonomous flight.

The configuration files, the modified sources, and the lab notes are on GitHub: GitHub - patrickpoirier51/ROVIO: Visual Odometry System using MPU9250 IMU and Ueye Global Shutter Camera

Here is PART 2:


Outstanding, thanks for sharing! Very much looking forward to the next chapter :wink:


Very nice! I cannot wait to see the results!


Very nice tutorial, but I am stuck at one point. I am trying to achieve autonomous flight using an Intel T265 tracking camera. I have integrated it with a Pixhawk, but when I call the ROS service for takeoff, the thrust increases for a very short duration and then the propellers stop rotating.

Please read these blogs

If you have questions on the T265, you can ask there.

Are there any pre-built cheap mono camera + IMU products available on the market? I found only the JeVois, but it's far more than actually needed.

Hello
I am not aware of any commercially available product in this category. This project seemed interesting but it was never realized, AFAIK: http://web.mit.edu/sze/www/navion/2017_rss_navion.pdf

As for the JeVois, the IMU is not synchronized with the camera.

Intel T265 is the thing

Thanks @ppoirier, I'm surprised because there are so many applications for this combo. Yes, the T265 is OK, but (1) I'm thinking about something smaller, (2) I want to be able to still use the camera for other purposes, and (3) I want to test various algorithms.
I guess I will start with my D435i, using one camera for experiments.

The reason why the T265 has outclassed the other options is its embedded processing power. The ASIC & Movidius combo can output 6 DoF estimates at 200 Hz; if you want to achieve that on any other system you need a TX2 or an Intel Pentium class Companion Computer.

On the monocular ROVIO experiments I used a $350 processor that weighs over 200 grams and requires 15 watts of power, making the 450-size Flamewheel quadcopter frame quite overloaded. On the T265 experiments I just needed a Banana Pi Zero, making the integration on a 330-size frame a piece of cake.

Yes, you are right about the computations, but for now I don't care about that :slight_smile:

BTW, the DJI Tello also uses a Movidius 2.

This might be interesting
https://devtalk.nvidia.com/default/topic/1070644/jetson-projects/real-time-monocular-vision-based-slam-with-nvidia-jetson-cnn-and-ros/

FastDepth improved on FCRN-ResNet50 with skip connections. In general I think people working on this topic should study recent semantic segmentation work.
And this is without IMU fusion.

@ppoirier, how about using the Pixhawk IMU and the camera shutter configuration (https://ardupilot.org/copter/docs/common-camera-shutter-with-servo.html)?

See also https://docs.px4.io/v1.9.0/en/peripherals/camera.html#camera-imu-sync-example-vio

Yes, this option works fine; it has been successfully demonstrated by Kabir Mohammed: https://youtu.be/67jV7G2rQrA

Is it possible to do the same with ArduPilot? If so, how do I configure ArduPilot to do it?

I've stumbled across this while searching for Allan Variance information. Could you provide more details on the parameters you used to log the data? I'm getting the same data for the Orange and Black Cubes.

I have not characterized the IMU onboard the FC. I used an MPU9250 and captured IMU data for an extended period once it was warm. Then I processed the bagfile to generate the curves and numbers, and extracted these to feed the EKF.
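
In practice this just means recording the IMU topic to a bag for a few hours and then feeding it to the allan_variance scripts, for example (topic name depends on your driver):

```
rosbag record -O imu_static.bag /imu/data
```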

That makes way more sense than trying to get ArduPilot to do it. I've managed to get samples written to the SD card at 1 kHz (out of 8 kHz from fast sampling), but that still doesn't make the cut for the inputs needed for Allan Variance. There might be a trick I am missing to get the full data written to the SD card.

Maybe you could ask @tridge on the dev channel: https://gitter.im/ArduPilot/ardupilot
I know he characterized some new FC IMUs recently.