Indoor autonomous flight with ArduCopter, ROS and ArUco board detection

I would like to show my work (in progress) on a simple system capable of indoor autonomous flight.
The system is based on a quadcopter with a camera that streams video (and telemetry) to a desktop PC and receives pose estimation messages and commands from the same PC.

The camera is downward looking and on the floor there is an ArUco board like this:

The pose estimation is calculated by the aruco_gridboard ROS package on the PC, and the relevant messages are sent back to the quadcopter using the mavros ROS package.

The Flight Controller and the Raspberry Pi 0 on the quadcopter are connected via serial port, whereas the Raspberry Pi 0 and the desktop PC are connected via WiFi.

Some more info on my system:

  • A little quadcopter (130 mm) with a RevoMini FC running ArduCopter 3.7-dev with @chobitsfan's PR. Some relevant parameters on my quad:
    • AHRS_EKF_TYPE 2
    • EK2_ENABLE 1
    • EK3_ENABLE 0
    • EK2_GPS_TYPE 3
    • EK2_POSNE_M_NSE 0.1
    • EK2_EXTNAV_DELAY 80
    • GPS_TYPE 0
    • COMPASS_USE 0
    • VISO_TYPE 0
  • On the quadcopter there is a Raspberry Pi 0 (connected to the FC via serial port) and a Raspberry Pi camera streaming video to the desktop PC via WiFi
  • On the Raspberry Pi 0
    • Raspbian Stretch Lite
    • gstreamer1.0 (installed with apt)
    • gst-rpicamsrc (https://github.com/thaytan/gst-rpicamsrc) to use rpicamsrc as source for gst-launch-1.0
    • ser2net (installed with apt), a serial-to-network proxy that exposes the FC's serial port on TCP port 2000 (mavros on the PC connects to this port; see the example after this list)
    • in /etc/ser2net.conf add the line:
      2000:raw:0:/dev/ttyAMA0:115200 8DATABITS NONE 1STOPBIT
  • On the desktop PC
    • ROS with the gscam, aruco_gridboard and mavros packages (described below)
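With ser2net running on the Pi, mavros on the desktop PC can reach the FC over TCP. A command along these lines should work (the IP address here is an example; port 2000 comes from the ser2net config above):

roslaunch mavros apm.launch fcu_url:=tcp://192.168.10.15:2000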

The video is streamed from the Raspberry Pi with gstreamer and, on the PC, the gscam node receives this stream and publishes the camera/image_raw and camera/camera_info topics.
On the PC aruco_gridboard (slightly modified by me) subscribes to the above topics and publishes a camera_pose message to the mavros/vision_pose/pose topic.
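In a ROS launch file this wiring is just a topic remap; a minimal sketch (the node type name here is a placeholder, only the remap matters):

<node pkg="aruco_gridboard" type="detection_node" name="aruco_gridboard" output="screen">
  <!-- "detection_node" is a placeholder type; route the node's pose output to mavros -->
  <remap from="camera_pose" to="/mavros/vision_pose/pose"/>
</node>

A quick check that poses are arriving: rostopic hz /mavros/vision_pose/pose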

SET_GPS_GLOBAL_ORIGIN and SET_HOME_POSITION messages (similar to this) are sent before starting to use the system, so the EKF has a global origin.
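For reference, sending those two messages can be done with pymavlink along these lines. A minimal sketch, not the exact script from the linked post; the connection string and coordinates are example values:

from pymavlink import mavutil

# connect to the FC, here through the ser2net TCP port described above (example IP)
master = mavutil.mavlink_connection('tcp:192.168.10.15:2000')
master.wait_heartbeat()

lat = int(45.0 * 1e7)  # latitude in degE7 (example value)
lon = int(9.0 * 1e7)   # longitude in degE7 (example value)
alt = 100 * 1000       # altitude in mm (example value)

# give the EKF a global origin...
master.mav.set_gps_global_origin_send(master.target_system, lat, lon, alt)

# ...and set home at the same point, with identity orientation quaternion
q = [1, 0, 0, 0]
master.mav.set_home_position_send(master.target_system, lat, lon, alt,
                                  0, 0, 0, q, 0, 0, 0)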


Excellent progress and a nice blog that makes it easy to replicate, congratulations!!
Thanks as well to @chobitsfan and @Subodh_Mishra for their collaboration on this.
Just bought a Skyviper Scout to use as a "standard" indoor test platform, so anyone can start experimenting. (I think @chobitsfan is flying with this model as well?)


@ppoirier I have ordered a Skyviper Scout too! I will start experimenting with it once it is delivered. Currently I am working with the Skyviper V2450.

@anbello, so you stopped using aruco_mapping? It seems the ArUco board you are using now gives less noisy estimates than aruco_mapping.

Really great work and nice detail on the post to help others get it working!


Note: I am not criticizing or reviewing the Skyviper Scout here, I am just sharing my experiment. The Scout is really a good, very affordable copter.

I found the flash size of the Scout seems smaller than the V2450's (only 512k, I think). It seemed impossible to flash current master to the Scout even after disabling all features in APM_Config.h, but after removing unused libraries I was able to flash it. However, flying performance with external navigation data (ATT_POS_MOCAP) seems very poor compared to the V2450 in the same setup. I am still testing it.

Please refer to Build ardupilot for 2018 scout for build instructions. Many thanks to @tridge for helping me on Gitter.

@chobitsfan … interesting, I will ask for advice on your thread, thanks

@Subodh_Mishra yes, now I use aruco_gridboard, which detects a board of ArUco markers and gives a less noisy pose estimate. aruco_mapping uses more than one marker to map the environment in which they are placed (a feature aruco_gridboard doesn't have), but its pose is estimated from a single marker each time and so is noisier.
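For reference, OpenCV's aruco module implements this kind of board-based pose estimation; a rough sketch of the idea (OpenCV 3.x/4.x contrib API, not the actual aruco_gridboard code; dictionary, board geometry and calibration values are stand-ins):

import cv2
import numpy as np

# example dictionary and board geometry, not the ones used on my floor
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
board = cv2.aruco.GridBoard_create(5, 7, 0.04, 0.01, dictionary)  # 5x7 markers, 4 cm side, 1 cm gap

camera_matrix = np.array([[600., 0., 320.], [0., 600., 240.], [0., 0., 1.]])  # placeholder calibration
dist_coeffs = np.zeros(5)

frame = cv2.imread('board.png')  # one camera frame
corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)

if ids is not None:
    # all detected markers constrain the same board pose
    nused, rvec, tvec = cv2.aruco.estimatePoseBoard(corners, ids, board,
                                                    camera_matrix, dist_coeffs, None, None)
    if nused > 0:
        print('camera->board translation:', tvec.ravel())

Because every detected marker contributes its corners to a single pose fit, the estimate averages over many points, which is where the lower noise comes from.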

@ppoirier and @rmackay9 thanks for your kind words and encouragement :slight_smile:

Way to go, congrats. I really like your use of the inexpensive and super small RevoMini on a 130 mm frame.


Great post. Thanks for sharing


That would make such a good tutorial video :wink:


It surely would be. It's probably a bit too much for one single video: there is ROS to set up, mavros to explain, gstreamer to explain, not to mention the "non-standard" versions of the autopilot to flash.

I'm thinking of covering ROS installation and setup in one of the next videos, and starting from there.


Sounds like a good start, revisiting your latest lab: ArUco the "ROS way" with usb_cam, monocular calibration and an ArUco tag (or board). May I suggest you go with catkin build (much easier than catkin_make).
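For the monocular calibration step, the standard ROS tool can be used; something like this, with the checkerboard size, square size and topic names adjusted to the actual setup:

rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.024 image:=/usb_cam/image_raw camera:=/usb_cam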

Then gstreamer over WiFi (@fnoop could certainly help on this), captured into OpenCV from Python.

Next, Gstreamer capture in ROS using GSCAM (http://wiki.ros.org/gscam) as @anbello does:
https://github.com/anbello/aruco_gridboard/blob/master/launch/detection_rpi.launch#L14
or connect to the Skyviper web page as @Subodh_Mishra does:
https://github.com/SubMishMar/skyviper_ros/blob/master/skyviper_capture_images/src/capture_images_node.cpp#L44

Last would be the process and control loop; there is still some work to do here, but for the sake of standardization mavros would be the preferred path. As for the "special build", hopefully this will land in master with DELAY parameters or an automatic sync of the EKF using timestamps from the camera pose system.

What makes this type of lab really interesting is the fact that the processing is not onboard, making it affordable to run with a variety of vehicles from a desktop/laptop, and much easier to build since most of the modules can be installed with apt-get.

@anbello, @Subodh_Mishra, @chobitsfan, @fnoop: comments or suggestions?


@ppoirier, I think the problem with my setup is that the time stamp of the image is provided by my computer running ROS; it is not necessarily the actual time at which the image was captured. Here is how it is done: https://github.com/SubMishMar/skyviper_ros/blob/1bf07fdb2c6033067a9f90c98b1b93acc2709a79/skyviper_capture_images/src/capture_images_node.cpp#L32 . I don't think this is the best way to do it. But can I use gstreamer with the Skyviper?

Pretty certain we can connect to this stream with Gstreamer; here it is with VLC media player:


Thanks, I will try this.

Here is a simple pipeline that works:
gst-launch-1.0 rtspsrc location=rtsp://192.168.99.1/media/stream2 ! decodebin ! autovideosink

The problem is that it is transmitting hi-res video, which means too much delay; we need to strip it down.

This one is a copy of the Mission Planner pipeline, with about 230 ms of latency:
gst-launch-1.0 rtspsrc location=rtsp://192.168.99.1/media/stream2 buffer-mode=1 latency=100 ! application/x-rtp ! rtph264depay ! avdec_h264 ! videoconvert ! video/x-raw,format=BGRA ! autovideosink

I am trying to use the ROS driver gscam to do this.
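Untested sketch: adapting the Mission Planner pipeline above for gscam, the GSCAM_CONFIG would presumably stop before the sink (gscam attaches its own), something like:

<env name="GSCAM_CONFIG" value="rtspsrc location=rtsp://192.168.99.1/media/stream2 buffer-mode=1 latency=100 ! application/x-rtp ! rtph264depay ! avdec_h264 ! videoconvert"/>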

@ppoirier and @tiffo I don't know if all the steps outlined by @ppoirier will become video tutorials; in any case, I will be available to provide further details.

Beginning with the gstreamer-over-WiFi part:
I am using a dedicated router as access point, with only my desktop PC on Ethernet and the Raspberry Pi as a WiFi client. From my tests this configuration gives less latency than the one in which the Raspberry Pi is configured as access point and the desktop PC connects via a USB WiFi adapter.

On the Raspberry Pi I launch this gstreamer pipeline:
gst-launch-1.0 -v rpicamsrc bitrate=10000000 rotation=180 exposure-mode=10 awb-mode=0 awb-gain-red=1 awb-gain-blue=2 iso=800 shutter-speed=10000 contrast=50 ! "image/jpeg,width=640,height=480,framerate=30/1" ! udpsink host=192.168.10.16 port=9000

rotation=180 is due to the way my cam is mounted; the other parameters serve to fix exposure and gain and to raise the contrast, which gives better marker detection.
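To check the stream on the PC before involving ROS, a plain receiving pipeline can be used; a sketch mirroring the gscam config below (port from the sender above):

gst-launch-1.0 udpsrc port=9000 ! jpegdec ! videoconvert ! autovideosink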

On desktop PC gscam ROS node has the following parameter to receive the stream:
<env name=ā€œGSCAM_CONFIGā€ value=ā€œudpsrc port=9000 ! jpegdec ! videoconvertā€/>

This gives me a delay of about 80 ms.
