Position Hold with VISION_POSITION_ESTIMATE and ATT_POS_MOCAP for indoor localization

If you are talking about this, as you can see it has been merged.

If you see delay, it could be due to what I wrote above:


What exactly do you mean by this? Do you mean that the timestamp of the image and the pose associated with it must be the same?

I also wanted to know whether you are using ArduCopter 3.6-rc7 built from GitHub's master or from the website where you can directly download the binaries?

I have some problems with the English language, but I will try anyway.
Every message has a timestamp; when you form the message for the /mavros/vision_pose/pose topic you should use as the timestamp the time at which the frames are captured by the camera on the quadcopter.
I don't know how you can do this in your setup. In my system I use GStreamer both to capture frames (on the quadcopter), send them over UDP, and receive them on the desktop PC. This way I know the time at which the frames are captured and can use it as the timestamp, so the WiFi latency is taken into account.
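For example, a minimal rospy sketch of the idea (the publish_vision_pose() helper and the 'local_origin' frame id are illustrative assumptions; capture_time would come from the GStreamer pipeline, e.g. the buffer timestamp of the received frame):

import rospy
from geometry_msgs.msg import PoseStamped

rospy.init_node('vision_pose_sender')
pub = rospy.Publisher('/mavros/vision_pose/pose', PoseStamped, queue_size=1)

def publish_vision_pose(xyz, quat, capture_time):
    # capture_time is the rospy.Time at which the camera frame was grabbed,
    # not the time at which the pose was computed or this function runs.
    msg = PoseStamped()
    msg.header.stamp = capture_time
    msg.header.frame_id = 'local_origin'
    msg.pose.position.x, msg.pose.position.y, msg.pose.position.z = xyz
    (msg.pose.orientation.x, msg.pose.orientation.y,
     msg.pose.orientation.z, msg.pose.orientation.w) = quat
    pub.publish(msg)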

In the last test I used 3.6.0-rc10 from the website


Thanks. It is clear now.

Hey @rmackay9 @vkurtz, is this vision_lag parameter used somewhere in the code that is in master now? There is a lot of lag between my vision position estimates and the EKF's output. Where can I fix this?

This video shows the lag between /mavros/mocap/pose(base_link) and /mavros/local_position/pose(local_pose): https://youtu.be/cOFGx1HAGJs
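For what it is worth, a quick way to quantify that lag is to log both topics against a common clock and compare the traces afterwards; a rough rospy sketch (node name and labels are only placeholders):

import rospy
from geometry_msgs.msg import PoseStamped

# Log the x position of both topics together with the receive time, so the
# two traces can be plotted and the time shift between them read off (or
# estimated offline, e.g. by cross-correlation).
def make_logger(label):
    def callback(msg):
        rospy.loginfo('%s %.3f %.3f', label, rospy.get_time(), msg.pose.position.x)
    return callback

rospy.init_node('lag_logger')
rospy.Subscriber('/mavros/mocap/pose', PoseStamped, make_logger('mocap'))
rospy.Subscriber('/mavros/local_position/pose', PoseStamped, make_logger('ekf'))
rospy.spin()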

Another test with my system for indoor localization (more info in some posts above):

Here I didn't use the RC; the whole flight is controlled by a Python script, a modified version of this.
Takeoff in guided mode, 4 setpoints in a square, land.
As can be seen, the setpoints are reached with some instability on the x and y axes, while the z axis is always stable.
I don't know how much of this instability is due to some ArduCopter parameters that I have to tune better and how much is due to suboptimal position estimation from the ArUco marker.
I will investigate further.
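For anyone wanting to reproduce something similar, a stripped-down sketch of such a script using the standard mavros services and local setpoints could look like the following (the takeoff altitude, the 1 m square and the fixed per-setpoint hold time are assumptions for illustration, not the actual script linked above):

import rospy
from geometry_msgs.msg import PoseStamped
from mavros_msgs.srv import SetMode, CommandBool, CommandTOL

rospy.init_node('square_flight')
sp_pub = rospy.Publisher('/mavros/setpoint_position/local', PoseStamped, queue_size=1)

rospy.wait_for_service('/mavros/set_mode')
set_mode = rospy.ServiceProxy('/mavros/set_mode', SetMode)
arm = rospy.ServiceProxy('/mavros/cmd/arming', CommandBool)
takeoff = rospy.ServiceProxy('/mavros/cmd/takeoff', CommandTOL)
land = rospy.ServiceProxy('/mavros/cmd/land', CommandTOL)

set_mode(custom_mode='GUIDED')
arm(True)
takeoff(altitude=1.0)            # assumed takeoff altitude of 1 m
rospy.sleep(5)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]   # corners of an assumed 1 m square
rate = rospy.Rate(10)
for x, y in square:
    sp = PoseStamped()
    sp.pose.position.x, sp.pose.position.y, sp.pose.position.z = x, y, 1.0
    sp.pose.orientation.w = 1.0
    end = rospy.get_time() + 8               # crude: hold each corner for 8 s
    while rospy.get_time() < end and not rospy.is_shutdown():
        sp.header.stamp = rospy.Time.now()
        sp_pub.publish(sp)                   # keep streaming the setpoint
        rate.sleep()

land(altitude=0)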


Hi @rmackay9,
I created a lag parameter for the vision system, too. I placed it in AP_NavEKF2 because I think it is similar to EK2_FLOW_DELAY and EK2_HGT_DELAY. My changes are here. I have tested it with a Skyviper v2450 and a motion capture system (delay = 10 ms). Maybe one of us could create a PR? It would be helpful to have a vision delay parameter. Thank you very much.
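Conceptually such a delay parameter just tells the EKF how far back in time the measurement really belongs, in the same spirit as EK2_FLOW_DELAY; a toy illustration of the arithmetic only (this is not ArduPilot's actual code):

# Illustration only: a fixed vision delay shifts the effective measurement
# time, so the filter compares the pose against the state it had at capture.
def effective_measurement_time_ms(receive_time_ms, vision_delay_ms=10):
    return receive_time_ms - vision_delay_ms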


@chobitsfan, definitely sounds like something we need. I suspect we may need to put the delay parameter at a higher level but I’m just guessing. I’m at the HEX conference but I think we should try and get this into master next week somehow.


@chobitsfan and @rmackay9, why do you think it is necessary to have a lag parameter for the vision system? I understood that this is calculated automatically by this method:

The linked method calculates the transport lag to correct the timestamp; maybe this new lag parameter has another purpose?

@anbello This only compensates for link lag. The sensor lag can be many times this (the time it takes to capture the image, process it, analyse it, calculate the required data, and then send it to the link).

Thank you @fnoop, now I understand better. But if I use the timestamp of the captured image in the header of the vision position estimate message, shouldn't all the lag be accounted for by the aforementioned method? Am I wrong?

No, it only deals with the transport/link lag, which will typically be a physical link latency of a few ms. The sensor/vision lag could be anything from 10 ms to 50 ms to 100 ms or much higher. If you're using a fast onboard computer with a fast camera frame rate you may get a sensor lag as low as 10-20 ms. If you were using e.g. a Raspberry Pi + Raspicam it may well be as high as 200 ms, and it could vary from frame to frame anywhere between 50 ms and 200 ms, depending on a variety of factors at any particular millisecond.

Also, the link lag converges on a sample of latencies over a period of time; I would have thought that with a vision-based estimation system you would want to take individual frame latencies into account rather than trying to smooth out the jitter. The higher the lag, the higher the jitter is likely to be.
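In practice that means stamping each vision message with the capture time of that particular frame. If the capture time is not available, the next best thing is to back-date the stamp by a measured average sensor lag; a minimal sketch of that fallback (SENSOR_LAG is an assumed, user-measured constant, not an existing mavros or ArduPilot parameter):

import rospy

# Assumed average processing lag (frame capture -> pose ready), measured for
# a specific camera + detector; in reality it varies from frame to frame.
SENSOR_LAG = rospy.Duration(0.080)

def stamp_with_lag(pose_msg):
    # Fallback when the true capture time is unknown: back-date the header
    # by the estimated sensor lag instead of stamping with "now".
    pose_msg.header.stamp = rospy.Time.now() - SENSOR_LAG
    return pose_msg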

Your system looks really cool, I’m looking forward to more details as it’s something I’ve had on my todo list to try for a long time :slight_smile:


Thanks, in post #32 above there is a link to some details of my system. I am waiting to have enough time to put together all the necessary info and source code (only a small diff vs. the aruco_mapping package) to publish it on GitHub and here as a post. I have very little time for this kind of work, but I will do it for sure.
In any case I still have some problems to solve in my system, as you can see from the videos.

Ciao


You are using the fake GPS message, right?
I am doing the same thing, but position hold does not work well for me.
Can you please share the steps that you have made?
I created my own ROS node to get the position and publish it as a HIL_GPS message.
Which parameters should I change? Is there something to change in mavros?
Your help really matters, thank you.

Hello Andrea,
I have finally started to experiment with ArUco / vision_position_estimate.
I am using an Intel-based companion computer with a calibrated USB camera.
Launch:
roslaunch usb_cam.launch
roslaunch aruco_gridboard detection.launch
roslaunch mavros apm.launch fcu_url:=/dev/ttyUSB0:115200
rosrun mavros mavsys rate --all 100

On my flight controller (BBBMINI):
sudo ./arducopter -C udp:192.168.7.1:14550 -D /dev/ttyO4

So the pipeline is: camera -> companion computer -> serial to BBBMINI -> UDP to Mission Planner.
I connect to Mission Planner and set these parameters:
AHRS_EKF_TYPE 2
EK2_ENABLE 1
EK3_ENABLE 0
GPS_TYPE 0
EK2_GPS_TYPE 3
COMPASS_USE 0
VISO_TYPE 0 (or 1? I tried both)

I set the Home / EKF origin and the vehicle gets displayed.
I can see good movement correlation on the display when moving my quad/camera.

But I cannot read the mavros local pose in RViz.
I have 'world' as the reference frame, so I guess I am missing a couple of transforms here…
OK, found it here: /launch/detection_rpi.launch

Have you made the complete TF for your frame ?

No, to be honest I did not understand all the TF stuff in ROS very well; I have to study more.
Anyway, I added an offset parameter for the camera but I have not pushed it yet; I will do so as soon as possible.

[EDIT]
VISO_TYPE 0

ok good, I might make one for my frame and will update here.

BTW Andrea, I added the missing debug logic for the display parameter in detection.launch.

Modified node.cpp, line 408:

if (debug_display_) {
    cv::imshow(OPENCV_WINDOW, imageCopy);
    cv::waitKey(2);
}

This makes it easier to switch the display window on and off.


Well, that was a nice Sunday experiment :slight_smile:
Here is the launch config
…lol, the forum does not render XML!!!

<!-- Local origin to world -->
<node pkg="tf" type="static_transform_publisher" name="tf_local_origin"
      args="0 0 0 0 0 0 world local_origin 100" />

<!-- Rotate the IMU in reference to the camera -->
<arg name="pi/2" value="1.57079632" />
<arg name="imu_rotate" value="0.05 0 -0.08 $(arg pi/2) 3.1415926" />
<node pkg="tf" type="static_transform_publisher" name="camera_fcu"
      args="$(arg imu_rotate) camera fcu 100" />
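(For reference, assuming the classic tf static_transform_publisher is used here, its arguments are x y z yaw pitch roll frame_id child_frame_id period_in_ms, with the angles in radians.)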

according to this:


Please note that I have a forward-facing camera setup (not down-looking); this is why there is a rotation about the red axis. It still needs some tweaking, but I have the general setup working.


This is not true: I had already pushed the offset parameter commit, but only for the offset along the back-front axis (and I had forgotten that I had done it :slight_smile: )