Position Hold with VISION_POSITION_ESTIMATE and ATT_POS_MOCAP for indoor localization

@vkurtz Thank you, I’ll play with the roll rate D parameter and see if I can get improved behavior.

@vkurtz I am looking at the code in your ardupilot_gazebo repo on GitHub, and I don’t understand whether the values on this line for set_home_position are correct:


q should be in the form [w, x, y, z], so [1, 0, 0, 0] should be the null rotation. Why do you use [0, 0, 0, 1] instead?

@anbello you are totally right about that; I was mixed up, since ROS often uses [x, y, z, w]. It shouldn’t matter too much, since I believe the home position can be set with any valid orientation, but I’ve made the correction to the GitHub repo. Thanks!
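For reference, a quick sketch of the two orderings (the helper below is purely illustrative, not code from the repo):

#include <array>

// MAVLink-style quaternions are ordered [w, x, y, z], while ROS
// (geometry_msgs/Quaternion) stores [x, y, z, w]. The null rotation is
// therefore {1, 0, 0, 0} in the former convention and {0, 0, 0, 1} in the latter.
std::array<float, 4> ros_to_mavlink_quat(float x, float y, float z, float w)
{
    return {w, x, y, z};  // reorder [x, y, z, w] -> [w, x, y, z]
}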

@vkurtz this suggestion improved my performance in simulation immensely, thank you!

Hey @JustFineD, where and how in the ArduCopter code base do you override the aforementioned functions?

Hi @Subodh_Mishra,
When I fly my drone in an indoor environment based on the HTC Vive, I override the following functions:

  1. bool NavEKF3_core::getPosNE(Vector2f &posNE) const
  2. bool NavEKF3_core::getPosD(float &posD) const
  3. bool NavEKF3_core::getHAGL(float &HAGL) const
  4. bool NavEKF3_core::getLLH(struct Location &loc) const

Here is the relevant code.

bool NavEKF3_core::getPosNE(Vector2f &posNE) const
{
    // Report the NE offset of the externally reported location from the EKF origin
    const struct Location &gpsloc_tmp = _ahrs->get_gps().location();
    Vector2f temp2PosNE = location_diff(EKF_origin, gpsloc_tmp);
    posNE.x = temp2PosNE.x;
    posNE.y = temp2PosNE.y;
    return true;
}

bool NavEKF3_core::getPosD(float &posD) const
{
    // Down position in metres: altitude is in centimetres, and NED down is positive
    const struct Location &gpsloc = _ahrs->get_gps().location();
    posD = -gpsloc.alt / 100.0;
    return filterStatus.flags.vert_pos;
}

bool NavEKF3_core::getHAGL(float &HAGL) const
{
    const struct Location &gpsloc = _ahrs->get_gps().location();
    HAGL = terrainState + gpsloc.alt / 100.0 - posOffsetNED.z;
    // If we know the terrain offset and altitude, then we have a valid height above ground estimate
    return !hgtTimeout && gndOffsetValid && healthy();
}

bool NavEKF3_core::getLLH(struct Location &loc) const
{
    if (validOrigin) {
        const struct Location &gpsloc2 = _ahrs->get_gps().location();
        loc.lat = gpsloc2.lat;
        loc.lng = gpsloc2.lng;
        // Altitude returned is an absolute altitude relative to the WGS-84 spheroid
        loc.alt = gpsloc2.alt;
        loc.flags.relative_alt = 0;
        loc.flags.terrain_alt = 0;
        return true;
    }
    return false;
}

It’s very simple and works very well (~5 cm precision).
Good luck
Doron


Thanks Doron! I am gonna give this a try.

000063.BIN (162.7 KB)
Hi everyone,
I am trying to use ArUco markers to localize my drone, but I haven’t been successful so far. Here I attach a log file from one of the experiments, in which I was just trying to give a takeoff command via QGroundControl. I had set the takeoff altitude to 1 m, but the drone climbed to nearly 2 m, then lost height and fell to the ground. Can anyone please take a look at my log files and tell me what is wrong? @rmackay9 @anbello @chobitsfan @vkurtz

@vkurtz I seem to have similar delays between the vision message and the EKF’s estimate. I wanted to know if the vision_lag-related change in the code is in master? It doesn’t seem to be.
Thanks.

I am almost there with my system using pose estimation from an ArUco marker.
I am able to take off in AltHold and then switch to Loiter; while I send the pose to /mavros/vision_pose/pose, the quadcopter remains almost stable.
For now I link a video; as soon as I have time I will write a post with more details.


Some more information on my system from another thread:

Hi @Subodh_Mishra
I did not have enough time or knowledge to analyze the bin.
However, one important thing to check is that the timestamp in the /mavros/vision_pose/pose message corresponds to the instant of image capture, in order to account for the WiFi latency.
Also, from what I saw in the video, it seems that the scale of the two poses is different: when you move the drone on the Z axis, one pose has a large excursion and the other a small one.
Here is the video (yours) that I refer to:


Yes, you are correct: the scale looks messed up, and I also think there is too much delay between /mavros/vision_pose/pose and the EKF’s result (i.e. /mavros/local_position/pose). The delay compensation is not in master, I think.

If you are talking about this, as you can see it has been merged.

If you see delay it could be due to what I wrote above:


What exactly do you mean by this? Are you saying that the timestamp of the image and that of the pose computed from it must be the same?

I also wanted to know if you are using ArduCopter 3.6-rc7 from GitHub’s master or from the website where you can directly download the binaries?

I have problems with the English language, but I will try.
Every message has a timestamp; when you form the message for the /mavros/vision_pose/pose topic, you should use as the timestamp the time at which the frame was captured by the camera on the quadcopter.
I don’t know how you can do this in your setup. In my system I use gstreamer to capture frames (on the quadcopter), send them over UDP, and receive them on the desktop PC. In this way I know the time at which each frame was captured and can use it as the timestamp, so the WiFi latency is taken into account.
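To illustrate the idea, here is a minimal roscpp sketch (a hypothetical helper, not my actual code, which uses gstreamer as described): the key point is that header.stamp is the capture instant, not the publish time.

#include <ros/ros.h>
#include <geometry_msgs/PoseStamped.h>

// Publish the marker pose stamped with the frame capture time rather than
// the publish time, so the EKF can account for WiFi and processing latency.
// The publisher is assumed to be advertised on /mavros/vision_pose/pose.
void publish_vision_pose(ros::Publisher &pub,
                         const geometry_msgs::Pose &marker_pose,
                         const ros::Time &capture_time)
{
    geometry_msgs::PoseStamped msg;
    msg.header.stamp = capture_time;  // instant the camera grabbed the frame
    msg.header.frame_id = "map";
    msg.pose = marker_pose;
    pub.publish(msg);
}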

In the last test I used 3.6.0-rc10 from the website.


Thanks. It is clear now.

Hey @rmackay9 @vkurtz, is this vision_lag parameter being used somewhere in the code that is in master now? There is a lot of lag between my vision position estimates and the EKF’s output. Where can I fix this?

This video shows the lag between /mavros/mocap/pose (base_link) and /mavros/local_position/pose (local_pose): https://youtu.be/cOFGx1HAGJs

Another test with my system for indoor localization (more info a few posts above):

Here I didn’t use the RC; the whole flight is controlled by a Python script, a modified version of this (see the sketch below for the setpoint pattern).
Takeoff in Guided mode, four setpoints in a square, land.
As can be seen, the setpoints are reached with some instability on the x and y axes, while the z axis is always stable.
I don’t know how much of this instability is due to some ArduCopter parameters that I have to tune better and how much is due to suboptimal position estimation from the ArUco marker.
I will investigate further.
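For readers who want to reproduce the pattern, here is a rough roscpp sketch of the setpoint sequence (my actual script is in Python; the square size, altitude, and dwell time below are made-up values, and the vehicle is assumed to be armed and already flying in Guided mode):

#include <ros/ros.h>
#include <geometry_msgs/PoseStamped.h>

int main(int argc, char **argv)
{
    ros::init(argc, argv, "square_setpoints");
    ros::NodeHandle nh;
    ros::Publisher sp_pub =
        nh.advertise<geometry_msgs::PoseStamped>("/mavros/setpoint_position/local", 10);

    // Corners of a 1 m square at 1 m altitude (local ENU frame).
    const float corners[4][2] = {{0, 0}, {1, 0}, {1, 1}, {0, 1}};

    ros::Rate rate(20);  // setpoints must be streamed continuously
    for (int i = 0; i < 4 && ros::ok(); ++i) {
        const ros::Time start = ros::Time::now();
        while (ros::ok() && (ros::Time::now() - start).toSec() < 10.0) {
            geometry_msgs::PoseStamped sp;
            sp.header.stamp = ros::Time::now();
            sp.header.frame_id = "map";
            sp.pose.position.x = corners[i][0];
            sp.pose.position.y = corners[i][1];
            sp.pose.position.z = 1.0;
            sp.pose.orientation.w = 1.0;  // identity orientation: hold heading
            sp_pub.publish(sp);
            rate.sleep();
        }
    }
    return 0;
}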


Hi @rmackay9,
I created a lag parameter for the vision system, too. I placed it in AP_NavEKF2 because I think it should behave like EK2_FLOW_DELAY and EK2_HGT_DELAY. My changes are here. I have tested it with a SkyViper v2450 and a motion capture system (delay = 10 ms). Maybe one of us could create a PR, because it would be helpful to have a vision delay parameter. Thank you very much.
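The principle, for anyone following along, is simple (an illustrative sketch only, not the actual diff; the types and names below are made up):

#include <cstdint>
#include <queue>

struct VisionObs {       // illustrative type, not an ArduPilot struct
    uint32_t time_ms;    // timestamp the EKF will fuse against
    float posNED[3];     // measured position
};

// Shift the measurement timestamp back by the configured delay before
// queuing it for fusion, mirroring how EK2_FLOW_DELAY and EK2_HGT_DELAY
// work: the sample describes the vehicle state vision_delay_ms in the past.
void write_vision_measurement(std::queue<VisionObs> &buffer, VisionObs obs,
                              uint16_t vision_delay_ms)
{
    obs.time_ms -= vision_delay_ms;
    buffer.push(obs);
}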


@chobitsfan, definitely sounds like something we need. I suspect we may need to put the delay parameter at a higher level but I’m just guessing. I’m at the HEX conference but I think we should try and get this into master next week somehow.
