Position Hold with VISION_POSITION_ESTIMATE and ATT_POS_MOCAP for indoor localization

Hey @JustFineD, where and how in the ArduCopter code base do you override the aforementioned functions?

Hi @Subodh_Mishra,
When I fly my drone in an indoor environment using the HTC VIVE, I override the following functions:

  1. bool NavEKF3_core::getPosNE(Vector2f &posNE) const
  2. bool NavEKF3_core::getPosD(float &posD) const
  3. bool NavEKF3_core::getHAGL(float &HAGL) const
  4. bool NavEKF3_core::getLLH(struct Location &loc) const

Here is the relevant code.

bool NavEKF3_core::getPosNE(Vector2f &posNE) const
{
    // Return the offset of the current GPS-reported location from the EKF origin
    // instead of the EKF's own position estimate
    const struct Location &gpsloc_tmp = _ahrs->get_gps().location();
    Vector2f temp2PosNE = location_diff(EKF_origin, gpsloc_tmp);
    posNE.x = temp2PosNE.x;
    posNE.y = temp2PosNE.y;
    return true;
}

bool NavEKF3_core::getPosD(float &posD) const
{
    // Down position in metres, taken from the GPS-reported altitude (cm)
    const struct Location &gpsloc = _ahrs->get_gps().location();
    posD = -gpsloc.alt/100.0;
    return filterStatus.flags.vert_pos;
}

bool NavEKF3_core::getHAGL(float &HAGL) const
{
    const struct Location &gpsloc = _ahrs->get_gps().location();
    HAGL = terrainState + gpsloc.alt/100.0 - posOffsetNED.z;
    // If we know the terrain offset and altitude, then we have a valid height above ground estimate
    return !hgtTimeout && gndOffsetValid && healthy();
}

bool NavEKF3_core::getLLH(struct Location &loc) const
{
    const AP_GPS &gps = AP::gps();

    if (validOrigin) {
        const struct Location &gpsloc2 = _ahrs->get_gps().location();
        loc.lat = gpsloc2.lat;
        loc.lng = gpsloc2.lng;
        // Altitude returned is an absolute altitude relative to the WGS-84 spheroid
        loc.alt = gpsloc2.alt;
        loc.flags.relative_alt = 0;
        loc.flags.terrain_alt = 0;
        return true;
    }
    return false;
}

It’s very simple and works very well (~5 cm precision).
Good luck
Doron


Thanks Doron! I am gonna give this a try.

000063.BIN (162.7 KB)
Hi everyone,
I am trying to use ArUco markers to localize my drone but I haven’t been successful in doing so. Here I attach a log file from one of the experiments in which I was just trying to give a takeoff command via QGroundControl. I had set the takeoff altitude to 1 m, but the drone took off to a height of nearly 2 m, then lost height and fell to the ground. Can anyone please take a look at my log files and tell me what is wrong? @rmackay9 @anbello @chobitsfan @vkurtz

@vkurtz I seem to have similar delays between the vision message and the EKF’s estimate. I wanted to know if the vision_lag related change in the code is in master? It doesn’t seem to be.
Thanks.

I am almost there with my system for pose estimation through ArUco markers.
I am able to take off in AltHold, then switch to Loiter while sending the pose to /mavros/vision_pose/pose, and the quadcopter remains almost stable.
For now I link a video; as soon as I have time I will write a post with more details.


Some more information on my system from another thread:

Hi @Subodh_Mishra
I did not have enough time or knowledge to analyze the bin.
However, one important thing to check is that the timestamp in the /mavros/vision_pose/pose message corresponds to the instant of image capture, in order to take into account the latency of the WiFi link.
Also, from what I saw in the video, it seems that the scale in the two poses is different: when you move the drone on the Z axis, one pose has a large excursion and the other a small one.
Here is the video (yours) that I refer to:


Yes, you are correct, the scale looks messed up, and I also think there is too much delay between /mavros/vision_pose/pose and the EKF’s result (i.e. /mavros/local_position/pose). The delay compensation is not in master, I think.

If you are talking about this, as you can see it has been merged.

If you see delay it could be due to what I wrote above:


What exactly do you mean by this? Are you saying that the timestamp of the image and of the pose associated with it must be the same?

I also wanted to know if you are using ArduCopter 3.6-rc7 built from GitHub’s master or from the website where you can directly download the binaries.

I have problems with the English language, but I will try.
Every message has a timestamp; when you build the message for the /mavros/vision_pose/pose topic you should use as its timestamp the time at which the frame was captured by the camera on the quadcopter.
I don’t know how you can do this in your setup. In my system I use GStreamer both to capture frames (on the quadcopter), send them over UDP and receive them on the desktop PC. In this way I know the time at which the frames were captured and can use it as the timestamp, so the WiFi latency is taken into account.
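As a rough illustration (not the exact pipeline described above), here is a minimal Python/ROS sketch of publishing a vision pose stamped with the image capture time. It assumes you already have a capture_time (a rospy.Time) for the frame the pose was computed from; how you obtain it depends on your capture pipeline.

# Minimal sketch: stamp the vision pose with the image capture time,
# not the time of publishing, so the downlink/processing latency is visible downstream.
import rospy
from geometry_msgs.msg import PoseStamped

rospy.init_node("vision_pose_publisher")
pub = rospy.Publisher("/mavros/vision_pose/pose", PoseStamped, queue_size=1)

def publish_pose(capture_time, x, y, z, qx, qy, qz, qw):
    msg = PoseStamped()
    msg.header.stamp = capture_time   # time the frame was captured, NOT rospy.Time.now()
    msg.header.frame_id = "map"
    msg.pose.position.x = x
    msg.pose.position.y = y
    msg.pose.position.z = z
    msg.pose.orientation.x = qx
    msg.pose.orientation.y = qy
    msg.pose.orientation.z = qz
    msg.pose.orientation.w = qw
    pub.publish(msg)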

In the last test I used 3.6.0-rc10 from the website


Thanks. It is clear now.

Hey @rmackay9 @vkurtz, is this vision_lag parameter being used somewhere in the code that is in master now? There is a lot of lag between my vision position estimates and the EKF’s output. Where can I fix this?

This video shows the lag between /mavros/mocap/pose(base_link) and /mavros/local_position/pose(local_pose): https://youtu.be/cOFGx1HAGJs

Another test with my system for indoor localization (more info a few posts above):

Here I didn’t use the RC; the whole flight is controlled by a Python script, a modified version of this.
Takeoff in guided mode, 4 setpoints in a square, land.
As can be seen, the setpoints are reached with some instability on the x and y axes, while the z axis is always stable.
I don’t know how much of this instability is due to ArduCopter parameters that I have to tune better and how much is due to suboptimal position estimation from the ArUco marker.
I will investigate further.
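Not the script linked above, but a rough sketch of the same sequence (guided-mode takeoff, four setpoints in a square, land) using the standard MAVROS services and topics; the 1 m square, altitude and hold times below are arbitrary placeholders.

# Sketch: guided takeoff -> square of position setpoints -> land, via MAVROS.
import rospy
from geometry_msgs.msg import PoseStamped
from mavros_msgs.srv import CommandBool, CommandTOL, SetMode

rospy.init_node("square_flight")
sp_pub = rospy.Publisher("/mavros/setpoint_position/local", PoseStamped, queue_size=1)

rospy.wait_for_service("/mavros/set_mode")
set_mode = rospy.ServiceProxy("/mavros/set_mode", SetMode)
arm = rospy.ServiceProxy("/mavros/cmd/arming", CommandBool)
takeoff = rospy.ServiceProxy("/mavros/cmd/takeoff", CommandTOL)
land = rospy.ServiceProxy("/mavros/cmd/land", CommandTOL)

set_mode(custom_mode="GUIDED")
arm(True)
takeoff(altitude=1.0)            # climb to 1 m
rospy.sleep(10)

def goto(x, y, z, hold=8.0):
    """Stream one position setpoint at 10 Hz for `hold` seconds."""
    msg = PoseStamped()
    msg.pose.position.x, msg.pose.position.y, msg.pose.position.z = x, y, z
    msg.pose.orientation.w = 1.0
    rate = rospy.Rate(10)
    t_end = rospy.Time.now() + rospy.Duration(hold)
    while rospy.Time.now() < t_end and not rospy.is_shutdown():
        msg.header.stamp = rospy.Time.now()
        sp_pub.publish(msg)
        rate.sleep()

# 1 m square at 1 m altitude
for x, y in [(1, 0), (1, 1), (0, 1), (0, 0)]:
    goto(x, y, 1.0)

land(altitude=0.0)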


Hi @rmackay9
I created a lag parameter for the vision system, too. I placed it in AP_NavEKF2 because I think it is similar to EK2_FLOW_DELAY and EK2_HGT_DELAY. My changes are here. I have tested it with the Skyviper v2450 and a motion capture system (delay = 10 ms). Maybe one of us could create a PR? It would be helpful to have a vision delay parameter. Thank you very much.


@chobitsfan, definitely sounds like something we need. I suspect we may need to put the delay parameter at a higher level but I’m just guessing. I’m at the HEX conference but I think we should try and get this into master next week somehow.


@chobitsfan and @rmackay9, why do you think it is necessary to have a lag parameter for the vision system? I understood that this is automatically calculated by this method:

The linked method calculates the transport lag to correct the timestamp; maybe this new lag parameter has another purpose?

@anbello This only compensates for link lag. The sensor lag can be many times this (the time it takes to capture the image, process it, analyse it, calculate the required data, and then send it to the link).

Thank you @fnoop, now I understand better. But if I use the timestamp of the captured image in the header of the vision position estimate message, shouldn’t all the lag then be accounted for, on top of what is calculated by the aforementioned method? Am I wrong?

No, it only deals with the transport/link lag, which will typically be a physical link latency of a few ms. A sensor/vision sensor lag could be anything from 10 ms to 50 ms to 100 ms or much higher. If you’re using a fast onboard computer with a fast camera frame rate you may get a sensor lag as low as 10-20 ms. If you were using, e.g., a Raspberry Pi + Raspicam it may well be as high as 200 ms, and it could vary from frame to frame, anywhere between 50 ms and 200 ms, depending on a variety of factors at any particular millisecond.

Also, the link lag converges on a sample of latencies over a period of time. I would have thought that with a vision-based estimation system you would want to take into account individual frame latencies rather than trying to smooth out the jitter. The higher the lag, the higher the jitter is likely to be.
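For what it’s worth, one rough way to see the per-frame latency on the ROS side (a sketch, assuming the publisher stamps each vision pose with the image capture time as discussed above) is to compare the header stamp with the wall-clock time at reception:

# Rough sketch: print per-frame vision latency (receive time minus capture-time stamp).
# Assumes the publisher stamps each pose with the image capture time.
import rospy
from geometry_msgs.msg import PoseStamped

def on_pose(msg):
    latency = (rospy.Time.now() - msg.header.stamp).to_sec()
    rospy.loginfo("vision pose latency: %.0f ms", latency * 1000.0)

rospy.init_node("vision_latency_monitor")
rospy.Subscriber("/mavros/vision_pose/pose", PoseStamped, on_pose)
rospy.spin()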

Your system looks really cool, I’m looking forward to more details as it’s something I’ve had on my todo list to try for a long time :)
