VISION_POSITION_ESTIMATE not appearing in QGroundControl

This project is really interesting. I think @rmackay9 and @priseborough should take a deeper look into this because it is a great demonstrator that can be easily replicated without having to get specialised sensors. I am just a little worried about the latency of frame capture over WiFi. @Subodh_Mishra, can you share your code?

No, I am not explicitly publishing it, but I am feeding the offset between the SkyViper IMU and camera as a parameter in QGroundControl/Mission Planner.

What parameter are you setting for this offset?
I saw part of your code in the other discussion; may I take a look at your Gitter?

Yes I can, but where should I share it?

Yes, there is some latency, and WiFi is a tad unreliable at times. The data I publish on /mavros/local_position/pose comes in at an average frequency of 4.5 Hz.

I also want to mention that I am using EKF3 for visual odometry fusion, as EKF2 didn't seem to take in visual odometry inputs.

As far as I can recall, here are the parameters I had changed to make the FCU accept my /mavros/vision_position/pose updates (see the sketch after this list):
EK3_ENABLE (set this to 1)
VISO_ORIENT
VISO_POS_X
VISO_POS_Y
VISO_POS_Z
VISO_TYPE (should be set to MAV for VO to work)
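
For reference, a sketch of that parameter set with example values. The offsets are placeholders rather than my actual calibration, and the numeric VISO_TYPE value is my best recollection, so check the parameter docs for your firmware version:

    EK3_ENABLE   1      (enable EKF3)
    VISO_TYPE    1      (1 = MAV, i.e. visual odometry over MAVLink)
    VISO_ORIENT  0      (camera orientation relative to the vehicle frame)
    VISO_POS_X   0.10   (camera position in the vehicle frame, metres)
    VISO_POS_Y   0.00
    VISO_POS_Z  -0.05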

When the VO fusion starts, the following messages are displayed in the mavros terminal:

[ INFO] [1525454530.463380626]: FCU: EKF3 IMU0 started relative aiding
[ INFO] [1525454530.648255144]: FCU: Fusing Velocity
[ INFO] [1525454530.760722136]: FCU: EKF3 IMU0 fusing odometry

I am sure that the following functions are used for fusing VO data:

writeBodyFrameOdom(), defined in ardupilot/libraries/AP_NavEKF3/AP_NavEKF3_Measurements.cpp, and
FuseBodyVel(), defined in ardupilot/libraries/AP_NavEKF3/AP_NavEKF3_PosVelFusion.cpp.

Which part of the code do you want to see? There are different places I made changes in, and it's kind of a mess, but I will be happy to share.

@Subodh_Mishra Generally, you make a branch of the ardupilot code on your GitHub account and work from there. As for the other code, like OpenCV and mavros, you can replicate the workspace on GitHub as well.

As for the offsets:
VISO_ORIENT
VISO_POS_X
VISO_POS_Y
VISO_POS_Z
These are relative to the vehicle frame; there is no IMU-to-camera transform within these parameters.

Okay, I will do that. But the main change I have made in my project is just to enable EKF3, change VISO_TYPE to MAV, and send the vision-estimated pose as VISION_POSITION_DELTA instead of VISION_POSITION_ESTIMATE, by doing what @jeff_null has done here (except using SET_HOME_POSITION instead of SET_GPS_GLOBAL_ORIGIN).
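
For anyone replicating this, a minimal sketch of that SET_HOME_POSITION step, in the same mavros-plugin style as the code further down. The coordinates are placeholders and send_home_position() is a made-up helper name, not something from my actual code:

void send_home_position()
{
    // Give the EKF an origin to work from by sending SET_HOME_POSITION once at startup
    mavlink::common::msg::SET_HOME_POSITION hp{};
    hp.target_system = m_uas->get_tgt_system();
    hp.latitude  = 473977420;            // degrees * 1e7 (placeholder)
    hp.longitude = -1223974560;          // degrees * 1e7 (placeholder)
    hp.altitude  = 10000;                // millimetres above MSL (placeholder)
    hp.q = {1.0f, 0.0f, 0.0f, 0.0f};     // identity orientation
    UAS_FCU(m_uas)->send_message_ignore_drop(hp);
}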

What I don't understand is this: if I send, let's say, [0 0 4 0 0 0 1] on /mavros/vision_position/pose, then I expect something close to [0 0 4 0 0 0 1] as output on /mavros/local_position/pose, but that is not the case; there is a considerable difference in the results. What convention does the FCU use?

To add to that: no, I don't publish any IMU-CAM tf. Is it necessary?

Where can I see the changes you have made to incorporate vision_position_estimate instead of vision_position_delta?

Besides this, I want to know the convention in which these vision deltas need to be given... I am giving the input in the ENU convention.

Here is what my vision_position_estimate() function in mavros/mavros_extras/src/plugins/vision_pose_estimate.cpp looks like:

void vision_position_estimate(uint64_t usec,
        Eigen::Vector3d &position,
        Eigen::Vector3d &rpy)
{
    // Position change since the last sample, expressed in the previous body frame
    Eigen::Vector3d delta_pos = prev_rotation_inverse * (position - prev_position);

    // Rotation change since the last sample, converted to roll/pitch/yaw deltas
    Eigen::Matrix3d curr_rotation = rpy2rotn(rpy);
    Eigen::Matrix3d delta_rot = prev_rotation_inverse * curr_rotation;
    Eigen::Vector3d delta_rpy = ftf::quaternion_to_rpy(Eigen::Quaterniond(delta_rot));

    prev_position = position;
    prev_rotation = curr_rotation;
    prev_rotation_inverse = prev_rotation.transpose();  // inverse == transpose for a rotation matrix

    mavlink::ardupilotmega::msg::VISION_POSITION_DELTA vp{};
    vp.time_usec = usec;
    vp.time_delta_usec = usec - old_time;
    old_time = usec;

    vp.position_delta[0] = delta_pos.x();
    vp.position_delta[1] = delta_pos.y();
    vp.position_delta[2] = delta_pos.z();
    vp.angle_delta[0] = delta_rpy.x();
    vp.angle_delta[1] = delta_rpy.y();
    vp.angle_delta[2] = delta_rpy.z();

    // The first sample only initialises prev_* and old_time; the deltas computed
    // above are meaningless then, so no message is sent.
    if (flag_var)
    {
        UAS_FCU(m_uas)->send_message_ignore_drop(vp);
    }
    else
    {
        flag_var = true;
        ROS_INFO("[vpe:] First run, hence ignored!");
    }
}
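
On the convention question above: as far as I can tell, ArduPilot's EKF works in NED (world) and FRD (body) frames while ROS uses ENU, so the incoming pose has to be converted before the deltas are computed. A minimal sketch using mavros's frame_tf helpers (this mirrors what the stock plugin does; pos_enu and q_enu here stand for the incoming ROS pose):

#include <mavros/frame_tf.h>

// World-frame position: ENU -> NED
Eigen::Vector3d pos_ned = ftf::transform_frame_enu_ned(pos_enu);
// Orientation: base_link-in-ENU -> aircraft-in-NED
Eigen::Quaterniond q_ned = ftf::transform_orientation_enu_ned(
        ftf::transform_orientation_baselink_aircraft(q_enu));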

Subodh,

The changes to support the various vision-position-estimate messages are in master now. This is one of the commits that added support.

This is the PR from @peterbarker that still needs to go in to allow syncing the time stamps between the companion computer (and/or ground computer) and the flight controller.

By the way, if it's a relative position that you're looking to pass in, then it's best to use the vision-position-estimate messages instead of vision-position-delta. The deltas message should only be used if the source of the information really provides deltas, for example an optical-flow system or similar.
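
For anyone who wants to pass in positions rather than deltas: with an unmodified mavros, the stock vision_pose plugin already sends VISION_POSITION_ESTIMATE when you publish a geometry_msgs/PoseStamped on /mavros/vision_pose/pose. A minimal sketch, assuming an existing ros::NodeHandle nh:

#include <ros/ros.h>
#include <geometry_msgs/PoseStamped.h>

// Publish the vision pose; mavros's vision_pose plugin converts it to
// VISION_POSITION_ESTIMATE and forwards it to the FCU.
ros::Publisher pub = nh.advertise<geometry_msgs::PoseStamped>("/mavros/vision_pose/pose", 10);

geometry_msgs::PoseStamped msg;
msg.header.stamp = ros::Time::now();
msg.header.frame_id = "map";
msg.pose.position.z = 4.0;       // e.g. 4 m up, in ROS's ENU convention
msg.pose.orientation.w = 1.0;    // identity orientation
pub.publish(msg);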

Hey Mackay, can you explain the settings changes needed for vision_position_estimate to work properly?

Daraju,

If you have a log file I can have a look. I don't think any setting changes should be required, except that the times between the flight controller and groundstation/companion computer must be aligned. This is currently difficult to do, but I expect the PR mentioned above from @peterbarker will resolve this soon.

@rmackay9 a minimal wiki would be really appreciated:
How it works
General setup
Parameters
Flight Modes

I am working on various types of estimators, and getting closer to tests :wink:

I've also got it to run, so QGroundControl shows the vision_position_delta topic in the MAVLink inspector.
But can someone tell me how I can get the Pixhawk to do a position hold with my SLAM data?
The global_position_int still drifts. I do not want to use the built-in magnetometer; I just want to use the data from my vision system, so I need to overwrite the hdg value. I thought I could override the heading value with the vision message. Do I also need to publish the same pose on local_position/pose? Because the Pixhawk does not publish this message...

Does someone have an idea how I can achieve indoor flight without a magnetometer?

For SLAM, normally we should be passing in the vision-position-estimate message. For that to work properly, this PR needs to go into master. After that's done, there is another PR to enable driving without a compass.

So getting there but not quite done yet.

OK, does this variant already work with ArduCopter? Or do both of them need the PR to get SLAM and navigation without a compass working?

Today Tridge and Peter came up with a new PR for handling the system clock differences between the companion computer/GCS and the flight controller. If anyone would like to test it to see if ROS can provide a visual-position-estimate to ArduPilot, that would be great. Here is a compiled Pixhawk/Cube binary for Rover which is master/latest with this PR applied.

@coolkiffings, this PR will be required for any vehicle to properly handle the vision-position-estimate messages. Flying/driving without a compass is a much simpler issue. I think it's possible as long as the compass arming checks are turned off and the vehicle is facing true north when it's first turned on. I'm not completely sure though... we may have some checks here and there which need fixing.

Can you build a version for ArduCopter, so I can test it with my copter?

Thanks

Edit:
I just tested the Rover firmware.
The vision_position_estimate message is processed by the ArduPilot firmware, and it's doing what I've done with my rosbag.
But can someone please build me an ArduCopter firmware with this PR?


Mike,

Thanks for testing. Here is a Copter binary with the same PR applied.

P.S. I have no idea what a "rosbag" is...

I suggest you join us in the vision room; your input will be appreciated.

@rmackay9 Thank you very much for the firmware. I will test it with my copter soon.
I just joined the Gitter.

A rosbag is a recorded data set which can be replayed with its exact timing at any time.
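
For example, recording the pose topics from this thread and replaying them later looks something like:

rosbag record -O flight.bag /mavros/vision_pose/pose /mavros/local_position/pose
rosbag play --clock flight.bag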


@rmackay9 Thanks for following up on this. I tried the new firmware with our existing system and, alas, it does not solve the EKF problems we are seeing. Specifically:

  • If I send VISION_POSITION_DELTA messages, I still see the same poor fused results from our visual odometry and the IMU signals.
  • If I send VISION_POSITION_ESTIMATE messages, the EKF never produces a fused result.

It's possible there's an unrelated problem on our end; it's been a few months since I last dug into this. I can email you dataflash logs if you're curious.