A small VIO system using Raspberry Pi and Arducam OV9281

I think it’s sent through an IPC tunnel, but I still haven’t got it right.

Could you help here?

Thanks, I think I figured out another solution: you can publish the ego-planner output to MAVROS using /setpoint_raw/local.
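For anyone following along, here is a minimal sketch of that idea (my own, not from the original post), assuming ROS 1 with rospy and the standard /mavros/setpoint_raw/local PositionTarget interface; the node name and type_mask choices are just examples, and frame conventions are glossed over:

    import rospy
    from mavros_msgs.msg import PositionTarget

    rospy.init_node("ego_to_mavros")
    pub = rospy.Publisher("/mavros/setpoint_raw/local", PositionTarget, queue_size=1)

    def send_setpoint(x, y, z, yaw):
        sp = PositionTarget()
        sp.header.stamp = rospy.Time.now()
        sp.coordinate_frame = PositionTarget.FRAME_LOCAL_NED
        # use only position + yaw, ignore the velocity/acceleration fields
        sp.type_mask = (PositionTarget.IGNORE_VX | PositionTarget.IGNORE_VY |
                        PositionTarget.IGNORE_VZ | PositionTarget.IGNORE_AFX |
                        PositionTarget.IGNORE_AFY | PositionTarget.IGNORE_AFZ |
                        PositionTarget.IGNORE_YAW_RATE)
        sp.position.x, sp.position.y, sp.position.z = x, y, z
        sp.yaw = yaw
        pub.publish(sp)

Setpoints need to be streamed continuously (a few Hz at least), so in practice you would call send_setpoint from the callback that receives each ego-planner command.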

@chobitsfan ,
kudos for the awesome work!!
I had some queries regarding system time synchronization and synchronization between IMU and camera frames:

  1. I believe you have set BRD_RTC_TYPES to 1, which is MAVLink SYSTEM_TIME.
    Therefore, the onboard computer sends SYSTEM_TIME msgs (via MAVROS) and uses the MAVLink TIMESYNC message to synchronize the system time between the FCU and the RPi, right?
    How do you handle the response once MAVROS receives a TIMESYNC MAVLink msg from the FCU?
    I understand the TIMESYNC helps you find the offset between the FCU and the onboard computer, which you add/subtract to the timestamp of the HIGHRES_IMU MAVLink msg from the FCU. Am I right in understanding this?
    Could you please discuss in detail the role of the SYSTEM_TIME and TIMESYNC messages in keeping the onboard computer and the FCU time-synced?
    If BRD_RTC_TYPES is set to 1, is the FCU time set to the time_unix_usec field value from the SYSTEM_TIME msg?
    How is the TIMESYNC msg used in this process? Is it used to calculate the offset between the onboard computer time and the FCU system time (the one set by the MAVLink SYSTEM_TIME msg) to account for communication delays?
  2. I also wanted to ask if you use a camera trigger with the servo/relay functionality from the FCU to trigger the camera at a certain frequency (DIY Camera Trigger using Relay — Copter documentation). If not, do you receive time-stamped image frames and then use software synchronization to sync images and HIGHRES_IMU MAVLink packets for VIO? How do you get accurate timestamps for camera images?

Hi

  1. SYSTEM_TIME is only used to set the FC log time, so that it is easier to know when a flight happened. TIMESYNC is used to measure the time offset between the FC and the Pi. The timestamps used in VINS-Mono are based on Pi time.
  2. libcamera provides time-stamped image frames, see use sensor timestamp · chobitsfan/libcamera-apps@a142eb2 · GitHub
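For reference, a rough sketch of the TIMESYNC round trip with pymavlink (not the code used in this project; the serial port, baud rate, and the assumption that the FC answers requests with nanosecond timestamps are mine):

    import time
    from pymavlink import mavutil

    master = mavutil.mavlink_connection("/dev/ttyAMA0", baud=921600)  # port is a placeholder
    master.wait_heartbeat()

    ts1 = time.monotonic_ns()          # Pi time when the request goes out
    master.mav.timesync_send(0, ts1)   # tc1 = 0 marks this as a request
    reply = master.recv_match(type="TIMESYNC", blocking=True, timeout=1)
    now = time.monotonic_ns()

    if reply is not None and reply.ts1 == ts1:
        # The FC fills tc1 with its own clock roughly halfway through the
        # round trip, so compare it against the midpoint of ts1 and now.
        fc_minus_pi_ns = reply.tc1 - (ts1 + now) // 2
        print("FC clock - Pi clock: %.3f ms" % (fc_minus_pi_ns / 1e6))

Repeating this periodically and filtering the result gives a usable FC-to-Pi offset despite link jitter.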

@chobitsfan @radiant_bee

I found there are two values related to the image timestamp.

If the driver timestamped the frame, why does the frame time have anything to do with boot time?

    // offset between the wall clock (epoch) and the monotonic boot clock
    struct timespec boot_ts, epoch_ts;
    clock_gettime(CLOCK_BOOTTIME, &boot_ts);
    clock_gettime(CLOCK_REALTIME, &epoch_ts);
    uint64_t boot_epoch_offset = epoch_ts.tv_sec * 1000000000 + epoch_ts.tv_nsec - (boot_ts.tv_sec * 1000000000 + boot_ts.tv_nsec);

    CompletedRequestPtr &completed_request = std::get<CompletedRequestPtr>(msg.payload);

    // SensorTimestamp is the driver's capture time on CLOCK_BOOTTIME (nanoseconds);
    // adding the offset converts it to time-since-epoch for the ROS header stamp
    auto sensor_ts = completed_request->metadata.get(controls::SensorTimestamp);
    uint64_t sensor_ts_epoch = *sensor_ts + boot_epoch_offset;
    ros::Time sensor_ros_ts(sensor_ts_epoch / 1000000000, sensor_ts_epoch % 1000000000);

I think the driver timestamps a frame in “time since boot”, but the ROS timestamp is “time since epoch”.

I don’t get how to align the IMU data time (which is aligned with the system clock, i.e. time since epoch) with the image time, which is “time since boot” from the sensor driver metadata. And if the camera doesn’t have an RTC, then it should be aligned with the system clock somewhere in the code, so why do we need to add the offset since boot?

Hi @chobitsfan, Thank you for your wonderful work and contributions to the community! I’m currently working on building a small, low-cost indoor drone and would like to use commonly available hardware components for this project. My goal is to keep the budget low while learning and exploring drone technology, with plans to upgrade as I gain more experience.

It would be incredibly helpful if you could share a complete list of hardware that would allow me to achieve this, including any recommendations for upgrades as I progress. I’m particularly interested in hardware for implementing Visual SLAM or other autonomous navigation methods without GPS.

Looking forward to your guidance!

Hi. I am sorry for the late reply. About time alignment: I try to align both the IMU and the camera to ROS time (which is time from epoch).
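In other words, both streams end up on the Pi’s epoch clock. Roughly (a sketch with made-up helper names, assuming the FC-to-Pi offset comes from the TIMESYNC measurement discussed above):

    import time

    # camera: the driver stamps frames on CLOCK_BOOTTIME, so add the boot->epoch offset
    boot_epoch_offset_ns = (time.clock_gettime_ns(time.CLOCK_REALTIME)
                            - time.clock_gettime_ns(time.CLOCK_BOOTTIME))

    def camera_stamp_to_epoch(sensor_ts_boot_ns):
        return (sensor_ts_boot_ns + boot_epoch_offset_ns) / 1e9

    # IMU: the FC timestamp gets the (Pi clock - FC clock) offset from TIMESYNC
    def imu_stamp_to_epoch(fc_time_usec, pi_minus_fc_offset_ns):
        return (fc_time_usec * 1000 + pi_minus_fc_offset_ns) / 1e9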

Hi. The hardware for VIO is already listed in the post. For the small indoor drone, I use a Kakute H7 stack.


@chobitsfan

It seems the position is drifting (The position is drifting. Any ideas on calibration or what might be wrong? · Issue #263 · HKUST-Aerial-Robotics/VINS-Fusion · GitHub). Could it be an image/IMU time sync issue?

I want to send visual odometry data using MAVSDK-Python. Can someone tell me where I am going wrong? The following is simple dummy data I am trying to send to the drone.

import time
import asyncio
from mavsdk import System
from mavsdk.telemetry import Odometry, PositionBody, VelocityBody, Quaternion, AngularVelocityBody, Covariance

async def main():
    # Connect to the drone
    drone = System()
    await drone.connect(system_address="tcp://127.0.0.1:8000")
    async for state in drone.core.connection_state():
        if state.is_connected:
            print("-- Connected to drone!")
            break

    # Create required components
    current_time_usec = int(time.time() * 1e6)
    position_body = PositionBody(0.0, 0.0, 0.0)  # Position in meters
    velocity_body = VelocityBody(1.0, 0.0, 0.0)  # Velocity in m/s
    q = Quaternion(1.0, 0.0, 0.0, 0.0, current_time_usec)  # Orientation as quaternion (w, x, y, z, timestamp in microseconds)
    angular_velocity_body = AngularVelocityBody(0.0, 0.0, 0.0)  # Angular velocity in rad/s
    # The ODOMETRY covariance is the 21-element upper triangle of a 6x6 matrix;
    # NaN in the first element marks it as unknown
    pose_covariance = Covariance([float("nan")] * 21)
    velocity_covariance = Covariance([float("nan")] * 21)

    # Create the Odometry object
    odometry_data = Odometry(
        time_usec=current_time_usec,  # Timestamp in microseconds
        frame_id=Odometry.MavFrame.BODY_NED,  # Use a valid MavFrame enum value for frame_id
        child_frame_id=Odometry.MavFrame.BODY_NED,  # Same for child_frame_id
        position_body=position_body,
        q=q,
        velocity_body=velocity_body,
        angular_velocity_body=angular_velocity_body,
        pose_covariance=pose_covariance,
        velocity_covariance=velocity_covariance,
    )

    # Publish odometry
    await drone.telemetry.set_rate_odometry(1)
    while True:
        try:
            await drone.telemetry_server.publish_odometry(odometry_data)
        except Exception as e:
            print(f"Failed to publish odometry: {e}")
        await asyncio.sleep(1)  # asyncio.sleep must be awaited

# Run the main loop
if __name__ == "__main__":
    asyncio.run(main())

If there is a solution using pymavlink, that will work too. It would be great if there were an example showing how it works.
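For the pymavlink route, here is a minimal sketch of one common approach: sending VISION_POSITION_ESTIMATE to the FC at a steady rate. The connection string and the fixed pose values are placeholders, and frame/rotation conventions are not handled here.

    import time
    from pymavlink import mavutil

    master = mavutil.mavlink_connection("udpin:0.0.0.0:14550")  # placeholder link
    master.wait_heartbeat()

    while True:
        now_usec = int(time.time() * 1e6)
        # x, y, z in meters (NED), roll/pitch/yaw in radians - dummy values here;
        # in a real system these come from the VIO estimator
        master.mav.vision_position_estimate_send(
            now_usec,           # usec: timestamp
            0.0, 0.0, 0.0,      # x, y, z
            0.0, 0.0, 0.0)      # roll, pitch, yaw
        time.sleep(0.05)        # ~20 Hz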

Hey @chobitsfan, really cool project! I’m working on something similar, also with a Raspberry Pi 4.

Quick question: Your camera outputs in either 8-bit or 10-bit raw, right? And you were using 640x400 pixels, at 20 fps, right? Did you compress each frame before processing them for feature-tracking, or did you use the raw footage?

I imagine that 1280x720 pixels would be too much for a RPi4, if using uncompressed footage?

Did you compress each frame before processing them for feature-tracking, or did you use the raw footage?

I use raw footage; you need uncompressed, raw images for feature tracking.


Hi @chobitsfan, I have a small question.

I’m trying to build a VIO system using VINS-Fusion and OpenVINS with a camera (OV5640) and an IMU (MPU9250).

However, during calibration with Kalibr, the results were very strange:

  • The estimated time offset between the camera and the IMU was over 0.17 seconds (not milliseconds).
  • The extrinsic parameters were completely different from the ones physically measured on the hardware setup.

As a result, the VINS system quickly diverges and becomes unstable.

My setup is:

  • Board: Rockchip-based Linux custom board
  • Camera connection: MIPI CSI-2
  • IMU connection: I2C
  • Camera and IMU topics are published separately, without hardware synchronization.

I heard that time synchronization is extremely critical for VIO systems.

Have you ever used hardware synchronization (external trigger) between the camera and the IMU?
If so, could you share how you achieved it?

Also, do you think switching to a global shutter camera like the OV9281 would significantly improve VIO performance, even if perfect time synchronization is not achieved?

Additionally, do you think the quality of the IMU I’m using (MPU9250) is sufficient for VIO applications, or should I consider upgrading?

I’ve been struggling with this for almost four months, and it’s been very frustrating.

Any advice or experience you can share would be incredibly appreciated. Thank you so much!

@chobitsfan I am currently testing an IMU that samples at around 200 Hz (5 ms interval), and I have attached a plot showing the sample rate stability.

There is a small amount of jitter around the average sample rate.

I plan to use this IMU for VIO applications such as OpenVINS or VINS-Fusion.

Would this level of timing jitter be acceptable for VIO?
Or would it negatively affect performance like causing divergence or instability?

Thank you for any feedback!

Hi,
Could you help me in resolving these 2 problems?

  1. How are you sending 20 Hz pose to the ArduPilot FC?
    You mentioned before that the VIO frequency is 10 Hz (as defined in the config chobits_config.yaml), but how are you sending pose to the ArduPilot FC at 20 Hz?
    Are you using the estimate from the /vins_estimator/imu_propagate topic in your modified VINS-Mono?
    If /vins_estimator/imu_propagate is used, then the output should be around 100 Hz (I believe you are filtering it down to 20 Hz).
    (On my indoor drone setup, [a] 10 Hz odometry from SLAM is not enough for stable flight, with a lot of circular motion in Loiter; [b] sending 20 Hz pose from the T265 works better, both in Loiter and waypoint following.)

  2. How/from where is the 3D voxel map generated?
    Could you provide the configuration for the ego-planner launch? I could not find it here https://github.com/chobitsfan/ego-planner
    Is the sparse point cloud from VINS-Mono sent to ego-planner?
    I’m planning to use a stereo+IMU setup, so is the VINS sparse point cloud topic enough for ego-planner, or is depth data required to generate the voxel map (later for 3D path planning)?
    How can we get the voxel map from VINS, as shown in video 4 on the repo (YouTube)?

  1. I think vins publishes pose at 10 Hz in this example. I limit it to 10 Hz because the Pi 4 is not fast enough. I think you can increase it to 20 Hz (the same as the camera fps) if a Pi 5 is used. See the throttling sketch below.
  2. Yes, I use feature points from vins-mono. I think you may just use stereo data to achieve a better result.
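On the first point, if you want to forward the higher-rate /vins_estimator/imu_propagate stream at a fixed 20 Hz, one option (my own sketch, assuming the standard /mavros/vision_pose/pose topic rather than whatever bridge this project actually uses, and ignoring frame conversions) is a small throttling relay node:

    import rospy
    from nav_msgs.msg import Odometry
    from geometry_msgs.msg import PoseStamped

    rospy.init_node("vio_throttle")
    pub = rospy.Publisher("/mavros/vision_pose/pose", PoseStamped, queue_size=1)
    last_sent = rospy.Time(0)
    period = rospy.Duration(1.0 / 20.0)   # target 20 Hz

    def cb(odom):
        global last_sent
        if odom.header.stamp - last_sent < period:
            return                        # drop messages above the target rate
        last_sent = odom.header.stamp
        msg = PoseStamped()
        msg.header = odom.header
        msg.pose = odom.pose.pose
        pub.publish(msg)

    rospy.Subscriber("/vins_estimator/imu_propagate", Odometry, cb)
    rospy.spin()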

Did you use a stereo camera (two OV5640s)?

No, I use just a monocular camera.