A small VIO system using Raspberry Pi and Arducam OV9281

I think it’s sent through an IPC tunnel, but I still haven’t got it right.

Could you help here?

Thanks, I think I figured out another solution: you can publish the ego-planner output to MAVROS using /setpoint_raw/local.
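For anyone looking for a concrete starting point, here is a minimal rospy sketch (not from this project; the full /mavros/setpoint_raw/local topic name and a position-plus-yaw setpoint are assumptions) that republishes a sampled ego-planner waypoint as a mavros_msgs/PositionTarget:

    # minimal sketch, not from this project: republish a sampled ego-planner
    # waypoint to the FC through MAVROS. Topic name assumes the default
    # "mavros" namespace; only position + yaw are used, everything else ignored.
    import rospy
    from mavros_msgs.msg import PositionTarget

    rospy.init_node('planner_to_setpoint')
    pub = rospy.Publisher('/mavros/setpoint_raw/local', PositionTarget, queue_size=1)

    def publish_setpoint(px, py, pz, yaw):
        sp = PositionTarget()
        sp.header.stamp = rospy.Time.now()
        sp.coordinate_frame = PositionTarget.FRAME_LOCAL_NED
        # ignore velocity, acceleration/force and yaw-rate fields
        sp.type_mask = (PositionTarget.IGNORE_VX | PositionTarget.IGNORE_VY |
                        PositionTarget.IGNORE_VZ | PositionTarget.IGNORE_AFX |
                        PositionTarget.IGNORE_AFY | PositionTarget.IGNORE_AFZ |
                        PositionTarget.IGNORE_YAW_RATE)
        sp.position.x, sp.position.y, sp.position.z = px, py, pz
        sp.yaw = yaw
        pub.publish(sp)

The type_mask tells the FC which fields of the setpoint to ignore, so only the filled-in position and yaw are acted on.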

@chobitsfan ,
kudos for the awesome work!!
I had some queries regarding system time synchronization and synchronization between IMU and camera frames:

  1. I believe you have set BRD_RTC_TYPES to 1, which is MAVLink SYSTEM_TIME.
    Therefore, the onboard computer sends SYSTEM_TIME msgs (via MAVROS) and uses the MAVLink TIMESYNC message to synchronize the system time between the FCU and the RPi, right?
    How do you handle the response once MAVROS receives a TIMESYNC MAVLink msg from the FCU?
    I understand TIMESYNC helps you find the offset between the FCU and the onboard computer, which you add to or subtract from the timestamp of the HIGHRES_IMU MAVLink msg from the FCU. Am I right in understanding this?
    Could you please discuss in detail the role of the SYSTEM_TIME and TIMESYNC messages in keeping the onboard computer and the FCU time-synchronized?
    If BRD_RTC_TYPES is set to 1, is the FCU time set to the time_unix_usec field value from the SYSTEM_TIME msg?
    How is the TIMESYNC msg used in this process? Is it to calculate the offset between the onboard computer time and the FCU system time (the one set by the MAVLink SYSTEM_TIME msg) to account for communication delays?
  2. I also wanted to ask whether you use a camera trigger with servo/relay functionality on the FCU to trigger the camera to capture images at a certain frequency (DIY Camera Trigger using Relay — Copter documentation). If not, do you receive time-stamped image frames and then use software synchronization to sync the images and HIGHRES_IMU MAVLink packets for VIO? How do you get accurate timestamps for the camera images?

Hi

  1. SYSTEM_TIME is only used to set the FC log time, so that it is easier to know when a flight happened. TIMESYNC is used to measure the time offset between the FC and the Pi (see the sketch below). Timestamps used in VINS-Mono are based on Pi time.
  2. libcamera provides time-stamped image frames, see use sensor timestamp · chobitsfan/libcamera-apps@a142eb2 · GitHub
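For reference, a rough pymavlink sketch of that offset measurement (the serial device and baud rate are assumptions; MAVROS does the equivalent internally when it exchanges TIMESYNC with the FC):

    # rough sketch: estimate the FC-to-Pi clock offset with one TIMESYNC round trip
    import time
    from pymavlink import mavutil

    master = mavutil.mavlink_connection('/dev/serial0', baud=921600)  # assumed link
    master.wait_heartbeat()

    ts1 = time.monotonic_ns()          # Pi time when the request is sent (ns)
    master.mav.timesync_send(0, ts1)   # tc1 = 0 marks this as a request

    reply = master.recv_match(type='TIMESYNC', blocking=True, timeout=1)
    if reply is not None and reply.tc1 != 0 and reply.ts1 == ts1:
        now = time.monotonic_ns()      # Pi time when the reply arrived (ns)
        # FC clock minus Pi clock, assuming the link delay is symmetric
        offset_ns = reply.tc1 - (ts1 + now) // 2
        print("FC-to-Pi clock offset: %.3f ms" % (offset_ns / 1e6))

Averaging over many round trips (as MAVROS does) smooths out jitter on the link.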

@chobitsfan @radiant_bee

I found there are two values related to the image timestamp.

If the driver timestamped the frame, why does the frame time have anything to do with boot time?

    // offset between the wall clock (time since epoch) and CLOCK_BOOTTIME (time since boot)
    struct timespec boot_ts, epoch_ts;
    clock_gettime(CLOCK_BOOTTIME, &boot_ts);
    clock_gettime(CLOCK_REALTIME, &epoch_ts);
    uint64_t boot_epoch_offset = epoch_ts.tv_sec * 1000000000 + epoch_ts.tv_nsec - (boot_ts.tv_sec * 1000000000 + boot_ts.tv_nsec);

    CompletedRequestPtr &completed_request = std::get<CompletedRequestPtr>(msg.payload);

    // SensorTimestamp is nanoseconds since boot; shift it onto the epoch clock for ROS
    auto sensor_ts = completed_request->metadata.get(controls::SensorTimestamp);
    uint64_t sensor_ts_epoch = *sensor_ts + boot_epoch_offset;
    ros::Time sensor_ros_ts(sensor_ts_epoch / 1000000000, sensor_ts_epoch % 1000000000);

I think the driver timestamps a frame in “time since boot”, but the ROS timestamp is “time since epoch”.

I don’t get how to align the IMU data time (which is aligned with the system clock, i.e. time since epoch) with the image time, which is “time since boot” from the sensor driver metadata. And if the camera doesn’t have an RTC, then it should already be aligned with the system clock somewhere in the code, so why do we need to add the since-boot offset?

Hi @chobitsfan, Thank you for your wonderful work and contributions to the community! I’m currently working on building a small, low-cost indoor drone and would like to use commonly available hardware components for this project. My goal is to keep the budget low while learning and exploring drone technology, with plans to upgrade as I gain more experience.

It would be incredibly helpful if you could share a complete list of hardware that would allow me to achieve this, including any recommendations for upgrades as I progress. I’m particularly interested in hardware for implementing Visual SLAM or other autonomous navigation methods without GPS.

Looking forward to your guidance!

Hi. I am sorry for the late reply. Regarding time alignment, I try to align both the IMU and the camera to ROS time (which is time since epoch).
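If it helps, here is the same boot-to-epoch shift from the C++ snippet above, written as a small Python sketch (the function names are illustrative, not from the project):

    # map a camera driver timestamp (ns since boot, e.g. libcamera SensorTimestamp)
    # onto the system clock (ns since epoch) that the IMU/ROS timestamps use
    import time
    import rospy

    def boot_epoch_offset_ns():
        return (time.clock_gettime_ns(time.CLOCK_REALTIME)
                - time.clock_gettime_ns(time.CLOCK_BOOTTIME))

    def sensor_ts_to_ros_time(sensor_ts_ns):
        epoch_ns = sensor_ts_ns + boot_epoch_offset_ns()
        return rospy.Time(epoch_ns // 1000000000, epoch_ns % 1000000000)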

Hi. The hardware for VIO is already listed in the post. For the small indoor drone, I use a Kakute H7 stack.


@chobitsfan

It seems the position is drifting (The position is drifting. Any ideas on calibration or what might be wrong? · Issue #263 · HKUST-Aerial-Robotics/VINS-Fusion · GitHub). Could this be an image/IMU time sync issue?

I want to send visual odometry data using MAVSDK-Python. Can someone tell me where I am going wrong? The following is simple dummy data I am trying to send to the drone.

import time
import asyncio
from mavsdk import System
from mavsdk.telemetry import Odometry, PositionBody, VelocityBody, Quaternion, AngularVelocityBody, Covariance

async def main():
    # Connect to the drone
    drone = System()
    await drone.connect(system_address="tcp://127.0.0.1:8000")
    async for state in drone.core.connection_state():
        if state.is_connected:
            print("-- Connected to drone!")
            break

    # Create required components
    current_time_usec = int(time.time() * 1e6)
    position_body = PositionBody(0.0, 0.0, 0.0)  # Position in meters
    velocity_body = VelocityBody(1.0, 0.0, 0.0)  # Velocity in m/s
    q = Quaternion(1.0, 0.0, 0.0, 0.0, current_time_usec)  # Orientation as quaternion (w, x, y, z, timestamp in microseconds)
    angular_velocity_body = AngularVelocityBody(0.0, 0.0, 0.0)  # Angular velocity in rad/s
    pose_covariance = Covariance([0.0] * 9)  # Placeholder covariance
    velocity_covariance = Covariance([0.0] * 9)  # Placeholder covariance

    # Create the Odometry object
    odometry_data = Odometry(
        time_usec=current_time_usec,  # Timestamp in microseconds
        frame_id=Odometry.MavFrame.BODY_NED,  # Use a valid MavFrame enum value for frame_id
        child_frame_id=Odometry.MavFrame.BODY_NED,  # Same for child_frame_id
        position_body=position_body,
        q=q,
        velocity_body=velocity_body,
        angular_velocity_body=angular_velocity_body,
        pose_covariance=pose_covariance,
        velocity_covariance=velocity_covariance,
    )

    # Publish odometry
    await drone.telemetry.set_rate_odometry(1)
    while True:
        try:
            await drone.telemetry_server.publish_odometry(odometry_data)
            await asyncio.sleep(1)  # the original was missing await, so the coroutine never ran
        except Exception as e:
            print(f"Failed to publish odometry: {e}")

# Run the main loop
if __name__ == "__main__":
    asyncio.run(main())

If there is a solution using pymavlink, that will work too. It would be great if there were an example showing how it works.
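Not a drop-in answer to the MAVSDK question above, but a rough pymavlink sketch that streams an external position estimate to ArduPilot with VISION_POSITION_ESTIMATE (the connection string, the 30 Hz rate and the dummy pose are assumptions):

    # rough pymavlink sketch: stream a (dummy) external pose as VISION_POSITION_ESTIMATE
    import time
    from pymavlink import mavutil

    master = mavutil.mavlink_connection('udpin:0.0.0.0:14550')  # adjust to your link
    master.wait_heartbeat()

    while True:
        now_us = int(time.time() * 1e6)
        master.mav.vision_position_estimate_send(
            now_us,         # usec: timestamp of the estimate
            0.0, 0.0, 0.0,  # x, y, z position in metres, NED frame
            0.0, 0.0, 0.0)  # roll, pitch, yaw in radians
        time.sleep(1.0 / 30.0)

On the ArduPilot side the EKF also has to be configured to accept external vision data (VISO_TYPE and the EK3_SRC parameters) before these messages have any effect.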

Hey @chobitsfan, really cool project! I’m working on something similar, also with a Raspberry Pi 4.

Quick question: your camera outputs either 8-bit or 10-bit raw, right? And you were using 640x400 pixels at 20 fps, right? Did you compress each frame before processing them for feature-tracking, or did you use the raw footage?

I imagine that 1280x720 pixels would be too much for an RPi 4 if using uncompressed footage?

Did you compress each frame before processing them for feature-tracking, or did you use the raw footage?

I use raw footage. You need uncompressed, raw images for feature tracking.
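For what it’s worth, if you want to experiment in Python rather than the libcamera-apps C++ pipeline used here, a picamera2 sketch along these lines grabs uncompressed 640x400 frames together with the sensor timestamp (the resolution, format and frame rate are just the values discussed above, not a recommendation):

    # grab uncompressed frames plus the driver timestamp with picamera2 (Python libcamera)
    from picamera2 import Picamera2

    picam2 = Picamera2()
    config = picam2.create_video_configuration(
        main={"size": (640, 400), "format": "YUV420"},  # uncompressed; Y plane usable as mono
        controls={"FrameRate": 20})
    picam2.configure(config)
    picam2.start()

    while True:
        request = picam2.capture_request()
        yuv = request.make_array("main")        # YUV420 buffer, shape (600, 640)
        gray = yuv[:400, :]                     # Y plane = 640x400 mono image
        sensor_ts_ns = request.get_metadata()["SensorTimestamp"]  # ns since boot
        request.release()
        # feed gray + sensor_ts_ns into the feature tracker here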
