In this tutorial, we provide principled methods to quantitatively evaluate the quality of an estimated trajectory from visual(-inertial) odometry (VO/VIO), which is the foundation of benchmarking the accuracy of different algorithms. First, we show how to determine the transformation type to use in trajectory alignment based on the specific sensing modality (i.e., monocular, stereo and visual-inertial). Second, we describe commonly used error metrics (i.e., the absolute trajectory error and the relative error) and their strengths and weaknesses. To make the methodology presented for VO/VIO applicable to other setups, we also generalize our formulation to any given sensing modality. To facilitate the reproducibility of related research, we publicly release our implementation of the methods described in this tutorial.
2021-07-19 11:01:21 506KB slam
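The trajectory alignment and absolute trajectory error (ATE) that the abstract describes are commonly computed with Umeyama's closed-form least-squares alignment; the sketch below is an illustrative Python implementation, not the tutorial's released code. Sim(3) alignment (with scale) matches the monocular case, where scale is unobservable; `with_scale=False` gives the SE(3) alignment appropriate for stereo (the 4-DoF yaw-plus-translation alignment for visual-inertial setups is not shown here).

```python
import numpy as np

def align_umeyama(est, gt, with_scale=True):
    """Closed-form least-squares alignment of est onto gt (both N x 3 arrays).

    Returns (s, R, t) such that  s * R @ est_i + t  best fits  gt_i.
    with_scale=True  -> Sim(3) alignment (monocular: scale unobservable)
    with_scale=False -> SE(3) alignment  (stereo: metric scale observable)
    """
    mu_e, mu_g = est.mean(0), gt.mean(0)
    X, Y = est - mu_e, gt - mu_g                  # centered point sets
    cov = Y.T @ X / est.shape[0]                  # cross-covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:  # guard against reflections
        S[2, 2] = -1.0
    R = U @ S @ Vt
    if with_scale:
        s = np.trace(np.diag(D) @ S) / np.mean(np.sum(X ** 2, axis=1))
    else:
        s = 1.0
    t = mu_g - s * R @ mu_e
    return s, R, t

def ate_rmse(est, gt, with_scale=True):
    """Root-mean-square absolute trajectory error after alignment."""
    s, R, t = align_umeyama(est, gt, with_scale)
    aligned = s * est @ R.T + t
    return np.sqrt(np.mean(np.sum((gt - aligned) ** 2, axis=1)))
```

If the estimated trajectory differs from ground truth only by a similarity transform, the aligned ATE is (numerically) zero; any residual error then reflects actual estimation drift.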
https://s3.eu-central-1.amazonaws.com/avg-kitti/data_odometry_gray.zip A dataset of more than 20 GB; it took a whole day to download and was extremely hard to get.
2021-07-11 17:44:26 459B dataset
In this paper, we focus on the problem of motion tracking in unknown environments using visual and inertial sensors. We term this estimation task visual-inertial odometry (VIO), in analogy to the well-known visual-odometry problem. We present a detailed study of EKF-based VIO algorithms, by comparing both their theoretical properties and empirical performance. We show that an EKF formulation where the state vector comprises a sliding window of poses (the MSCKF algorithm) attains better accuracy, consistency, and computational efficiency than the SLAM formulation of the EKF, in which the state vector contains the current pose and the features seen by the camera. Moreover, we prove that both types of EKF approaches are inconsistent, due to the way in which Jacobians are computed. Specifically, we show that the observability properties of the EKF’s linearized system models do not match those of the underlying system, which causes the filters to underestimate the uncertainty in the state estimates. Based on our analysis, we propose a novel, real-time EKF-based VIO algorithm, which achieves consistent estimation by (i) ensuring the correct observability properties of its linearized system model, and (ii) performing online estimation of the camera-to-IMU calibration parameters. This algorithm, which we term MSCKF 2.0, is shown to achieve accuracy and consistency higher than even an iterative, sliding-window fixed-lag smoother, in both Monte-Carlo simulations and real-world testing.
2021-05-28 16:20:21 735KB VIO limingyang
TrackViz — a simple UE4 plugin that visualizes AirSim track recordings. Simply clone it into the Plugins\TrackViz directory of your UE4 project, then load the level in which you want to visualize the tracks. Main features: visualization of trajectories recorded by AirSim; camera views for selected trajectories; display of direction vectors; connection of corresponding points across tracks. Getting started: use TrackVizGameMode as the default game mode when using the plugin. By default, track recordings are searched for in FPaths::ProjectDir() + TEXT("tracks"). For a UE4 project, FPaths::ProjectDir() is the root directory; for a packaged project, FPaths::Proje
This paper presents LeGO-LOAM, a lightweight 6-DoF LiDAR SLAM method. LeGO-LOAM consists of four modules: point-cloud segmentation, feature extraction, odometry, and mapping. Its feature extraction draws on the LOAM algorithm; compared with other LiDAR SLAM methods, LeGO-LOAM's highlights are its support for loop-closure detection and, according to the authors, real-time pose estimation on embedded systems.
2021-03-20 17:31:33 12.26MB loam LeGO-LOAM
Converts the KITTI-format groundtruth of the KITTI odometry dataset into TUM-format groundtruth. Since KITTI-format groundtruth carries no timestamps, the TUM format is more convenient for trajectory evaluation.
2021-03-07 15:54:50 1.38MB kitti odometry groundtruth tum
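The conversion described above maps each KITTI pose line (12 numbers, the row-major 3x4 matrix [R|t]) to a TUM line (`timestamp tx ty tz qx qy qz qw`); timestamps must be supplied separately, e.g. from the KITTI sequence's times.txt. A minimal sketch, with illustrative function names (not the packaged script):

```python
import numpy as np

def rot_to_quat(R):
    """Rotation matrix -> quaternion (qx, qy, qz, qw), branching on the trace."""
    tr = np.trace(R)
    if tr > 0:
        s = np.sqrt(tr + 1.0) * 2
        qw = 0.25 * s
        qx = (R[2, 1] - R[1, 2]) / s
        qy = (R[0, 2] - R[2, 0]) / s
        qz = (R[1, 0] - R[0, 1]) / s
    elif R[0, 0] > R[1, 1] and R[0, 0] > R[2, 2]:
        s = np.sqrt(1.0 + R[0, 0] - R[1, 1] - R[2, 2]) * 2
        qw = (R[2, 1] - R[1, 2]) / s
        qx = 0.25 * s
        qy = (R[0, 1] + R[1, 0]) / s
        qz = (R[0, 2] + R[2, 0]) / s
    elif R[1, 1] > R[2, 2]:
        s = np.sqrt(1.0 + R[1, 1] - R[0, 0] - R[2, 2]) * 2
        qw = (R[0, 2] - R[2, 0]) / s
        qx = (R[0, 1] + R[1, 0]) / s
        qy = 0.25 * s
        qz = (R[1, 2] + R[2, 1]) / s
    else:
        s = np.sqrt(1.0 + R[2, 2] - R[0, 0] - R[1, 1]) * 2
        qw = (R[1, 0] - R[0, 1]) / s
        qx = (R[0, 2] + R[2, 0]) / s
        qy = (R[1, 2] + R[2, 1]) / s
        qz = 0.25 * s
    return qx, qy, qz, qw

def kitti_line_to_tum(line, timestamp):
    """One KITTI pose line (12 floats, row-major [R|t]) -> one TUM-format line."""
    T = np.array(line.split(), dtype=float).reshape(3, 4)
    qx, qy, qz, qw = rot_to_quat(T[:, :3])
    tx, ty, tz = T[:, 3]
    return (f"{timestamp:.6f} {tx:.6f} {ty:.6f} {tz:.6f} "
            f"{qx:.6f} {qy:.6f} {qz:.6f} {qw:.6f}")
```

For example, the identity pose line `"1 0 0 0 0 1 0 0 0 0 1 0"` maps to the identity quaternion `0 0 0 1` with zero translation.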
Robust Stereo Visual Inertial Odometry for Fast.pdf
2021-02-19 20:09:08 4.09MB slam
A complete implementation of the visual odometry part of a SLAM system, using the TUM RGB-D dataset. It mainly contains a function that computes camera poses and writes them to a file, and a function that compares the computed camera poses against the ground truth.
2021-01-28 05:04:40 24.25MB SLAM
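Comparing estimated poses against TUM RGB-D ground truth first requires associating the two trajectories by timestamp, since the camera and motion-capture clocks do not tick at the same instants. A minimal nearest-timestamp matcher (illustrative, not the repository's code; the 0.02 s tolerance is a common default):

```python
def associate(est_stamps, gt_stamps, max_dt=0.02):
    """Match each estimated timestamp to its nearest groundtruth timestamp.

    Returns a list of (est_stamp, gt_stamp) pairs whose difference is
    at most max_dt seconds; unmatched estimates are dropped.
    """
    matches = []
    gt_sorted = sorted(gt_stamps)
    j = 0  # pointer into gt_sorted; both lists are swept once
    for t in sorted(est_stamps):
        # advance while the next groundtruth stamp is at least as close
        while j + 1 < len(gt_sorted) and \
                abs(gt_sorted[j + 1] - t) <= abs(gt_sorted[j] - t):
            j += 1
        if abs(gt_sorted[j] - t) <= max_dt:
            matches.append((t, gt_sorted[j]))
    return matches
```

Only the matched pairs are then fed into the alignment and error computation; estimates with no groundtruth stamp within the tolerance are excluded from the evaluation.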
Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate Visual-Inertial Odometry or Simultaneous Localization and Mapping (SLAM). While historically the problem has been addressed with filtering, advancements in visual estimation suggest that non-linear optimization offers superior accuracy, while still tractable in complexity thanks to the sparsity of the underlying problem. Taking inspirat
2021-01-28 05:04:39 5.22MB VSLAM