SLAM | Visual-lidar Odometry and Mapping

2019-07-26  Janeshurmin

1. Why sensor fusion?

Every individual type of sensor has inherent limitations:

==> Sensor fusion: exploit the complementary strengths of both sensors and fuse them.

2. DEMO & LOAM (Ji Zhang)

2.1 DEMO (Visual odometry)

For purely visual odometry, one solution is to augment RGB images with depth (RGB-D).

Depth can be obtained by infrared ranging, but sensor range limits mean most measurements fall within 10 m. Beyond that, especially outdoors, large image regions have no depth at all, i.e., the depth data is sparse or insufficient.

DEMO takes depth from an RGB-D camera or a 3-D lidar. The paper proposes two ways to attach depth to the image: from a depth map, or by triangulation against previously estimated motion. Bundle Adjustment (nonlinear optimization) then refines the frame-to-frame motion estimates. Experiments show good motion estimation indoors and moderate performance outdoors.
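The triangulation route can be sketched as follows. This is an illustrative two-view DLT triangulator, not code from the paper: given a feature's normalized coordinates in two frames and the previously estimated relative motion `R`, `t`, it recovers the feature's 3-D position (and hence its depth).

```python
import numpy as np

def triangulate(x1, x2, R, t):
    """Linear (DLT) triangulation of one feature seen in two frames.

    x1, x2 -- normalized homogeneous image coordinates [u, v, 1]
    R, t   -- rotation/translation taking frame-1 coordinates to frame-2
    Returns the 3-D point expressed in frame 1.
    """
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])   # frame-1 projection
    P2 = np.hstack([R, t.reshape(3, 1)])            # frame-2 projection
    # Each view contributes two linear equations A @ X_homogeneous = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                                      # null-space solution
    return X[:3] / X[3]
```

With noise-free inputs the recovered point is exact; in practice the residual of these equations is what Bundle Adjustment minimizes.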

Remaining issue: features are hard to track in regions of uniform color, so feature detection and tracking need improvement.



DEMO experimental results

2.2 LOAM (Lidar odometry)

With a 3-D lidar alone, resolution is poor compared with a camera: at long range the point cloud becomes very sparse, so the system cannot work reliably far away. Lidar motion distortion, on the other hand, can be corrected during the scan with the help of other sensors; solutions include:

LOAM relies on the laser point cloud for motion estimation and mapping. Because the laser's scan rate, and hence its data rate, is low, the motion must be gentle enough that each sweep has time to accumulate a complete scan; on top of that, an IMU is layered in to correct the point cloud.
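A minimal sketch of the de-skew idea, under simplifying assumptions: each point carries a timestamp within the sweep, the motion over the sweep is known, and only the translation is removed by linear interpolation (the actual method also uses IMU orientation):

```python
import numpy as np

def deskew_sweep(points, timestamps, t_start, t_end, translation_end):
    """Remove linearly interpolated sensor translation from a lidar sweep.

    points          -- (N, 3) lidar points in the sensor frame
    timestamps      -- (N,) capture time of each point
    translation_end -- sensor translation accumulated over the whole sweep
    Returns points made consistent with the pose at the sweep start.
    """
    # Fraction of the sweep elapsed when each point was captured.
    s = (timestamps - t_start) / (t_end - t_start)
    # Subtract the motion interpolated to each point's capture time.
    return points - s[:, None] * translation_end[None, :]
```

A point captured at the very start of the sweep is untouched, while a point at the end is shifted by the full accumulated motion.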

Remaining issue: the motion must be smooth; high-frequency motion is compensated by the IMU, but aggressive motion remains unsolved.



LOAM experimental results

Additionally, we conduct tests to measure accumulated drift of the motion estimate. We choose a corridor for indoor experiments that contains a closed loop, which allows us to start and finish at the same place. The motion estimation generates a gap between the starting and finishing positions, which indicates the amount of drift. For outdoor experiments, we choose an orchard environment. The ground vehicle that carries the lidar is equipped with a high-accuracy GPS/INS for ground truth acquisition. The measured drifts are compared to the distance traveled as the relative accuracy, and listed in Table I. Specifically, Test 1 uses the same datasets as Fig. 10(a) and Fig. 10(g). In general, the indoor tests have a relative accuracy around 1% and the outdoor tests around 2.5%.

Table II compares relative errors in motion estimation with and without using the IMU. The lidar is held by a person walking at a speed of 0.5 m/s and moving the lidar up and down with a magnitude around 0.5 m. The ground truth is manually measured with a tape ruler. In all four tests, the proposed method with assistance from the IMU gives the highest accuracy, while using orientation from the IMU alone leads to the lowest accuracy. The results indicate that the IMU is effective in canceling the nonlinear motion, with which the proposed method handles the linear motion.

3. V-LOAM

3.1 Overview

Goals/advantages: 6-DOF, high precision, low drift, real-time, high-frequency output, robust to aggressive motion

Solution: DEMO + LOAM; constant-velocity drift assumption; no loop closure

Overall idea: visual odometry, aided by lidar depth, estimates the camera pose; this pose is used to correct motion distortion in the laser point cloud; the corrected cloud is then registered into a local map and used for subsequent pose refinement.

3.2 Visual odometry

The visual odometry, aided by the lidar's depth information, coarsely estimates the camera motion, producing ego-motion estimates at high frequency. It consists of three main modules: the feature tracking block, the depth map registration block, and the frame to frame motion estimation block.

(1) The feature tracking block

Function: extract and match visual features between consecutive images; Harris corners are tracked with the Kanade-Lucas-Tomasi (KLT) method

Input: images

Output: visual features
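For the detection half of this block, the Harris corner response can be sketched in plain numpy (the KLT tracking step is omitted; the window is a fixed 3x3 box and k = 0.04 is the usual empirical constant):

```python
import numpy as np

def box3(a):
    """Sum over each pixel's 3x3 neighbourhood (zero padding at borders)."""
    p = np.pad(a, 1)
    n, m = a.shape
    return sum(p[i:i + n, j:j + m] for i in range(3) for j in range(3))

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    structure tensor summed over a 3x3 window. Corners give R > 0,
    edges give R < 0, flat regions give R ~ 0."""
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    det = Ixx * Iyy - Ixy ** 2
    trace = Ixx + Iyy
    return det - k * trace ** 2
```

Corners are then picked as local maxima of this response; "regions of uniform color" score near zero, which is exactly the tracking weakness noted above.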

(2) The depth map registration block

Function: process the depth map by registering lidar point clouds into a local depth map, and associate depth with the visual features

Input: visual features, depth map, lidar point cloud

Output: visual features (with/without depth)

Method:

Depth sources: the camera/lidar depth map, or triangulation from prior motion; the depth map is stored in a 2-D KD-tree for fast indexing
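The depth-association step amounts to a nearest-neighbour lookup among lidar points projected into image coordinates. The paper uses a 2-D KD-tree for this search; the sketch below does it brute-force for clarity, and all names are illustrative:

```python
import numpy as np

def associate_depth(features, depth_points, max_dist=0.01):
    """Assign each tracked feature the depth of the nearest projected
    lidar point on the normalized image plane.

    features     -- (N, 2) normalized image coordinates of features
    depth_points -- (M, 3) rows of (u, v, depth): projected lidar points
    Returns an (N,) array of depths, np.nan where no lidar point is
    within max_dist (such features keep "unknown depth" status).
    """
    depths = np.full(len(features), np.nan)
    for i, f in enumerate(features):
        d2 = np.sum((depth_points[:, :2] - f) ** 2, axis=1)
        j = np.argmin(d2)
        if d2[j] <= max_dist ** 2:
            depths[i] = depth_points[j, 2]
    return depths
```

Features left with `nan` here are exactly the "features without depth" that the next block handles with 2D-2D constraints.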

(3) The frame to frame motion estimation block

Function: coarsely estimate the camera pose

Input: visual features of two adjacent frames (with/without depth)

Output: camera pose R and t

Method: build three types of constraint equations: two motion constraints for features whose depth in the previous frame is known or unknown (in normalized homogeneous coordinates), plus a reprojection-error constraint. This turns the problem into a nonlinear optimization over 3D-2D and 2D-2D reprojection errors, solved with Levenberg-Marquardt (LM).
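A sketch of the constraint forms, using the standard 3D-2D / 2D-2D formulations consistent with the text rather than the paper's exact notation: for a feature whose depth in frame $k-1$ is known, with 3-D position $\bar{X}_{k-1}$ and normalized homogeneous coordinates $x_k = [u_k, v_k, 1]^T$ in frame $k$,

$$ x_k \times (R\,\bar{X}_{k-1} + t) = 0, $$

which yields two independent scalar equations per feature (the cross product cancels the unknown depth in frame $k$). For a feature whose depth is unknown in both frames, the epipolar-type 2D-2D constraint

$$ x_k^T \,[t]_\times R\, x_{k-1} = 0 $$

applies, where $[t]_\times$ is the skew-symmetric matrix of $t$. Stacking these residuals over all features gives the nonlinear least-squares problem in $(R, t)$ that LM solves.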

3.3 Lidar odometry

The lidar odometry refines the motion estimate and corrects drift at low frequency: the pose estimated by the visual odometry is used to correct motion distortion in the laser point cloud, and the corrected cloud is then registered into a local map.

(1) A sweep to sweep refinement step

Function: use the VO-estimated pose to register the point clouds of two adjacent sweeps, correct motion distortion, and estimate the motion

Input: laser point cloud, R, t

Output: refined pose R, t

Method: compute curvature, extract edge and planar points, compute the corresponding point-to-feature distances, interpolate points based on the pose estimate, and refine the pose by optimization (LM)
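The curvature (smoothness) computation in the first of these steps can be sketched as follows, a simplified version of the LOAM criterion on a single scan line:

```python
import numpy as np

def scan_curvature(points, k=5):
    """LOAM-style smoothness per point on one scan line: the norm of the
    sum of difference vectors to the k neighbours on each side, normalized
    by neighbourhood size and range. Large values indicate edge points,
    small values indicate planar points."""
    n = len(points)
    c = np.zeros(n)
    for i in range(k, n - k):
        # Difference vectors to the 2k surrounding points (self term is 0).
        diffs = points[i] - points[i - k:i + k + 1]
        c[i] = np.linalg.norm(diffs.sum(axis=0)) / (2 * k * np.linalg.norm(points[i]))
    return c
```

Points on a flat wall sum to nearly zero, while a point at a sharp corner keeps a large residual vector, which is how edge and planar feature points are separated before matching.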

(2) A sweep to map registration step

Function: mapping; runs as a separate estimation thread and updates the map point cloud at 1 Hz

(3) Transform integration

Function: the visual odometry runs at a high frame rate while the BA-style refinement runs at a low rate; combining the two, i.e., using the refined result to correct the visual odometry's frame-to-frame transforms, yields an integrated high-frequency pose output at the image frame rate.
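The integration step reduces to composing transforms: take the last lidar-refined pose and append the VO motion accumulated since that instant. A minimal sketch with 4x4 homogeneous matrices (names are illustrative):

```python
import numpy as np

def integrate_transforms(T_map_at_sweep, T_vo_at_sweep, T_vo_now):
    """Combine the low-frequency lidar-refined pose with high-frequency VO.

    T_map_at_sweep -- refined map-frame pose at the last lidar update
    T_vo_at_sweep  -- VO pose at that same instant
    T_vo_now       -- current VO pose (image frame rate)
    All arguments are 4x4 homogeneous transforms. Returns the current
    map-frame pose at image frame rate.
    """
    # VO motion accumulated since the last lidar refinement.
    delta = np.linalg.inv(T_vo_at_sweep) @ T_vo_now
    # Append it to the refined pose.
    return T_map_at_sweep @ delta
```

Between lidar updates the output follows VO exactly; each 1 Hz refinement snaps the baseline back, which removes the drift VO accumulates.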

3.4 Experiments

Experimental setup: a uEye monocular camera at a 60 Hz frame rate (wide-angle/fisheye lens) + a laser scanner based on the Hokuyo UTM-30LX

Evaluation setup: a single camera + a Velodyne lidar

A. Accuracy tests

B. Robustness tests

Table II compares relative position errors.

Further reading:

To be continued~
