Implementing ICP Registration in Python with Open3D

2024-03-05  by 小黄不头秃

We previously introduced the basic principle of the ICP registration algorithm (see the earlier post "ICP配准的基本原理").
If you have no point-cloud data at hand while studying, you can use the official demo dataset. Open3D provides the ICP (iterative closest point) algorithm and several of its variants; this post documents how to use this library's API.

1 Point-to-Point ICP

1.1 Visualization Helper

import copy
import numpy as np
import open3d as o3d

def draw_registration_result(source, target, transformation):
    # Work on copies so the original clouds are not modified.
    source_temp = copy.deepcopy(source)
    target_temp = copy.deepcopy(target)
    source_temp.paint_uniform_color([1, 0.706, 0])
    target_temp.paint_uniform_color([0, 0.651, 0.929])
    source_temp.transform(transformation)
    o3d.visualization.draw_geometries([source_temp, target_temp],
                                      zoom=0.4459,
                                      front=[0.9288, -0.2951, -0.2242],
                                      lookat=[1.6784, 2.0612, 1.4451],
                                      up=[-0.3402, -0.9189, -0.1996])
source = o3d.io.read_point_cloud("./data/cloud_bin_0.pcd")
target = o3d.io.read_point_cloud("./data/cloud_bin_1.pcd")
threshold = 0.02
trans_init = np.asarray([[0.862, 0.011, -0.507, 0.5],
                            [-0.139, 0.967, -0.215, 0.7],
                            [0.487, 0.255, 0.835, -1.4], [0.0, 0.0, 0.0, 1.0]])
draw_registration_result(source, target, trans_init)

1.2 Point-to-Point ICP API

print("Apply point-to-point ICP")
reg_p2p = o3d.pipelines.registration.registration_icp(
    source, target, threshold, trans_init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=4000))
print(reg_p2p)
print("Transformation is:")
print(reg_p2p.transformation)
draw_registration_result(source, target, reg_p2p.transformation)

Here we can tune a few parameters to see how they affect the algorithm, and what each parameter means.
`reg_p2p = o3d.pipelines.registration.registration_icp(source, target, threshold, trans_init, o3d.pipelines.registration.TransformationEstimationPointToPoint(), o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=4000))` is the main entry point for ICP registration. It returns a RegistrationResult object containing the registration result and its evaluation metrics. Its parameters are:

`o3d.pipelines.registration.TransformationEstimationPointToPoint()` is the transformation-estimation method, an open3d.pipelines.registration.TransformationEstimation object indicating that the transformation matrix is computed point-to-point. This method assumes a one-to-one correspondence between source and target points and then solves for the transformation by least squares. It is the simplest estimation method; alternatives include point-to-plane, generalized ICP, and colored ICP.
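To make the least-squares step concrete, here is a minimal NumPy sketch (my own helper function, not Open3D's implementation) of the closed-form SVD (Kabsch) solution used when the correspondences are known:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form least-squares rigid transform (Kabsch, no scaling).

    src, dst: (N, 3) arrays of corresponding points. Returns a 4x4 matrix T
    minimizing sum ||R @ src_i + t - dst_i||^2 over rotations and translations.
    """
    src_c = src - src.mean(axis=0)          # center both clouds
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                     # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Sanity check: recover a known rotation + translation.
rng = np.random.default_rng(0)
src = rng.random((100, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, 0.7, -1.4])
dst = src @ R_true.T + t_true
T = best_rigid_transform(src, dst)
print(np.allclose(T[:3, :3], R_true))  # True
```

Inside ICP this solve is repeated: find nearest-neighbor correspondences, solve for the transform, apply it, and iterate.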

`o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=4000)` is the convergence criterion, an open3d.pipelines.registration.ICPConvergenceCriteria object that defines when registration terminates. It has three parameters: relative_fitness, relative_rmse and max_iteration, i.e. the relative change in fitness, the relative change in inlier RMSE, and the maximum number of iterations. Iteration stops when the changes in fitness and RMSE both fall below their thresholds, or when max_iteration is reached. The defaults are 1e-6, 1e-6 and 30.
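The stopping logic can be sketched in plain Python (a simplified sketch of the documented behavior, not Open3D's actual code):

```python
def icp_should_stop(prev_fitness, fitness, prev_rmse, rmse, iteration,
                    relative_fitness=1e-6, relative_rmse=1e-6, max_iteration=30):
    """Return True when the convergence criteria say ICP should stop."""
    if iteration >= max_iteration:
        return True
    # Relative change between consecutive iterations, not absolute values:
    # stop only when both fitness and RMSE have essentially stopped changing.
    if (abs(fitness - prev_fitness) < relative_fitness
            and abs(rmse - prev_rmse) < relative_rmse):
        return True
    return False

print(icp_should_stop(0.50, 0.90, 0.10, 0.02, iteration=5))   # False: still improving
print(icp_should_stop(0.90, 0.90, 0.02, 0.02, iteration=5))   # True: change below thresholds
print(icp_should_stop(0.50, 0.90, 0.10, 0.02, iteration=30))  # True: hit max_iteration
```

Raising max_iteration (as the 4000 above does) mainly matters when the initial alignment is poor and progress per iteration is slow.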

(1) Setting the maximum correspondence distance (the `threshold` in the code above):

# For Vanilla ICP (double)
max_correspondence_distance = 0.07

# For Multi-Scale ICP (o3d.utility.DoubleVector):
max_correspondence_distances = o3d.utility.DoubleVector([0.3, 0.14, 0.07])
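To see what this distance actually controls, here is a minimal NumPy sketch (helper name is mine; Open3D uses a KD-tree rather than this brute-force search) of how correspondences within the threshold are counted, which is exactly what drives the fitness metric:

```python
import numpy as np

def fitness_and_rmse(source, target, max_corr_dist):
    """Fitness = fraction of source points with a target neighbor within
    max_corr_dist; RMSE is computed over those inlier distances only."""
    # Distance from every source point to its nearest target point (brute force).
    d = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=2).min(axis=1)
    inlier = d <= max_corr_dist
    fitness = inlier.mean()
    rmse = np.sqrt((d[inlier] ** 2).mean()) if inlier.any() else 0.0
    return fitness, rmse

rng = np.random.default_rng(1)
target = rng.random((200, 3))
source = target + rng.normal(0, 0.01, target.shape)   # slightly perturbed copy

for thr in (0.005, 0.02, 0.1):
    f, r = fitness_and_rmse(source, target, thr)
    print(f"threshold={thr}: fitness={f:.2f}, inlier RMSE={r:.4f}")
```

A threshold that is too small rejects valid matches (low fitness, slow convergence); one that is too large admits wrong matches that drag the solution off, which is why multi-scale ICP shrinks it scale by scale.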

(2) Initializing the transformation matrix:
The initial alignment is usually obtained by a global registration algorithm.

# Initial alignment or source to target transform.
init_source_to_target = np.asarray([[0.862, 0.011, -0.507, 0.5],
                                    [-0.139, 0.967, -0.215, 0.7],
                                    [0.487, 0.255, 0.835, -1.4],
                                    [0.0, 0.0, 0.0, 1.0]])

(3) Estimation method
This sets which ICP variant computes the transformation between the two point clouds given the correspondences: point-to-point, point-to-plane, and so on. You can also attach a robust kernel here, e.g. for outlier rejection. In the snippets below, `treg` is an alias for the tensor-based pipeline, `open3d.t.pipelines.registration`.

import open3d.t.pipelines.registration as treg

# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPlane()

# Robust kernel (tensor pipeline); `method`, `scale`, `shape` are placeholders.
robust_kernel = o3d.t.pipelines.registration.robust_kernel.RobustKernel(method, scale, shape)

There are other kernel options as well; for example, robust point-to-plane ICP with a Tukey loss in the legacy pipeline:

sigma = 0.1  # hypothetical noise level, also used as the Tukey kernel scale
source_noisy = apply_noise(source, 0, sigma)  # apply_noise() is defined in section 2
# NOTE: point-to-plane ICP needs target normals, e.g. target.estimate_normals().

print("Robust point-to-plane ICP, threshold={}:".format(threshold))
loss = o3d.pipelines.registration.TukeyLoss(k=sigma)
print("Using robust loss:", loss)
p2l = o3d.pipelines.registration.TransformationEstimationPointToPlane(loss)
reg_p2l = o3d.pipelines.registration.registration_icp(source_noisy, target,
                                                      threshold, trans_init,
                                                      p2l)
print(reg_p2l)
print("Transformation is:")
print(reg_p2l.transformation)
draw_registration_result(source, target, reg_p2l.transformation)
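The idea behind a robust kernel such as TukeyLoss is to down-weight residuals larger than the scale k so that outliers stop dominating the least-squares fit. The standard Tukey (biweight) weight function can be sketched in NumPy (an illustration of the textbook formula, not Open3D's code):

```python
import numpy as np

def tukey_weight(r, k):
    """Tukey biweight: w(r) = (1 - (r/k)^2)^2 for |r| <= k, else 0."""
    r = np.asarray(r, dtype=float)
    w = (1.0 - (r / k) ** 2) ** 2
    return np.where(np.abs(r) <= k, w, 0.0)

# Small residuals keep a weight near 1; residuals beyond k are ignored entirely.
print(tukey_weight([0.0, 0.05, 0.1, 0.5], k=0.1))
```

The legacy pipeline ships several such losses (to my knowledge TukeyLoss, HuberLoss, CauchyLoss, GMLoss, L1Loss and L2Loss), all usable the same way as the TukeyLoss example above.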

(4) ICP convergence criteria
This sets the termination conditions of the ICP algorithm.

# Convergence-Criteria for Vanilla ICP:

criteria = treg.ICPConvergenceCriteria(relative_fitness=0.000001, relative_rmse=0.000001, max_iteration=50)
# List of Convergence-Criteria for Multi-Scale ICP:

criteria_list = [
    treg.ICPConvergenceCriteria(relative_fitness=0.0001, relative_rmse=0.0001, max_iteration=20),
    treg.ICPConvergenceCriteria(0.00001, 0.00001, 15),
    treg.ICPConvergenceCriteria(0.000001, 0.000001, 10)
]

(5) Voxel size
One voxel size (a lower voxel size corresponds to higher resolution) is given for each scale of multi-scale ICP. We want to run the initial iterations on a coarse point cloud (low resolution, large voxel size), since this is cheaper and helps avoid local minima, and then move on to denser point clouds (high resolution, small voxel size). The voxel sizes must therefore be in strictly decreasing order.

# Vanilla ICP
voxel_size = 0.025
# Lower `voxel_size` is equivalent to higher resolution,
# and we want to perform iterations from coarse to dense resolution,
# therefore `voxel_sizes` must be in strictly decreasing order.
voxel_sizes = o3d.utility.DoubleVector([0.1, 0.05, 0.025])
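The downsampling step itself can be sketched with a NumPy grid hash (Open3D provides voxel_down_sample for this; the sketch below, with my own helper name, averages all points falling into the same voxel and assumes non-negative grid indices below 2**20):

```python
import numpy as np

def voxel_down_sample_np(points, voxel_size):
    """Replace all points in each occupied voxel by their centroid."""
    keys = np.floor(points / voxel_size).astype(np.int64)   # voxel index per point
    # Pack the 3D integer index into one scalar key per point.
    flat = (keys[:, 0] << 40) + (keys[:, 1] << 20) + keys[:, 2]
    _, inverse, counts = np.unique(flat, return_inverse=True, return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)                        # sum points per voxel
    return sums / counts[:, None]                           # centroid per voxel

rng = np.random.default_rng(2)
pts = rng.random((1000, 3))
for vs in (0.5, 0.25, 0.1):
    print(f"voxel_size={vs}: {len(voxel_down_sample_np(pts, vs))} points")
# Smaller voxel_size -> more occupied voxels survive -> higher resolution.
```

This is why the multi-scale schedule above goes 0.1 -> 0.05 -> 0.025: each scale roughly doubles the resolution the next round of iterations runs on.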

2 Multi-Scale ICP Example

From the sections above we now have a rough picture of how to use ICP, and it is easy to see that the initial alignment has a huge influence on the algorithm: a bad initialization can simply make ICP fail to converge. How can this be addressed?

Problems that show up with vanilla ICP:

If the point clouds are not well aligned to begin with, convergence may get stuck in a local minimum during the initial iterations. If the aligned point clouds do not have enough overlap, a larger max_correspondence_distance is needed. And if the point clouds are heavily downsampled (coarse), the result will be inaccurate.

These shortcomings can be solved with multi-scale ICP. In multi-scale ICP, the initial iterations run on a coarse point cloud to obtain a better estimate of the initial alignment, and this alignment is then refined on denser point clouds. ICP on the coarse point cloud is inexpensive, allows us to use a larger max_correspondence_distance, and is less likely to get stuck in a local minimum. Once a good estimate is available, far fewer iterations are needed on the dense point cloud to converge to an accurate transformation.

Multi-scale ICP is therefore recommended over plain ICP for efficient convergence, especially on large point clouds.

import open3d as o3d
import numpy as np 
import copy 
import time 
import open3d.t.pipelines.registration as treg

def draw_registration_result(source, target, transformation):
    source_temp = source.clone()
    target_temp = target.clone()

    source_temp.transform(transformation)

    # This is a patched version for tutorial rendering.
    # Use the `draw` function for your application.
    o3d.visualization.draw_geometries(
        [source_temp.to_legacy(),
         target_temp.to_legacy()],
        zoom=0.4459,
        front=[0.9288, -0.2951, -0.2242],
        lookat=[1.6784, 2.0612, 1.4451],
        up=[-0.3402, -0.9189, -0.1996])
    
def apply_noise(pcd, mu, sigma):
    noisy_pcd = copy.deepcopy(pcd)
    points = np.asarray(noisy_pcd.points)
    points += np.random.normal(mu, sigma, size=points.shape)
    noisy_pcd.points = o3d.utility.Vector3dVector(points)
    return noisy_pcd

    
if __name__ == "__main__":
    demo_icp_pcds = o3d.data.DemoICPPointClouds()
    source = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[0])
    target = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[1])

    # source = o3d.t.io.read_point_cloud("./data/cloud_bin_0.pcd")
    # target = o3d.t.io.read_point_cloud("./data/cloud_bin_1.pcd")
        
    voxel_sizes = o3d.utility.DoubleVector([0.1, 0.05, 0.025])

    # List of Convergence-Criteria for Multi-Scale ICP:
    criteria_list = [
        treg.ICPConvergenceCriteria(relative_fitness=0.0001,
                                    relative_rmse=0.0001,
                                    max_iteration=20),
        treg.ICPConvergenceCriteria(0.00001, 0.00001, 15),
        treg.ICPConvergenceCriteria(0.000001, 0.000001, 10)
    ]

    # `max_correspondence_distances` for Multi-Scale ICP (o3d.utility.DoubleVector):
    max_correspondence_distances = o3d.utility.DoubleVector([0.3, 0.02, 0.001])

    # Initial alignment or source to target transform.
    init_source_to_target = o3d.core.Tensor.eye(4, o3d.core.Dtype.Float32)

    # Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
    estimation = treg.TransformationEstimationPointToPlane()

    # Save iteration wise `fitness`, `inlier_rmse`, etc. to analyse and tune result.
    callback_after_iteration = lambda loss_log_map : print("Iteration Index: {}, Scale Index: {}, Scale Iteration Index: {}, Fitness: {}, Inlier RMSE: {},".format(
        loss_log_map["iteration_index"].item(),
        loss_log_map["scale_index"].item(),
        loss_log_map["scale_iteration_index"].item(),
        loss_log_map["fitness"].item(),
        loss_log_map["inlier_rmse"].item()))
    
    s = time.time()

    registration_ms_icp = treg.multi_scale_icp(source, target, voxel_sizes,
                                            criteria_list,
                                            max_correspondence_distances,
                                            init_source_to_target, estimation,
                                            callback_after_iteration)

    ms_icp_time = time.time() - s
    print("Time taken by Multi-Scale ICP: ", ms_icp_time)
    print("Inlier Fitness: ", registration_ms_icp.fitness)
    print("Inlier RMSE: ", registration_ms_icp.inlier_rmse)

    draw_registration_result(source, target, registration_ms_icp.transformation)

References:
[1] Open3D official documentation
[2] https://jiatianzhi.cn/index.php/archives/84/
[3] https://jiatianzhi.cn/index.php/archives/93/
