Understanding Several Cost Volume Variants in Stereo Vision

2023-06-11  starryCaptain

In binocular stereo matching, the cost volume is a basic building block. Below are several common ways to compute a cost volume, together with how to interpret each of them.
We start with code implementing three typical variants:

import torch
import torch.nn as nn


class CostVolume(nn.Module):
    # A minimal wrapper so the snippet runs standalone; the __init__ signature
    # here is an assumption, not necessarily the original module's interface.
    def __init__(self, max_disp, feature_similarity='correlation'):
        super().__init__()
        self.max_disp = max_disp
        self.feature_similarity = feature_similarity

    def forward(self, left_feature, right_feature):
        b, c, h, w = left_feature.size()
        if self.feature_similarity == 'difference':
            # Element-wise difference at every candidate disparity
            cost_volume = left_feature.new_zeros(b, c, self.max_disp, h, w)  # [B, C, D, H, W]
            for i in range(self.max_disp):
                if i > 0:
                    cost_volume[:, :, i, :, i:] = left_feature[:, :, :, i:] - right_feature[:, :, :, :-i]
                else:
                    cost_volume[:, :, i, :, :] = left_feature - right_feature
        elif self.feature_similarity == 'concat':
            # Channel-wise concatenation at every candidate disparity
            cost_volume = left_feature.new_zeros(b, 2 * c, self.max_disp, h, w)  # [B, 2C, D, H, W]
            for i in range(self.max_disp):
                if i > 0:
                    cost_volume[:, :, i, :, i:] = torch.cat((left_feature[:, :, :, i:], right_feature[:, :, :, :-i]), dim=1)
                else:
                    cost_volume[:, :, i, :, :] = torch.cat((left_feature, right_feature), dim=1)
        elif self.feature_similarity == 'correlation':
            # Per-pixel dot product averaged over channels at every candidate disparity
            cost_volume = left_feature.new_zeros(b, self.max_disp, h, w)  # [B, D, H, W]
            for i in range(self.max_disp):
                if i > 0:
                    cost_volume[:, i, :, i:] = (left_feature[:, :, :, i:] * right_feature[:, :, :, :-i]).mean(dim=1)
                else:
                    cost_volume[:, i, :, :] = (left_feature * right_feature).mean(dim=1)
        else:
            raise NotImplementedError
        return cost_volume
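
For context, here is a quick usage sketch of the CostVolume wrapper defined above; the feature shapes and max_disp value are made up for illustration:

import torch

# Hypothetical inputs: batch 2, 32 channels, 48x64 feature maps
left_feat = torch.randn(2, 32, 48, 64)
right_feat = torch.randn(2, 32, 48, 64)

volume = CostVolume(max_disp=24, feature_similarity='correlation')(left_feat, right_feat)
print(volume.shape)  # torch.Size([2, 24, 48, 64]) -> [B, D, H, W]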

Difference

Of the three, the difference variant is the most intuitive: as left_feature and right_feature are shifted against each other step by step, corresponding elements are subtracted, and the element-wise differences indicate how well the two features match at the current offset. For example:

import numpy as np

n = 3
m = 4
left = np.tile(np.arange(m), (n, 1))          # [3, 4] "left feature"
right = np.tile(np.arange(1, m + 1), (n, 1))  # [3, 4] "right feature"; matches left at a disparity of 1
max_dis = 3
result = np.zeros((max_dis, n, m))
for i in range(max_dis):
    if i > 0:
        result[i, :, i:] = left[:, i:] - right[:, :-i]
    else:
        result[i, :, :] = left - right
left:
[[0 1 2 3]
 [0 1 2 3]
 [0 1 2 3]]

right:
[[1 2 3 4]
 [1 2 3 4]
 [1 2 3 4]]

For simplicity, ignore the C dimension and consider only H and W. We have two [3, 4] left and right features. With max_dis = 3, we first create a [3, 3, 4] result:

result:
[[[0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]]

 [[0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]]

 [[0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]]]

Inside for i in range(max_dis), each shifted subtraction is written into the corresponding [i, 3, 4] slice of result, which finally gives:

result:
[[[-1. -1. -1. -1.]
  [-1. -1. -1. -1.]
  [-1. -1. -1. -1.]]

 [[ 0.  0.  0.  0.]
  [ 0.  0.  0.  0.]
  [ 0.  0.  0.  0.]]

 [[ 0.  0.  1.  1.]
  [ 0.  0.  1.  1.]
  [ 0.  0.  1.  1.]]]

Notice that at i = 1 every value in result is 0: at an offset of 1, left_feature and right_feature match exactly, which agrees with how the example was constructed.
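
As a side note (this step is not part of the original example), the best-matching disparity per pixel can be read off the difference volume by taking the argmin of the absolute cost along D:

# Per-pixel disparity = index of the smallest |difference| along D.
# Caveat: border positions that were never filled stay 0 and can tie with the
# true minimum; a real implementation would mask them out.
disparity = np.abs(result).argmin(axis=0)
print(disparity)
# [[1 1 1 1]
#  [1 1 1 1]
#  [1 1 1 1]]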

Concat

There is not much special to say about this variant: the two features are simply stacked along the C dimension, as sketched below.
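
As a quick illustration (the shapes here are made up), each disparity slice of the concat volume simply stacks the two [B, C, H, W] feature maps into a [B, 2C, H, W] tensor:

import torch

# Hypothetical feature maps: batch 1, 8 channels, 3x4 spatial size
left_feature = torch.randn(1, 8, 3, 4)
right_feature = torch.randn(1, 8, 3, 4)

# One slice of the 'concat' cost volume (disparity 0, i.e. no shift)
slice_d0 = torch.cat((left_feature, right_feature), dim=1)
print(slice_d0.shape)  # torch.Size([1, 16, 3, 4])

Unlike difference and correlation, this variant does not compute any similarity itself; it leaves that job to the network that processes the volume afterwards.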

Correlation

Suppose both feature maps have shape [C, H, W]. Each can be viewed as consisting of H*W elements, where every element is a vector of length C. In fact, in a convolutional neural network, the features of a patch of the original image are represented by exactly such a vector.
The similarity of two high-dimensional vectors can be measured with the dot product:
\vec{a} \cdot \vec{b} = ||\vec{a}|| ||\vec{b}|| \cos{\theta}
Geometrically, the dot product is the product of the two vectors' lengths and the cosine of the angle between them; equivalently, it is the projection of \vec{a} onto the direction of \vec{b} multiplied by ||\vec{b}||. It therefore reflects how similar the two vectors are in direction: the larger the result, the more similar they are, and it is 0 when the vectors are orthogonal. Multiplying two [C, 1, 1] vectors element-wise and summing the products is exactly their dot product. To remove the influence of the vectors' own magnitudes, the feature maps can first be normalized along the C dimension.
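
A quick numerical check of this relation; the two vectors happen to be the first columns (across C) of the example tensors shown below:

import torch
import torch.nn.functional as F

a = torch.tensor([7., 1.])
b = torch.tensor([5., 3.])

# ||a|| * ||b|| * cos(theta) computed two ways
dot = torch.dot(a, b)                         # 38.0
cos_theta = torch.dot(F.normalize(a, dim=0),
                      F.normalize(b, dim=0))  # ~0.9216
print(dot, a.norm() * b.norm() * cos_theta)   # both ~38.0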
For example:

left:
tensor([[[7., 6., 5., 4., 3.],
         [7., 6., 5., 4., 3.],
         [7., 6., 5., 4., 3.]],

        [[1., 2., 3., 4., 5.],
         [1., 2., 3., 4., 5.],
         [1., 2., 3., 4., 5.]]])
right:
tensor([[[5., 4., 3., 2., 1.],
         [5., 4., 3., 2., 1.],
         [5., 4., 3., 2., 1.]],

        [[3., 4., 5., 6., 7.],
         [3., 4., 5., 6., 7.],
         [3., 4., 5., 6., 7.]]])

We have two [2, 3, 5] feature tensors. Normalizing them along the C dimension with x_normalized = F.normalize(x, dim=0) gives:

left_normalized:
tensor([[[0.9899, 0.9487, 0.8575, 0.7071, 0.5145],
         [0.9899, 0.9487, 0.8575, 0.7071, 0.5145],
         [0.9899, 0.9487, 0.8575, 0.7071, 0.5145]],

        [[0.1414, 0.3162, 0.5145, 0.7071, 0.8575],
         [0.1414, 0.3162, 0.5145, 0.7071, 0.8575],
         [0.1414, 0.3162, 0.5145, 0.7071, 0.8575]]])
right_normalized:
tensor([[[0.8575, 0.7071, 0.5145, 0.3162, 0.1414],
         [0.8575, 0.7071, 0.5145, 0.3162, 0.1414],
         [0.8575, 0.7071, 0.5145, 0.3162, 0.1414]],

        [[0.5145, 0.7071, 0.8575, 0.9487, 0.9899],
         [0.5145, 0.7071, 0.8575, 0.9487, 0.9899],
         [0.5145, 0.7071, 0.8575, 0.9487, 0.9899]]])

max_dis = 5,则首先创建一个[2,5,3,5]result。在for i in range(max_dis)中将这两个归一化后的矩阵点积的值赋值到result对应维度[2,i,3,5]中,得到:

result:
[[[[0.84887475 0.67082036 0.44117653 0.2236068  0.07276069]
   [0.84887475 0.67082036 0.44117653 0.2236068  0.07276069]
   [0.84887475 0.67082036 0.44117653 0.2236068  0.07276069]]

  [[0.         0.81348926 0.6063391  0.36380345 0.16269785]
   [0.         0.81348926 0.6063391  0.36380345 0.16269785]
   [0.         0.81348926 0.6063391  0.36380345 0.16269785]]

  [[0.         0.         0.73529422 0.49999997 0.26470593]
   [0.         0.         0.73529422 0.49999997 0.26470593]
   [0.         0.         0.73529422 0.49999997 0.26470593]]

  [[0.         0.         0.         0.6063391  0.36380345]
   [0.         0.         0.         0.6063391  0.36380345]
   [0.         0.         0.         0.6063391  0.36380345]]

  [[0.         0.         0.         0.         0.44117653]
   [0.         0.         0.         0.         0.44117653]
   [0.         0.         0.         0.         0.44117653]]]


 [[[0.07276069 0.2236068  0.44117653 0.67082036 0.84887475]
   [0.07276069 0.2236068  0.44117653 0.67082036 0.84887475]
   [0.07276069 0.2236068  0.44117653 0.67082036 0.84887475]]

  [[0.         0.16269785 0.36380345 0.6063391  0.81348926]
   [0.         0.16269785 0.36380345 0.6063391  0.81348926]
   [0.         0.16269785 0.36380345 0.6063391  0.81348926]]

  [[0.         0.         0.26470593 0.49999997 0.73529422]
   [0.         0.         0.26470593 0.49999997 0.73529422]
   [0.         0.         0.26470593 0.49999997 0.73529422]]

  [[0.         0.         0.         0.36380345 0.6063391 ]
   [0.         0.         0.         0.36380345 0.6063391 ]
   [0.         0.         0.         0.36380345 0.6063391 ]]

  [[0.         0.         0.         0.         0.44117653]
   [0.         0.         0.         0.         0.44117653]
   [0.         0.         0.         0.         0.44117653]]]]

Then averaging over the C dimension with mean = torch.from_numpy(result).mean(dim=0) gives:

mean:
tensor([[[0.4608, 0.4472, 0.4412, 0.4472, 0.4608],
         [0.4608, 0.4472, 0.4412, 0.4472, 0.4608],
         [0.4608, 0.4472, 0.4412, 0.4472, 0.4608]],

        [[0.0000, 0.4881, 0.4851, 0.4851, 0.4881],
         [0.0000, 0.4881, 0.4851, 0.4851, 0.4881],
         [0.0000, 0.4881, 0.4851, 0.4851, 0.4881]],

        [[0.0000, 0.0000, 0.5000, 0.5000, 0.5000],
         [0.0000, 0.0000, 0.5000, 0.5000, 0.5000],
         [0.0000, 0.0000, 0.5000, 0.5000, 0.5000]],

        [[0.0000, 0.0000, 0.0000, 0.4851, 0.4851],
         [0.0000, 0.0000, 0.0000, 0.4851, 0.4851],
         [0.0000, 0.0000, 0.0000, 0.4851, 0.4851]],

        [[0.0000, 0.0000, 0.0000, 0.0000, 0.4412],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.4412],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.4412]]])

Notice that at i = 2 every filled value in mean is 0.5 (0.5 rather than 1 because we averaged over a C dimension of length 2): at an offset of 2, left_feature and right_feature match exactly, which again agrees with the data.
A cost volume computed this way has one dimension fewer than the previous two variants, so it reduces the computational burden on the network and speeds up processing.
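
Finally, for completeness, here is a minimal, self-contained sketch that reproduces the correlation example above. The way the example tensors are built is my own (chosen to match the values shown); the normalization and averaging steps follow the calls quoted in the text:

import numpy as np
import torch
import torch.nn.functional as F

c, h, w = 2, 3, 5

# The [2, 3, 5] example tensors shown above
left = torch.stack([
    torch.tensor([7., 6., 5., 4., 3.]).repeat(h, 1),
    torch.tensor([1., 2., 3., 4., 5.]).repeat(h, 1),
])
right = torch.stack([
    torch.tensor([5., 4., 3., 2., 1.]).repeat(h, 1),
    torch.tensor([3., 4., 5., 6., 7.]).repeat(h, 1),
])

# Normalize along C so each per-pixel dot product depends only on direction
left_n = F.normalize(left, dim=0)
right_n = F.normalize(right, dim=0)

max_dis = 5
result = np.zeros((c, max_dis, h, w))
for i in range(max_dis):
    if i > 0:
        result[:, i, :, i:] = (left_n[:, :, i:] * right_n[:, :, :-i]).numpy()
    else:
        result[:, i, :, :] = (left_n * right_n).numpy()

# Average over the C dimension -> [max_dis, H, W] correlation cost volume
mean = torch.from_numpy(result).mean(dim=0)
print(mean)  # every filled value at i = 2 is 0.5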
