PyTorch API: view, reshape, permute, transpose

2020-03-19  魏鹏飞

1. view(*shape) → Tensor

Returns a new tensor with the same data as the self tensor but of a different shape.

The returned tensor shares the same data and must have the same number of elements, but may have a different size. For a tensor to be viewed, the new view size must be compatible with its original size and stride, i.e., each new view dimension must either be a subspace of an original dimension, or only span across original dimensions d, d+1, …, d+k that satisfy the following contiguity-like condition: for all i = 0, …, k−1,

stride[i] = stride[i+1] × size[i+1]

Otherwise, contiguous() needs to be called before the tensor can be viewed. See also: reshape(), which returns a view if the shapes are compatible, and copies (equivalent to calling contiguous()) otherwise.
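
For example, a transposed tensor is no longer contiguous, so view() raises an error on it until the data is made contiguous (a minimal sketch, not from the original post):

import torch

t = torch.randn(2, 3)
nc = t.transpose(0, 1)     # non-contiguous view, shape (3, 2)
print(nc.is_contiguous())  # False
# nc.view(6) would raise a RuntimeError here, because the strides
# no longer satisfy the condition above.
print(nc.contiguous().view(6).size())  # copy first, then view: torch.Size([6])
print(nc.reshape(6).size())            # reshape() does the copy for you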

Parameters:
shape (torch.Size or int...) – the desired size
Example:
import torch

x = torch.randn(4, 4)
print(x, x.size())
y = x.view(16)     # flatten into a single dimension of 16 elements
print(y, y.size())
z = x.view(-1, 8)  # -1 is inferred from the other dimension: 16 / 8 = 2
print(z, z.size())

a = torch.randn(1, 2, 3, 4)
print(a.size())
b = a.transpose(1, 2)   # swaps dims 1 and 2, changing the order of the data
print(b.size())
c = a.view(1, 3, 2, 4)  # same shape as b, but keeps the original data order
print(c.size())
print(torch.equal(b, c))  # False: equal shapes do not imply equal contents

# Results:
tensor([[ 1.1833, -0.2871, -0.0479,  0.2630],
        [-0.9824,  2.1211, -0.2360, -1.6516],
        [ 0.1959,  0.7525,  0.3477, -1.2675],
        [-0.3960, -1.1657,  1.3323, -0.1467]]) torch.Size([4, 4])
tensor([ 1.1833, -0.2871, -0.0479,  0.2630, -0.9824,  2.1211, -0.2360, -1.6516,
         0.1959,  0.7525,  0.3477, -1.2675, -0.3960, -1.1657,  1.3323, -0.1467]) torch.Size([16])
tensor([[ 1.1833, -0.2871, -0.0479,  0.2630, -0.9824,  2.1211, -0.2360, -1.6516],
        [ 0.1959,  0.7525,  0.3477, -1.2675, -0.3960, -1.1657,  1.3323, -0.1467]]) torch.Size([2, 8])
torch.Size([1, 2, 3, 4])
torch.Size([1, 3, 2, 4])
torch.Size([1, 3, 2, 4])
False

2. torch.reshape(input, shape) → Tensor

Returns a tensor with the same data and number of elements as input, but with the specified shape. When possible, the returned tensor will be a view of input. Otherwise, it will be a copy. Contiguous inputs and inputs with compatible strides can be reshaped without copying, but you should not depend on the copying vs. viewing behavior.

See torch.Tensor.view() on when it is possible to return a view.

A single dimension may be -1, in which case it’s inferred from the remaining dimensions and the number of elements in input.

Parameters:
input (Tensor) – the tensor to be reshaped
shape (tuple of int) – the new shape
Example:
a = torch.arange(4.)
print(torch.reshape(a, (2, 2)))
b = torch.tensor([
    [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3]],
    [[4, 4, 4, 4], [5, 5, 5, 5], [6, 6, 6, 6]],
])
print(b.size())
print(b.view(4, 2, 3))              # works: b is contiguous
print(torch.reshape(b, (4, 2, 3)))  # identical result on a contiguous tensor

# Results:
tensor([[0., 1.],
        [2., 3.]])
torch.Size([2, 3, 4])
tensor([[[1, 1, 1],
         [1, 2, 2]],

        [[2, 2, 3],
         [3, 3, 3]],

        [[4, 4, 4],
         [4, 5, 5]],

        [[5, 5, 6],
         [6, 6, 6]]])
tensor([[[1, 1, 1],
         [1, 2, 2]],

        [[2, 2, 3],
         [3, 3, 3]],

        [[4, 4, 4],
         [4, 5, 5]],

        [[5, 5, 6],
         [6, 6, 6]]])
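
As the note above says, you should not depend on whether reshape() views or copies; comparing data pointers shows which one you got (a small check, not part of the original post):

import torch

base = torch.arange(6).reshape(2, 3)
v = base.reshape(3, 2)                  # contiguous input: returned as a view
print(v.data_ptr() == base.data_ptr())  # True, same storage

nc = base.t()                           # transposed: non-contiguous
c = nc.reshape(6)                       # incompatible strides: returned as a copy
print(c.data_ptr() == nc.data_ptr())    # False, new storage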

3. permute(*dims) → Tensor

Returns a view of the original tensor with its dimensions permuted.

Parameters:
dims (int...) – the desired ordering of dimensions
Example:
x = torch.tensor([
    [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3]],
    [[4, 4, 4, 4], [5, 5, 5, 5], [6, 6, 6, 6]],
])
print(x.size())
print(x.transpose(0, 1))   # swaps exactly two dimensions
print(x.permute(1, 0, 2))  # reorders all dimensions; same result here

# Results:
torch.Size([2, 3, 4])
tensor([[[1, 1, 1, 1],
         [4, 4, 4, 4]],

        [[2, 2, 2, 2],
         [5, 5, 5, 5]],

        [[3, 3, 3, 3],
         [6, 6, 6, 6]]])
tensor([[[1, 1, 1, 1],
         [4, 4, 4, 4]],

        [[2, 2, 2, 2],
         [5, 5, 5, 5]],

        [[3, 3, 3, 3],
         [6, 6, 6, 6]]])
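
transpose() can only swap two dimensions, while permute() reorders all of them at once, which is handy e.g. for converting an image batch from channels-first to channels-last layout (the shapes below are illustrative, not from the original post):

import torch

imgs = torch.randn(8, 3, 32, 32)  # hypothetical batch in (N, C, H, W) layout
nhwc = imgs.permute(0, 2, 3, 1)   # reorder to (N, H, W, C)
print(nhwc.size())                # torch.Size([8, 32, 32, 3])
print(nhwc.is_contiguous())       # False: permute only rearranges strides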

4. torch.transpose(input, dim0, dim1) → Tensor

Returns a tensor that is a transposed version of input. The given dimensions dim0 and dim1 are swapped.

The resulting out tensor shares its underlying storage with the input tensor, so changing the content of one changes the content of the other.

Parameters:
input (Tensor) – the input tensor
dim0 (int) – the first dimension to be transposed
dim1 (int) – the second dimension to be transposed
Example:
x = torch.randn(2, 3)
print(x)
print(torch.transpose(x, 0, 1))  # shape (2, 3) becomes (3, 2)

# Results:
tensor([[-1.0063,  1.7286, -1.0078],
        [-0.1752, -1.0252,  1.2613]])
tensor([[-1.0063, -0.1752],
        [ 1.7286, -1.0252],
        [-1.0078,  1.2613]])
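
transpose() also accepts any pair of dimensions on higher-dimensional tensors, including negative indices counted from the end (a small sketch, not from the original post):

import torch

batch = torch.randn(5, 2, 3)       # hypothetical stack of 2x3 matrices
bt = torch.transpose(batch, 1, 2)  # transpose each matrix in the stack
print(bt.size())                   # torch.Size([5, 3, 2])
print(torch.equal(bt, batch.transpose(-2, -1)))  # True: -2/-1 name the last two dims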



Summary:

All four operations return tensors that share their underlying storage with the input (view, permute, and transpose always; reshape whenever a view is possible), so changing the content of one changes the content of the other:

a = torch.tensor([
    [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3]],
    [[4, 4, 4, 4], [5, 5, 5, 5], [6, 6, 6, 6]],
])
print(a.size(), '\n')
b = a.view(6, 4)
b[0][0] = 10  # writes through to a
print(b.size(), a.size())
print(b)
print(a, '\n')
c = torch.reshape(a, (6, 4))
c[0][0] = 20  # a is contiguous, so reshape returned a view: writes through
print(c.size(), a.size())
print(c)
print(a, '\n')
d = a.permute(1, 0, 2)
d[0][0][0] = 30  # permute always returns a view: writes through
print(d.size(), a.size())
print(d)
print(a, '\n')
e = a.transpose(0, 1)
e[0][0][0] = 40  # transpose always returns a view: writes through
print(e.size(), a.size())
print(e)
print(a, '\n')

# Results:
torch.Size([2, 3, 4]) 

torch.Size([6, 4]) torch.Size([2, 3, 4])
tensor([[10,  1,  1,  1],
        [ 2,  2,  2,  2],
        [ 3,  3,  3,  3],
        [ 4,  4,  4,  4],
        [ 5,  5,  5,  5],
        [ 6,  6,  6,  6]])
tensor([[[10,  1,  1,  1],
         [ 2,  2,  2,  2],
         [ 3,  3,  3,  3]],

        [[ 4,  4,  4,  4],
         [ 5,  5,  5,  5],
         [ 6,  6,  6,  6]]]) 

torch.Size([6, 4]) torch.Size([2, 3, 4])
tensor([[20,  1,  1,  1],
        [ 2,  2,  2,  2],
        [ 3,  3,  3,  3],
        [ 4,  4,  4,  4],
        [ 5,  5,  5,  5],
        [ 6,  6,  6,  6]])
tensor([[[20,  1,  1,  1],
         [ 2,  2,  2,  2],
         [ 3,  3,  3,  3]],

        [[ 4,  4,  4,  4],
         [ 5,  5,  5,  5],
         [ 6,  6,  6,  6]]]) 

torch.Size([3, 2, 4]) torch.Size([2, 3, 4])
tensor([[[30,  1,  1,  1],
         [ 4,  4,  4,  4]],

        [[ 2,  2,  2,  2],
         [ 5,  5,  5,  5]],

        [[ 3,  3,  3,  3],
         [ 6,  6,  6,  6]]])
tensor([[[30,  1,  1,  1],
         [ 2,  2,  2,  2],
         [ 3,  3,  3,  3]],

        [[ 4,  4,  4,  4],
         [ 5,  5,  5,  5],
         [ 6,  6,  6,  6]]]) 

torch.Size([3, 2, 4]) torch.Size([2, 3, 4])
tensor([[[40,  1,  1,  1],
         [ 4,  4,  4,  4]],

        [[ 2,  2,  2,  2],
         [ 5,  5,  5,  5]],

        [[ 3,  3,  3,  3],
         [ 6,  6,  6,  6]]])
tensor([[[40,  1,  1,  1],
         [ 2,  2,  2,  2],
         [ 3,  3,  3,  3]],

        [[ 4,  4,  4,  4],
         [ 5,  5,  5,  5],
         [ 6,  6,  6,  6]]]) 
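
To verify the sharing without mutating anything, one can compare the underlying storage pointers; same_storage below is a hypothetical helper, not a PyTorch API (untyped_storage() is available in PyTorch 2.x):

import torch

def same_storage(t1, t2):
    # hypothetical helper: two tensors share memory iff their
    # untyped storages start at the same address
    return t1.untyped_storage().data_ptr() == t2.untyped_storage().data_ptr()

a = torch.zeros(2, 3, 4)
print(same_storage(a, a.view(6, 4)))        # True
print(same_storage(a, a.reshape(6, 4)))     # True (a is contiguous)
print(same_storage(a, a.permute(1, 0, 2)))  # True
print(same_storage(a, a.transpose(0, 1)))   # True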