Getting Started with PyTorch, Following the Official Docs (Part 2)

2019-06-16  NJUST江波

The PyTorch `torch` package

The `torch` package defines the multi-dimensional tensor data structure and a wide range of mathematical operations over it.

Functions

`torch.gather` selects elements from the input along a given dimension, as specified by an index tensor:

import torch

t = torch.Tensor([[1, 2, 3], [4, 5, 6]])
index_a = torch.LongTensor([[0, 0], [0, 1]])
index_b = torch.LongTensor([[0, 1, 1], [1, 0, 0]])
print(t)
print(torch.gather(t, dim=1, index=index_a))
print(torch.gather(t, dim=0, index=index_b))
>> tensor([[1., 2., 3.],
        [4., 5., 6.]])
>> tensor([[1., 1.],
        [4., 5.]])
>> tensor([[1., 5., 6.],
        [4., 2., 3.]])

How torch.gather(t, dim=1, index=index_a) is computed:

output[0,0] = input[0,index[0,0]]= input[0,0]=1
output[0,1] = input[0,index[0,1]]= input[0,0]=1
output[1,0] = input[1,index[1,0]]= input[1,0]=4
output[1,1] = input[1,index[1,1]]= input[1,1]=5

How torch.gather(t, dim=0, index=index_b) is computed:

output[0,0] = input[index[0,0],0]=input[0,0] = 1
output[0,1] = input[index[0,1],1]=input[1,1] = 5
output[0,2] = input[index[0,2],2]=input[1,2] = 6
output[1,0] = input[index[1,0],0]=input[1,0] = 4
output[1,1] = input[index[1,1],1]=input[0,1] = 2
output[1,2] = input[index[1,2],2]=input[0,2] = 3
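The two element-by-element walkthroughs above follow one general rule for a 2-D input: with `dim=1`, `out[i][j] = input[i][index[i][j]]`; with `dim=0`, `out[i][j] = input[index[i][j]][j]`. As a sanity check, here is a minimal pure-Python sketch of that rule (the helper `gather_2d` is ours, not part of PyTorch, and handles only the 2-D case):

```python
def gather_2d(inp, dim, index):
    """Mimic torch.gather for 2-D nested lists.

    The output has the same shape as `index`; each entry is picked
    from `inp` by replacing the coordinate along `dim` with the
    corresponding index value.
    """
    rows, cols = len(index), len(index[0])
    if dim == 1:
        # out[i][j] = inp[i][index[i][j]]
        return [[inp[i][index[i][j]] for j in range(cols)] for i in range(rows)]
    # dim == 0: out[i][j] = inp[index[i][j]][j]
    return [[inp[index[i][j]][j] for j in range(cols)] for i in range(rows)]

t = [[1, 2, 3], [4, 5, 6]]
print(gather_2d(t, 1, [[0, 0], [0, 1]]))       # [[1, 1], [4, 5]]
print(gather_2d(t, 0, [[0, 1, 1], [1, 0, 0]])) # [[1, 5, 6], [4, 2, 3]]
```

Both printed results match the `torch.gather` outputs shown above.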
`torch.stack` concatenates a sequence of tensors along a new dimension:

>>> a=torch.rand((1,2))
>>> b=torch.rand((1,2))
>>> print(a)
tensor([[0.8738, 0.8688]])
>>> print(b)
tensor([[0.9889, 0.6731]])
>>> print(torch.stack((a,b),0))
tensor([[[0.8738, 0.8688]],

        [[0.9889, 0.6731]]])
>>> print(torch.stack((a,b),1))
tensor([[[0.8738, 0.8688],
         [0.9889, 0.6731]]])
>>> print(torch.stack((a,b),0).size())
torch.Size([2, 1, 2])
>>> print(torch.stack((a,b),1).size())
torch.Size([1, 2, 2])
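The sizes printed above follow directly from how `stack` inserts a new axis: stacking k tensors that all have shape s along dimension d produces shape `s[:d] + (k,) + s[d:]`. A small pure-Python sketch of that shape rule (the helper `stack_shape` is ours, purely illustrative):

```python
def stack_shape(shapes, dim):
    """Resulting shape of torch.stack over tensors sharing one shape."""
    s = shapes[0]
    assert all(sh == s for sh in shapes), "stack requires identical shapes"
    k = len(shapes)  # the new axis has length = number of tensors stacked
    return s[:dim] + (k,) + s[dim:]

print(stack_shape([(1, 2), (1, 2)], 0))  # (2, 1, 2)
print(stack_shape([(1, 2), (1, 2)], 1))  # (1, 2, 2)
```

This matches `torch.Size([2, 1, 2])` and `torch.Size([1, 2, 2])` in the REPL session above; by contrast, `torch.cat` joins along an existing dimension and adds no new axis.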
# Load all tensors onto the CPU
>>> torch.load('tensors.pt', map_location='cpu')
# Load all tensors onto the CPU, using a function
>>> torch.load('tensors.pt', map_location=lambda storage, loc: storage)
# Load all tensors onto GPU 1
>>> torch.load('tensors.pt', map_location=lambda storage, loc: storage.cuda(1))
# Map tensors from GPU 1 to GPU 0
>>> torch.load('tensors.pt', map_location={'cuda:1':'cuda:0'})