
PyTorch Basics

2017-10-23  四碗饭儿

PyTorch is a deep learning framework that is well suited as a platform for deep learning research.

Advantages of PyTorch

Outline of the PyTorch series

Topics covered

Basic data structures

Tensor

import torch

x = torch.Tensor([[1, 2, 3]])   # a 1x3 tensor; x.size() returns its shape
print(x.expand(3, -1))          # expand the singleton dimension to size 3

 1  2  3
 1  2  3
 1  2  3
[torch.FloatTensor of size 3x3]

Tensors also support reductions such as torch.mean().
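For example, a small illustration using the same x as above:

print(torch.mean(x))   # 2.0, the mean of all elements
print(x.mean())        # the same reduction via the method form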

batch_size = 2

a = torch.Tensor([1, 2, 3])
a = a.expand([batch_size, a.size()[0]])   # shape (2, 3): the row [1, 2, 3] repeated batch_size times

w1 = torch.Tensor([1, 2, 3])
a * w1

1  4  9
1  4  9
[torch.FloatTensor of size 2x3]

Computation graphs and automatic differentiation

Variable

One benefit of using a deep learning framework is that once we have built the computation graph (describing how the output is obtained from the input), the framework can perform error back-propagation and compute gradients for us. How is this achieved? In PyTorch we rely on the Variable, which remembers its own history.

A Variable is similar to a Tensor, but it remembers how it was created.

import torch
from torch import autograd

x = autograd.Variable(torch.Tensor([1., 2., 3.]), requires_grad=True)
print(x.data)
y = autograd.Variable(torch.Tensor([4., 5., 6.]), requires_grad=True)
z = x + y
print(z.data)
print(z.grad_fn)    # z remembers that it was created by an addition
s = z.sum()         # reduce z to a scalar so that backward() can be called
s.backward()        # back-propagate through the recorded graph
print(x.grad)       # d(s)/dx, a Tensor of ones

Fine-grained control over automatic differentiation

A Variable has two important attributes, requires_grad and volatile. When a Variable is created, both default to False (except for the model parameters inside a network).

import torchvision
import torch.nn as nn
import torch.optim as optim

model = torchvision.models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# Replace the last fully-connected layer.
# Parameters of newly constructed modules have requires_grad=True by default.
model.fc = nn.Linear(512, 100)

# Optimize only the classifier
optimizer = optim.SGD(model.fc.parameters(), lr=1e-2, momentum=0.9)
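The code above uses requires_grad to freeze parameters. volatile was the inference-time counterpart in the PyTorch versions this post was written against (0.2/0.3; later versions replaced it with torch.no_grad()): a volatile input tells autograd not to build a graph at all. A minimal sketch, reusing the model from above:

import torch
from torch.autograd import Variable

x = Variable(torch.randn(1, 3, 224, 224), volatile=True)  # inference only: no graph is built
output = model(x)          # model is the resnet18 defined above
print(output.volatile)     # True: volatility propagates to everything computed from x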

Deep learning building blocks

For large neural-network models, the raw autograd mechanism alone is not enough: it only provides low-level operations. We also need higher-level deep learning building blocks that wrap those low-level computations into layers. PyTorch's nn package defines a set of Modules, which are roughly equivalent to layers; a Module typically holds its learnable parameters (e.g. weights and biases) and defines how to compute its output from its input.

# -*- coding: utf-8 -*-
import torch
from torch.autograd import Variable

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold inputs and outputs, and wrap them in Variables.
x = Variable(torch.randn(N, D_in))
y = Variable(torch.randn(N, D_out), requires_grad=False)

# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. Each Linear Module computes output from input using a
# linear function, and holds internal Variables for its weight and bias.
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)

# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(size_average=False)

learning_rate = 1e-4
for t in range(500):
    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the __call__ operator so you can call them like functions. When
    # doing so you pass a Variable of input data to the Module and it produces
    # a Variable of output data.
    y_pred = model(x)

    # Compute and print loss. We pass Variables containing the predicted and true
    # values of y, and the loss function returns a Variable containing the
    # loss.
    loss = loss_fn(y_pred, y)
    print(t, loss.data[0])

    # Zero the gradients before running the backward pass.
    model.zero_grad()

    # Backward pass: compute gradient of the loss with respect to all the learnable
    # parameters of the model. Internally, the parameters of each Module are stored
    # in Variables with requires_grad=True, so this call will compute gradients for
    # all learnable parameters in the model.
    loss.backward()

    # Update the weights using gradient descent. Each parameter is a Variable, so
    # we can access its data and gradients like we did before.
    for param in model.parameters():
        param.data -= learning_rate * param.grad.data

The nn package also provides a number of loss functions, such as the MSELoss used above.
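For instance, a classification loss can be used the same way (a small sketch that is not part of the original example; CrossEntropyLoss combines LogSoftmax and NLLLoss):

import torch
from torch.autograd import Variable

loss_fn = torch.nn.CrossEntropyLoss()
scores = Variable(torch.randn(4, 10))               # unnormalized scores for 4 samples, 10 classes
labels = Variable(torch.LongTensor([1, 0, 3, 9]))   # ground-truth class indices
print(loss_fn(scores, labels).data[0])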

Custom Modules

Subclass nn.Module and define a forward method, as in the sketch below.
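A minimal sketch of what such a subclass can look like (the two-layer architecture and sizes here are only illustrative):

import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoLayerNet(nn.Module):
    def __init__(self, D_in, H, D_out):
        super(TwoLayerNet, self).__init__()
        # Sub-modules assigned as attributes are registered automatically,
        # so their parameters show up in model.parameters().
        self.linear1 = nn.Linear(D_in, H)
        self.linear2 = nn.Linear(H, D_out)

    def forward(self, x):
        # Define how the output is computed from the input.
        h = F.relu(self.linear1(x))
        return self.linear2(h)

model = TwoLayerNet(1000, 100, 10)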

Optimization and training

torch.optim
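With torch.optim, the manual weight update in the earlier loop can be replaced by an optimizer object (a sketch assuming the model, loss_fn, x and y defined in the nn example; the choice of Adam and the learning rate are illustrative):

import torch.optim as optim

optimizer = optim.Adam(model.parameters(), lr=1e-4)
for t in range(500):
    y_pred = model(x)
    loss = loss_fn(y_pred, y)

    optimizer.zero_grad()   # clear the gradients accumulated in the previous iteration
    loss.backward()         # compute gradients of the loss w.r.t. all model parameters
    optimizer.step()        # let the optimizer update the parameters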

A complete network in PyTorch

Network components should subclass torch.nn.Module.

class MyNN(torch.nn.Module):
    def __init__(self):
        super(MyNN, self).__init__()   # define the network's parameters / sub-modules here
    def forward(self, x):              # forward pass: compute the output from the input
        ...

Inputs and targets should be wrapped as Variables.

def make_input(instance): ...   # should return the input wrapped in a Variable
def make_target(label): ...     # should return the target wrapped in a Variable
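What these helpers return depends on the data; a minimal sketch, assuming each instance is a list of numeric features and each label an integer class index:

import torch
from torch.autograd import Variable

def make_input(instance):
    # wrap the raw feature values in a FloatTensor-backed Variable
    return Variable(torch.Tensor([instance]))

def make_target(label):
    # NLLLoss expects a LongTensor of class indices
    return Variable(torch.LongTensor([label]))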

Training

loss_function = torch.nn.NLLLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
for epoch in range(100):
    for instance, label in data:
        model.zero_grad()  # PyTorch accumulates gradients; clear them before updating on a new sample
        input = make_input(instance)
        target = make_target(label)
        output = model(input)                 # forward pass
        loss = loss_function(output, target)  # compute the loss
        loss.backward()                       # back-propagate the error
        optimizer.step()                      # update the parameters
