AI That Product Managers Can Practice Hands-On (Part 6): Training a Simple Neural Network from Scratch

2019-06-22  Hawwwk


1. Overview

2.1 Machine learning concepts

2.2 FastAI concepts

2.3 Python commands

2.4 PyTorch commands

x, y = next(iter(data.train_dl)): fetches the next mini-batch from the DataLoader
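For example (a minimal sketch, assuming `data` is a fastai DataBunch over flattened MNIST images with a batch size of 64; both assumptions are mine):

    x, y = next(iter(data.train_dl))   # pull one mini-batch from the training DataLoader
    print(x.shape, y.shape)            # e.g. torch.Size([64, 784]) torch.Size([64])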

3. How to build a neural network from scratch, demonstrated through three examples; each example consists of a forward part and a backward part:

Example 1: the most basic linear fit, trained with SGD

Forward:

y_hat = x @ a   # predictions: matrix product of the inputs and the parameters

Backward:

def update():
    y_hat = x @ a                  # recompute the forward pass each step
    loss = mse(y, y_hat)
    if t % 10 == 0: print(loss)    # log the loss every 10 steps (t is the loop counter)
    loss.backward()                # backward pass: compute d(loss)/d(a)
    with torch.no_grad():
        a.sub_(lr * a.grad)        # SGD step: a -= lr * grad
        a.grad.zero_()             # reset gradients for the next step
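In the notebook, `x`, `y`, `a`, `lr`, `mse`, and the step counter `t` all live outside `update()`. Here is a minimal self-contained sketch of the surrounding setup and training loop; the synthetic data, learning rate, and step count are my assumptions, modeled on fastai's lesson-2 SGD notebook:

    import torch

    n = 100
    x = torch.ones(n, 2)
    x[:, 0].uniform_(-1., 1.)                       # feature in column 0, constant bias column of 1s
    y = x @ torch.tensor([3., 2.]) + torch.rand(n)  # noisy targets from "true" parameters (3, 2)

    def mse(y, y_hat): return ((y - y_hat) ** 2).mean()

    a = torch.randn(2, requires_grad=True)          # the parameters to learn
    lr = 1e-1

    for t in range(100):                            # t is read inside update() for logging
        update()

After 100 steps, `a` should land close to (3, 2): at each step the parameters are nudged downhill along the gradient of the loss, which is all SGD does.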

Example 2: fitting handwritten digits with logistic regression (Mnist_Logistic)

Forward:

from torch import nn

class Mnist_Logistic(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(784, 10, bias=True)  # one linear layer: 784 pixels in, 10 digit classes out

    def forward(self, xb): return self.lin(xb)
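To make the shapes concrete, a quick check (the random batch is just a stand-in for real MNIST data):

    import torch

    model = Mnist_Logistic()
    xb = torch.randn(64, 784)      # fake mini-batch: 64 flattened 28x28 images
    print(model(xb).shape)         # torch.Size([64, 10]): one score per digit class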

Backward:

def update(x, y, lr):
    wd = 1e-5                      # weight-decay coefficient
    y_hat = model(x)               # forward pass (model and loss_func live at notebook scope)
    # weight decay: sum of squared weights, added to the loss as L2 regularization
    w2 = 0.
    for p in model.parameters(): w2 += (p ** 2).sum()
    # add to regular loss
    loss = loss_func(y_hat, y) + w2 * wd
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            p.sub_(lr * p.grad)    # manual SGD step on every parameter tensor
            p.grad.zero_()
    return loss.item()
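In the source notebook this `update` is driven by one pass over the training set. A minimal sketch, assuming `data` is the MNIST DataBunch from section 2.4 (yielding flattened 784-pixel images), cross-entropy as the loss, and a learning rate of 2e-2; all three are assumptions on my part:

    from torch import nn

    model = Mnist_Logistic()
    loss_func = nn.CrossEntropyLoss()   # standard loss for multi-class classification

    # One epoch: one update() call per mini-batch from the DataLoader
    losses = [update(x, y, lr=2e-2) for x, y in data.train_dl]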

Example 3: a faster, multi-layer fit of handwritten digits (Mnist_NN)

Forward:

import torch.nn.functional as F
from torch import nn, optim

class Mnist_NN(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin1 = nn.Linear(784, 50, bias=True)   # hidden layer: 784 -> 50 units
        self.lin2 = nn.Linear(50, 10, bias=True)    # output layer: 50 -> 10 classes

    def forward(self, xb):
        x = self.lin1(xb)
        x = F.relu(x)               # non-linearity between the two linear layers
        return self.lin2(x)

Backward:

def update(x, y, lr):
    opt = optim.Adam(model.parameters(), lr)   # note: re-created on every call, so Adam's
                                               # running statistics reset each mini-batch
    y_hat = model(x)                # forward pass
    loss = loss_func(y_hat, y)
    loss.backward()                 # backward pass
    opt.step()                      # let the optimizer apply the parameter update
    opt.zero_grad()
    return loss.item()
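Driving the two-layer model looks the same; a minimal sketch reusing `data` and `loss_func` from the previous example (the learning rate is my assumption). Because `update()` rebuilds the Adam optimizer on every call, its state resets each mini-batch; a common alternative is to create the optimizer once so its statistics persist:

    model = Mnist_NN()

    # As written above: per-batch updates with a fresh optimizer each call
    losses = [update(x, y, lr=1e-3) for x, y in data.train_dl]

    # Alternative: build the optimizer once and reuse it across batches
    opt = optim.Adam(model.parameters(), lr=1e-3)
    for x, y in data.train_dl:
        loss = loss_func(model(x), y)
        loss.backward()
        opt.step()
        opt.zero_grad()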