
Building a simple neural network with PyTorch

2020-11-14  生信编程日常

A basic neural network structure is shown in Figure 1 and Figure 2. Figure 1 is a schematic of a single neuron; Figure 2 is a simple neural network with one hidden layer.

As shown in Figure 1, given a set of input features (input signals) x1, x2, …, xm, we assign each one a corresponding weight wk1, wk2, …, wkm, multiply them pairwise and add the products, giving x1wk1 + x2wk2 + x3wk3 + ... + xmwkm. We then add a bias term to obtain a sum, pass that value through an activation function (such as sigmoid or ReLU), and output the result.
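
As a minimal sketch of this single-neuron computation (the values and variable names below are illustrative only, not from the original post):

import torch

x = torch.tensor([1.0, 2.0, 3.0])   # input signals x1..xm (m = 3)
w = torch.tensor([0.5, -0.2, 0.1])  # weights wk1..wkm
b = torch.tensor(0.3)               # bias
s = torch.dot(x, w) + b             # weighted sum plus bias
out = torch.sigmoid(s)              # activation (sigmoid here)
print(out)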

Figure 2 builds on Figure 1 by adding a hidden layer. For the inputs in Layer L1, we assign several groups of weights and compute the values of Layer L2; those values are then weighted by a1, a2, a3 and summed, and the sum is passed through the activation function to produce the output, as in the sketch after the figure.


Figure 1 (a single neuron) and Figure 2 (a simple network with one hidden layer)
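
Here is a minimal sketch of the two-layer forward pass described above, written with plain tensor operations (the shapes, weight values, and variable names are illustrative assumptions, not from the original post):

import torch

x = torch.tensor([1.0, 2.0, 3.0])   # Layer L1: three inputs
W1 = torch.randn(3, 3)              # one group of weights per hidden unit
b1 = torch.randn(3)
h = torch.relu(W1 @ x + b1)         # Layer L2: hidden-unit values
a = torch.randn(3)                  # output weights a1, a2, a3
b2 = torch.randn(1)
out = torch.sigmoid(a @ h + b2)     # weighted sum -> activation -> output
print(out)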

The following implements a simple network of this kind with PyTorch:

import torch
import numpy as np
from sklearn import datasets
import pandas as pd

# data: the Boston housing dataset
# (note: load_boston was removed in scikit-learn 1.2, so this call needs an older scikit-learn)
boston = datasets.load_boston()

housing_data = pd.DataFrame(boston.data)
housing_data.columns = boston.feature_names
# housing_data["Price"] = boston.target
housing_data.head(5)

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(housing_data, boston.target, random_state = 1, test_size = 0.1)

class Net(torch.nn.Module):
    def __init__(self, n_feature, n_output):
        super(Net, self).__init__()
        self.hidden = torch.nn.Linear(n_feature, 100)   # hidden layer with 100 units
        self.predict = torch.nn.Linear(100, n_output)   # output layer

    def forward(self, x):
        out = self.hidden(x)
        out = torch.relu(out)
        out = self.predict(out)
        return out

net = Net(13, 1)  # 13 input features, 1 output
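
As a quick sanity check (an optional step, not in the original post), a dummy batch can be pushed through the network to confirm the output shape:

dummy = torch.randn(4, 13)  # a fake batch of 4 samples with 13 features
print(net(dummy).shape)     # expected: torch.Size([4, 1])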
    
# define the loss
loss_func = torch.nn.MSELoss()

# optimizer
optimizer = torch.optim.Adam(net.parameters(), lr=0.01)

# training: convert the data to float tensors
x_train = torch.Tensor(np.array(X_train))
y_train = torch.Tensor(np.array(y_train))
x_test = torch.Tensor(np.array(X_test))
y_test = torch.Tensor(np.array(y_test))

loss_train_list = []
loss_test_list = []

for i in range(1000):
    pred = net(x_train)
    pred = torch.squeeze(pred)
    loss_train = loss_func(pred, y_train)

    optimizer.zero_grad()  # reset gradients to zero
    loss_train.backward()  # backpropagation
    optimizer.step()       # update the parameters

    loss_train_list.append(loss_train.item())  # store a plain float, not a tensor

    # test
    with torch.no_grad():
        pred = torch.squeeze(net(x_test))
        loss_test = loss_func(pred, y_test)
    loss_test_list.append(loss_test.item())

print('end!')
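
After training, a quick way to check the fit (an optional step, not in the original post) is to compare a few predictions against the true prices on the held-out test set:

with torch.no_grad():
    preds = torch.squeeze(net(x_test))
print('final test MSE:', loss_func(preds, y_test).item())
print('first 5 predictions:', preds[:5].numpy())
print('first 5 targets:', y_test[:5].numpy())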

Let's look at how the loss changes over training:

import matplotlib.pyplot as plt
plt.plot(np.log(np.array(loss_train_list)), label='train')
plt.plot(np.log(np.array(loss_test_list)), label='test')
plt.legend(prop={'size': 18})
plt.show()