Implementing LeNet-5 in PyTorch on a Custom Dataset (CIFAR-10)
LeNet-5
Original paper: http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf
LeNet-5 network architecture
The paper runs to 42 pages, but the core description of the LeNet-5 network is only a small part of it; we can jump straight to Section II.B. The LeNet-5 architecture is as follows:
LeNet-5 has 7 layers (the input layer is not counted). Each layer has trainable parameters, with the three convolutional layers holding most of them; each layer contains several filters, each producing a feature map that extracts different features from the previous layer's output. The simplified structure of LeNet-5 is therefore:
Input - Convolution - Pooling - Convolution - Pooling - Convolution (fully connected) - Fully connected - Fully connected (output)
The structure and parameters of each layer are as follows:
C1 is a convolutional layer with the following input/output structure:
Input: 32 x 32 x 1; filter size: 5 x 5 x 1; number of filters: 6
Output: 28 x 28 x 6
Number of parameters: 5 x 5 x 1 x 6 + 6 = 156
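As a quick sanity check, C1's output shape and parameter count can be reproduced with a single nn.Conv2d in PyTorch (a minimal sketch; the names c1 and x are only for illustration):

import torch
from torch import nn

c1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)   # six 5 x 5 x 1 filters
x = torch.randn(1, 1, 32, 32)                                  # one 32 x 32 x 1 input
print(c1(x).shape)                                             # torch.Size([1, 6, 28, 28])
print(sum(p.numel() for p in c1.parameters()))                 # 156 = 5 x 5 x 1 x 6 + 6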
P2 is a pooling layer with the following input/output structure:
Input: 28 x 28 x 6; filter size: 2 x 2; number of filters: 6
Output: 14 x 14 x 6
Number of parameters: 2 x 6 = 12
In the original paper the pooling (subsampling) layers use average pooling, but since max pooling is now the usual choice, the code below uses max pooling throughout.
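Both pooling variants halve the spatial size, and neither has trainable parameters in PyTorch; the 2 x 6 = 12 count above refers to the trainable coefficient and bias that each subsampling map has in the original paper. A small illustrative check:

import torch
from torch import nn

feature_maps = torch.randn(1, 6, 28, 28)          # output of C1
print(nn.MaxPool2d(2, 2)(feature_maps).shape)     # torch.Size([1, 6, 14, 14])
print(nn.AvgPool2d(2, 2)(feature_maps).shape)     # torch.Size([1, 6, 14, 14])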
C3 is a convolutional layer with the following input/output structure:
Input: 14 x 14 x 6; filter size: 5 x 5 x 6; number of filters: 16
Output: 10 x 10 x 16
Number of parameters: 5 x 5 x 6 x 16 + 16 = 2416
In the original paper, each C3 feature map is computed from a combination of the feature maps produced by P2 pooling (the paper gives a connection table for these combinations).
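Note that the 2416 figure assumes every C3 filter sees all six P2 feature maps, which is exactly what nn.Conv2d does; the paper's sparser connection table gives a smaller count. A quick check under the full-connectivity assumption:

from torch import nn

c3 = nn.Conv2d(in_channels=6, out_channels=16, kernel_size=5)
print(sum(p.numel() for p in c3.parameters()))    # 2416 = 5 x 5 x 6 x 16 + 16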
P4 is a pooling layer with the following input/output structure:
Input: 10 x 10 x 16; filter size: 2 x 2; number of filters: 16
Output: 5 x 5 x 16
Number of parameters: 2 x 16 = 32
C5 is described in the paper as a convolutional layer, but because its 5 x 5 filters exactly cover the 5 x 5 input, it is effectively a fully connected layer: flatten the 5 x 5 x 16 input into a vector and C5 is simply a fully connected layer. Its input/output structure is:
Input: 5 x 5 x 16; filter size: 5 x 5 x 16; number of filters: 120
Output: 1 x 1 x 120
Number of parameters: 5 x 5 x 16 x 120 + 120 = 48120
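The equivalence between a 5 x 5 convolution applied to a 5 x 5 input and a fully connected layer can be checked directly; both formulations below have the same 48120 parameters (the names as_conv and as_fc are illustrative):

import torch
from torch import nn

as_conv = nn.Conv2d(16, 120, kernel_size=5)       # C5 as a convolution: 5 x 5 x 16 -> 1 x 1 x 120
as_fc = nn.Linear(5 * 5 * 16, 120)                # C5 as a fully connected layer: 400 -> 120
print(as_conv(torch.randn(1, 16, 5, 5)).shape)    # torch.Size([1, 120, 1, 1])
print(sum(p.numel() for p in as_conv.parameters()),
      sum(p.numel() for p in as_fc.parameters())) # 48120 48120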
F6 is a fully connected layer; in the paper its activation function is a scaled tanh (the code below uses ReLU instead). Its input/output structure is:
Input: 120
Output: 84
Number of parameters: 120 x 84 + 84 = 10164
F7 is the output layer, also a fully connected layer, with the following input/output structure:
Input: 84
Output: 10
Number of parameters: 84 x 10 + 10 = 850
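Adding up the layer counts above (for a single-channel 32 x 32 input, and ignoring the 12 + 32 subsampling coefficients that PyTorch's pooling layers do not have) gives the total number of trainable parameters:

# C1 + C3 + C5 + F6 + F7
print(156 + 2416 + 48120 + 10164 + 850)           # 61706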
code:
import os
import time
import torch
import torchvision
from torch import nn, optim
from torch.utils import data
from torchvision import transforms
# Device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
WORK_DIR = '/tmp/cifar10'
NUM_EPOCHS = 10
BATCH_SIZE = 128
LEARNING_RATE = 1e-4
NUM_CLASSES = 10
MODEL_PATH = './models'
MODEL_NAME = 'LeNet.pth'
# Create the directory for saving models
if not os.path.exists(MODEL_PATH):
    os.makedirs(MODEL_PATH)
transform = transforms.Compose([
    transforms.RandomCrop(36, padding=4),
    transforms.RandomResizedCrop(32),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])
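# The Compose above is training-time augmentation. For validation/inference the random
# operations would normally be dropped; a minimal sketch keeping the same normalization
# (eval_transform is an extra name introduced here, not used by the training loop below):
eval_transform = transforms.Compose([
    transforms.Resize(32),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])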
# Load data (ImageFolder expects WORK_DIR/train/<class_name>/<image files>)
train_dataset = torchvision.datasets.ImageFolder(root=WORK_DIR + '/' + 'train',
                                                 transform=transform)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=BATCH_SIZE,
                                           shuffle=True)
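# ImageFolder assumes CIFAR-10 has already been exported to WORK_DIR/train/<class_name>/...
# If you just want the standard CIFAR-10, torchvision can download it directly; an
# equivalent (commented-out) sketch using the built-in dataset class:
# train_dataset = torchvision.datasets.CIFAR10(root=WORK_DIR,
#                                              train=True,
#                                              transform=transform,
#                                              download=True)
# train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
#                                            batch_size=BATCH_SIZE,
#                                            shuffle=True)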
class LeNet(nn.Module):
    """LeNet-5 style network for 32 x 32 RGB inputs.

    Args:
        num_classes: number of image classes.
    """

    def __init__(self, num_classes=NUM_CLASSES):
        super(LeNet, self).__init__()
        self.features = nn.Sequential(
            # C1: 32 x 32 x 3 -> 28 x 28 x 6 (no padding, as in the paper)
            nn.Conv2d(3, 6, kernel_size=5, stride=1, padding=0),
            nn.ReLU(inplace=True),
            # P2: 28 x 28 x 6 -> 14 x 14 x 6
            nn.MaxPool2d(kernel_size=2, stride=2),
            # C3: 14 x 14 x 6 -> 10 x 10 x 16
            nn.Conv2d(6, 16, kernel_size=5, stride=1, padding=0),
            nn.ReLU(inplace=True),
            # P4: 10 x 10 x 16 -> 5 x 5 x 16
            nn.MaxPool2d(kernel_size=2, stride=2)
        )
        self.classifier = nn.Sequential(
            nn.Dropout(),
            # C5: 5 x 5 x 16 = 400 -> 120
            nn.Linear(5 * 5 * 16, 120),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            # F6: 120 -> 84
            nn.Linear(120, 84),
            nn.ReLU(inplace=True),
            # F7 (output): 84 -> num_classes
            nn.Linear(84, num_classes)
        )

    def forward(self, x):
        x = self.features(x)
        # Flatten to (batch_size, 400) before the fully connected layers
        x = x.view(x.size(0), -1)
        out = self.classifier(x)
        return out
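# Quick sanity check: a fake batch of two 32 x 32 RGB images should come out of the
# feature extractor as 16 maps of size 5 x 5 (= the 400 features the classifier expects)
# and out of the full model as NUM_CLASSES scores per image.
_dummy = torch.randn(2, 3, 32, 32)
print(LeNet().features(_dummy).shape)   # torch.Size([2, 16, 5, 5])
print(LeNet()(_dummy).shape)            # torch.Size([2, 10])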
def main():
    print(f"Train numbers: {len(train_dataset)}")

    # Load model and move it to the selected device
    model = LeNet().to(device)
    # Loss function
    criterion = nn.CrossEntropyLoss().to(device)
    # Optimizer
    optimizer = optim.Adam(
        model.parameters(),
        lr=LEARNING_RATE,
        weight_decay=1e-8)

    step = 1
    for epoch in range(1, NUM_EPOCHS + 1):
        model.train()

        # Time one epoch
        start = time.time()
        for images, labels in train_loader:
            images = images.to(device)
            labels = labels.to(device)

            # Forward pass
            outputs = model(images)
            loss = criterion(outputs, labels)

            # Backward and optimize
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            print(f"Step [{step * BATCH_SIZE}/{NUM_EPOCHS * len(train_dataset)}], "
                  f"Loss: {loss.item():.8f}.")
            step += 1

        end = time.time()
        print(f"Epoch [{epoch}/{NUM_EPOCHS}], "
              f"time: {end - start:.2f} sec!")

        # Save the model checkpoint (the whole model object, not just the state_dict)
        torch.save(model, MODEL_PATH + '/' + MODEL_NAME)
        print(f"Model saved to {MODEL_PATH + '/' + MODEL_NAME}.")


if __name__ == '__main__':
    main()
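Since the checkpoint stores the whole model object, a separate evaluation step can load it back with torch.load and measure accuracy on a held-out split. A minimal sketch, assuming a WORK_DIR/test directory laid out like the training one (the evaluate function and its names are illustrative, not part of the original script):

def evaluate():
    # On recent PyTorch versions loading a full model may require weights_only=False
    model = torch.load(MODEL_PATH + '/' + MODEL_NAME, map_location=device)
    model.eval()

    test_dataset = torchvision.datasets.ImageFolder(root=WORK_DIR + '/' + 'test',
                                                    transform=transform)  # ideally a non-augmented transform
    test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                              batch_size=BATCH_SIZE,
                                              shuffle=False)

    correct = 0
    total = 0
    with torch.no_grad():
        for images, labels in test_loader:
            images, labels = images.to(device), labels.to(device)
            predictions = model(images).argmax(dim=1)
            correct += (predictions == labels).sum().item()
            total += labels.size(0)
    print(f"Accuracy on test set: {100 * correct / total:.2f}%")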