
2020 Classic Convolutional Neural Networks (1): VGG

2020-05-21  zidea

In a convolutional neural network we read an image with convolution kernels of different sizes or different patterns. Layer by layer, convolution gradually converts spatial features into channel features.
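To make that concrete, here is a minimal sketch (my own illustration, using the same Keras API as the code later in this post): as a tensor passes through conv + pooling blocks, its spatial size shrinks while its channel count grows.

from keras import Sequential
from keras.layers import Conv2D, MaxPooling2D

demo = Sequential()
demo.add(Conv2D(64, (3, 3), padding='same', activation='relu', input_shape=(28, 28, 1)))  # -> (28, 28, 64)
demo.add(MaxPooling2D((2, 2)))                                                            # -> (14, 14, 64)
demo.add(Conv2D(128, (3, 3), padding='same', activation='relu'))                          # -> (14, 14, 128)
demo.add(MaxPooling2D((2, 2)))                                                            # -> (7, 7, 128)
demo.summary()  # spatial size goes 28 -> 7 while the channel count goes 1 -> 128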

Many of today's models are developed on top of a handful of classic convolutional networks, adding tricks on these backbones to get better results.

VGG came right after AlexNet. Where AlexNet mixes kernels of several sizes such as 11×11 and 5×5, VGG uses 3×3 kernels almost everywhere. The point is that stacking small kernels gives the same receptive field as one large kernel: two stacked 3×3 convolutions cover the same area as a 5×5, and three cover a 7×7, while using fewer parameters and making the network deeper (a quick parameter count follows the figure below).

(Figure: 3_replace_5.jpg — stacked 3×3 convolutions replacing a larger kernel)
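A quick back-of-the-envelope check of the parameter savings (my own numbers, not from the original post): for C input channels and C output channels, three stacked 3×3 convolutions cover the same 7×7 receptive field with roughly 45% fewer weights.

C = 256                             # channels in and out, just for illustration
one_7x7   = 7 * 7 * C * C           # weights of a single 7x7 conv layer (biases ignored)
three_3x3 = 3 * (3 * 3 * C * C)     # weights of three stacked 3x3 conv layers
print(one_7x7, three_3x3)           # 3211264 vs 1769472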

Modern convolutional networks improve along two directions. One is widening each block so that it can extract more features; the other, which in practice works better than widening, is deepening the network. As for why a deeper network beats a wider one, 李宏毅 (Hung-yi Lee) has given an answer; if you are interested you can look it up yourself.

VGG went down the deepening road, while GoogLeNet, around the same time, opened up the other road of improving accuracy by widening the network.


(Figure: vgg16.png — the VGG16 architecture table)
# coding: utf-8
import tensorflow as tf
from keras import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from keras.utils.np_utils import to_categorical
from keras.datasets import mnist
from sklearn.metrics import recall_score, f1_score, precision_score

The dataset used here is MNIST, the classic handwritten-digit dataset.

# print("hello VGG")
data = mnist.load_data()
(X_train,Y_train),(X_test,Y_test) = data

# (60000, 28, 28)
# print(X_train.shape)

# print(X_train[0])
# 添加通道
X_train=X_train.reshape(-1,28,28,1)
X_test=X_test.reshape(-1,28,28,1)
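# optional, not in the original post: scale pixel values from [0, 255] to [0, 1],
# which usually makes training converge more smoothly
X_train = X_train.astype('float32') / 255.0
X_test = X_test.astype('float32') / 255.0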
# print(Y_train.shape)
Y_train=to_categorical(Y_train,num_classes=10)
Y_test=to_categorical(Y_test,num_classes=10)
# print(Y_train.shape)

# print(X_train.shape[1:])

What we are implementing today is VGG16. Its configuration can be read straight off the table, and we just turn the table into code row by row; there is nothing really difficult about it — the work is reading the table and writing the code.

def VGG16(X,Y):
    model = Sequential()
    # 1st
    model.add(Conv2D(64,(3,3),
        strides=(1,1),
        input_shape=X.shape[1:],
        padding='same',
        data_format='channels_last',
        activation='relu',
        kernel_initializer='uniform'))

    model.add(Conv2D(64,(3,3),
        strides=(1,1),
        padding='same',
        data_format='channels_last',
        kernel_initializer='uniform',
        activation='relu'))
    
    model.add(MaxPooling2D((2,2), padding='same'))  # 'same' padding so the 28x28 MNIST input is not pooled down to zero by the fifth block

    # 2nd
    model.add(Conv2D(128,(3,3),
        strides=(1,1),
        padding='same',
        data_format='channels_last',
        kernel_initializer='uniform',
        activation='relu'))
    model.add(Conv2D(128,(3,3),
        strides=(1,1),
        padding='same',
        data_format='channels_last',
        kernel_initializer='uniform',
        activation='relu'))
    model.add(MaxPooling2D((2,2), padding='same'))

    # 3rd
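    # note: the original VGG16 (configuration D in the paper) uses three 3x3 convs in blocks 3-5;
    # the code here follows configuration C, where the third conv of each of these blocks is 1x1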
    model.add(Conv2D(256,(3,3),
        strides=(1,1),
        padding='same',
        data_format='channels_last',
        activation='relu'))
    model.add(Conv2D(256,(3,3),
        strides=(1,1),
        padding='same',
        data_format='channels_last',
        activation='relu'))
    model.add(Conv2D(256,(1,1),
        strides=(1,1),
        padding='same',
        data_format='channels_last',
        activation='relu'))

    model.add(MaxPooling2D((2,2), padding='same'))

    # 4th
    model.add(Conv2D(512,(3,3),
        strides=(1,1),
        padding='same',
        data_format='channels_last',
        activation='relu'))
        
    model.add(Conv2D(512,(3,3),
        strides=(1,1),
        padding='same',
        data_format='channels_last',
        activation='relu'))

    model.add(Conv2D(512,(1,1),
        strides=(1,1),
        padding='same',
        data_format='channels_last',
        activation='relu'))

    model.add(MaxPooling2D((2,2), padding='same'))
    # 5th
    model.add(Conv2D(512,(3,3),
        strides=(1,1),
        padding='same',
        data_format='channels_last',
        activation='relu'))

    model.add(Conv2D(512,(3,3),
        strides=(1,1),
        padding='same',
        data_format='channels_last',
        activation='relu'))

    model.add(Conv2D(512,(1,1),
        strides=(1,1),
        padding='same',
        data_format='channels_last',
        activation='relu'))

    model.add(MaxPooling2D((2,2), padding='same'))

    model.add(Flatten())
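    # fully-connected head: the original VGG16 ends in a 1000-way softmax for ImageNet;
    # here the 1000-unit layer is kept and a 10-way softmax is appended for MNIST's 10 classes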
    model.add(Dense(4096,activation='relu'))
    model.add(Dense(4096,activation='relu'))
    model.add(Dense(1000,activation='relu'))
    model.add(Dense(10,activation='softmax'))

    model.summary()
    model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['accuracy'])
    return model
if __name__ == "__main__":
    model = VGG16(X_train,Y_train)
    model.fit(X_train,Y_train,batch_size=128,epochs=10)
    Y_predict=model.predict(X_train)
    print(Y_predict)
    loss,acc=model.evaluate(X_test,Y_test)
    print(loss,acc)
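
The sklearn metrics imported at the top are never used above; as a possible follow-up (my addition, not in the original post), they can be applied to the test-set predictions like this:

    # precision / recall / F1 on the test set, using the sklearn metrics imported above
    import numpy as np
    y_true = np.argmax(Y_test, axis=1)                  # one-hot labels back to class ids
    y_pred = np.argmax(model.predict(X_test), axis=1)
    print(precision_score(y_true, y_pred, average='macro'))
    print(recall_score(y_true, y_pred, average='macro'))
    print(f1_score(y_true, y_pred, average='macro'))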