
CNN Variant Networks: GoogLeNet

2019-09-17  晨光523152

Highlights of GoogLeNet

The name is written GoogLeNet rather than GoogleNet as a tribute to LeNet.
The highlight of GoogLeNet is the depth-concatenation operation used in its Inception block, which works as shown in the figure below:


[Figure: depth concatenation (the Inception block)]
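
For intuition (my addition, not part of the original post), depth concatenation simply stacks feature maps along the channel axis. The branch shapes below are made up for illustration, assuming TensorFlow 2.x Keras:

# Minimal sketch of depth concatenation: two branches share the spatial size (8 x 8)
# but have different channel counts, and the channels are stacked.
import tensorflow as tf

branch_a = tf.zeros((1, 8, 8, 3))   # e.g. output of a 1 x 1 branch
branch_b = tf.zeros((1, 8, 8, 5))   # e.g. output of a 3 x 3 branch
merged = tf.keras.layers.concatenate([branch_a, branch_b], axis = -1)
print(merged.shape)                 # (1, 8, 8, 8): 3 + 5 = 8 channels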

Explanation of the highlights

The output of the previous layer is copied into four branches, and each branch passes through filters of a different size, so that different receptive fields extract different features from the same input.
The 1 x 1 convolution is a structure borrowed from NIN (Network in Network); it reduces the number of parameters in the model (a worked example follows the figure below).


[Figure: the 1 x 1 convolution]
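
A quick back-of-the-envelope calculation shows why the 1 x 1 bottleneck saves parameters; the channel numbers here are illustrative assumptions, not taken from the model below:

# Parameter count for a 5 x 5 convolution from 192 input channels to 32 output channels,
# with and without a 1 x 1 bottleneck that first reduces 192 channels to 16 (biases ignored).
direct = 5 * 5 * 192 * 32                          # 153600 weights
bottleneck = 1 * 1 * 192 * 16 + 5 * 5 * 16 * 32    # 3072 + 12800 = 15872 weights
print(direct, bottleneck)                          # roughly a 10x reduction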

GoogLeNet code

# Imports assumed for the code below (TensorFlow 2.x Keras):
from tensorflow import keras
from tensorflow.keras import layers

def inception_model(input_tensor, c1 = 4, c2 = 4, c3 = 4, c4 = 4, c5 = 4, c6 = 4):
    # Branch 1: 1 x 1 x c1
    conv_1_inception = layers.Conv2D(filters = c1, kernel_size = (1, 1),
                                     strides = (1, 1), padding = 'same', activation = 'relu')(input_tensor)

    # Branch 2: 1 x 1 x c2, then 3 x 3 x c3
    conv_21_inception = layers.Conv2D(filters = c2, kernel_size = (1, 1),
                                      strides = (1, 1), padding = 'same', activation = 'relu')(input_tensor)
    conv_22_inception = layers.Conv2D(filters = c3, kernel_size = (3, 3),
                                      strides = (1, 1), padding = 'same', activation = 'relu')(conv_21_inception)

    # Branch 3: 1 x 1 x c4, then 5 x 5 x c5
    conv_31_inception = layers.Conv2D(filters = c4, kernel_size = (1, 1),
                                      strides = (1, 1), padding = 'same', activation = 'relu')(input_tensor)
    conv_32_inception = layers.Conv2D(filters = c5, kernel_size = (5, 5),
                                      strides = (1, 1), padding = 'same', activation = 'relu')(conv_31_inception)

    # Branch 4: 3 x 3 max pooling, then 1 x 1 x c6
    pad_41_inception = layers.MaxPooling2D(padding = 'same', pool_size = (3, 3), strides = (1, 1))(input_tensor)
    conv_42_inception = layers.Conv2D(filters = c6, kernel_size = (1, 1),
                                      strides = (1, 1), padding = 'same', activation = 'relu')(pad_41_inception)

    # Concatenate the four branches along the channel axis (depth concatenation)
    x = layers.concatenate([conv_1_inception, conv_22_inception, conv_32_inception, conv_42_inception], axis = -1)

    return x

# X_train / y_train are assumed to have been loaded and preprocessed before this point
# (images as a 4-D array, labels one-hot encoded).
in_layer = layers.Input(shape = (X_train.shape[1], X_train.shape[2], X_train.shape[3]))

# Stem: 5 x 5 convolutions with 2 x 2 max pooling
conv_1 = layers.Conv2D(filters = 6, kernel_size = (5, 5), strides = (1, 1), padding = 'same', activation = 'relu')(in_layer)
pad_1 = layers.MaxPooling2D(pool_size = (2, 2))(conv_1)
conv_2 = layers.Conv2D(filters = 16, kernel_size = (5, 5), strides = (1, 1), padding = 'valid', activation = 'relu')(pad_1)

# Inception block with the default channel settings, followed by pooling and another convolution
inception2 = inception_model(conv_2)
pad_2 = layers.MaxPooling2D(pool_size = (2, 2))(inception2)
conv_3 = layers.Conv2D(filters = 120, kernel_size = (5, 5), strides = (1, 1), padding = 'valid', activation = 'relu')(pad_2)

# Classifier head: flatten + fully connected + softmax
pred_1 = layers.Flatten()(conv_3)
pred_2 = layers.Dense(84, activation = 'relu')(pred_1)
pred_3 = layers.Dense(y_train.shape[1], activation = 'softmax')(pred_2)

model = keras.Model(in_layer, pred_3)

model.compile(loss = keras.losses.CategoricalCrossentropy(), optimizer = keras.optimizers.SGD(),
              metrics = ['accuracy'])
model.summary()
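
As a quick sanity check (my addition, not from the original post), the inception block can be applied to a dummy input to confirm that the four branches are concatenated along the channel axis; the 28 x 28 x 1 input shape is only an assumption for illustration:

check_in = layers.Input(shape = (28, 28, 1))   # assumed input size, for illustration only
check_out = inception_model(check_in)          # with the defaults, output channels = c1 + c3 + c5 + c6
print(check_out.shape)                         # (None, 28, 28, 16): 4 + 4 + 4 + 4 channels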

The network structure I built is as follows:


[Figures: network structure as printed by model.summary()]

The experimental results are shown in the figures below:


[Figures: GoogLeNet_Figure_1.png, GoogLeNet_Figure_2.png]

The results on the test set are as follows:
9184/9225 [============================>.] - ETA: 0s - loss: 0.2970 - accuracy: 0.9425
9216/9225 [============================>.] - ETA: 0s - loss: 0.2964 - accuracy: 0.9426
9225/9225 [==============================] - 163s 18ms/sample - loss: 0.2961 - accuracy: 0.9427
[0.29609465242025307, 0.9426558]
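
The post does not show the training and evaluation calls that produced the numbers above; a minimal sketch would look like the following, where the epoch count, batch size, and the X_test / y_test names are assumptions:

# Hypothetical training/evaluation calls; hyperparameters are illustrative only.
model.fit(X_train, y_train, batch_size = 32, epochs = 10, validation_split = 0.1)
print(model.evaluate(X_test, y_test))   # returns [test loss, test accuracy]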
