[Keras] Image Classification on Small Datasets with Keras (retrain)
This article is mainly based on the Keras blog post "Building powerful image classification models using very little data" from blog.keras.io, its Chinese translation, and several related blog posts.
Getting to the point: I ran into a small problem when calling the VGG / Inception_v3 models directly in Keras, and when I then ran the fine-tuning source code from keras面向小数据集的图像分类(VGG-16基础上fine-tune)实现(附代码) the same problem came up:
ValueError: The shape of the input to "Flatten" is not fully defined (got (None, None, 512). Make sure to pass a complete "input_shape" or "batch_input_shape" argument to the first layer in your model
The problem lies in the Flatten layer, whose input shape is not specified.
The source code is short and is reproduced below:
"""
This script goes along the blog post
"Building powerful image classification models using very little data"
from blog.keras.io.
It uses data that can be downloaded at:
https://www.kaggle.com/c/dogs-vs-cats/data
In our setup, we:
- created a data/ folder
- created train/ and validation/ subfolders inside data/
- created cats/ and dogs/ subfolders inside train/ and validation/
- put the cat pictures index 0-999 in data/train/cats
- put the cat pictures index 1000-1400 in data/validation/cats
- put the dogs pictures index 12500-13499 in data/train/dogs
- put the dog pictures index 13500-13900 in data/validation/dogs
So that we have 1000 training examples for each class, and 400 validation examples for each class.
In summary, this is our directory structure:
data/
    train/
        dogs/
            dog001.jpg
            dog002.jpg
            ...
        cats/
            cat001.jpg
            cat002.jpg
            ...
    validation/
        dogs/
            dog001.jpg
            dog002.jpg
            ...
        cats/
            cat001.jpg
            cat002.jpg
            ...
"""
from keras import applications
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers
from keras.models import Sequential
from keras.layers import Dropout, Flatten, Dense
# path to the model weights files.
weights_path = '../keras/examples/vgg16_weights.h5'
top_model_weights_path = 'fc_model.h5'
# dimensions of our images.
img_width, img_height = 150, 150
train_data_dir = 'cats_and_dogs_small/train'
validation_data_dir = 'cats_and_dogs_small/validation'
nb_train_samples = 2000
nb_validation_samples = 800
epochs = 50
batch_size = 16
# build the VGG16 network
model = applications.VGG16(weights='imagenet', include_top=False)
print('Model loaded.')
# build a classifier model to put on top of the convolutional model
top_model = Sequential()
top_model.add(Flatten(input_shape=model.output_shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(1, activation='sigmoid'))
# note that it is necessary to start with a fully-trained
# classifier, including the top classifier,
# in order to successfully do fine-tuning
top_model.load_weights(top_model_weights_path)
# add the model on top of the convolutional base
model.add(top_model)
# set the first 25 layers (up to the last conv block)
# to non-trainable (weights will not be updated)
for layer in model.layers[:25]:
    layer.trainable = False
# compile the model with a SGD/momentum optimizer
# and a very slow learning rate.
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
              metrics=['accuracy'])
# prepare data augmentation configuration
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='binary')
# fine-tune the model
model.fit_generator(
    train_generator,
    samples_per_epoch=nb_train_samples,
    epochs=epochs,
    validation_data=validation_generator,
    nb_val_samples=nb_validation_samples)
However, the code already explicitly specifies the input shape of the Flatten layer:
top_model.add(Flatten(input_shape=model.output_shape[1:]))
This problem bothered me for a long time; the author of keras系列︱图像多分类训练与利用bottleneck features进行微调(三) also got stuck here without finding a solution. When I ran into it again half a month later and could not work around it, I finally tracked down the fix. In fact many people have hit this error, but the solution is hard to find with a search engine. The root cause is that when VGG16 is loaded with include_top=False and no input_shape, the network is built for variable-sized inputs, so model.output_shape[1:] is (None, None, 512); passing that to Flatten still leaves the flattened size undefined.
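A quick way to see the cause is to print the base model's output shape with and without a fixed input. A minimal sketch (the printed shapes assume the TensorFlow backend with channels_last; the 150x150 size matches img_width/img_height above):

from keras import applications

# without input_shape the spatial dimensions of the last feature map are unknown
base = applications.VGG16(weights='imagenet', include_top=False)
print(base.output_shape)    # (None, None, None, 512) -> Flatten cannot infer its output size

# with an explicit input_shape the output shape is fully defined
base = applications.VGG16(weights='imagenet', include_top=False, input_shape=(150, 150, 3))
print(base.output_shape)    # (None, 4, 4, 512)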
The fix is straightforward: when a model from keras.applications is loaded with include_top=False, the input shape must be specified explicitly. VGG16's standard input image size is (224, 224), so change this line of the source code:
model = applications.VGG16(weights='imagenet', include_top=False)
to the following:
model = applications.VGG16(weights='imagenet', include_top=False, input_shape=(224,224,3))
Note that TensorFlow is used as the backend here, so the image data format is "channels_last"; with the Theano backend ("channels_first") the shape would instead be (3, 224, 224). If you use the Inception_v3 network, change it to input_shape=(299, 299, 3). Also keep the data generators consistent with whatever shape you choose: with input_shape=(224, 224, 3) the target_size in flow_from_directory should be (224, 224) as well, or alternatively pass input_shape=(150, 150, 3) and keep img_width, img_height = 150, 150.
After making this change and running the script again, a new error may appear:
model.add(top_model)
AttributeError: 'Model' object has no attribute 'add'
The reason is that the object returned by applications.VGG16 is a Model (functional API), which has no add method; only Sequential models do. That is fine: we can attach the new top with the functional API instead, as shown below.
Replace this part of the source code:
top_model = Sequential()
top_model.add(Flatten(input_shape=model.output_shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(1, activation='sigmoid'))
# note that it is necessary to start with a fully-trained
# classifier, including the top classifier,
# in order to successfully do fine-tuning
top_model.load_weights(top_model_weights_path)
# add the model on top of the convolutional base
model.add(top_model)
with:
# build a new classifier on top of the convolutional base
# (needs: from keras.models import Model; Nb_classes is the number of classes, e.g. 2 for cats vs. dogs)
x = model.output
x = Flatten()(x)
x = Dense(256,
          kernel_initializer='RandomUniform',
          activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(Nb_classes,
          kernel_initializer='RandomUniform',
          activation='softmax')(x)
# combine the pretrained base with the new top into a functional model
new_model = Model(model.input, x)
Since I did not have the top-layer weights trained in Building powerful image classification models using very little data, I skipped top_model.load_weights(top_model_weights_path) and simply initialized the new layers with 'RandomUniform'. Apart from that the script stays largely the same; just remember that the later compile() and fit_generator() calls must now be made on new_model rather than model, and that the loss and class_mode have to match the new output layer (categorical_crossentropy with class_mode='categorical' for the softmax head above, or keep Dense(1, activation='sigmoid') with binary_crossentropy for the original two-class setup).