Getting started with TensorBoard (Keras)

2017-11-02  golgotha

The computations involved in training a large deep neural network are often complex and hard to follow.
To make a TensorFlow program easier to understand, debug, and optimize, and to monitor key metrics such as loss and acc in real time during training, you can use TensorBoard. For example, when acc (accuracy on the training set) keeps rising while val_acc (accuracy on the validation/test set) starts to fall, the model has probably begun to overfit.
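Besides watching the TensorBoard dashboard, the same metrics can also be read back from the History object that fit returns. Below is a minimal sketch of that idea; model, x_train/y_train and x_test/y_test are placeholders for any compiled Keras model and prepared data, and the metric keys are 'acc'/'val_acc' in the Keras version used here (newer releases use 'accuracy'/'val_accuracy'):

# sketch: track the gap between training and validation accuracy per epoch
history = model.fit(x_train, y_train,
                    validation_data=(x_test, y_test),
                    epochs=20, batch_size=200, verbose=1)
for epoch, (acc, val_acc) in enumerate(zip(history.history['acc'],
                                           history.history['val_acc']), start=1):
    # a growing gap (acc up, val_acc flat or falling) is a sign of overfitting
    print('epoch %d: acc=%.4f val_acc=%.4f gap=%.4f' % (epoch, acc, val_acc, acc - val_acc))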

There is one pitfall to be aware of when using TensorBoard on Windows: in the --logdir argument, the drive letter has to be prefixed with a run name such as training:, otherwise no data shows up.

TensorBoard then reports:
No scalar data was found.
Probable causes:
You haven’t written any scalar data to your event files.
TensorBoard can’t find your event files.

See https://github.com/tensorflow/tensorboard/issues/52 for the underlying cause.
Wrong: tensorboard --logdir=D:\deeplearning\workspace\minist\logs
Correct: tensorboard --logdir=training:D:\deeplearning\workspace\minist\logs
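The same run-name prefix syntax can also be used to compare several log directories side by side in one TensorBoard instance, with each run labelled in the UI. A hypothetical example (the second directory is made up for illustration):

Compare runs: tensorboard --logdir=training:D:\deeplearning\workspace\minist\logs,baseline:D:\deeplearning\workspace\minist\logs_baseline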

The code also needs a small adjustment: a list of callbacks has to be passed to the model's fit call.
The code is as follows:

from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.layers.convolutional import Convolution2D
from keras.layers.convolutional import MaxPooling2D
from keras.layers import Flatten
from keras.datasets import mnist
from keras import backend as K
import keras

def trainCNN():
    modelCNN = Sequential()
    # 24 filters of size 3x3; input images are 28x28 with a single channel
    # (the hard-coded input_shape assumes the 'channels_last' data format)
    modelCNN.add(Convolution2D(24, 3, 3, input_shape=(28, 28, 1)))
    # 2x2 max pooling, i.e. keep the maximum of each 2x2 block:
    # 24 (filters) x 26 x 26 before pooling, 24 (filters) x 13 x 13 after
    modelCNN.add(MaxPooling2D((2, 2)))
    # convolve again with 48 filters of 3x3, giving 48 (filters) x 11 x 11
    modelCNN.add(Convolution2D(48, 3, 3))
    # another 2x2 max pooling, giving 48 (filters) x 5 x 5
    modelCNN.add(MaxPooling2D((2, 2)))
    modelCNN.add(Flatten())
    modelCNN.add(Dense(100))
    modelCNN.add(Activation('relu'))
    modelCNN.add(Dense(10))
    modelCNN.add(Activation('softmax'))

    modelCNN.compile(loss='categorical_crossentropy',
                  optimizer='adam',
                  metrics=['accuracy'])

    # input image dimensions
    img_rows, img_cols = 28, 28
    batch_size = 200
    num_classes = 10

    # the data, shuffled and split between train and test sets
    (x_train, y_train), (x_test, y_test) = mnist.load_data()

    if K.image_data_format() == 'channels_first':
        # note: the model above was built with input_shape=(28, 28, 1), i.e. channels_last
        # (TensorFlow's default); with a channels_first backend the first layer's
        # input_shape would have to be changed to match this branch
        x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
        x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
        input_shape = (1, img_rows, img_cols)
    else:
        x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
        x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
        input_shape = (img_rows, img_cols, 1)

    x_train = x_train.astype('float32')
    x_test = x_test.astype('float32')
    x_train /= 255
    x_test /= 255
    print('x_train shape:', x_train.shape)
    print(x_train.shape[0], 'train samples')
    print(x_test.shape[0], 'test samples')

    # convert class vectors to binary class matrices
    y_train = keras.utils.to_categorical(y_train, num_classes)
    y_test = keras.utils.to_categorical(y_test, num_classes)
    
    # build a list of callbacks to pass to fit: one for TensorBoard logging,
    # one for early stopping on val_loss
    tb_cb = keras.callbacks.TensorBoard(log_dir='./logs', histogram_freq=1, write_graph=True, write_images=False,
                                        embeddings_freq=0, embeddings_layer_names=None, embeddings_metadata=None)
    es_cb = keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0.09, patience=5, verbose=0, mode='auto')
    cbks = [tb_cb, es_cb]

    modelCNN.fit(x_train, y_train,
              batch_size=batch_size,
              callbacks=cbks,
              epochs=20,
              verbose=1,
              validation_data=(x_test, y_test))
    score = modelCNN.evaluate(x_test, y_test, verbose=0)
    print('Test loss:', score[0])
    print('Test accuracy:', score[1])

if __name__ == "__main__":
    trainCNN()
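Once fit is running and the TensorBoard callback has written event files under ./logs, launch TensorBoard against that directory and open http://localhost:6006 (the default port) in a browser. On Windows, keep the run-name prefix discussed above when the path contains a drive letter:

tensorboard --logdir=training:./logs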

Here is a screenshot taken during training:

(screenshot: image.png)