CNN

2018-04-05 · DIO哒

I think this blog post explains things in great detail and very intuitively:
https://www.cnblogs.com/flippedkiki/p/7765667.html

def compute_accuracy(v_xs, v_ys):
    global prediction
    # Run the forward pass with dropout disabled (keep_prob = 1)
    y_pre = sess.run(prediction, feed_dict={xs: v_xs, keep_prob: 1})
    # Compare the predicted class against the true class for every example
    correct_prediction = tf.equal(tf.argmax(y_pre, 1), tf.argmax(v_ys, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    result = sess.run(accuracy, feed_dict={xs: v_xs, ys: v_ys, keep_prob: 1})
    return result

def weight_variable(shape):
    # Truncated normal keeps the initial weights small and discards extreme draws
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    # A small positive bias works well with ReLU units
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

def conv2d(x, W):
    # strides = [1, x_movement, y_movement, 1];
    # strides[0] and strides[3] must both be 1
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    # 2x2 pooling with stride 2 halves the spatial size (e.g. 28x28 -> 14x14)
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

The first tutorial mainly consists of these functions.
The first, compute_accuracy, was also used earlier in the classification example, but it seems to serve only for evaluation and is not really entangled with the training process.
The second and third, weight_variable and bias_variable, just pull the weight and bias initialization out of add_layer into standalone functions, and the weight initialization is changed from a plain normal distribution to a truncated normal distribution.
The fourth, conv2d, is the function that actually computes the convolution; it merely wraps tf.nn.conv2d a little, and the fifth, the pooling layer, does the same for tf.nn.max_pool, which probably makes the code more convenient to write. The main role of the pooling layer is to increase invariance and guard against overfitting; a small shape check follows below.
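To make those shapes concrete, here is a minimal shape check (my own sketch, not part of the tutorial) that pushes a dummy 28x28x1 image through the same conv and pooling ops the two wrappers use:

# Minimal shape check (my own sketch, not from the tutorial)
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 28, 28, 1])
W = tf.truncated_normal([5, 5, 1, 32], stddev=0.1)               # 5x5 patch, 1 -> 32 channels
conv = tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')  # SAME padding keeps 28x28
pool = tf.nn.max_pool(conv, ksize=[1, 2, 2, 1],
                      strides=[1, 2, 2, 1], padding='SAME')      # halves to 14x14

with tf.Session() as sess:
    out = sess.run(pool, feed_dict={x: np.zeros([1, 28, 28, 1], np.float32)})
    print(out.shape)  # (1, 14, 14, 32)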

xs = tf.placeholder(tf.float32, [None, 784])   # each image is 28 x 28 = 784 pixels
ys = tf.placeholder(tf.float32, [None, 10])    # one-hot labels for the 10 digits
keep_prob = tf.placeholder(tf.float32)         # dropout keep probability
x_image = tf.reshape(xs, [-1, 28, 28, 1])      # [batch, height, width, channels]

# conv layer 1
W_conv1 = weight_variable([5, 5, 1, 32])   # patch 5x5, in_size 1, out_size 32
b_conv1 = bias_variable([32])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)  # output size 28x28x32
h_pool1 = max_pool_2x2(h_conv1)                           # output size 14x14x32

# conv layer 2
W_conv2 = weight_variable([5, 5, 32, 64])  # patch 5x5, in_size 32, out_size 64
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)  # output size 14x14x64
h_pool2 = max_pool_2x2(h_conv2)                           # output size 7x7x64

# fc layer 1
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])      # flatten to [batch, 3136]
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)              # dropout against overfitting

# fc layer 2 (output)
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])

prediction = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)

The network is built as follows: two convolutional layers in total, followed by two ordinary fully connected layers. The only difference between the two kinds of layer is how the pre-activation is computed: one uses conv2d, the other a plain matrix multiplication, and both of course apply an activation function. A hedged sketch of how training would sit on top of this graph follows below.
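The post stops at prediction, so here is a sketch of a typical training step on top of this graph. The cross-entropy loss, the Adam learning rate of 1e-4, the batch size of 100, and the keep_prob of 0.5 are my assumptions, not taken from this post:

# Sketch of a training loop (my assumption of a typical setup, not from this post)
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

# Cross-entropy between the one-hot labels and the softmax prediction
cross_entropy = tf.reduce_mean(
    -tf.reduce_sum(ys * tf.log(prediction), reduction_indices=[1]))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={xs: batch_xs, ys: batch_ys, keep_prob: 0.5})
    if i % 50 == 0:
        # compute_accuracy relies on the global sess defined above
        print(compute_accuracy(mnist.test.images[:1000], mnist.test.labels[:1000]))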
