
TensorFlow Notes (Learning Rate) - MOOC (Peking University)

2019-05-23  Jasmine晴天和我

Learning rate (learning_rate): the size of each parameter update.
w_{n+1} = w_n - learning\_rate \cdot \nabla loss
Parameters are updated in the direction of gradient descent on the loss function.
If the learning rate is too large, training does not converge; if it is too small, convergence is slow.
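As a quick illustration of this trade-off, here is a minimal pure-Python sketch (not part of the course code) that runs gradient descent on the same loss = (w+1)^2 used in the examples below; the learning rates 1.1 and 0.001 are just illustrative choices:

# Gradient descent on loss = (w+1)^2, whose gradient with respect to w is 2*(w+1)
def run(lr, steps=20, w=5.0):
    for _ in range(steps):
        w = w - lr * 2 * (w + 1)
    return w

print(run(0.2))    # converges quickly toward the minimum at w = -1
print(run(1.1))    # learning rate too large: the update overshoots and w diverges
print(run(0.001))  # learning rate too small: after 20 steps w has barely moved from 5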
Exponentially decaying learning rate
learning\_rate = LEARNING\_RATE\_BASE \times LEARNING\_RATE\_DECAY^{\frac{global\_step}{LEARNING\_RATE\_STEP}}
LEARNING_RATE_BASE is the initial learning rate; LEARNING_RATE_DECAY is the decay rate, a value in (0,1); global_step counts how many training steps have run so far; LEARNING_RATE_STEP is how many steps pass between learning-rate updates.
The learning rate is updated dynamically according to how many batches of size batch_size have been run.
Steps between learning-rate updates = total number of samples / batch_size
global_step = tf.Variable(0, trainable=False)  # counts how many steps have been run so far; marked as not trainable
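To see what this formula produces, here is a small sketch (not part of the course code) that simply evaluates it in plain Python; the staircase flag mirrors the staircase argument of tf.train.exponential_decay described below:

# learning_rate = LEARNING_RATE_BASE * LEARNING_RATE_DECAY ** (global_step / LEARNING_RATE_STEP)
def decayed_lr(base, decay, global_step, step, staircase=False):
    exponent = global_step // step if staircase else global_step / step  # integer exponent -> stepwise decay
    return base * decay ** exponent

for gs in range(0, 251, 50):
    print(gs, decayed_lr(0.1, 0.96, gs, 100, staircase=True),
              decayed_lr(0.1, 0.96, gs, 100, staircase=False))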

# Let the loss function be loss = (w+1)^2, with w initialized to the constant 5.
# Backpropagation finds the optimal w, i.e. the w that minimizes loss.
import tensorflow as tf
# Define the parameter to optimize, w, initialized to 5
w = tf.Variable(tf.constant(5, dtype=tf.float32))
# Define the loss function
loss = tf.square(w + 1)
# Define the backpropagation method (plain gradient descent with a fixed learning rate of 0.2)
train_step = tf.train.GradientDescentOptimizer(0.2).minimize(loss)
# Create a session and train for 40 steps
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    for i in range(40):
        sess.run(train_step)
        w_val = sess.run(w)
        loss_val = sess.run(loss)
        print("After %s steps: w is %f, loss is %f." % (i, w_val, loss_val))
global_step = tf.Variable(0, trainable=False)  # iteration counter, initialized to 0 and excluded from training
# learning_rate = tf.train.exponential_decay(learning_rate_base (initial learning rate), global_step (steps run so far), LEARNING_RATE_STEP (decay steps), LEARNING_RATE_DECAY (decay rate), staircase=False, name=None)
# With staircase=False the learning rate follows a smooth decay curve; with staircase=True the exponent is truncated to an integer, so the learning rate decays in a staircase pattern.
# Generate the learning rate with exponential_decay
learning_rate = tf.train.exponential_decay(0.1, global_step, 100, 0.96, staircase=True)
# 0.1 is the initial learning rate, global_step is the iteration counter, 100 is the decay steps, 0.96 is the decay rate

# With an exponentially decaying learning rate, pass global_step into minimize; it is incremented automatically, and learning_rate is recomputed from it
learning_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
# Backpropagation: GradientDescentOptimizer optimizes the weights with gradient descent; learning_rate is the learning rate; loss passed to minimize is the loss function; global_step records the current iteration count (updated automatically)
#coding:utf-8
# Let the loss function be loss = (w+1)^2, with w initialized to the constant 5.
# Backpropagation finds the optimal w, i.e. the w that minimizes loss.
# With an exponentially decaying learning rate, the loss drops quickly in the early iterations, so good convergence can be reached in fewer training steps.
import tensorflow as tf

LEARNING_RATE_BASE = 0.1   # initial learning rate
LEARNING_RATE_DECAY = 0.99 # learning-rate decay rate
LEARNING_RATE_STEP = 1     # how many batches of BATCH_SIZE to feed before updating the learning rate once; usually set to: total number of samples / BATCH_SIZE

# Counter for how many batches of BATCH_SIZE have been run, initialized to 0 and excluded from training
global_step = tf.Variable(0, trainable=False)
# Define the exponentially decaying learning rate
learning_rate = tf.train.exponential_decay(LEARNING_RATE_BASE, global_step, LEARNING_RATE_STEP, LEARNING_RATE_DECAY, staircase=True)
# Define the parameter to optimize, initialized to 5
w = tf.Variable(tf.constant(5, dtype=tf.float32))
# Define the loss function
loss = tf.square(w + 1)
# Define the backpropagation method; passing global_step makes minimize increment it on every training step
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
# Create a session and train for 40 steps
with tf.Session() as sess:
    init_op=tf.global_variables_initializer()
    sess.run(init_op)
    for i in range(40):
        sess.run(train_step)
        learning_rate_val = sess.run(learning_rate)
        global_step_val = sess.run(global_step)
        w_val = sess.run(w)
        loss_val = sess.run(loss)
        print ("After %s steps: global_step is %f, w is %f, learning rate is %f, loss is %f" % (i, global_step_val, w_val, learning_rate_val, loss_val))
