Deep Learning

[Tutorial] - TensorFlow-Course

2018-10-21  phoenixmy

TensorFlow-Course:

https://github.com/open-source-for-science/TensorFlow-Course
A beginner-level tutorial with more than 4,000 stars on GitHub, so it is clearly quite popular.

The tutorial starts with 'Hello, world!', which is of course the standard entry point for any programmer.
Next it briefly demonstrates basic operations and how variables (e.g. weights & biases) are defined and initialized, since operations and variables are the basic building blocks of both machine learning and deep learning.
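Stripped of any TensorFlow machinery, the "variables plus operations" idea can be sketched in plain NumPy (the names and values here are illustrative, not from the tutorial): variables are just mutable parameter arrays, and operations are functions that combine them with inputs.

```python
import numpy as np

# "Variables": mutable parameters, analogous to tf.Variable(0.0, name="weights").
W = 0.5  # weight (illustrative value)
b = 1.0  # bias (illustrative value)

def linear_op(x, W, b):
    """A basic operation: combine the input with the variables."""
    return x * W + b

x = np.array([1.0, 2.0, 3.0])
print(linear_op(x, W, b))
```

In graph-mode TensorFlow the same two ingredients exist, but the operation builds a graph node and the variable values live inside a session rather than in Python itself.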

With the basics covered, the tutorial moves on to slightly more complex material, starting with the familiar linear regression:

# First, define the weights and bias:
W = tf.Variable(0.0, name="weights")
b = tf.Variable(0.0, name="bias")
# Next, define the prediction (inference) and loss functions
def inference(X):
    """
    Run the forward pass on X.
    :param X: Input.
    :return: X*W + b.
    """
    return X * W + b

def loss(X, Y):
    """
    Compute the loss by comparing the predicted value to the actual label.
    :param X: The input.
    :param Y: The label.
    :return: The loss over the samples.
    """

    # Making the prediction.
    Y_predicted = inference(X)
    return tf.reduce_sum(tf.squared_difference(Y, Y_predicted))/(2*data.shape[0])

# Then the training function.
# Unlike hand-rolling gradient descent, tf provides a full set of GD optimizers;
# you only need to pass in the loss function:
def train(loss):
    learning_rate = 0.0001
    return tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

# Finally, run the training (this assumes X and Y are placeholders, and that
# train_loss = loss(X, Y) and train_op = train(train_loss) were built beforehand):
for epoch_num in range(FLAGS.num_epochs):
    loss_value, _ = sess.run([train_loss, train_op],
                             feed_dict={X: data[:, 0], Y: data[:, 1]})

    # Displaying the loss per epoch.
    print('epoch %d, loss=%f' % (epoch_num + 1, loss_value))

    # save the values of weight and bias
    wcoeff, bias = sess.run([W, b])
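To see exactly what this graph computes, here is a hedged NumPy re-implementation of the same loop, with the gradients that GradientDescentOptimizer.minimize would otherwise derive automatically written out by hand. The synthetic dataset, learning rate, and epoch count are illustrative choices, not the tutorial's:

```python
import numpy as np

# Illustrative synthetic data: y ≈ 3x + 2 plus a little noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
data = np.column_stack([x, 3 * x + 2 + rng.normal(0, 0.05, 50)])
X, Y = data[:, 0], data[:, 1]

W, b = 0.0, 0.0
learning_rate = 0.5  # illustrative; the tutorial uses 0.0001

def inference(X, W, b):
    return X * W + b

def loss(X, Y, W, b):
    return np.sum((Y - inference(X, W, b)) ** 2) / (2 * data.shape[0])

for epoch_num in range(200):
    Y_pred = inference(X, W, b)
    # Gradients of the loss above w.r.t. W and b; this is the step that
    # GradientDescentOptimizer.minimize performs for you in TensorFlow.
    grad_W = -np.sum((Y - Y_pred) * X) / data.shape[0]
    grad_b = -np.sum(Y - Y_pred) / data.shape[0]
    W -= learning_rate * grad_W
    b -= learning_rate * grad_b

print('W=%.2f, b=%.2f' % (W, b))  # should approach W≈3, b≈2
```

The structure mirrors the TF version line for line; the only piece TF hides is the two gradient expressions.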

In summary, at least for linear regression, tf really is as concise as sklearn.
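For a sense of what that conciseness is being compared against: an ordinary least-squares fit (the closed-form solution that libraries like sklearn compute for simple linear regression) is a one-liner with NumPy. The data here is an illustrative stand-in:

```python
import numpy as np

# Illustrative data: y = 3x + 2 exactly.
x = np.linspace(0.0, 1.0, 20)
y = 3 * x + 2

# Degree-1 polyfit is ordinary least squares; it returns [slope, intercept].
W, b = np.polyfit(x, y, 1)
print('W=%.1f, b=%.1f' % (W, b))  # W=3.0, b=2.0
```

The trade-off is that the TF version generalizes to arbitrary differentiable models, while the closed form only covers the linear case.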
