Linear Regression and Stochastic Gradient Descent

2017-11-10  sbansiheng

Setup: given data on how much bread a certain amount of wheat yields, fit a one-variable linear regression and use it for prediction.
Procedure: over a fixed number of iterations, repeatedly compute the partial derivatives of the cost function with respect to b and m (averaged over all training points), then update b and m along the negative gradient so that the cost function becomes as small as possible.

As the figure in the original post illustrated, J is the cost function and θ stands for either m or b; each iteration applies the gradient-descent update θ := θ − α · ∂J/∂θ, where α is the learning rate.
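
Written out for this model, the cost function and the two partial derivatives that the code below computes are

J(b, m) = \frac{1}{N} \sum_{i=1}^{N} \left( y_i - (m x_i + b) \right)^2

\frac{\partial J}{\partial b} = -\frac{2}{N} \sum_{i=1}^{N} \left( y_i - (m x_i + b) \right)

\frac{\partial J}{\partial m} = -\frac{2}{N} \sum_{i=1}^{N} x_i \left( y_i - (m x_i + b) \right)

where N is the number of training points.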

Code

wheat_and_bread = [[0.5, 5], [0.6, 5.5], [0.8, 6], [1.1, 6.8], [1.4, 7]]  # data: amount of wheat vs. bread produced
# model: y = m * x + b
# one gradient-descent step: takes the current b and m, the training data, and the learning rate
def step_gradient(b_current, m_current, points, learningRate):
    b_gradient = 0
    m_gradient = 0
    N = float(len(points))
    for i in range(0, len(points)):
        x = points[i][0]
        y = points[i][1]
        # loss for one point: (y - ((m_current * x) + b_current))^2
        # partial derivatives with respect to b and m, averaged over all training points
        b_gradient += -(2 / N) * (y - ((m_current * x) + b_current))
        m_gradient += -(2 / N) * x * (y - ((m_current * x) + b_current))
    # gradient descent: step b and m against the gradient
    new_b = b_current - (learningRate * b_gradient)
    new_m = m_current - (learningRate * m_gradient)
    return [new_b, new_m]

# arguments: training data, initial b and m, learning rate, number of iterations
def gradient_descent_runner(points, starting_b, starting_m, learning_rate, num_iterations):
    b = starting_b
    m = starting_m
    for i in range(num_iterations):
        b, m = step_gradient(b, m, points, learning_rate)
    return [b, m]

b, m = gradient_descent_runner(wheat_and_bread, 1, 1, 0.01, 100)
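
As a quick sanity check (not in the original post), we can print the learned parameters and the mean squared error of the fitted line on the training data:

# illustrative check: report the learned line and its mean squared error
mse = sum((y - (m * x + b)) ** 2 for x, y in wheat_and_bread) / len(wheat_and_bread)
print("b = %.4f, m = %.4f, MSE = %.4f" % (b, m, mse))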

Plot the cost function and the fitted line.

[Figure: the fitted line over the data points]
[Figure: cost function vs. number of iterations]

Full version

This version uses the plotly package, whose plotting workflow is similar to ggplot in R and easy to follow.

import plotly
import plotly.plotly as py
import plotly.graph_objs as go
plotly.tools.set_credentials_file(username='username', api_key='api_key')  # replace with your own Plotly username and API key

wheat_and_bread = [[0.5, 5], [0.6, 5.5], [0.8, 6], [1.1, 6.8], [1.4, 7]]  # data: amount of wheat vs. bread produced
# model: y = m * x + b
# one gradient-descent step: takes the current b and m, the training data, and the learning rate
def step_gradient(b_current, m_current, points, learningRate):
    b_gradient = 0
    m_gradient = 0
    N = float(len(points))
    for i in range(0, len(points)):
        x = points[i][0]
        y = points[i][1]
        # loss for one point: (y - ((m_current * x) + b_current))^2
        # partial derivatives with respect to b and m, averaged over all training points
        b_gradient += -(2 / N) * (y - ((m_current * x) + b_current))
        m_gradient += -(2 / N) * x * (y - ((m_current * x) + b_current))
    # gradient descent: step b and m against the gradient
    new_b = b_current - (learningRate * b_gradient)
    new_m = m_current - (learningRate * m_gradient)
    return [new_b, new_m]

# arguments: training data, initial b and m, learning rate, number of iterations
def gradient_descent_runner(points, starting_b, starting_m, learning_rate, num_iterations):
    b = starting_b
    m = starting_m
    cost_list = []
    for i in range(num_iterations):
        b, m = step_gradient(b, m, points, learning_rate)
        cost_function = 0.0
        for x, y in points:
            cost_function += (y - m * x - b) * (y - m * x - b) / len(points)
        cost_list.append(cost_function)
    # scatter plot showing the cost function decreasing over the iterations
    trace = go.Scatter(
        x = list(range(num_iterations)),
        y = cost_list,
        mode = 'markers'
    )
    data = [trace]
    py.iplot(data, filename='basic-scatter')
    return [b, m]

b, m = gradient_descent_runner(wheat_and_bread, 1, 1, 0.01, 100)


# plot the original data points and the fitted line
x = []
y = []
for i in range(0, len(wheat_and_bread)):
    x.append(wheat_and_bread[i][0])
    y.append(wheat_and_bread[i][1])
trace0 = go.Scatter(
    x = x,
    y = y,
    mode = 'markers',
    name = 'markers'
)
x_predict = []
y_predict = []
for i in range(0, len(wheat_and_bread)):
    x_predict.append(wheat_and_bread[i][0])
    y_predict.append(wheat_and_bread[i][0] * m + b)
trace1 = go.Scatter(
    x = x_predict,
    y = y_predict,
    mode = 'lines+markers',
    name = 'lines+markers'
)

data = [trace0, trace1]
py.iplot(data, filename='combine')
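
py.iplot uploads the figures to Plotly's cloud service and therefore needs the credentials configured above. If you would rather not create an account, plotly's offline mode (present in the plotly versions of that era) can write the same traces to a local HTML file — a minimal sketch, reusing the data list built above:

from plotly.offline import plot
# render the scatter + fitted-line figure to a local HTML file instead of the Plotly cloud
plot(data, filename='combine.html')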