Linear Regression and Gradient Descent (Code Implementation)

2019-01-22  lilicat

Key points

1 Feature normalization
2 Cost function
3 Gradient descent

Feature Normalization

Min-max normalization shifts each feature column so its minimum is 0 and then divides by its range, mapping every feature into [0, 1]. This keeps features on a comparable scale so gradient descent converges at a similar rate along every dimension.

import numpy as np

def norm(feature):
    """Min-max normalization: rescale each column of `feature` to [0, 1]."""
    feature = feature.astype(float)   # work on a float copy; avoids integer
                                      # division and mutating the caller's array
    matmin = feature.min(axis=0)
    feature -= matmin                 # shift each column's minimum to 0
    matmax = feature.max(axis=0)      # per-column range after the shift
    if feature.ndim > 1:
        matmax[matmax == 0] = 1       # constant columns: avoid division by zero
    elif matmax == 0:
        matmax = 1                    # 1-D input that is entirely constant
    feature /= matmax
    return feature
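
A quick check on a made-up array (the values below are purely illustrative):

import numpy as np

raw = np.array([[1.0, 200.0],
                [2.0, 400.0],
                [3.0, 800.0]])
print(norm(raw))
# each column is rescaled to [0, 1] independently:
# [[0.         0.        ]
#  [0.5        0.33333333]
#  [1.         1.        ]]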

Cost Function

The cost is the mean squared error of the predictions, J(theta) = sum((X * theta.T - y)^2) / (2m), where m is the number of training examples; the factor of 2 in the denominator cancels neatly when the cost is differentiated for gradient descent.

def computeCost(X, y, theta):
    """Mean squared error cost J(theta); X, y, theta are np.matrix objects."""
    inner = np.power((X * theta.T) - y, 2)   # squared residuals
    return np.sum(inner) / (2 * len(X))      # average over 2m examples
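
A small sanity check with hypothetical matrices (X carries a leading column of ones for the intercept term, as it will in the gradient descent section below):

import numpy as np

X = np.matrix([[1.0, 0.0],
               [1.0, 1.0],
               [1.0, 2.0]])
y = np.matrix([[0.0], [1.0], [2.0]])

theta = np.matrix([[0.0, 0.0]])
print(computeCost(X, y, theta))   # (0 + 1 + 4) / (2 * 3) = 0.8333...

theta = np.matrix([[0.0, 1.0]])   # exact fit: y = x
print(computeCost(X, y, theta))   # 0.0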

Gradient Descent

Each round updates every parameter simultaneously with theta_j := theta_j - (alpha / m) * sum(error * x_j), where error = X * theta.T - y. The loop below also records the cost after every update and stops early once the cost no longer decreases or falls below 0.001.

def gradientDescent(X, y, theta, alpha, iters):
    """Batch gradient descent with early stopping."""
    temp = np.matrix(np.zeros(theta.shape))    # buffer for the simultaneous update
    parameters = int(theta.ravel().shape[1])   # number of parameters in theta
    cost = np.zeros(iters + 2)                 # cost before and after each update

    for i in range(iters + 1):
        cost[i] = computeCost(X, y, theta)
        error = (X * theta.T) - y              # residuals under the current theta
        for j in range(parameters):
            # partial derivative of the cost with respect to theta_j
            term = np.multiply(error, X[:, j])
            temp[0, j] = theta[0, j] - ((alpha / len(X)) * np.sum(term))

        theta = temp.copy()                    # commit all parameters at once
        cost[i + 1] = computeCost(X, y, theta)
        # early stopping: cost no longer decreasing, or already small enough
        if cost[i + 1] >= cost[i] or cost[i + 1] < 0.001:
            print("Stopping at round %d, and the cost is %1.4f" % (i, cost[i]))
            print(cost)
            return theta, cost
        if (i + 1) % 100 == 1:                 # progress report every 100 rounds
            print("round %d, and the cost is %1.4f" % (i, cost[i + 1]))
    print("Stopping at round %d, and the cost is %1.4f" % (iters, cost[iters + 1]))
    print(cost)
    return theta, cost
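
Putting the three pieces together, here is a minimal end-to-end sketch on synthetic data; the coefficients, noise level, alpha, and iters below are made-up illustration values, not part of the original post:

import numpy as np

np.random.seed(0)
m = 100
raw = np.random.uniform(0, 10, size=(m, 1))            # one raw feature
target = 3.0 + 2.0 * raw[:, 0] + 0.1 * np.random.randn(m)

feature = norm(raw)                                    # rescale the feature to [0, 1]
X = np.matrix(np.hstack([np.ones((m, 1)), feature]))   # prepend the intercept column
y = np.matrix(target).T
theta = np.matrix(np.zeros((1, 2)))                    # start from all-zero parameters

theta, cost = gradientDescent(X, y, theta, alpha=0.1, iters=1000)
print("learned theta:", theta)

Because the feature was rescaled before fitting, the learned theta is expressed in the normalized scale and will not directly equal the generating coefficients (3.0, 2.0).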