Common Optimization Algorithms in Python

2020-12-29  斐硕人

The computeGrad routine imported from computeGradient below is described in the article
"Approximating Partial Derivatives with Richardson Extrapolation - a Python Implementation"

Gradient Method

The idea behind gradient descent is to use the negative gradient at the current point as the search direction. Since this is the direction of fastest local decrease, the method is also known as "steepest descent". The closer steepest descent gets to the target value, the smaller the step and the slower the progress.
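As a minimal sketch of this scheme, steepest descent with an Armijo backtracking line search (the same beta/sigma step-size strategy used in the implementation below) can be demonstrated on a toy quadratic; the function and starting point here are illustrative only:

```python
import numpy as np

def steepest_descent(fun, gfun, x0, epsilon=1e-8, max_iter=1000):
    """Steepest descent: search along -grad, step size beta**m from backtracking."""
    beta, sigma = 0.5, 0.4
    x = x0
    for k in range(max_iter):
        gk = gfun(x)
        if np.linalg.norm(gk) < epsilon:
            break
        dk = -gk
        m = 0
        # Shrink the step until the Armijo sufficient-decrease condition holds.
        while m < 20:
            if fun(x + beta ** m * dk) <= fun(x) + sigma * beta ** m * np.dot(gk, dk):
                break
            m += 1
        x = x + beta ** m * dk
    return x

# Toy quadratic f(x, y) = x^2 + y^2 with minimum at the origin.
x_min = steepest_descent(lambda x: x @ x, lambda x: 2 * x, np.array([2.0, -1.0]))
```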

Flow
[Figure: gradient method flowchart]
Implementation
import numpy as np
from matplotlib import pyplot
from computeGradient import computeGrad

def fun(x):
    return 100 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2

def gradient(x):
    grad = computeGrad(fun, x, 1e-3)
    return grad

def Gradient(fun, gfun, x0, epsilon):
    maxIteration = 5000
    beta = 0.5
    sigma = 0.4
    k = 0
    graphArr = []
    graphArr.append(x0)
    while k < maxIteration:
        gk = gfun(x0)
        dk = -1.0 * gk  # search along the negative gradient

        if np.linalg.norm(gk) < epsilon:
            break
        m = 0
        mk = 0
        # Armijo backtracking line search for the step size beta ** mk
        while m < 20:
            if fun(x0 + beta ** m * dk) <= fun(x0) + sigma * beta ** m * np.dot(gk, dk):
                mk = m
                break
            m += 1

        x0 = x0 + beta ** mk * dk
        k += 1
        graphArr.append(x0)
    x = x0
    val = fun(x)
    return k, x, val, graphArr

k, x, val, graphArr = Gradient(fun, gradient, np.array([-1.2, 1.0]), 1e-5)
print("minPoint:", x, "minValue:", int(val), "iteration:", k)

arg_x = np.array(graphArr)[:, 0]
arg_y = np.array(graphArr)[:, 1]
pyplot.plot(arg_x, arg_y)
pyplot.show()
Result
minPoint: [ 0.99998906  0.99997808] minValue: 0 iteration: 1435
[Figure: gradient method trajectory]

Damped Newton Method

The basic Newton method requires the initial point to be close to the minimizer; otherwise the algorithm may fail to converge. To overcome the difficulty of choosing a good initial point, the damped Newton method introduces a line search to obtain convergence over a much larger region.
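At each iterate, the Newton direction is obtained by solving the linear system G_k d_k = -g_k rather than explicitly inverting the Hessian, exactly as np.linalg.solve is used in the implementation below. A minimal sketch on a quadratic with an analytic gradient and Hessian (illustrative toy function; for a quadratic, one full undamped Newton step lands exactly on the minimizer):

```python
import numpy as np

# Toy quadratic f(x, y) = x^2 + 10*y^2 with analytic derivatives.
def grad(x):
    return np.array([2.0 * x[0], 20.0 * x[1]])

def hess(x):
    return np.array([[2.0, 0.0], [0.0, 20.0]])

x0 = np.array([3.0, -1.0])
# Newton direction: solve G_k d_k = -g_k instead of forming the inverse.
dk = np.linalg.solve(hess(x0), -grad(x0))
x1 = x0 + dk  # full step; damping would scale dk by beta ** mk
```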

Flow
[Figure: damped Newton method flowchart]
Implementation
import numpy as np
from matplotlib import pyplot
from computeGradient import computeGrad
from computeHessian import computeHes

def fun(x):
    return 100 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2

def gradient(x):
    grad = computeGrad(fun, x, 1e-3)
    return grad

def hessianMatrix(x):
    Hes = computeHes(fun, x, 1e-3)
    return Hes

def dampingNewton(fun, gfun, Hess, x0, epsilon):
    maxIteration = 5000
    beta = 0.5
    sigma = 0.4
    k = 0
    graphArr = []
    graphArr.append(x0)

    while k < maxIteration:
        gk = gfun(x0)
        Gk = Hess(x0)
        dk = -1.0 * np.linalg.solve(Gk, gk)
        if np.linalg.norm(gk) < epsilon:
            break
        m = 0
        mk = 0
        while m < 20:
            if fun(x0 + beta ** m * dk) <= fun(x0) + sigma * beta ** m * np.dot(gk, dk):
                mk = m
                break
            m += 1
        x0 = x0 + beta ** mk * dk
        k = k + 1
        graphArr.append(x0)

    x = x0
    val = fun(x)

    return k, x, val, graphArr


k, x, val, graphArr = dampingNewton(fun, gradient, hessianMatrix, np.array([-1.2, 1.0]), 1e-5)
print("minPoint:", x, "minValue:", int(val), "iteration:", k)

arg_x = np.array(graphArr)[:, 0]
arg_y = np.array(graphArr)[:, 1]

pyplot.plot(arg_x, arg_y)
pyplot.show()
Result
minPoint: [ 0.9994168   0.99883101] minValue: 0 iteration: 5000
[Figure: damped Newton method trajectory]

Quasi-Newton Method

Quasi-Newton methods solve unconstrained optimization problems without computing the Hessian of the objective, yet in a certain sense achieve the effect of using it. The basic idea is to use first derivatives at the iterates, together with the quasi-Newton (secant) condition, to build an approximation of the objective's curvature, take the minimizer of that model as the next iterate, and repeat until an approximate minimizer of the required accuracy is found.
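The symmetric rank-1 (SR1) scheme used below updates the inverse-Hessian approximation as H_{k+1} = H_k + (s_k - H_k y_k)(s_k - H_k y_k)^T / ((s_k - H_k y_k)^T y_k), which makes H_{k+1} satisfy the secant condition H_{k+1} y_k = s_k. A minimal numpy sketch with illustrative s_k, y_k values (including a guard for a near-zero denominator, a safeguard the code below does not add):

```python
import numpy as np

def sr1_update(Hk, sk, yk):
    """Symmetric rank-1 update of the inverse-Hessian approximation Hk."""
    sHy = sk - Hk @ yk
    denom = sHy @ yk
    if abs(denom) < 1e-12:  # skip the update when the denominator vanishes
        return Hk
    return Hk + np.outer(sHy, sHy) / denom

H = np.eye(2)
sk = np.array([1.0, 0.5])   # step: x_{k+1} - x_k
yk = np.array([2.0, 1.0])   # gradient difference: g_{k+1} - g_k
H_new = sr1_update(H, sk, yk)
# The updated matrix satisfies the secant condition: H_new @ yk == sk.
```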

Flow
[Figure: SR1 quasi-Newton flowchart]
Implementation
import numpy as np
from matplotlib import pyplot

from computeGradient import computeGrad


def fun(x):
    return 100 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2

def gradient(x):
    grad = computeGrad(fun, x, 1e-5)
    return grad

def sr1Newton(fun, gfun, x0, epsilon, maxIteration):
    beta = 0.55
    sigma = 0.4
    k = 0
    n = len(x0)
    Hk = np.eye(n)

    graphArr = []
    graphArr.append(x0)


    while k < maxIteration:
        gk = gfun(x0)
        if np.linalg.norm(gk) < epsilon:
            break
        dk = -1.0 * np.dot(Hk, gk)
        m = 0
        mk = 0
        while m < 20:
            if fun(x0 + beta ** m * dk) <= fun(x0) + sigma * beta ** m * np.dot(gk, dk):
                mk = m
                break
            m += 1
        x = x0 + beta ** mk * dk
        sk = x - x0
        yk = gfun(x) - gk  # gradient difference between the new and old iterates

        sHy = sk - np.dot(Hk, yk)
        # SR1 update: Hk + sHy sHy^T / (sHy^T yk)
        Hk = Hk + np.outer(sHy, sHy) / np.dot(sHy, yk)
        k = k + 1
        x0 = x
        graphArr.append(x0)

    x = x0
    val = fun(x0)

    return k, x, val, graphArr

k, x, val, graphArr = sr1Newton(fun, gradient, np.array([-1.0, 1.0]), 1e-5, 10000)
print("minPoint:", x, "minValue:", int(val), "iteration:", k)

arg_x = np.array(graphArr)[:, 0]
arg_y = np.array(graphArr)[:, 1]
pyplot.plot(arg_x, arg_y)
pyplot.show()
Result
minPoint: [ 0.99998982  0.99997959] minValue: 0 iteration: 8121
[Figure: SR1 quasi-Newton trajectory]