1. Perceptron

2019-03-15  黄昏隐修所

In the book, the learning objective of the perceptron algorithm is to minimize the total distance from all misclassified points to the separating hyperplane, so the loss function is defined as
J(w, b) = -\sum_{(x^{(i)}, y^{(i)})\in{M}}y^{(i)}(w^Tx^{(i)} + b)
where M is the set of misclassified samples.
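As a sketch of where this comes from (the usual argument, paraphrased rather than quoted): the distance from a point x to the hyperplane w^Tx + b = 0 is |w^Tx + b| / \|w\|, and for a misclassified point y(w^Tx + b) < 0, so its distance can be written as
\frac{1}{\|w\|}\left(-y(w^Tx + b)\right)
Summing over M and dropping the constant factor 1/\|w\|, as is standard for the perceptron, gives the loss J(w, b) above.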
Using stochastic gradient descent (SGD), consider a single misclassified sample (x, y), which satisfies y(w^Tx + b) < 0:
J(w, b) = -y(w^Tx + b)
Taking the gradient with respect to the parameters gives
\nabla_w{J}=-yx \ \ \ ,\ \ \ \nabla_b{J}=-y
and the parameters are updated as
w := w + \alpha{y}x \ \ \ ,\ \ \ \ b := b + {\alpha}y
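As a quick numeric illustration (the numbers here are made up, not taken from the text): with \alpha = 1, w = (0, 0)^T, b = 0, a misclassified sample x = (3, 3)^T with y = +1 gives, after one update,
w := (0, 0)^T + 1 \cdot (+1) \cdot (3, 3)^T = (3, 3)^T \ \ \ ,\ \ \ b := 0 + 1 \cdot (+1) = 1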

import numpy as np

class Perceptron(object):
    def __init__(self, feature_num, alpha, max_step=10000):
        self._alpha = alpha              # learning rate
        self._w = np.zeros(feature_num)  # weight vector, initialized to zero
        self._b = 0.0                    # bias, initialized to zero
        self._max_step = max_step        # cap on passes over the training set

    def fit(self, X, y):
        misclassify = True
        step = 0
        # Keep sweeping the training set until no sample is misclassified
        # or the step limit is reached.
        while misclassify and step <= self._max_step:
            misclassify = False
            step += 1
            for tx, ty in zip(X, y):
                # A sample is misclassified when y * (w.x + b) <= 0.
                if ty * (np.dot(tx, self._w) + self._b) <= 0:
                    # SGD update: w <- w + alpha*y*x, b <- b + alpha*y.
                    self._w += self._alpha * tx * ty
                    self._b += self._alpha * ty
                    misclassify = True

    def predict(self, X):
        # Sign of the decision function: +1 if w.x + b > 0, else -1.
        return np.where(X @ self._w + self._b > 0, 1, -1)
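A minimal usage sketch of the class above; the toy data below is invented for illustration (three linearly separable points), not part of the original post.

# Assumes the Perceptron class and numpy import above are in scope.
X = np.array([[3.0, 3.0], [4.0, 3.0], [1.0, 1.0]])
y = np.array([1, 1, -1])

clf = Perceptron(feature_num=2, alpha=1.0)
clf.fit(X, y)
print(clf.predict(X))  # expected output: [ 1  1 -1]

With this data and alpha = 1, training converges to a separating hyperplane (e.g. w = (1, 1), b = -3), so all three training points are classified correctly.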