4.2 Encapsulating Machine Learning Algorithms in the scikit-learn Style

2019-06-27  逆风的妞妞


Create a folder named myscript and, inside it, a file KNN.py:

import numpy as np
from math import sqrt
from collections import Counter

def KNN_classify(k, X_train, y_train, x):
    assert 1 <= k <= X_train.shape[0], "k must be valid"
    assert X_train.shape[0] == y_train.shape[0],  "the size of X_train must equal to the size of y_train"
    assert X_train.shape[1] == x.shape[0], "the feature number of x must be equal to X_train"

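    # compute the Euclidean distance from x to every sample in the training set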
    distances = [sqrt(np.sum((x_train - x)**2)) for x_train in X_train]
    nearest = np.argsort(distances)

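    # take the labels of the k nearest samples and decide by majority vote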
    topK_y = [y_train[i] for i in nearest[:k]]
    votes = Counter(topK_y)

    return votes.most_common(1)[0][0]

Now call the wrapped kNN function in Jupyter:

import numpy as np
import matplotlib.pyplot as plt

raw_data_x = [[3.423749247, 2.334567896],
              [3.110073483, 1.745697878],
              [1.347946498, 3.368464565],
              [3.582294042, 4.679565478],
              [2.280364646, 2.866699256],
              [7.423454548, 4.696522875],
              [5.745051465, 3.533989946],
              [9.172456464, 2.051111010],
              [7.792783481, 3.424088941],
              [7.939820184, 0.791637231]
            ]
raw_data_y = [0,0,0,0,0,1,1,1,1,1]

X_train = np.array(raw_data_x)
y_train = np.array(raw_data_y)

x = np.array([8.093607318, 3.3657315144])
%run myscript/KNN.py
predict_y = KNN_classify(6, X_train, y_train, x)
predict_y
The prediction is 1, the same result as in the previous section.

This example gives us a more rigorous view of the machine-learning workflow: the training data are fed to fit to produce a model, and predict then applies that model to new samples.



However, kNN does not actually produce a model, so we can loosely say that kNN is an algorithm without a training step. In other words, a new input sample is compared directly against the training data set, where we simply look for the nearest points. k-nearest-neighbors is quite special in this respect: we can also regard the training data set itself as the model.

Using the kNN classifier from scikit-learn

# load the kNN classifier
from sklearn.neighbors import KNeighborsClassifier
# create an instance of the estimator, passing the hyperparameter
kNN_classifier = KNeighborsClassifier(n_neighbors=6)
# fit the training set
kNN_classifier.fit(X_train, y_train)
# predict the new sample (note the extra brackets: predict expects a 2-D array)
kNN_classifier.predict([x])

If we pass x directly (a 1-D array) instead of [x], the program reports an error:


(The traceback complains that a 2-D array was expected but a 1-D array was given.)

Earlier versions of scikit-learn accepted a 1-D array here, but the interface was later unified so that predict must receive a 2-D array (a matrix with one row per sample). We therefore need to turn x into that form. In the code above I simply wrapped x in an extra pair of brackets and it runs fine; alternatively, we can do the following:

X_predict = x.reshape(1, -1)
kNN_classifier.predict(X_predict)

It now runs normally and outputs array([1]), consistent with the result we obtained before.


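To see what reshape(1, -1) is doing here, we can quickly check the shapes before and after (using the same x as above):

x.shape          # (2,)   -- a single 1-D array holding two features
X_predict = x.reshape(1, -1)
X_predict.shape  # (1, 2) -- a matrix with one sample (row) and two features (columns)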

Reorganizing our kNN algorithm

Create a new file kNN2.py:

import numpy as np
from math import sqrt
from collections import Counter

class KNNClassifier:
    def __init__(self, k):
        # initialize the kNN classifier
        assert k >= 1, "k must be valid"
        self.k = k
        # the leading underscore marks these attributes as private
        self._X_train = None
        self._y_train = None

    def fit(self, X_train, y_train):
        # train the kNN classifier on the training set
        assert X_train.shape[0] == y_train.shape[0], \
            "the size of X_train must equal to the size of y_train"
        assert self.k <= X_train.shape[0], \
            "the size of X_train must be at least k."

        self._X_train = X_train
        self._y_train = y_train
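        # return self, following the scikit-learn convention that fit returns the fitted estimator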
        return self
    
    def predict(self, X_predict):
        # given a data set X_predict to be predicted, return a vector of predicted labels for X_predict
        assert self._X_train is not None and self._y_train is not None, \
            "must fit before predict!"
        assert X_predict.shape[1] == self._X_train.shape[1], \
            "the feature number of X_predict must be equal to X_train"

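        # predict each sample in X_predict individually and collect the results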
        y_predict = [self._predict(x) for x in X_predict]
        return np.array(y_predict)

    def _predict(self, x):
        # given a single sample x, return its predicted label
        assert x.shape[0] == self._X_train.shape[1],\
            "the feature number of x must be equal to X_train"
        distances = [sqrt(np.sum((x_train - x) ** 2)) for x_train in self._X_train]
        nearest = np.argsort(distances)

        topK_y = [self._y_train[i] for i in nearest[:self.k]]
        votes = Counter(topK_y)

        return votes.most_common(1)[0][0]

    def __repr__(self):
        return "KNN(k=%d)" %self.k

Enter the following code in Jupyter to test it:

%run myscript/kNN2.py
knn_clf = KNNClassifier(k=6)
knn_clf.fit(X_train, y_train)
X_predict = x.reshape(1, -1)
y_predict = knn_clf.predict(X_predict)
y_predict[0]

The result is the same as above: the output is 1.
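Since predict takes a whole matrix of samples, the class can also classify several points in one call. A small illustration (X_new is a hypothetical name and its second row is a made-up test point; the expected labels follow from the training data above):

X_new = np.array([[8.093607318, 3.3657315144],   # the same x we predicted above
                  [2.5, 2.5]])                   # a made-up point near the label-0 cluster
knn_clf.predict(X_new)   # should give array([1, 0])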
