
Cross-Validation in Supervised Learning

2017-05-02  来个芒果

In supervised machine learning we need to evaluate a model in order to assess its generalization ability. To avoid overfitting, the evaluation must not be done on the same data the model was trained on; instead we can use cross validation: split the original data into a training set and a test set.
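For reference, scikit-learn's train_test_split performs such a split in a single call (a minimal sketch using the same admissions.csv file and its gpa/admit columns that appear later in this post; the sections below build the split by hand instead):

import pandas as pd
from sklearn.model_selection import train_test_split

# Load the admissions data and hold out 20% of the rows as a test set.
admissions = pd.read_csv("admissions.csv")
X_train, X_test, y_train, y_test = train_test_split(
    admissions[['gpa']], admissions['admit'], test_size=0.2, random_state=8)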

Common cross-validation schemes:

1. Holdout: we introduce this method first; it can be regarded as a special case of k-fold.

Idea: shuffle the rows randomly, use most of them (here the first 515) as the training set, and hold out the rest as the test set.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Load the data and rename the target column (as in the k-fold section below).
admissions = pd.read_csv("admissions.csv")
admissions["actual_label"] = admissions["admit"]
admissions = admissions.drop("admit", axis=1)

# Shuffle the rows, then hold out the last ~20% as the test set.
np.random.seed(8)
shuffled_index = np.random.permutation(admissions.index)
shuffled_admissions = admissions.loc[shuffled_index]
train = shuffled_admissions.iloc[0:515]
test = shuffled_admissions.iloc[515:len(shuffled_admissions)].copy()

# Fit a logistic regression model on the training data.
model = LogisticRegression()
model.fit(train[['gpa']], train['actual_label'])
test['predicted_label'] = model.predict(test[['gpa']])

# Accuracy: fraction of test rows whose predicted label matches the actual label.
accuracy = len(test[test['predicted_label'] == test['actual_label']]) / len(test)
print(accuracy)
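The same number can be obtained with scikit-learn's accuracy_score instead of the manual count:

from sklearn.metrics import accuracy_score

# Fraction of test rows where the prediction matches the actual label.
print(accuracy_score(test['actual_label'], test['predicted_label']))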

ROC curve: plots the true positive rate (TPR) against the false positive rate (FPR) as the classification threshold varies.

import matplotlib.pyplot as plt
from sklearn import metrics

# Predicted probability of the positive class for each test example.
probabilities = model.predict_proba(test[["gpa"]])
# roc_curve returns the FPR and TPR at every threshold.
fpr, tpr, thresholds = metrics.roc_curve(test["actual_label"], probabilities[:, 1])
plt.plot(fpr, tpr)
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.show()
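Each point on this curve is one (FPR, TPR) pair at a particular threshold. As a small illustration (a sketch reusing the test frame and probabilities from above), the two rates at a 0.5 threshold can be computed by hand:

# Classify as positive when the predicted probability of class 1 exceeds 0.5.
predictions = (probabilities[:, 1] > 0.5).astype(int)

true_positives = ((predictions == 1) & (test['actual_label'] == 1)).sum()
false_positives = ((predictions == 1) & (test['actual_label'] == 0)).sum()
false_negatives = ((predictions == 0) & (test['actual_label'] == 1)).sum()
true_negatives = ((predictions == 0) & (test['actual_label'] == 0)).sum()

tpr_05 = true_positives / (true_positives + false_negatives)   # sensitivity / recall
fpr_05 = false_positives / (false_positives + true_negatives)
print(tpr_05, fpr_05)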

AUC:
In practice we care more about the TPR, i.e. the proportion of actual positives that are predicted correctly. To evaluate the model with a single number, we can compute the area under the ROC curve, the AUC; the closer the AUC is to 1, the better the model.

from sklearn.metrics import roc_auc_score

# AUC: the area under the ROC curve plotted above.
auc_score = roc_auc_score(test['actual_label'], probabilities[:, 1])
print(auc_score)

2. K-Fold

Principle: split the original data into k parts; in each iteration, k-1 parts serve as the training set and the remaining part as the test set. This is repeated k times so that every part is used as the test set exactly once, and the k scores are averaged.

A simple implementation is given below.
Splitting the data:

import pandas as pd

admissions = pd.read_csv("admissions.csv")
admissions["actual_label"] = admissions["admit"]
admissions = admissions.drop("admit", axis=1)

# Shuffle the rows and reset the index so the fold boundaries below line up with row positions.
shuffled_index = np.random.permutation(admissions.index)
shuffled_admissions = admissions.loc[shuffled_index]
admissions = shuffled_admissions.reset_index(drop=True)

# Assign each row to one of five folds (.loc slices are inclusive of the end label).
admissions.loc[0:128, 'fold'] = 1    # rows 0-128 form fold 1, the test set in the first iteration
admissions.loc[129:257, 'fold'] = 2
admissions.loc[258:386, 'fold'] = 3
admissions.loc[387:514, 'fold'] = 4
admissions.loc[515:644, 'fold'] = 5
admissions['fold'] = admissions['fold'].astype(int)
print(admissions.head())
print(admissions.tail())
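A quick sanity check shows how many rows were assigned to each fold:

# Count the rows in each of the five folds.
print(admissions['fold'].value_counts().sort_index())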

Train a model for each fold, evaluate it on the held-out fold, and finally compute the average accuracy:

fold_ids = [1, 2, 3, 4, 5]
lr = LogisticRegression()

def train_and_test(admissions, fold_ids):
    accuracies = []
    for i in fold_ids:
        # Rows in fold i form the test set; all other rows form the training set.
        train_iteration = admissions[admissions['fold'] != i]
        test_iteration = admissions[admissions['fold'] == i].copy()
        # Fit the model on the training folds.
        lr.fit(train_iteration[['gpa']], train_iteration['actual_label'])
        # Predict on the held-out fold and record its accuracy.
        test_iteration['labels'] = lr.predict(test_iteration[['gpa']])
        iteration_accuracy = len(test_iteration[test_iteration['labels'] == test_iteration['actual_label']]) / len(test_iteration)
        accuracies.append(iteration_accuracy)
    return accuracies

accuracies = train_and_test(admissions, fold_ids)
print(accuracies)
average_accuracy = sum(accuracies) / len(fold_ids)
print(average_accuracy)

scikit-learn already wraps this whole procedure (the old sklearn.cross_validation module used at the time has since been replaced by sklearn.model_selection):

from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LogisticRegression

admissions = pd.read_csv("admissions.csv")
admissions["actual_label"] = admissions["admit"]
admissions = admissions.drop("admit", axis=1)

# A 5-fold splitter; shuffling with a fixed random_state makes the result reproducible.
kf = KFold(n_splits=5, shuffle=True, random_state=8)

lr = LogisticRegression()
# cross_val_score fits the model on each training split and scores it on the held-out fold.
accuracies = cross_val_score(lr, admissions[['gpa']], admissions['actual_label'],
                             scoring='accuracy', cv=kf)
average_accuracy = sum(accuracies) / len(accuracies)
print(accuracies)
print(average_accuracy)

Note: instantiating the KFold class does not train a model or make any predictions; it only defines how the rows are partitioned into folds.
Cross validation of this kind is routinely used to evaluate linear/logistic regression models.
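To see this, iterating over the splitter (a minimal sketch reusing the kf object and admissions frame from above) yields only row indices, never a fitted model:

# Each split is just a pair of index arrays: training rows and test rows.
for fold_number, (train_idx, test_idx) in enumerate(kf.split(admissions), start=1):
    print(fold_number, len(train_idx), len(test_idx))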

For a single-variable model (only one feature column), the holdout method is often used; for a multi-variable model (several feature columns), k-fold is usually preferred, as in the sketch below.
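For example (a hedged sketch: it assumes the CSV contains additional feature columns, here a hypothetical 'gre' column), the only change needed for k-fold with several features is the column selection:

# 'gre' is assumed here purely for illustration; substitute whatever feature columns exist.
features = ['gpa', 'gre']
accuracies = cross_val_score(lr, admissions[features], admissions['actual_label'],
                             scoring='accuracy', cv=kf)
print(accuracies.mean())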

Reference: https://zh.wikipedia.org/wiki/%E4%BA%A4%E5%8F%89%E9%A9%97%E8%AD%89
