【sklearn】KFold, StratifiedKFold, GroupKFold

2020-12-21  by 致Great

1、KFold

 >>> import numpy as np
 >>> from sklearn.model_selection import KFold

 >>> X = ["a", "b", "c", "d"]
 >>> kf = KFold(n_splits=2)
 >>> for train, test in kf.split(X):
 ...     print("%s %s" % (train, test))
 [2 3] [0 1]
 [0 1] [2 3]

KFold cross-validation simply divides the data into k folds (consecutively by default; pass shuffle=True for a randomized split). As the code shows, the split is determined by X alone; it is not affected by class labels or groups.

Here, "class" and "group" refer to the data's labels and the groups we assign to the samples, respectively. The following sections show the splitters whose folds do depend on them:
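As a quick illustration before moving on (a minimal sketch, not from the original post), the snippet below passes y to KFold.split() and enables shuffling; the labels are simply ignored, and only shuffle/random_state change which indices land in each fold:

 # Minimal sketch (illustrative data, not from the post).
 # KFold splits by sample position only; y is accepted by split() but ignored.
 # shuffle=True with a fixed random_state randomizes the folds reproducibly.
 from sklearn.model_selection import KFold

 X = ["a", "b", "c", "d"]
 y = [0, 0, 1, 1]          # ignored by plain KFold
 kf = KFold(n_splits=2, shuffle=True, random_state=0)
 for train, test in kf.split(X, y):
     print("%s %s" % (train, test))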

2、StratifiedKFold
StratifiedKFold splits according to the label distribution of the dataset, so that the class proportions in each fold approximate those of the original data. In other words, it builds cross-validation splits in which the training and test sets share (approximately) the same label distribution.

 >>> from sklearn.model_selection import StratifiedKFold

 >>> X = np.ones(10)
 >>> y = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
 >>> skf = StratifiedKFold(n_splits=3)
 >>> for train, test in skf.split(X, y):
 ...     print("%s %s" % (train, test))
 [2 3 6 7 8 9] [0 1 4 5]
 [0 1 3 4 5 8 9] [2 6 7]
 [0 1 2 4 5 6 7] [3 8 9]
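To see the stratification at work (a small check added here, not part of the original post), count the classes that fall into each test fold of the split above; every fold keeps roughly the 4:6 ratio of the full label vector:

 # Minimal sketch: class counts per test fold for the split shown above.
 import numpy as np
 from sklearn.model_selection import StratifiedKFold

 X = np.ones(10)
 y = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])
 skf = StratifiedKFold(n_splits=3)
 for train, test in skf.split(X, y):
     # prints [2 2], then [1 2], then [1 2]: one or two samples of class 0
     # and two of class 1 per fold, mirroring the 40%/60% overall ratio
     print("test fold class counts:", np.bincount(y[test]))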

3、GroupKFold
Sometimes the samples come in groups, and that is where GroupKFold comes in handy.

GroupKFold guarantees that samples from the same group never appear in both the training set and the test set. If the training set contained a few samples from every group, a sufficiently flexible model could learn group-specific features from them and still look good on the test set, yet perform poorly as soon as it meets a new group.

 >>> from sklearn.model_selection import GroupKFold

 >>> X = [0.1, 0.2, 2.2, 2.4, 2.3, 4.55, 5.8, 8.8, 9, 10]
 >>> y = ["a", "b", "b", "b", "c", "c", "c", "d", "d", "d"]
 >>> groups = [1, 1, 1, 2, 2, 2, 3, 3, 3, 3]

 >>> gkf = GroupKFold(n_splits=3)
 >>> for train, test in gkf.split(X, y, groups=groups):
 ...     print("%s %s" % (train, test))
 [0 1 2 3 4 5] [6 7 8 9]
 [0 1 2 6 7 8 9] [3 4 5]
 [3 4 5 6 7 8 9] [0 1 2]
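As a sanity check on the output above (again a small sketch added here, not from the original post), the group labels on either side of each split never overlap:

 # Minimal sketch: verify that train and test never share a group.
 import numpy as np
 from sklearn.model_selection import GroupKFold

 X = np.array([0.1, 0.2, 2.2, 2.4, 2.3, 4.55, 5.8, 8.8, 9, 10])
 y = np.array(["a", "b", "b", "b", "c", "c", "c", "d", "d", "d"])
 groups = np.array([1, 1, 1, 2, 2, 2, 3, 3, 3, 3])
 gkf = GroupKFold(n_splits=3)
 for train, test in gkf.split(X, y, groups=groups):
     assert set(groups[train]).isdisjoint(groups[test])
     # the three folds hold out groups {3}, {2}, and {1} in turn,
     # matching the index output above
     print("train groups:", sorted(set(groups[train])),
           "test groups:", sorted(set(groups[test])))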

Source: https://blog.csdn.net/qq_16761099/article/details/106091354
