
Observing the Training and Test Data

2018-07-08  readilen

Distribution of the training and test datasets

Before diving into the competition, we should compare the distribution of the test dataset with that of the training dataset and, where possible, see how much they differ. This is very helpful for deciding how to proceed with the model.
First, import the required libraries:

import gc
import itertools
from copy import deepcopy

import numpy as np
import pandas as pd

from tqdm import tqdm

from scipy.stats import ks_2samp

from sklearn.preprocessing import scale, MinMaxScaler
from sklearn.manifold import TSNE
from sklearn.decomposition import TruncatedSVD
from sklearn.decomposition import PCA
from sklearn.decomposition import FastICA
from sklearn.random_projection import GaussianRandomProjection
from sklearn.random_projection import SparseRandomProjection

from sklearn import manifold
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import classification_report
from sklearn.model_selection import StratifiedKFold

import matplotlib.pyplot as plt
from matplotlib.ticker import NullFormatter

%matplotlib inline

1. Overview of the t-SNE distribution

First, I take an equal number of samples from the training and test sets (4459 samples from each, i.e. the entire training set plus an equally sized sample of the test set) and run t-SNE on the combined data. I scale all data to zero mean and unit variance, but for columns that contain significant outliers (more than 3x the standard deviation) I also apply a log transform before scaling.

1.0 Data preprocessing

The current preprocessing routine:

def combined_data(train, test):
    """
        Combine train and test into one dataframe, dropping any columns
        (e.g. the target) that exist only in the training set.
        :param train: pandas.DataFrame
        :param test: pandas.DataFrame
        :return: pandas.DataFrame
    """
    A = set(train.columns.values)
    B = set(test.columns.values)
    colToDel = A.difference(B)
    total_df = pd.concat([train.drop(colToDel, axis=1), test], axis=0)
    return total_df
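
A hypothetical usage sketch (the original notebook's file names are not shown; train.csv / test.csv below are assumptions):

train_df = pd.read_csv('train.csv', index_col=0)   # assumed file name
test_df = pd.read_csv('test.csv', index_col=0)     # assumed file name
total_df = combined_data(train_df, test_df)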

Remove duplicate columns

def remove_duplicate_columns(total_df):
    """
        Removing duplicate columns
    """
    colsToRemove = []
    columns = total_df.columns
    for i in range(len(columns) - 1):
        v = total_df[columns[i]].values
        for j in range(i + 1, len(columns)):
            if np.array_equal(v, total_df[columns[j]].values):
                colsToRemove.append(columns[j])
    colsToRemove = list(set(colsToRemove))
    total_df.drop(colsToRemove, axis=1, inplace=True)
    print(f">> Dropped {len(colsToRemove)} duplicate columns")
    return total_df

Handle extreme values

def log_significant_outliers(total_df):
    """
        First fill NaNs, then log-transform all columns which have
        significant outliers (> 3x standard deviation) and scale the
        non-zero entries of each column.
    :return pandas.dataframe:
    """
    total_df_all = deepcopy(total_df).select_dtypes(include=[np.number])
    total_df_all.fillna(0, inplace=True)  # fill missing values before computing column statistics
    for col in total_df_all.columns:
        data = total_df_all[col].values
        data_mean, data_std = np.mean(data), np.std(data)
        cut_off = data_std * 3
        lower, upper = data_mean - cut_off, data_mean + cut_off
        outliers = [x for x in data if x < lower or x > upper]

        if len(outliers) > 0:
            # Log-transform the non-zero entries of columns with outliers
            non_zero_index = data != 0
            total_df_all.loc[non_zero_index, col] = np.log(data[non_zero_index])

        # Scale the non-zero entries of every column
        non_zero_rows = total_df[col] != 0
        total_df_all.loc[non_zero_rows, col] = scale(total_df_all.loc[non_zero_rows, col])
        gc.collect()

    return total_df_all

After this step we end up with two versions of the data that differ slightly in how extreme values are handled.
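
The excerpt does not show how the two variants used below are assembled; a minimal sketch, assuming the helpers above (the exact scaling applied to total_df in the original may differ):

total_df = remove_duplicate_columns(total_df)      # combined data with duplicate columns dropped
total_df_all = log_significant_outliers(total_df)  # variant with log transform and scaling applied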

1.1 Running PCA

Since there are a lot of features, I think it is a good idea to run PCA before t-SNE to reduce the dimensionality. Somewhat arbitrarily, I keep 1000 PCA components, which capture roughly 80% of the variance in the data; I think that is enough to illustrate the distributions while also speeding up t-SNE. Below I only show the PCA plots for the datasets.

def test_pca(data, train_idx, test_idx, create_plots=True):
    """
        data, panda.DataFrame
        train_idx = range(0, len(train_df))
        test_idx = range(len(train_df), len(total_df))
        Run PCA analysis, return embeding
    """
    data = data.select_dtypes(include=[np.number])
    data = data.fillna(0)
    # Create a PCA object, specifying how many components we wish to keep
    pca = PCA(n_components=len(data.columns))

    # Run PCA on scaled numeric dataframe, and retrieve the projected data
    pca_trafo = pca.fit_transform(data)

    # The transformed data is in a numpy matrix. This may be inconvenient if we want to further
    # process the data, and have a more visual impression of what each column is etc. We therefore
    # put transformed/projected data into new dataframe, where we specify column names and index
    pca_df = pd.DataFrame(
        pca_trafo,
        index=data.index,
        columns=['PC' + str(i + 1) for i in range(pca_trafo.shape[1])]
    )

    if create_plots:
        # Create two plots next to each other
        _, axes = plt.subplots(2, 2, figsize=(20, 15))
        axes = list(itertools.chain.from_iterable(axes))

        # Plot the explained variance ratio
        axes[0].plot(
            pca.explained_variance_ratio_, "--o", linewidth=2,
            label="Explained variance ratio"
        )

        # Plot the cumulative explained variance ratio
        axes[0].plot(
            pca.explained_variance_ratio_.cumsum(), "--o", linewidth=2,
            label="Cumulative explained variance ratio"
        )

        # show legend
        axes[0].legend(loc='best', frameon=True)

        # show biplots
        for i in range(1, 4):
            # Components to be plotted
            x, y = "PC" + str(i), "PC" + str(i + 1)

            # plot biplots
            settings = {'kind': 'scatter', 'ax': axes[i], 'alpha': 0.2, 'x': x, 'y': y}

            pca_df.iloc[train_idx].plot(label='Train', c='#ff7f0e', **settings)
            pca_df.iloc[test_idx].plot(label='Test', c='#1f77b4', **settings)
    return pca_df
train_idx = range(0, len(train_df))
test_idx = range(len(train_df), len(total_df))

pca_df = test_pca(total_df, train_idx, test_idx)
pca_df_all = test_pca(total_df_all, train_idx, test_idx)
print(">> PCA : (only for np.number)", pca_df.shape, pca_df_all.shape)
(Figures: PCA explained variance ratios and PC biplots, train vs. test)

This looks interesting: the training data is much more spread out than the test data, which seems to cluster more tightly around the center.

1.2 Running t-SNE

With the dimensionality somewhat reduced, t-SNE now runs in about 5 minutes, and we can plot the training and test data in the embedded 2D space. Below we do this for both preprocessing variants to see whether any differences show up.

def test_tsne(data, ax=None, title='t-SNE'):
    """Run t-SNE and return the 2D embedding"""

    # Run t-SNE
    tsne = TSNE(n_components=2, init='pca')
    Y = tsne.fit_transform(data)

    # Create plot (make an axis if none was passed in)
    if ax is None:
        _, ax = plt.subplots()
    for name, idx in zip(["Train", "Test"], [train_idx, test_idx]):
        ax.scatter(Y[idx, 0], Y[idx, 1], label=name, alpha=0.2)
    ax.set_title(title)
    ax.xaxis.set_major_formatter(NullFormatter())
    ax.yaxis.set_major_formatter(NullFormatter())
    ax.legend()
    return Y

# Run t-SNE on PCA embedding
_, axes = plt.subplots(1, 2, figsize=(20, 8))

tsne_df = test_tsne(
    pca_df, axes[0],
    title='t-SNE: Scaling on non-zeros'
)
tsne_df_unique = test_tsne(
    pca_df_all, axes[1],
    title='t-SNE: Scaling on all entries'
)

plt.axis('tight')
plt.show() 
(Figure: t-SNE embeddings, scaling on non-zeros vs. scaling on all entries)

From this it appears that the training and test sets look more similar when scaling is only applied to non-zero entries; if scaling is applied to all entries, the two sets appear more separated from each other. In a previous notebook I had not removed duplicate columns or zero-standard-deviation columns, and in that case the difference was much more pronounced. That said, in my experience t-SNE should be interpreted with caution, and it may be worth investigating further, both in terms of t-SNE parameters and preprocessing.

1.2.1 t-SNE colored by row index or zero count

(Figure: t-SNE embedding colored by row index and by number of zero entries)

This looks interesting: rows with higher indices seem to sit near the center of the plot. We also see a small group of rows with almost no zero entries, plus a few clusters in the right-hand plot. A sketch of how these two panels could be produced is shown below.
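
The code for these panels is not included in the excerpt; this is a minimal sketch, assuming the tsne_df embedding from section 1.2 and the combined total_df are available:

# Sketch only: color the t-SNE embedding by row index and by zero count
zero_count = (total_df == 0).sum(axis=1).values  # number of zero-valued features per row

fig, axes = plt.subplots(1, 2, figsize=(20, 8))

# Left panel: color each point by its row index in the combined dataframe
sc = axes[0].scatter(tsne_df[:, 0], tsne_df[:, 1], alpha=0.2, c=range(len(tsne_df)))
fig.colorbar(sc, ax=axes[0]).set_label('Entry index')
axes[0].set_title('t-SNE colored by row index')

# Right panel: color each point by how many of its features are exactly zero
sc = axes[1].scatter(tsne_df[:, 0], tsne_df[:, 1], alpha=0.2, c=zero_count)
fig.colorbar(sc, ax=axes[1]).set_label('Number of zero entries')
axes[1].set_title('t-SNE colored by zero count')

for ax in axes:
    ax.xaxis.set_major_formatter(NullFormatter())
    ax.yaxis.set_major_formatter(NullFormatter())
plt.show()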

1.2.2 t-SNE with different parameters

Depending on its parameters, t-SNE can give quite different results, so to be safe I check a few different values of the perplexity parameter below.

_, axes = plt.subplots(1, 4, figsize=(20, 5))
for i, perplexity in enumerate([5, 30, 50, 100]):
    
    # Create projection
    Y = TSNE(init='pca', perplexity=perplexity).fit_transform(pca_df)
    
    # Plot t-SNE
    for name, idx in zip(["Train", "Test"], [train_idx, test_idx]):
        axes[i].scatter(Y[idx, 0], Y[idx, 1], label=name, alpha=0.2)
    axes[i].set_title("Perplexity=%d" % perplexity)
    axes[i].xaxis.set_major_formatter(NullFormatter())
    axes[i].yaxis.set_major_formatter(NullFormatter())
    axes[i].legend() 

plt.show()
(Figure: t-SNE embeddings for perplexity 5, 30, 50 and 100)

2. Test vs. Train

Another good check is to see how well we can classify whether a given row belongs to the test or the training dataset. If that can be done reasonably well, it is an indication that the two distributions differ. I use a basic tree ensemble (ExtraTrees) with a simple shuffled 10-fold cross-validation and see how well it performs on this task. First, let's try classifying the case where scaling was applied to all entries:

def test_prediction(data):
    """Try to classify train/test samples from total dataframe"""

    # Create a target which is 1 for training rows, 0 for test rows
    y = np.zeros(len(data))
    y[train_idx] = 1

    # Perform shuffled CV predictions of train/test label
    predictions = cross_val_predict(
        ExtraTreesClassifier(n_estimators=100, n_jobs=4),
        data, y,
        cv=StratifiedKFold(
            n_splits=10,
            shuffle=True,
            random_state=42
        )
    )

    # Show the classification report
    print(classification_report(y, predictions))

# Run classification on total raw data
test_prediction(total_df_all)

On the current data this gives an F1 score of about 0.71, meaning we can make this prediction quite well, which suggests there are some significant differences between the two sets. Let's try the dataset where only the non-zero values were scaled:
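
The call itself is not shown in the excerpt; presumably it is just the same helper applied to the non-zero-scaled frame (a sketch, assuming total_df holds that variant):

# Sketch: check train/test separability on the data scaled only on non-zero entries
test_prediction(total_df)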

>> Prediction Train or Test
             precision    recall  f1-score   support

        0.0       0.86      0.46      0.60      4459
        1.0       0.63      0.92      0.75      4459

avg / total       0.75      0.69      0.68      8918

3. Distribution similarity per feature

Next, let's look at the problem feature by feature and run a Kolmogorov-Smirnov test to check whether the distributions in the test and training sets are similar. I use the ks_2samp function from scipy to run the test. For the features whose distributions are clearly different, we may benefit from dropping those columns to avoid overfitting to the training data. Below I simply identify these columns and plot their distributions as a sanity check for a few of the features.

def get_diff_columns(train_df, test_df, show_plots=True, show_all=False, threshold=0.1):
    """Use KS to estimate columns where distributions differ a lot from each other"""

    # Find the columns where the distributions are very different
    diff_data = []
    for col in tqdm(train_df.columns):
        statistic, pvalue = ks_2samp(
            train_df[col].values, 
            test_df[col].values
        )
        if pvalue <= 0.05 and np.abs(statistic) > threshold:
            diff_data.append({'feature': col, 'p': np.round(pvalue, 5), 'statistic': np.round(np.abs(statistic), 2)})

    # Put the differences into a dataframe
    diff_df = pd.DataFrame(diff_data).sort_values(by='statistic', ascending=False)

    if show_plots:
        # Let us see the distributions of these columns to confirm they are indeed different
        n_cols = 7
        if show_all:
            n_rows = int(len(diff_df) / 7)
        else:
            n_rows = 2
        _, axes = plt.subplots(n_rows, n_cols, figsize=(20, 3*n_rows))
        axes = [x for l in axes for x in l]

        # Create plots
        for i, (_, row) in enumerate(diff_df.iterrows()):
            if i >= len(axes):
                break
            extreme = np.max(np.abs(train_df[row.feature].tolist() + test_df[row.feature].tolist()))
            train_df.loc[:, row.feature].apply(np.log1p).hist(
                ax=axes[i], alpha=0.5, label='Train', density=True,
                bins=np.arange(-extreme, extreme, 0.25)
            )
            test_df.loc[:, row.feature].apply(np.log1p).hist(
                ax=axes[i], alpha=0.5, label='Test', density=True,
                bins=np.arange(-extreme, extreme, 0.25)
            )
            axes[i].set_title(f"Statistic = {row.statistic}, p = {row.p}")
            axes[i].set_xlabel(f'Log({row.feature})')
            axes[i].legend()

        plt.tight_layout()
        plt.show()
        
    return diff_df

# Get the columns which differ a lot between test and train
diff_df = get_diff_columns(total_df.iloc[train_idx], total_df.iloc[test_idx])
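
The output below implies that the flagged columns were then dropped and the train/test classifier from section 2 was re-run. That step is not shown here, so the following is only a sketch under that assumption:

# Sketch: drop the columns flagged by the KS test and re-check train/test separability
print(f">> Dropping {len(diff_df)} features based on KS tests")
test_prediction(total_df.drop(diff_df.feature.values, axis=1))
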
>> Dropping 22 features based on KS tests
             precision    recall  f1-score   support

        0.0       0.85      0.45      0.59      4459
        1.0       0.63      0.92      0.75      4459

avg / total       0.74      0.68      0.67      8918

4. Decomposition features

So far I have only looked at PCA components, but most kernels consider several decomposition methods, so it could be interesting to run t-SNE on 10-50 components from each of these methods instead of 1000 PCA components. It is also interesting to see how well we can classify test vs. train rows based on this reduced feature space; a sketch of that check follows the plots below.

COMPONENTS = 20

# List of decomposition methods to use
methods = [
    TruncatedSVD(n_components=COMPONENTS),
    PCA(n_components=COMPONENTS),
    FastICA(n_components=COMPONENTS),
    GaussianRandomProjection(n_components=COMPONENTS, eps=0.1),
    SparseRandomProjection(n_components=COMPONENTS, dense_output=True)    
]

# Run all the methods
embeddings = []
for method in methods:
    name = method.__class__.__name__    
    embeddings.append(
        pd.DataFrame(method.fit_transform(total_df), columns=[f"{name}_{i}" for i in range(COMPONENTS)])
    )
    print(f">> Ran {name}")
    
# Put all components into one dataframe
components_df = pd.concat(embeddings, axis=1)

# Prepare plot and a colormap for the index/target coloring
fig, axes = plt.subplots(1, 3, figsize=(20, 5))
cm = plt.cm.viridis  # colormap choice is an assumption; `cm` is not defined elsewhere in this excerpt

# Run t-SNE on components
tsne_df = test_tsne(
    components_df, axes[0],
    title='t-SNE: with decomposition features'
)

# Color by index
sc = axes[1].scatter(tsne_df[:, 0], tsne_df[:, 1], alpha=0.2, c=range(len(tsne_df)), cmap=cm)
cbar = fig.colorbar(sc, ax=axes[1])
cbar.set_label('Entry index')
axes[1].set_title("t-SNE colored by index")
axes[1].xaxis.set_major_formatter(NullFormatter())
axes[1].yaxis.set_major_formatter(NullFormatter())

# Color by target
sc = axes[2].scatter(tsne_df[train_idx, 0], tsne_df[train_idx, 1], alpha=0.2, c=np.log1p(train_df.target), cmap=cm)
cbar = fig.colorbar(sc, ax=axes[2])
cbar.set_label('Log1p(target)')
axes[2].set_title("t-SNE colored by target")
axes[2].xaxis.set_major_formatter(NullFormatter())
axes[2].yaxis.set_major_formatter(NullFormatter())

plt.axis('tight')
plt.show()  
(Figures: t-SNE on decomposition features, colored by train/test, by row index, and by log1p of the target)
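
As noted at the start of section 4, we can also check how separable train and test rows are in this reduced space; the original code for that check is not shown, so here is a sketch re-using the helper from section 2:

# Sketch: how well can train/test be separated using only the decomposition components?
test_prediction(components_df)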

In this reduced feature space, the test and training set distributions look quite similar.
