Python machine learning - scikit-learn

Logistic regression intuition and implementation

2017-07-06  阿发贝塔伽马

Notice that when the model makes a confidently wrong prediction, the loss function grows toward infinity.
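For reference, this claim follows from the standard per-sample logistic (log-)loss, written here with the sigmoid activation (a standard formulation, stated here for context):

J(w) = -y \log\bigl(\phi(z)\bigr) - (1 - y)\log\bigl(1 - \phi(z)\bigr), \qquad \phi(z) = \frac{1}{1 + e^{-z}}, \quad z = w^T x

If the true label is y = 1 but \phi(z) \to 0 (a confidently wrong prediction), the term -\log(\phi(z)) \to \infty; the case y = 0 with \phi(z) \to 1 is symmetric.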

Training a logistic regression model with scikit-learn

from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
# sklearn.cross_validation was removed in modern scikit-learn;
# train_test_split now lives in sklearn.model_selection
from sklearn.model_selection import train_test_split
import numpy as np
from sklearn import datasets
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score
from matplotlib.colors import ListedColormap
iris = datasets.load_iris()
# print(iris)
X = iris.data[:, [2, 3]]
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Standardize the features; fit the scaler on the training data only
sc = StandardScaler()
sc.fit(X_train)
X_train_std = sc.transform(X_train)
# Apply the same standardization to the test set, i.e. the identical
# shift and scale learned from the training data
X_test_std = sc.transform(X_test)
# sc.scale_ holds the per-feature standard deviation, sc.mean_ the mean,
# and sc.var_ the variance

# C is the inverse of the regularization strength (C = 1/lambda);
# a large value such as 1000.0 means very weak regularization
lr = LogisticRegression(C=1000.0, random_state=0)
lr.fit(X_train_std, y_train)

# Predict class labels for the standardized test set
y_pred = lr.predict(X_test_std)

print('Misclassified samples: %d' % (y_test != y_pred).sum())
print('Accuracy: %.2f' % accuracy_score(y_test, y_pred))

# Bounds for the decision-region grid, padded by one unit on each side
x1_min, x1_max = X_train_std[:, 0].min() - 1, X_train_std[:, 0].max() + 1
x2_min, x2_max = X_train_std[:, 1].min() - 1, X_train_std[:, 1].max() + 1

resolution = 0.01
# xx1 holds the x-axis values: every row repeats the x range, so the elements
# in each column are identical; xx2 holds the y-axis values: every column
# repeats the y range, so the elements in each row are identical
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
                       np.arange(x2_min, x2_max, resolution))

# .ravel() flattens the arrays to 1-D (returning views of the originals);
# after transposing, each row is one (x1, x2) grid point
z = lr.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
'''
contourf (the filled-contour function) requires: X and Y must both be 2-D
with the same shape as Z, or they must both be 1-D such that ``len(X)`` is
the number of columns in Z and ``len(Y)`` is the number of rows in Z.
'''
# Reshape z back to the grid shape expected by contourf
z = z.reshape(xx1.shape)

# Colors for the filled decision regions; the level-count argument passed to
# contourf below controls how many regions the contours are divided into
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])

# Draw the filled decision regions first so the sample points stay visible on top
plt.contourf(xx1, xx2, z, len(np.unique(y)), alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())

# Plot the training samples, one class at a time
for i, value in enumerate(np.unique(y)):
    temp = X_train_std[y_train == value]
    plt.scatter(x=temp[:, 0], y=temp[:, 1], marker=markers[i], s=69,
                c=colors[i], label=value)

# Circle the test samples so they can be told apart from the training data
plt.scatter(x=X_test_std[:, 0], y=X_test_std[:, 1], marker='o', s=69,
            c='none', edgecolors='r', label='test set')

plt.xlabel('petal length [standardized]')
plt.ylabel('petal width [standardized]')
plt.legend(loc='upper left')
plt.show()
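To connect the fitted model back to the loss intuition above, one can also inspect the class-membership probabilities and the learned weights; a minimal sketch using the variables defined in the code above:

# Probability of each Iris class for the first test sample; each row sums to 1
print(lr.predict_proba(X_test_std[0, :].reshape(1, -1)))

# Learned weights and intercepts, one row/entry per class
# (whether multiclass is handled one-vs-rest or multinomially depends on
# the scikit-learn version and the solver)
print(lr.coef_)
print(lr.intercept_)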

Differentiating the loss function for a single sample:
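A sketch of the standard derivation, assuming the per-sample loss above and the sigmoid property \phi'(z) = \phi(z)\bigl(1 - \phi(z)\bigr):

\frac{\partial J}{\partial w_j}
  = -\left(\frac{y}{\phi(z)} - \frac{1 - y}{1 - \phi(z)}\right)\frac{\partial \phi(z)}{\partial w_j}
  = -\left(\frac{y}{\phi(z)} - \frac{1 - y}{1 - \phi(z)}\right)\phi(z)\bigl(1 - \phi(z)\bigr)\,x_j
  = \bigl(\phi(z) - y\bigr)\,x_j

So the gradient for each weight is simply the prediction error \phi(z) - y scaled by the corresponding feature value x_j.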



One way to manage the bias-variance tradeoff is regularization, which adjusts the complexity of the model. Regularization is a very useful technique, for example for handling collinearity, filtering out noise, and preventing overfitting.
The most common form is so-called L2 regularization.
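A sketch of the L2-regularized cost over n training samples (standard formulation):

J(w) = \sum_{i=1}^{n}\Bigl[-y^{(i)}\log\bigl(\phi(z^{(i)})\bigr) - \bigl(1 - y^{(i)}\bigr)\log\bigl(1 - \phi(z^{(i)})\bigr)\Bigr] + \frac{\lambda}{2}\lVert w \rVert^2

In scikit-learn's LogisticRegression the parameter C is the inverse of the regularization strength \lambda, so the C=1000.0 used in the code above corresponds to very weak regularization; decreasing C (increasing \lambda) shrinks the weights and makes the decision boundary less sensitive to individual training points.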

