
A Decision Tree Classifier Based on sklearn

2017-11-12  月见樽

Theoretical Background

Decision Trees

A decision tree is a machine learning algorithm with a tree structure. All samples start at the root node; every internal node (a parent node with children) applies a test, and samples are routed to one of the child nodes according to the result of that test. A test sample therefore flows down from the root, passing one test after another, until it reaches a leaf node (a node with no children); that leaf gives the class the sample belongs to.
For example, to decide whether an animal is a duck, a dog, or a rabbit, a decision tree such as the following could be used:
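A minimal sketch of such a tree written as nested if/else tests; the features has_feathers and barks are purely illustrative assumptions, not taken from the original figure:

# A toy decision tree for the duck/dog/rabbit example.
# The features (has_feathers, barks) are illustrative assumptions.
def classify_animal(has_feathers, barks):
    if has_feathers:       # test at the root node
        return "duck"      # leaf node
    elif barks:            # test at the second internal node
        return "dog"       # leaf node
    else:
        return "rabbit"    # leaf node

print(classify_animal(has_feathers=False, barks=True))  # dog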

Training Algorithm for Decision Trees

Training a decision tree can be described as follows:

  1. At the current (parent) node, find the optimal splitting attribute
  2. Split the node into child nodes according to that attribute
  3. If a child node is empty, all of its samples belong to the same class (no further split is needed), or all of its samples take identical attribute values (no further split is possible), stop and return; otherwise go back to step 1 and split the child node recursively

To find the optimal splitting attribute, compute the information gain (the reduction in information entropy) obtained by splitting on each candidate attribute, and choose the attribute with the largest information gain as the optimal splitting attribute.
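As a rough illustration of this criterion (a minimal sketch of the ID3-style computation, not the code sklearn uses internally), the entropy of a set of labels and the information gain of a candidate split can be computed as follows:

import numpy as np

def entropy(labels):
    # Shannon entropy of a 1-D array of class labels
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(parent_labels, child_groups):
    # parent entropy minus the size-weighted entropy of the child nodes
    n = len(parent_labels)
    weighted = sum(len(g) / n * entropy(g) for g in child_groups)
    return entropy(parent_labels) - weighted

# toy example: a split of 6 samples into two child nodes
parent = np.array([1, 1, 1, 0, 0, 0])
left, right = np.array([1, 1, 1, 0]), np.array([0, 0])
print(information_gain(parent, [left, right]))  # ≈ 0.459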

Code Implementation

Loading the Data: Importing the Titanic Dataset

import pandas as pd
# load the Titanic passenger data from the URL used in the original article
titan = pd.read_csv("http://biostat.mc.vanderbilt.edu/wiki/pub/Main/DataSets/titanic.txt")
print(titan.head())
   row.names pclass  survived  \
0          1    1st         1   
1          2    1st         0   
2          3    1st         0   
3          4    1st         0   
4          5    1st         1   

                                              name      age     embarked  \
0                     Allen, Miss Elisabeth Walton  29.0000  Southampton   
1                      Allison, Miss Helen Loraine   2.0000  Southampton   
2              Allison, Mr Hudson Joshua Creighton  30.0000  Southampton   
3  Allison, Mrs Hudson J.C. (Bessie Waldo Daniels)  25.0000  Southampton   
4                    Allison, Master Hudson Trevor   0.9167  Southampton   

                         home.dest room      ticket   boat     sex  
0                     St Louis, MO  B-5  24160 L221      2  female  
1  Montreal, PQ / Chesterville, ON  C26         NaN    NaN  female  
2  Montreal, PQ / Chesterville, ON  C26         NaN  (135)    male  
3  Montreal, PQ / Chesterville, ON  C26         NaN    NaN  female  
4  Montreal, PQ / Chesterville, ON  C22         NaN     11    male  
print(titan.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1313 entries, 0 to 1312
Data columns (total 11 columns):
row.names    1313 non-null int64
pclass       1313 non-null object
survived     1313 non-null int64
name         1313 non-null object
age          633 non-null float64
embarked     821 non-null object
home.dest    754 non-null object
room         77 non-null object
ticket       69 non-null object
boat         347 non-null object
sex          1313 non-null object
dtypes: float64(1), int64(2), object(8)
memory usage: 112.9+ KB
None

Data Preprocessing

Selecting Features

# use passenger class, age and sex as features; survived is the label
x = titan[["pclass","age","sex"]]
y = titan["survived"]
print(x.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1313 entries, 0 to 1312
Data columns (total 3 columns):
pclass    1313 non-null object
age       633 non-null float64
sex       1313 non-null object
dtypes: float64(1), object(2)
memory usage: 30.9+ KB
None

Filling in Missing Ages with the Mean

# fill missing ages with the mean age (this triggers the SettingWithCopyWarning shown below)
x['age'].fillna(x['age'].mean(),inplace=True)
print(x.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1313 entries, 0 to 1312
Data columns (total 3 columns):
pclass    1313 non-null object
age       1313 non-null float64
sex       1313 non-null object
dtypes: float64(1), object(2)
memory usage: 30.9+ KB
None


c:\users\qiank\appdata\local\programs\python\python35\lib\site-packages\pandas\core\generic.py:3660: SettingWithCopyWarning: 
A value is trying to be set on a copy of a slice from a DataFrame

See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
  self._update_inplace(new_data)
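The SettingWithCopyWarning appears because x is a slice of the original DataFrame. A common way to avoid it, equivalent in effect to the code above (a minimal sketch), is to take an explicit copy and assign the filled column back instead of using inplace=True:

# work on an explicit copy of the selected columns, then assign the filled column back
x = titan[["pclass", "age", "sex"]].copy()
x["age"] = x["age"].fillna(x["age"].mean())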

Splitting the Data

# train_test_split now lives in sklearn.model_selection (sklearn.cross_validation was removed)
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.25,random_state=1)
print(x_train.shape)
print(x_test.shape)
(984, 3)
(329, 3)

Feature Transformation

from sklearn.feature_extraction import DictVectorizer
# one-hot encode the categorical columns (pclass, sex); numeric columns pass through unchanged
vec = DictVectorizer(sparse=False)
x_train = vec.fit_transform(x_train.to_dict(orient='records'))
x_test = vec.transform(x_test.to_dict(orient='records'))
print(vec.feature_names_,"\n",x_train[:5])
['age', 'pclass=1st', 'pclass=2nd', 'pclass=3rd', 'sex=female', 'sex=male'] 
 [[ 31.19418104   0.           0.           1.           0.           1.        ]
 [ 31.19418104   0.           0.           1.           0.           1.        ]
 [ 35.           0.           1.           0.           1.           0.        ]
 [ 31.           0.           1.           0.           0.           1.        ]
 [ 26.           0.           0.           1.           0.           1.        ]]
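DictVectorizer one-hot encodes the string-valued columns (pclass, sex) and passes the numeric age column through unchanged, which is why six feature columns appear above. A single hypothetical passenger record can be transformed with the already-fitted vectorizer in the same way:

# transform one hypothetical passenger record with the fitted vectorizer
sample = {"pclass": "1st", "age": 29.0, "sex": "female"}
print(vec.transform([sample]))  # expected: [[ 29.  1.  0.  0.  1.  0.]]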

Training the Decision Tree Classifier

from sklearn.tree import DecisionTreeClassifier
# fit a decision tree with the default hyperparameters
dtc = DecisionTreeClassifier()
dtc.fit(x_train,y_train)
DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=None,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=None,
            splitter='best')
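As the printed parameters show, sklearn's DecisionTreeClassifier splits on Gini impurity by default (criterion='gini'), not on the entropy criterion discussed in the theory section. Both the split criterion and the tree depth can be set explicitly; the sketch below uses an arbitrary max_depth=5 to curb overfitting and is only an illustration, not part of the original experiment:

# entropy-based splits with a depth limit; max_depth=5 is an arbitrary illustrative value
dtc_entropy = DecisionTreeClassifier(criterion='entropy', max_depth=5, random_state=1)
dtc_entropy.fit(x_train, y_train)
print(dtc_entropy.score(x_test, y_test))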

Model Evaluation

Built-in Scoring

dtc.score(x_test,y_test)
0.81155015197568392
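score here is simply the mean accuracy on the test set; an equivalent check (a quick sketch) compares the predictions with the true labels directly:

from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, dtc.predict(x_test)))  # same value as dtc.score(x_test, y_test)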

Evaluation with classification_report

from sklearn.metrics import classification_report
# predict on the test set and print per-class precision, recall and F1
y_pre = dtc.predict(x_test)
print(classification_report(y_pre,y_test,target_names=["died","survived"]))
             precision    recall  f1-score   support

       died       0.91      0.80      0.85       226
   survived       0.66      0.83      0.74       103

avg / total       0.83      0.81      0.82       329
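Note that classification_report expects its arguments in the order (y_true, y_pred); the call above passes the predictions first, so precision and recall are effectively swapped in the table and the support column counts predicted rather than actual labels. The conventional call would be:

# ground-truth labels first, predictions second
print(classification_report(y_test, y_pre, target_names=["died", "survived"]))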
