Bayesian Optimization for Hyperparameter Tuning in Machine Learning

2019-01-22  Great_smile

1. Introduction

Bayesian optimization is used for hyperparameter tuning in machine learning. The main idea: given an objective function to optimize (a function in the broad sense: you only need to specify its inputs and outputs, with no knowledge of its internal structure or mathematical properties), keep adding sample points to update the posterior distribution of the objective function (a Gaussian process) until the posterior essentially fits the true distribution. Put simply, it takes the information from previous parameter evaluations into account in order to make a better choice of the current parameters.

How it differs from conventional grid search and random search:

- Grid search and random search evaluate each candidate configuration independently, ignoring everything learned from earlier evaluations; Bayesian optimization uses the results collected so far to decide where to sample next.
- As a result, Bayesian optimization usually reaches good hyperparameters with far fewer evaluations of an expensive objective function.
2. Theory

Introducing Bayesian optimization for hyperparameter tuning requires covering two parts:

- the Gaussian process, the surrogate model that maintains the posterior distribution over the objective function;
- the acquisition function, which uses that posterior to pick the next point to evaluate, as in the sketch below.
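To make these concrete, here is a minimal, illustrative sketch of one Bayesian-optimization loop, assuming scikit-learn's GaussianProcessRegressor as the surrogate and an expected-improvement acquisition; the function names (objective, expected_improvement) are this sketch's own, not the internals of hyperopt or bayes_opt.

# Illustrative Bayesian-optimization loop: GP surrogate + expected improvement.
# A sketch for intuition, not the implementation used by hyperopt/bayes_opt.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):  # stand-in black-box function we want to maximize
    return -(x - 2.0) ** 2

def expected_improvement(cand, gp, y_best):
    mu, sigma = gp.predict(cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)  # guard against zero predictive std
    z = (mu - y_best) / sigma
    return (mu - y_best) * norm.cdf(z) + sigma * norm.pdf(z)

X = np.array([[0.0], [1.0], [4.0]])  # a few initial sample points
y = np.array([objective(v[0]) for v in X])
for _ in range(10):
    gp = GaussianProcessRegressor().fit(X, y)        # posterior over the objective
    cand = np.linspace(0.0, 5.0, 200).reshape(-1, 1) # candidate points
    x_next = cand[np.argmax(expected_improvement(cand, gp, y.max()))]
    X = np.vstack([X, [x_next]])                     # add the chosen point
    y = np.append(y, objective(x_next[0]))           # update with its true value
print(X[np.argmax(y)][0], y.max())  # should approach x = 2, f = 0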
3. hyperopt


#!/usr/bin/env python
# encoding: utf-8
'''
@author: Great
@file: hyper_opt.py
@desc: hyperopt
'''
from sklearn import datasets
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.linear_model import Perceptron
from sklearn.preprocessing import StandardScaler
iris = datasets.load_iris()
x = iris.data
y = iris.target
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.3,random_state=0)
std = StandardScaler()
std.fit(x_train)
std_x_train = std.transform(x_train)
std_x_test = std.transform(x_test)

ppn = Perceptron(n_iter=40, eta0=0.1, random_state=0)  # note: n_iter was removed in scikit-learn 0.21; use max_iter there
ppn.fit(std_x_train, y_train)
y_pred = ppn.predict(std_x_test)
print(accuracy_score(y_test, y_pred))  # 0.82222

# hyperopt
# Define the objective function. fmin minimizes it, so accuracy_score
# is negated in order to maximize accuracy.
def percept(args):
    ppn = Perceptron(n_iter=args["n_iter"],
                     eta0= args["eta0"],
                     random_state=0)
    ppn.fit(std_x_train,y_train)
    y_pred = ppn.predict(std_x_test)
    return -accuracy_score(y_test,y_pred)
# Define the search space
from hyperopt import hp
"""
choice:类别变量
quniform:离散均匀(整数间隔均匀)
uniform:连续均匀(间隔为一个浮点数)
loguniform:连续对数均匀(对数下均匀分布)
"""
space = {
    "n_iter":hp.choice("n_iter",range(30,50)),
    "eta0":hp.uniform("eta0",0.05,0.5)
}
# Optimization algorithm
from hyperopt import tpe
from functools import partial
"""
tpe is the Tree-structured Parzen Estimator algorithm;
partial fixes keyword arguments of the tpe.suggest search algorithm
(n_startup_jobs: number of random trials before TPE takes over).
"""
bayesopt = partial(tpe.suggest, n_startup_jobs=10)
# Trial history (optional)
# from hyperopt import Trials
# bayes_trial = Trials()
# Minimize the objective function
from hyperopt import fmin
best = fmin(percept, space, bayesopt, max_evals=100)  # pass trials=bayes_trial to record history
print(best)
print(percept(best))
#{'eta0': 0.23191782419000273, 'n_iter': 18}
#-0.9777777777777777
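One hyperopt subtlety worth noting: for hp.choice parameters, fmin returns the index of the chosen option rather than the value itself, so 'n_iter': 18 above means range(30, 50)[18] = 48, and percept(best) as written re-evaluates the wrong point. hyperopt's space_eval maps the result back to actual parameter values:

# Map hp.choice indices back to real parameter values before re-evaluating
from hyperopt import space_eval
best_params = space_eval(space, best)
print(best_params)           # e.g. {'eta0': ..., 'n_iter': 48}
print(percept(best_params))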

4. bayes_opt

* Example
This program uses a random forest on a synthetic binary-classification dataset, comparing the plain random forest against one tuned with Bayesian optimization.

#!/usr/bin/env python
# encoding: utf-8
'''
@author: Great
@desc: bayes_opt
'''
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from bayes_opt import BayesianOptimization
import numpy as np
#data
x, y = make_classification(n_samples=1000, n_features=10, n_classes=2)
rf = RandomForestClassifier()
# Baseline score without any tuning
print(np.mean(cross_val_score(rf,x,y,scoring="accuracy",cv=20)))
# Define the objective function over the parameters to tune
def rf_cv(n_estimators, min_samples_split, max_depth, max_features):
    val = cross_val_score(RandomForestClassifier(n_estimators=int(n_estimators),
                          min_samples_split=int(min_samples_split),
                          max_depth = int(max_depth),
                          max_features = min(max_features,0.999),
                          random_state = 2),
            x,y,scoring="accuracy",cv=5).mean()
    return val
# Bayesian optimization over the given parameter bounds
rf_bo = BayesianOptimization(rf_cv,
                             {
                                 "n_estimators":(10,250),
                                 "min_samples_split":(2,25),
                                 "max_features":(0.1,0.999),
                                 "max_depth":(5,15)
                             })
# Run the optimization
num_iter = 25
init_points = 5
rf_bo.maximize(init_points=init_points,n_iter=num_iter)
# Show the best result found (old bayes_opt < 1.0 API)
rf_bo.res["max"]
# Explore around hand-picked points (useful when you already have decent parameter
# values; in the old API these points are only evaluated on the next maximize() call)
rf_bo.explore(
    {'n_estimators': [10, 100, 200],
     'min_samples_split': [2, 10, 20],
     'max_features': [0.1, 0.5, 0.9],
     'max_depth': [5, 10, 15]
    })

# Verify the tuned parameters (values taken from the optimization result of the author's run)
rf = RandomForestClassifier(max_depth=5, max_features=0.432, min_samples_split=2, n_estimators=190)
np.mean(cross_val_score(rf, x, y, cv=20, scoring='roc_auc'))  # note: the baseline used accuracy; use scoring='accuracy' for a like-for-like comparison
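The res["max"] and explore() calls above belong to the old bayes_opt (< 1.0) interface. On a newer release, the rough equivalents are the .max property and .probe(); a sketch under that assumption (the probed parameter values here are arbitrary examples):

# bayes_opt >= 1.0: inspect the best result and queue points to probe
print(rf_bo.max)                          # {'target': ..., 'params': {...}}
rf_bo.probe(params={'n_estimators': 200, 'min_samples_split': 2,
                    'max_features': 0.5, 'max_depth': 10},
            lazy=True)                    # queued, evaluated by the next maximize()
rf_bo.maximize(init_points=0, n_iter=0)   # runs only the queued probe points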

5. References

bayes_opt: https://www.cnblogs.com/yangruiGB2312/p/9374377.html
hyperopt: https://blog.csdn.net/linxid/article/details/81189154
Gaussian processes: http://www.360doc.com/content/17/0810/05/43535834_678049865.shtml