Spark Machine Learning in Practice (5): Using Classification Models to Predict Whether Page Content Is Evergreen
This article covers classification models. The task is to determine whether a page's content is evergreen, i.e., valuable long after publication. News, for example, is not evergreen: a news item from three months ago has little value, whereas a popular-science article does. We will use Spark's MLlib to build logistic regression, SVM, naive Bayes, and decision tree models, train each on the same dataset, evaluate them against common metrics, and introduce some tuning techniques.
The article lists only the key code; the complete code is in my GitHub repository. The code for this article lives in
chapter05/src/main/scala/ScalaApp.scala
Step 1: Prepare the training data
The training data comes from Kaggle, with the task as described above. Download the train.tsv file to use as the training set. Let's first take a look at roughly what the downloaded data looks like; here is an excerpt of a single record.
"http://www.bloomberg.com/news/2010-12-23/ibm-predicts-holographic-calls-air-breathing-batteries-by-2015.html"
"4042" ".........." "business" "0.789131" "2.055555556" "0.676470588" "0.205882353"
"0.047058824" "0.023529412" "0.443783175" "0" "0" "0.09077381" "0" "0.245831182"
"0.003883495" "1" "1" "24" "0" "5424" "170" "8" "0.152941176" "0.079129575" "0"
It looks messy, but it isn't complicated: fields are separated by tabs. In order they are: the url, the urlid, the page content, the content category, a number of numeric features, and finally a 0/1 label indicating whether the content is evergreen.
First, strip the header line with this shell command:
$ sed 1d train.tsv > train_noheader.tsv
Spark's classification models take their training data as instances of the LabeledPoint class, which is easy to understand: building an RDD of LabeledPoint completes the data preparation. Some values are missing and marked with a question mark; we replace them with 0. Naive Bayes accepts only non-negative input, so for that model we also simply replace negative values with 0. The url and urlid cannot serve as features, and the text and categorical features take a bit more work to handle, so for now we take only the numeric features as training input; the label is the last field.
import org.apache.spark.SparkContext
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint

val sc: SparkContext = new SparkContext("local[2]", "First Spark App")
sc.setLogLevel("ERROR")
val rawData = sc.textFile("data/train_noheader.tsv")
val records = rawData.map(line => line.split("\t"))
val data = records.map { r =>
  val trimmed = r.map(_.replaceAll("\"", ""))
  val label = trimmed(r.size - 1).toInt
  // fields 4 .. n-2 are the numeric features; "?" marks a missing value
  val features = trimmed.slice(4, r.size - 1).map(d => if (d == "?") 0.0 else d.toDouble)
  LabeledPoint(label, Vectors.dense(features))
}
data.cache()
val numData = data.count
// naive Bayes requires non-negative features, so also clamp negatives to 0
val nbData = records.map { r =>
  val trimmed = r.map(_.replaceAll("\"", ""))
  val label = trimmed(r.size - 1).toInt
  val features = trimmed.slice(4, r.size - 1).map(d => if (d == "?") 0.0 else d.toDouble)
    .map(d => if (d < 0) 0.0 else d)
  LabeledPoint(label, Vectors.dense(features))
}
Step 2: Train the classification models
Building models in Spark is remarkably simple: import a few classes, call a few APIs, keep all the default parameters, and specify the number of training iterations.
import org.apache.spark.mllib.classification.{LogisticRegressionWithSGD, NaiveBayes, SVMWithSGD}
import org.apache.spark.mllib.tree.DecisionTree
import org.apache.spark.mllib.tree.configuration.Algo
import org.apache.spark.mllib.tree.impurity.Entropy

val numIterations = 10
val maxTreeDepth = 5
val lrModel = LogisticRegressionWithSGD.train(data, numIterations)
val svmModel = SVMWithSGD.train(data, numIterations)
val nbModel = NaiveBayes.train(nbData)
val dtModel = DecisionTree.train(data, Algo.Classification, Entropy, maxTreeDepth)
Step 3: Evaluate the classification models
We evaluate the classification models against the following three criteria:
Accuracy
Simple: the number of correct predictions divided by the total count.
val lrTotalCorrect = data.map { lp =>
  if (lrModel.predict(lp.features) == lp.label) 1 else 0
}.sum()
val lrAccuracy = lrTotalCorrect / data.count
println("lrAccuracy:" + lrAccuracy)
val svmTotalCorrect = data.map { lp =>
  if (svmModel.predict(lp.features) == lp.label) 1 else 0
}.sum()
val svmAccuracy = svmTotalCorrect / data.count
println("svmAccuracy:" + svmAccuracy)
val nbTotalCorrect = nbData.map { lp =>
  if (nbModel.predict(lp.features) == lp.label) 1 else 0
}.sum()
val nbAccuracy = nbTotalCorrect / data.count
println("nbAccuracy:" + nbAccuracy)
val dtTotalCorrect = data.map { lp =>
  val score = dtModel.predict(lp.features)
  val predicted = if (score > 0.5) 1 else 0
  if (predicted == lp.label) 1 else 0
}.sum()
val dtAccuracy = dtTotalCorrect / data.count
println("dtAccuracy:" + dtAccuracy)
The results are as follows:
lrAccuracy:0.5146720757268425
svmAccuracy:0.5146720757268425
nbAccuracy:0.5803921568627451
dtAccuracy:0.6482758620689655
Precision and recall
Precision: of the instances you labeled positive, how many were actually positive? TP / (TP + FP)
Recall: of the actual positives, how many did you find? TP / (TP + FN)
Precision and recall both depend on the decision threshold. A classification model typically outputs a score between 0 and 1, and the threshold is usually set at 0.5. The PR curve plots precision against recall as the threshold is swept, and we examine the area under the curve: the larger the area, the better the model.
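As a quick illustration (a sketch of my own, not from the original code; predsAndLabels is a hypothetical RDD[(Double, Double)] of (prediction, label) pairs like the ones built further below), precision and recall at a fixed 0.5 threshold can be computed directly from the confusion counts:
val tp = predsAndLabels.filter { case (p, l) => p == 1.0 && l == 1.0 }.count.toDouble
val fp = predsAndLabels.filter { case (p, l) => p == 1.0 && l == 0.0 }.count.toDouble
val fn = predsAndLabels.filter { case (p, l) => p == 0.0 && l == 1.0 }.count.toDouble
println("precision: " + tp / (tp + fp)) // TP / (TP + FP)
println("recall: " + tp / (tp + fn))    // TP / (TP + FN)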
PR curve, ROC curve, and AUC
The ROC curve is similar to the PR curve, except that it plots the true positive rate against the false positive rate.
True positive rate = TP / (TP + FN)
False positive rate = FP / (FP + TN)
The curve itself looks much like the PR curve, and the area under it is called the AUC.
The code below computes the areas under the PR and ROC curves; Spark provides a class that makes these values easy to obtain.
import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics

val metrics = Seq(lrModel, svmModel).map { model =>
  val scoreAndLabels = data.map { lp =>
    (model.predict(lp.features), lp.label)
  }
  val metrics = new BinaryClassificationMetrics(scoreAndLabels)
  (model.getClass.getSimpleName(), metrics.areaUnderPR(), metrics.areaUnderROC())
}
val nbMetrics = Seq(nbModel).map { model =>
  val scoreAndLabels = nbData.map { lp =>
    (model.predict(lp.features), lp.label)
  }
  val metrics = new BinaryClassificationMetrics(scoreAndLabels)
  (model.getClass.getSimpleName(), metrics.areaUnderPR(), metrics.areaUnderROC())
}
val dtMetrics = Seq(dtModel).map { model =>
  val scoreAndLabels = data.map { lp =>
    val score = model.predict(lp.features)
    (if (score > 0.5) 1.0 else 0.0, lp.label)
  }
  val metrics = new BinaryClassificationMetrics(scoreAndLabels)
  (model.getClass.getSimpleName(), metrics.areaUnderPR(), metrics.areaUnderROC())
}
val allMetrics = metrics ++ nbMetrics ++ dtMetrics
allMetrics.foreach { case (m, pr, roc) =>
  println(f"$m, Area under PR: ${pr * 100}%2.4f%%, Area under ROC: ${roc * 100}%2.4f%%")
}
The results are as follows:
LogisticRegressionModel, Area under PR: 75.6759%, Area under ROC: 50.1418%
SVMModel, Area under PR: 75.6759%, Area under ROC: 50.1418%
NaiveBayesModel, Area under PR: 68.0851%, Area under ROC: 58.3559%
DecisionTreeModel, Area under PR: 74.3081%, Area under ROC: 64.8837%
Step 4: Improve model performance
We can see that the models we trained perform poorly, only a hair better than random guessing. Let's try some common techniques to improve them.
Feature standardization
We standardize each feature to zero mean and unit variance; Spark provides a class for this. Note that standardization does not mean normalizing each record to zero mean; rather, each individual feature (age, say) is standardized across all records.
import org.apache.spark.mllib.feature.StandardScaler

val vectors = data.map(lp => lp.features)
val scaler = new StandardScaler(withMean = true, withStd = true).fit(vectors)
val scaledData = data.map(lp => LabeledPoint(lp.label, scaler.transform(lp.features)))
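As a side note (a small sketch of my own, using MLlib's RowMatrix, not part of the original code), you can inspect the per-feature means and variances that the scaler is correcting for:
import org.apache.spark.mllib.linalg.distributed.RowMatrix

val matrix = new RowMatrix(vectors)
val matrixSummary = matrix.computeColumnSummaryStatistics()
println(matrixSummary.mean)     // per-feature (column) means
println(matrixSummary.variance) // per-feature (column) variances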
A quick test on the logistic regression model:
val lrModelScaled = LogisticRegressionWithSGD.train(scaledData, numIterations)
val lrTotalCorrectScaled = scaledData.map { point =>
  if (lrModelScaled.predict(point.features) == point.label) 1 else 0
}.sum()
val lrAccuracyScaled = lrTotalCorrectScaled / numData
val lrPredictionsVsTrue = scaledData.map { point =>
  (lrModelScaled.predict(point.features), point.label)
}
val lrMetricsScaled = new BinaryClassificationMetrics(lrPredictionsVsTrue)
val lrPr = lrMetricsScaled.areaUnderPR
val lrRoc = lrMetricsScaled.areaUnderROC
println("Normalize the training data:")
println(f"${lrModelScaled.getClass.getSimpleName}\n" +
  f"Accuracy: ${lrAccuracyScaled * 100}%2.4f%%\nArea under PR: " +
  f"${lrPr * 100.0}%2.4f%%\nArea under ROC: ${lrRoc * 100.0}%2.4f%%")
The result:
Normalize the training data:
LogisticRegressionModel
Accuracy: 62.0419%
Area under PR: 72.7254%
Area under ROC: 61.9663%
The improvement is striking. The takeaway: feature standardization matters a great deal for logistic regression and SVM, while decision trees and naive Bayes are insensitive to it, as the quick check below suggests.
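To sanity-check that last claim for trees (again my own sketch, not in the original code): a decision tree splits on per-feature thresholds, so rescaling the features should leave its accuracy essentially unchanged.
val dtModelScaled = DecisionTree.train(scaledData, Algo.Classification, Entropy, maxTreeDepth)
val dtTotalCorrectScaled = scaledData.map { lp =>
  val predicted = if (dtModelScaled.predict(lp.features) > 0.5) 1 else 0
  if (predicted == lp.label) 1 else 0
}.sum()
println("dtAccuracyScaled: " + dtTotalCorrectScaled / numData)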
Adding the category feature
Recall that during training we ignored the fourth field of each record, which gives the page's category. Let's add it to the training data. The method was covered in the third article of this series: first collect the set of distinct categories, then map each one to a one-hot feature vector.
We add the category feature and test again on the logistic regression model:
val categories = records.map(r => r(3)).distinct.collect.zipWithIndex.toMap
val numCategories = categories.size
val dataCategories = records.map { r =>
  val trimmed = r.map(_.replaceAll("\"", ""))
  val label = trimmed(r.size - 1).toInt
  val categoryIdx = categories(r(3))
  val categoryFeatures = Array.ofDim[Double](numCategories)
  categoryFeatures(categoryIdx) = 1.0
  val otherFeatures = trimmed.slice(4, r.size - 1).map(d => if (d == "?") 0.0 else d.toDouble)
  val features = categoryFeatures ++ otherFeatures
  LabeledPoint(label, Vectors.dense(features))
}
val scalerCats = new StandardScaler(withMean = true, withStd = true).fit(dataCategories.map(lp => lp.features))
val scaledDataCats = dataCategories.map(lp => LabeledPoint(lp.label, scalerCats.transform(lp.features)))
val lrModelScaledCats = LogisticRegressionWithSGD.train(scaledDataCats, numIterations)
val lrTotalCorrectScaledCats = scaledDataCats.map { point =>
  if (lrModelScaledCats.predict(point.features) == point.label) 1 else 0
}.sum
val lrAccuracyScaledCats = lrTotalCorrectScaledCats / numData
val lrPredictionsVsTrueCats = scaledDataCats.map { point =>
  (lrModelScaledCats.predict(point.features), point.label)
}
val lrMetricsScaledCats = new BinaryClassificationMetrics(lrPredictionsVsTrueCats)
val lrPrCats = lrMetricsScaledCats.areaUnderPR
val lrRocCats = lrMetricsScaledCats.areaUnderROC
println("Add category feature:")
println(f"${lrModelScaledCats.getClass.getSimpleName}\nAccuracy: " +
  f"${lrAccuracyScaledCats * 100}%2.4f%%\nArea under PR: " +
  f"${lrPrCats * 100.0}%2.4f%%\nArea under ROC: ${lrRocCats * 100.0}%2.4f%%")
The result:
Add category feature:
LogisticRegressionModel
Accuracy: 66.5720%
Area under PR: 75.7964%
Area under ROC: 66.5483%
Performance improves further.
Step 5: Tune model parameters
So far we have kept every parameter at its default. Better parameter values naturally yield better performance, and tuning must be validated on data held out from training. So we split the data into a 60% training set and a 40% test set.
val trainTestSplit = scaledDataCats.randomSplit(Array(0.6, 0.4), seed = 123)
val train = trainTestSplit(0)
val test = trainTestSplit(1)
Next we add L2 regularization to the logistic regression, meaning the loss function gains a penalty term proportional to the sum of the squared weights, and we vary the weight given to that penalty. In the code below we first define two helper functions for conveniently building and evaluating models:
import org.apache.spark.rdd.RDD
import org.apache.spark.mllib.classification.ClassificationModel
import org.apache.spark.mllib.optimization.{SquaredL2Updater, Updater}

def trainWithParams(input: RDD[LabeledPoint], regParam: Double,
    numIterations: Int, updater: Updater, stepSize: Double) = {
  val lr = new LogisticRegressionWithSGD()
  // note: setNumIterations was missing originally, so the numIterations argument was never used
  lr.optimizer.setNumIterations(numIterations).setRegParam(regParam)
    .setUpdater(updater).setStepSize(stepSize)
  lr.run(input)
}
def createMetrics(label: String, data: RDD[LabeledPoint], model: ClassificationModel) = {
  val scoreAndLabels = data.map { point =>
    (model.predict(point.features), point.label)
  }
  val metrics = new BinaryClassificationMetrics(scoreAndLabels)
  (label, metrics.areaUnderROC)
}
scaledDataCats.cache
val regResultsTest = Seq(0.0, 0.001, 0.0025, 0.005, 0.01).map { param =>
  val model = trainWithParams(train, param, numIterations, new SquaredL2Updater, 1.0)
  createMetrics(s"$param L2 regularization parameter", train, model)
}
regResultsTest.foreach { case (param, auc) => println(f"$param, AUC = ${auc * 100}%2.6f%%") }
Here we examine only AUC. The results:
0.0 L2 regularization parameter, AUC = 66.083019%
0.001 L2 regularization parameter, AUC = 66.128304%
0.0025 L2 regularization parameter, AUC = 66.106659%
0.005 L2 regularization parameter, AUC = 66.108655%
0.01 L2 regularization parameter, AUC = 66.181573%
As we can see, adding L2 regularization does improve the model, if modestly.
In principle, every parameter involved, such as the step size and the choice of optimizer, should be tuned in the same way against held-out data.
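For example, here is a minimal sketch of a step-size sweep reusing the two helpers above (my own addition, not from the original code; SimpleUpdater is MLlib's no-regularization updater):
import org.apache.spark.mllib.optimization.SimpleUpdater

// sweep the SGD step size with no regularization, scoring AUC on the held-out test set
val stepResults = Seq(0.001, 0.01, 0.1, 1.0, 10.0).map { stepSize =>
  val model = trainWithParams(train, 0.0, numIterations, new SimpleUpdater, stepSize)
  createMetrics(s"$stepSize step size", test, model)
}
stepResults.foreach { case (param, auc) => println(f"$param, AUC = ${auc * 100}%2.2f%%") }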