
Getting Started with Kaggle: Handwritten Digit Recognition

2017-05-04  michaelgbw
Preface:

Beginners who want to study big data, machine learning, or deep learning often face two problems:


Kaggle is basically organized around competitions.



Open the competitions page and you can see a lot of them. Note the colored bar on the left of each entry: green means easy, red means hard, and there are also blue, orange, and so on.
As beginners, we need not look at anything orange or above...

Getting started

Here I chose an entry-level problem, Digit Recognizer, that is, digit recognition. After opening it, click the Data tab:

Each image is 28 pixels in height and 28 pixels in width, for a total of 784 pixels in total. Each pixel has a single pixel-value associated with it, indicating the lightness or darkness of that pixel, with higher numbers meaning darker. This pixel-value is an integer between 0 and 255, inclusive.

As the description says, the images are 28 × 28. Unlike the TensorFlow MNIST demo, though, this dataset comes as CSV: each image's 784 pixels (28 × 28) are flattened into a single row. Looking at other competitions, almost all of them provide CSV data as well. There is no need to split the data by hand, either; test and train sets are already separated for us. All we have to do is upload the final predictions, again as a CSV, and the system scores them.
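
To get a feel for the format, here is a minimal sketch (assuming pandas and matplotlib are installed and train.csv sits in the working directory) that reads the CSV and folds one flattened row back into a 28 × 28 image:

import pandas as pd
import matplotlib.pyplot as plt

train = pd.read_csv('train.csv')   # column 0 is the label, the remaining 784 are pixels

pixels = train.iloc[0, 1:].values.reshape(28, 28)   # unflatten one sample back into an image
plt.imshow(pixels, cmap='gray')
plt.title('label: {}'.format(train.iloc[0, 0]))
plt.show()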

Note that you do not upload code. In other words, whatever algorithm or framework we use, only the result has to be right: Spark MLlib, scikit-learn in Python, TensorFlow, Caffe/Caffe2 are all fine~
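
For reference, the submission is just two columns, ImageId and Label, one row per test image, matching the header the code below writes; the first few rows look roughly like this (labels here are hypothetical):

ImageId,Label
1,2
2,0
3,9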

Here is a solution I implemented with a random forest classifier:

#!/usr/bin/python
# -*- coding: utf-8 -*-

import csv

from numpy import *
from sklearn.ensemble import RandomForestClassifier

# convert a matrix of strings to a matrix of ints
def toInt(array):
    array=mat(array)
    m,n=shape(array)
    newmat=zeros((m,n))
    for i in xrange(m):
        for j in xrange(n):
            newmat[i,j]=int(array[i,j])
    return newmat
    
# quantize pixels into three levels: 0 stays 0, values above 128 become 2,
# any other nonzero value becomes 1
def nomalizing(array):
    m,n=shape(array)
    for i in xrange(m):
        for j in xrange(n):
            if array[i,j]!=0:
                if array[i,j] > 128:
                    array[i,j] = 2
                else:
                    array[i,j] = 1
    return array
    
def loadTrainData():
    l=[]
    with open('train.csv','r') as fp:
        lines=csv.reader(fp)
        for line in lines:
            l.append(line)  # 42001 rows x 785 cols (header row + 42000 samples)
    # drop the header row
    l.remove(l[0])
    l=array(l)
    label=l[:,0]
    data=l[:,1:]
    return nomalizing(toInt(data)),toInt(label)  # label 1*42000, data 42000*784

def loadTestData():
    l=[]
    with open('test.csv') as fp:
        lines=csv.reader(fp)
        for line in lines:
            l.append(line)  # 28001 rows x 784 cols (header row + 28000 samples)
    # drop the header row
    l.remove(l[0])
    data=array(l)
    return nomalizing(toInt(data))  # data 28000*784

def saveResult(result,csvName):
    # 'wb' is the right mode for csv.writer under Python 2
    with open(csvName,'wb') as myFile:
        myWriter=csv.writer(myFile)
        myWriter.writerow(["ImageId","Label"])
        index=0
        for i in result:
            tmp=[]
            index=index+1
            tmp.append(index)
            tmp.append(int(i))
            myWriter.writerow(tmp)
            
def RFClassify(trainData,trainLabel,testData):
    # 200 trees; warm_start only matters if you later add estimators and refit
    nbCF=RandomForestClassifier(n_estimators=200,warm_start=True)
    nbCF.fit(trainData,ravel(trainLabel))
    testLabel=nbCF.predict(testData)
    saveResult(testLabel,'Result.csv')
    return testLabel


def dRecognition():
    trainData,trainLabel=loadTrainData()
    print "load train data finish"
    testData=loadTestData()
    print "load test data finish"
    result=RFClassify(trainData,trainLabel,testData)   
    print "finish!"

if __name__ == '__main__':
    dRecognition()

Start it running; on a decent machine it finishes in two or three minutes. After all, it is not a DNN (deep neural network).
Then upload the resulting file.
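
Incidentally, most of those minutes go into the element-wise Python loops in toInt and nomalizing. A vectorized NumPy version of the same quantization (my own sketch, with the hypothetical name nomalizing_fast) is far faster:

import numpy as np

def nomalizing_fast(array):
    # same three-level quantization as above:
    # 0 stays 0, (0, 128] becomes 1, values above 128 become 2
    arr = np.asarray(array, dtype=np.int64)
    out = np.zeros_like(arr)
    out[arr > 0] = 1
    out[arr > 128] = 2
    return out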

You can see I submitted four times (each time tuning a parameter or two, such as the number of trees or whether leaves are grown pure). The best accuracy rate was 0.96771. I already knew this going in: you can throw a random forest at anything, but it is never quite accurate enough. One way to tune those parameters less haphazardly is a grid search, sketched below.
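
This is only a sketch (hypothetical parameter grid, assuming scikit-learn's GridSearchCV and the trainData/trainLabel loaded earlier), not what I actually ran for those four submissions:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {
    'n_estimators': [100, 200, 400],   # number of trees
    'min_samples_leaf': [1, 3, 5],     # 1 lets leaves grow fully pure
}
search = GridSearchCV(RandomForestClassifier(), param_grid, cv=3)
search.fit(trainData, ravel(trainLabel))
print("best params: {}, CV accuracy: {:.5f}".format(search.best_params_, search.best_score_))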
Now let's look at the top of the leaderboard:

Impressive: there are even 100% scores, which shows the problem itself can be matched exactly. We cannot see their code, so we have no idea what methods they used. Aliens, maybe... er, I digress.

A neural network solution:

Another nice thing here is that comments are provided. I opened one and, based on its notes, roughly put together a TensorFlow version. But since I am still at work I have no environment to run it in (the company provides TensorFlow on YARN, which I have not figured out yet...), so I will try "single machine, single GPU" once I am back at school and have some free time; after all, I cannot afford a good GPU, sigh~~
The code:

import numpy as np
import pandas as pd
import tensorflow as tf

# The competition datafiles are in the directory ../input
# Read competition data files:
train = pd.read_csv("/home/hdp-like/bigame_gbw/TF28_28/train.csv")
test  = pd.read_csv("/home/hdp-like/bigame_gbw/TF28_28/test.csv")

# Write to the log:
print("Training set has {0[0]} rows and {0[1]} columns".format(train.shape))
print("Test set has {0[0]} rows and {0[1]} columns".format(test.shape))

learning_rate = 0.01
training_iteration = 30   # number of passes over the training set
batch_size = 300
display_step = 2          # print the loss every display_step iterations


trainfv = train.drop(['label'], axis=1).values.astype(dtype=np.float32)
trainLabels = train['label'].tolist()
# one-hot encode the labels (a throwaway session just to materialize the tensor)
ohtrainLabels = tf.one_hot(trainLabels, depth=10)
ohtrainLabelsNdarray = tf.Session().run(ohtrainLabels).astype(dtype=np.float64)
trainfv = np.multiply(trainfv, 1.0 / 255.0)   # scale pixels to [0, 1]

testData = test.values
testData = np.multiply(testData, 1.0 / 255.0)   # scale pixels to [0, 1]

x = tf.placeholder("float",[None,784])   # flattened 28x28 input images
y = tf.placeholder("float",[None,10])    # one-hot labels

W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))

# softmax regression: model = softmax(xW + b)
with tf.name_scope("Wx_b") as scope:
  model = tf.nn.softmax(tf.matmul(x,W) + b)
  
w_h = tf.summary.histogram("weights",W)
b_h = tf.summary.histogram("biases",b)

# loss function: cross-entropy (negative log-likelihood)
with tf.name_scope("cost_function") as scope:
  # the small constant guards against log(0) producing NaNs
  cost_function = -tf.reduce_sum(y * tf.log(model + 1e-10))
  tf.summary.scalar("cost_function",cost_function)

#optimization with SGD
with tf.name_scope("train") as scope:
  op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost_function)
init = tf.global_variables_initializer()
merged_summary_op = tf.summary.merge_all()

import math
from random import randint

# pick a random contiguous slice of `size` samples, for faster convergence
def random_batch(data,labels,size):
  value = len(data) // size            # number of complete batches
  intervall = randint(0,value-1)
  start, end = intervall*size, (intervall+1)*size
  return data[start:end], labels[start:end]

with tf.Session() as sess:
  sess.run(init)
  # runs on a single machine; view the summaries on TensorBoard
  for iteration in range(training_iteration):
      avg_cost = 0
      total_batch = int(trainfv.shape[0]/batch_size)
      for i in range(total_batch):
          batch_xs,batch_ys = random_batch(trainfv,ohtrainLabelsNdarray,batch_size)
          sess.run(op,feed_dict={x: batch_xs, y: batch_ys})
          avg_cost += sess.run(cost_function,feed_dict={x: batch_xs, y: batch_ys}) / total_batch
      if iteration % display_step == 0:
          print "Iteration:", '%04d' % (iteration + 1), "cost=", "{:.9f}".format(avg_cost)
  print "Training Finished"

  correct_prediction = tf.equal(tf.argmax(model, 1), tf.argmax(y, 1))
  accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
  # note: this is accuracy on (part of) the training set, not a held-out validation set
  print "\nAccuracy of the current model: ",sess.run(accuracy, feed_dict={x: trainfv[0:10000], y: ohtrainLabelsNdarray[0:10000]})
  
  prob = sess.run(tf.argmax(model,1), feed_dict = {x: testData})
  which = 1
  print 'predicted label: {}'.format(str(prob[which]))
  
  print(prob)
  #import csv
  ## note: on Kaggle, ../input is read-only, so write to the working directory
  #outputFile_dir = 'output.csv'
  #header = ['ImageId','Label']
  #with open(outputFile_dir, 'wb') as csvFile:
  #    writer = csv.writer(csvFile, delimiter = ',')
  #    writer.writerow(header)
  #    for i, p in enumerate(prob):
  #        writer.writerow([str(i+1), str(p)])

Oh, one more thing: don't write comments in Chinese. The generated file is UTF-8, and that might affect the accuracy of the checking... besides, Chinese comments look pretty low-rent anyway~

That's it

A while back, a sophomore (a younger classmate; I have no idea whether they are male or female) asked me: "I want to get started with big data; I've done a few Featured competitions..." A few. And Featured ones at that. I replied on WeChat, then quietly checked their Moments... they had already hidden them from me. Looks like I had run into a great master, or maybe an alien~ All I ask is that they never outright block me.
