2. Text Vectorization

2018-08-08  韧心222

In scikit-learn, feature extraction from text means converting text data into a numeric form that a computer can process. scikit-learn provides three vectorization methods:

- CountVectorizer
- TfidfVectorizer
- HashingVectorizer

All three live in the sklearn.feature_extraction.text module.

CountVectorizer

Let's start with CountVectorizer's constructor:

class sklearn.feature_extraction.text.CountVectorizer(
            input=u'content', 
            encoding=u'utf-8', 
            decode_error=u'strict', 
            strip_accents=None, 
            lowercase=True, 
            preprocessor=None, 
            tokenizer=None, 
            stop_words=None, 
            token_pattern=u'(?u)\b\w\w+\b', 
            ngram_range=(1, 1), 
            analyzer=u'word', 
            max_df=1.0, 
            min_df=1, 
            max_features=None, 
            vocabulary=None, 
            binary=False, 
            dtype=<type 'numpy.int64'>)

This article focuses on a few of these parameters, in particular analyzer and ngram_range.

First, let's look at the difference between setting analyzer to word, char, and char_wb on English text:

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer, HashingVectorizer

corps = [
    "When building the vocabulary ignore terms that have a document frequency strictly higher than the given threshold (corpus-specific stop words). If float, the parameter represents a proportion of documents, integer absolute counts. This parameter is ignored if vocabulary is not None.", 
    "Either a Mapping (e.g., a dict) where keys are terms and values are indices in the feature matrix, or an iterable over terms. If not given, a vocabulary is determined from the input documents. Indices in the mapping should not be repeated and should not have any gap between 0 and the largest index."
]


vectorizer = CountVectorizer(analyzer='word')
vectorizer.fit(corps)
vector = vectorizer.transform(corps)
print(vectorizer.vocabulary_)
print(vector.shape)
print(vector.toarray())
{'when': 58, 'building': 7, 'the': 53, 'vocabulary': 57, 'ignore': 24, 'terms': 50, 'that': 52, 'have': 21, 'document': 12, 'frequency': 17, 'strictly': 49, 'higher': 22, 'than': 51, 'given': 20, 'threshold': 55, 'corpus': 8, 'specific': 47, 'stop': 48, 'words': 60, 'if': 23, 'float': 16, 'parameter': 42, 'represents': 45, 'proportion': 43, 'of': 39, 'documents': 13, 'integer': 30, 'absolute': 0, 'counts': 9, 'this': 54, 'is': 31, 'ignored': 25, 'not': 38, 'none': 37, 'either': 14, 'mapping': 35, 'dict': 11, 'where': 59, 'keys': 33, 'are': 4, 'and': 2, 'values': 56, 'indices': 28, 'in': 26, 'feature': 15, 'matrix': 36, 'or': 40, 'an': 1, 'iterable': 32, 'over': 41, 'determined': 10, 'from': 18, 'input': 29, 'should': 46, 'be': 5, 'repeated': 44, 'any': 3, 'gap': 19, 'between': 6, 'largest': 34, 'index': 27}
(2, 61)
[[1 0 0 0 0 0 0 1 1 1 0 0 1 1 0 0 1 1 0 0 1 1 1 2 1 1 0 0 0 0 1 2 0 0 0 0
  0 1 1 1 0 0 2 1 0 1 0 1 1 1 1 1 1 3 1 1 0 2 1 0 1]
 [0 1 3 1 2 1 1 0 0 0 1 1 0 1 1 1 0 0 1 1 1 1 0 1 0 0 2 1 2 1 0 1 1 1 1 2
  1 0 3 0 1 1 0 0 1 0 2 0 0 0 2 0 0 4 0 0 1 1 0 1 0]]
vectorizer = CountVectorizer(analyzer = 'char')
vectorizer.fit(corps)
vector = vectorizer.transform(corps)
print(vectorizer.vocabulary_)
print(vector.shape)
print(vector.toarray())
{'w': 28, 'h': 14, 'e': 11, 'n': 19, ' ': 0, 'b': 8, 'u': 26, 'i': 15, 'l': 17, 'd': 10, 'g': 13, 't': 25, 'v': 27, 'o': 20, 'c': 9, 'a': 7, 'r': 23, 'y': 30, 'm': 18, 's': 24, 'f': 12, 'q': 22, '(': 1, 'p': 21, '-': 4, ')': 2, '.': 5, ',': 3, 'k': 16, 'x': 29, '0': 6}
(2, 31)
[[40  1  1  2  1  3  0 15  4 10  6 27  6  6 12 16  0  7  5 16 19  8  1 20
  15 23  9  4  2  0  4]
 [53  1  1  3  0  5  1 22  4  5 13 37  3  6  9 18  1  6  8 20 10  7  0 16
  11 20  7  5  2  2  3]]
vectorizer = CountVectorizer(analyzer='char_wb')
vectorizer.fit(corps)
vector = vectorizer.transform(corps)
print(vectorizer.vocabulary_)
print(vector.shape)
print(vector.toarray())
{' ': 0, 'w': 28, 'h': 14, 'e': 11, 'n': 19, 'b': 8, 'u': 26, 'i': 15, 'l': 17, 'd': 10, 'g': 13, 't': 25, 'v': 27, 'o': 20, 'c': 9, 'a': 7, 'r': 23, 'y': 30, 'm': 18, 's': 24, 'f': 12, 'q': 22, '(': 1, 'p': 21, '-': 4, ')': 2, '.': 5, ',': 3, 'k': 16, 'x': 29, '0': 6}
(2, 31)
[[ 82   1   1   2   1   3   0  15   4  10   6  27   6   6  12  16   0   7
    5  16  19   8   1  20  15  23   9   4   2   0   4]
 [108   1   1   3   0   5   1  22   4   5  13  37   3   6   9  18   1   6
    8  20  10   7   0  16  11  20   7   5   2   2   3]]

As the results show: with analyzer set to word, CountVectorizer counts whole words, so the vocabulary size is 61 (that is, the two documents contain 61 distinct words). With analyzer set to char, it counts individual characters, and the vocabulary size is 31. With analyzer set to char_wb, the output above looks no different from char; the difference is that char_wb builds character n-grams only from text inside word boundaries (delimited by whitespace), whereas char builds n-grams across them. The following example makes this visible:

vectorizer = CountVectorizer(analyzer = 'char', ngram_range=(5,5))
vectorizer.fit(['Hello word'])
vector = vectorizer.transform(corps)
print(vectorizer.get_feature_names())


vectorizer = CountVectorizer(analyzer = 'char_wb', ngram_range=(5,5))
vectorizer.fit(['Hello word'])
vector = vectorizer.transform(corps)
print(vectorizer.get_feature_names())
[' word', 'ello ', 'hello', 'llo w', 'lo wo', 'o wor']
[' hell', ' word', 'ello ', 'hello', 'word ']

The example above also introduces another CountVectorizer parameter, ngram_range. Its meaning is straightforward: when we set

ngram_range = (a, b)

a is the minimum and b the maximum length of the n-grams to extract; every n-gram with a ≤ n ≤ b is included in the vocabulary.
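The effect is easy to see on a tiny corpus (the four-word sentence below is made up for illustration): with ngram_range=(1, 2), the vocabulary contains every unigram and every bigram.

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the quick brown fox"]

# ngram_range=(1, 2): extract all word n-grams of length 1 and 2
vectorizer = CountVectorizer(analyzer='word', ngram_range=(1, 2))
vectorizer.fit(docs)
print(sorted(vectorizer.vocabulary_))
# ['brown', 'brown fox', 'fox', 'quick', 'quick brown', 'the', 'the quick']
```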

TfidfVectorizer

Again, let's first look at the definition of TfidfVectorizer:

class sklearn.feature_extraction.text.TfidfVectorizer(input=u'content', 
    encoding=u'utf-8', 
    decode_error=u'strict', 
    strip_accents=None, 
    lowercase=True, 
    preprocessor=None, 
    tokenizer=None, 
    analyzer=u'word', 
    stop_words=None, 
    token_pattern=u'(?u)\b\w\w+\b', 
    ngram_range=(1, 1), 
    max_df=1.0, 
    min_df=1, 
    max_features=None, 
    vocabulary=None, 
    binary=False, 
    dtype=<type 'numpy.int64'>, 
    norm=u'l2', 
    use_idf=True, 
    smooth_idf=True, 
    sublinear_tf=False)

This class's signature is very similar to CountVectorizer's, so the shared parameters need no further introduction; the new ones are norm, use_idf, smooth_idf, and sublinear_tf, which control how the raw term counts are reweighted and normalized.

TF-IDF is one of the most commonly used vector space models, and its usage is essentially the same as CountVectorizer's:

vectorizer = TfidfVectorizer(norm='l1')
vectorizer.fit(corps)
vector = vectorizer.transform(corps)
print(vector.shape)
(2, 61)
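To make the extra parameters concrete, here is a minimal sketch on an assumed toy two-document corpus, inspecting the fitted idf_ weights under the default smooth_idf=True, where idf(t) = ln((1 + n) / (1 + df(t))) + 1:

```python
import math
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus: 'banana' appears in both documents,
# 'apple' and 'cherry' in only one each.
docs = ["apple banana apple", "banana cherry"]

vectorizer = TfidfVectorizer(smooth_idf=True)  # the default
vectorizer.fit(docs)

print(sorted(vectorizer.vocabulary_))  # ['apple', 'banana', 'cherry']
# With smooth_idf=True, idf(t) = ln((1 + n) / (1 + df(t))) + 1, so a
# term present in every document still gets idf = 1 rather than 0.
print(vectorizer.idf_)  # approximately [1.405, 1.0, 1.405]
```

Note that even the maximally common term 'banana' keeps a nonzero weight; that is exactly what the "+ 1" in the smoothed formula achieves.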

Next, let's look at the last text vectorization method.

HashingVectorizer

Any natural language has tens of thousands of distinct terms, while a single document uses only a small fraction of them, so feature vectors built as above can waste a lot of memory. To avoid this, the feature-hashing approach was devised; its underlying principle is explained at https://www.cnblogs.com/pinard/p/6688348.html

scikit-learn provides an implementation of this as well, with the following constructor:

class sklearn.feature_extraction.text.HashingVectorizer(input=u'content', 
    encoding=u'utf-8', 
    decode_error=u'strict', 
    strip_accents=None, 
    lowercase=True, 
    preprocessor=None, 
    tokenizer=None, 
    stop_words=None, 
    token_pattern=u'(?u)\b\w\w+\b', 
    ngram_range=(1, 1), 
    analyzer=u'word', 
    n_features=1048576, 
    binary=False, 
    norm=u'l2', 
    alternate_sign=True, 
    non_negative=False, 
    dtype=<type 'numpy.float64'>)

Its most important parameter is n_features, the number of columns in the output matrix (2^20 = 1048576 by default).

This class is used differently from the previous two: no fit step is needed; you can transform directly.

vectorizer = HashingVectorizer(n_features = 20)
vector = vectorizer.transform(corps)
print(vector.shape)
print(vector.toarray())
(2, 20)
[[-0.26726124 -0.13363062  0.          0.          0.          0.13363062
   0.13363062  0.          0.40089186 -0.13363062  0.53452248  0.26726124
  -0.13363062  0.          0.          0.          0.          0.
  -0.40089186  0.40089186]
 [ 0.          0.12403473  0.          0.          0.24806947 -0.24806947
  -0.24806947 -0.24806947  0.49613894  0.3721042   0.12403473  0.12403473
   0.12403473  0.          0.12403473  0.12403473  0.24806947  0.12403473
  -0.3721042   0.24806947]]
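A quick way to see this statelessness in action: two independently constructed HashingVectorizer instances (with a made-up two-document corpus and a deliberately small n_features) produce identical output, because the term-to-column mapping is a fixed hash function rather than a learned vocabulary.

```python
from sklearn.feature_extraction.text import HashingVectorizer

docs = ["the cat sat on the mat", "the dog ate my homework"]

# No fit step: each instance hashes terms to columns the same way,
# so separately created vectorizers agree exactly.
v1 = HashingVectorizer(n_features=32)
v2 = HashingVectorizer(n_features=32)

a = v1.transform(docs).toarray()
b = v2.transform(docs).toarray()
print((a == b).all())  # True
```

This is what makes HashingVectorizer attractive for streaming or out-of-core learning: there is no vocabulary object to build, store, or share between processes.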

Finally, note that scikit-learn's feature extraction package currently provides no stemming option, so you need to combine it with NLTK's stem package. This has already been raised in the community, so a stemming option will hopefully be offered before long.
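In the meantime, stemming can be plugged in through the tokenizer parameter. A minimal sketch, assuming NLTK is installed, using its PorterStemmer and a regex that mirrors CountVectorizer's default token_pattern:

```python
import re

from nltk.stem import PorterStemmer  # requires the nltk package
from sklearn.feature_extraction.text import CountVectorizer

stemmer = PorterStemmer()
token_re = re.compile(r'(?u)\b\w\w+\b')  # same as the default token_pattern

def stemmed_tokenizer(doc):
    # Tokenize the way CountVectorizer would, then stem each token.
    return [stemmer.stem(t) for t in token_re.findall(doc)]

vectorizer = CountVectorizer(tokenizer=stemmed_tokenizer)
vectorizer.fit(["building builds built buildings"])
print(sorted(vectorizer.vocabulary_))  # ['build', 'built']
```

Because stemming happens inside the tokenizer, all downstream behavior (n-grams, document-frequency cutoffs, the vocabulary itself) operates on the stemmed forms.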
