Testing text similarity with gensim

2018-05-20  lwyaoshen

How to compute the similarity of two documents (Part 2)

from gensim import corpora, models, similarities
documents = ["Shipment of gold damaged in a fire",
             "Delivery of silver arrived in a silver truck",
             "Shipment of gold arrived in a truck"]

Normally you would do some preprocessing on English text, such as removing stopwords, tokenizing, stemming, and filtering out low-frequency words. But to keep things simple, and to stay consistent with the "LSI Fast Track Tutorial", the only preprocessing here is lowercasing the words (a fuller pipeline is sketched after the code below):

texts = [[word for word in document.lower().split()] for document in documents]
print(texts)
dictionary = corpora.Dictionary(texts)
print(dictionary.token2id)
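
For reference, a fuller preprocessing pass might look like the sketch below. It is kept in a separate texts_filtered variable so the rest of the tutorial is unchanged; the stop list and the frequency threshold are illustrative choices, not part of the original tutorial.

# Illustrative preprocessing sketch (stop list and threshold are assumptions).
stoplist = set("a of in".split())
texts_filtered = [[word for word in document.lower().split() if word not in stoplist]
                  for document in documents]

# Drop tokens that appear only once across the whole corpus.
from collections import Counter
frequency = Counter(token for text in texts_filtered for token in text)
texts_filtered = [[token for token in text if frequency[token] > 1]
                  for text in texts_filtered]
print(texts_filtered)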

We can then convert each document from a string into a document vector of token ids:

corpus = [dictionary.doc2bow(text) for text in texts]
tfidf = models.TfidfModel(corpus)
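
Each entry of corpus is a bag-of-words vector, i.e. a list of (token_id, count) pairs. With the token2id mapping printed above, the first document comes out as:

print(corpus[0])
# [(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, 1), (6, 1)]
# i.e. 'a', 'damaged', 'fire', 'gold', 'in', 'of', 'shipment', once each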

Based on this TF-IDF model, we can turn the term-frequency document vectors above into tf-idf weighted document vectors:

corpus_tfidf = tfidf[corpus]
for doc in corpus_tfidf:
    print(doc)
print(tfidf.dfs)
print(tfidf.idfs)

Output:

[['shipment', 'of', 'gold', 'damaged', 'in', 'a', 'fire'], ['delivery', 'of', 'silver', 'arrived', 'in', 'a', 'silver', 'truck'], ['shipment', 'of', 'gold', 'arrived', 'in', 'a', 'truck']]
{'a': 0, 'damaged': 1, 'fire': 2, 'gold': 3, 'in': 4, 'of': 5, 'shipment': 6, 'arrived': 7, 'delivery': 8, 'silver': 9, 'truck': 10}
[(1, 0.6633689723434505), (2, 0.6633689723434505), (3, 0.2448297500958463), (6, 0.2448297500958463)]
[(7, 0.16073253746956623), (8, 0.4355066251613605), (9, 0.871013250322721), (10, 0.16073253746956623)]
[(3, 0.5), (6, 0.5), (7, 0.5), (10, 0.5)]
{0: 3, 1: 1, 2: 1, 3: 2, 4: 3, 5: 3, 6: 2, 7: 2, 8: 1, 9: 1, 10: 2}
{0: 0.0, 1: 1.5849625007211563, 2: 1.5849625007211563, 3: 0.5849625007211562, 4: 0.0, 5: 0.0, 6: 0.5849625007211562, 7: 0.5849625007211562, 8: 1.5849625007211563, 9: 1.5849625007211563, 10: 0.5849625007211562}
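
The dfs and idfs dictionaries above match gensim's default global weight, idf = log2(num_docs / doc_freq); a quick sanity check against the fitted model:

import math
num_docs = 3  # size of the toy corpus
for token_id, df in tfidf.dfs.items():
    assert abs(tfidf.idfs[token_id] - math.log2(num_docs / df)) < 1e-9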

With the tf-idf document vectors in hand, we can train an LSI model. Following the example in "Latent Semantic Indexing (LSI) A Fast Track Tutorial", we set the number of topics to 2:

lsi = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=2)
lsi.print_topics(2)

Output:

 0.438*"gold" + 0.438*"shipment" + 0.366*"truck" + 0.366*"arrived" + 0.345*"damaged" + 0.345*"fire" + 0.297*"silver" + 0.149*"delivery" + 0.000*"in" + 0.000*"a"
2013-05-27 19:15:26,468 : INFO : topic #1(1.000): 0.728*"silver" + 0.364*"delivery" + -0.364*"fire" + -0.364*"damaged" + 0.134*"truck" + 0.134*"arrived" + -0.134*"shipment" + -0.134*"gold" + -0.000*"a" + -0.000*"in"
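
The same weights can also be read programmatically rather than from the log; for example (in recent gensim versions show_topic returns (word, weight) pairs, so treat this as version-dependent):

# Top 5 words of the first LSI topic with their weights.
for word, weight in lsi.show_topic(0, topn=5):
    print(word, weight)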

With this LSI model, we can map the documents into a two-dimensional topic space:

corpus_lsi = lsi[corpus_tfidf]
for doc in corpus_lsi:
    print(doc)

Output:

[(0, 0.67211468809878649), (1, -0.54880682119355917)]
[(0, 0.44124825208697727), (1, 0.83594920480339041)]
[(0, 0.80401378963792647)]

As the output shows, documents 1 and 3 are more associated with the first topic, while document 2 is more associated with the second. (Document 3's second coordinate is essentially zero, so gensim's sparse output omits it.)

While we're at it, we can also run an LDA model:

lda = models.LdaModel(corpus_tfidf, id2word=dictionary, num_topics=2)
lda.print_topics(2)

Output:

topic #0: 0.119*silver + 0.107*shipment + 0.104*truck + 0.103*gold + 0.102*fire + 0.101*arrived + 0.097*damaged + 0.085*delivery + 0.061*of + 0.061*in
topic #1: 0.110*gold + 0.109*silver + 0.105*shipment + 0.105*damaged + 0.101*arrived + 0.101*fire + 0.098*truck + 0.090*delivery + 0.061*of + 0.061*in

In an LDA model each topic's word weights have a probabilistic meaning: they sum to 1, and larger values mean larger weights, so the numbers have a clear physical interpretation. Looking back at the output, though, the two topics trained on these three documents are too uniform to be convincing.
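
As a sanity check on that probabilistic reading, we can sum each topic's word probabilities directly (a sketch assuming the lda model above and a recent gensim, where show_topic returns (word, probability) pairs):

for topic_id in range(2):
    probs = [prob for _, prob in lda.show_topic(topic_id, topn=len(dictionary))]
    print(topic_id, sum(probs))  # each sum should be close to 1.0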

Now back to the LSI model. Given the LSI model, how do we compute the similarity between documents? Or, from another angle: given a query, how do we find the most relevant documents? The first step, of course, is to build an index:

index = similarities.MatrixSimilarity(lsi[corpus])  
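
The index can also be saved to disk and loaded back later, which matters once the corpus is no longer a toy (the file path here is illustrative):

index.save('/tmp/query.index')  # illustrative path
index = similarities.MatrixSimilarity.load('/tmp/query.index')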

Let's use the query from the English tutorial as an example: "gold silver truck". First, vectorize it:

>>> query = "gold silver truck"
>>> query_bow = dictionary.doc2bow(query.lower().split())
>>> print(query_bow)
[(3, 1), (9, 1), (10, 1)]

Then use the trained LSI model to map it into the two-dimensional topic space:

>>> query_lsi = lsi[query_bow]
>>> print(query_lsi)
[(0, 1.1012835748628467), (1, 0.72812283398049593)]

Finally, compute its cosine similarity against the documents in the index:

>>> sims = index[query_lsi]
>>> print(list(enumerate(sims)))
[(0, 0.40757114), (1, 0.93163693), (2, 0.83416492)]

Of course, we can also sort by similarity:

>>> sort_sims = sorted(enumerate(sims), key=lambda item: -item[1])
>>> print(sort_sims)
[(1, 0.93163693), (2, 0.83416492), (0, 0.40757114)]

As you can see, the result for this query is doc2 > doc3 > doc1, which agrees with the fast track tutorial, even though the numbers differ a bit.
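
Putting the pieces together, the whole query path can be wrapped in one small helper (a sketch reusing the dictionary, lsi and index objects built above):

def most_similar(query, topn=3):
    # Map a raw text query into LSI space and rank the indexed documents.
    query_bow = dictionary.doc2bow(query.lower().split())
    query_lsi = lsi[query_bow]
    sims = index[query_lsi]
    return sorted(enumerate(sims), key=lambda item: -item[1])[:topn]

print(most_similar("gold silver truck"))
# expected: [(1, 0.93...), (2, 0.83...), (0, 0.40...)]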
