
[python] spacy

2019-04-20  VanJordan

An overview of spaCy's features

spaCy can be used for tokenization, named entity recognition, part-of-speech tagging, and more, but first you need to install it and download a pretrained model:

pip install --user spacy
python -m spacy download en_core_web_sm
pip install neuralcoref
pip install textacy

Sentencizer

import spacy
nlp = spacy.load('en_core_web_sm')  # load the pretrained model

txt = "some text read from one paper ..."
doc = nlp(txt)

for sent in doc.sents:
    print(sent)
    print('#'*50)

def set_custom_boundaries(doc):
    '''spaCy does not treat "$." or "}." as the end of a sentence.
    This custom boundary component fixes that.'''
    for token in doc[:-1]:
        if "$." in token.text or "}." in token.text or token.text == ";":
            doc[token.i+1].is_sent_start = True
    return doc

# add the custom boundary component once; skip if it already exists
try:
    nlp.add_pipe(set_custom_boundaries, before="parser")
except ValueError:
    pass
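For a quick sanity check, sentence splitting also works without the full pretrained model, using a blank pipeline plus the rule-based sentencizer. A minimal sketch (the try/except covers both the spaCy 2.x and 3.x add_pipe APIs, since the post above uses the 2.x style):

```python
import spacy

# a blank English pipeline: no model download needed for this sketch
nlp_blank = spacy.blank("en")
try:
    nlp_blank.add_pipe("sentencizer")                         # spaCy 3.x API
except ValueError:
    nlp_blank.add_pipe(nlp_blank.create_pipe("sentencizer"))  # spaCy 2.x API

doc = nlp_blank("Monopoles are hypothetical. No one has observed one.")
sents = [s.text for s in doc.sents]
print(sents)
```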

Tokenization

Tokenization splits a sentence into individual tokens. English words are mostly separated by whitespace, but spaCy's tokenizer also splits off punctuation and contractions.

import spacy
nlp = spacy.load('en_core_web_sm')

txt = "A magnetic monopole is a hypothetical elementary particle."
doc = nlp(txt)
tokens = [token for token in doc]
print(tokens)
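To see that this is more than whitespace splitting, here is a small sketch with a contraction; tokenization only needs the blank language class, not a trained model:

```python
import spacy

# tokenization uses language-specific rules, no pretrained model required
nlp_tok = spacy.blank("en")
doc = nlp_tok("Don't split monopoles, please.")
tokens = [t.text for t in doc]
print(tokens)  # "Don't" becomes two tokens: "Do" and "n't"
```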

Part-of-speech tagging

pos = [token.pos_ for token in doc]
print(pos)
>>> ['DET', 'ADJ', 'NOUN', 'VERB', 'DET', 'ADJ', 'ADJ', 'NOUN', 'PUNCT']
# i.e. [determiner, adjective, noun, verb, determiner, adjective, adjective, noun, punctuation]
# the original sentence is [A, magnetic, monopole, is, a, hypothetical, elementary, particle, .]

Lemmatization

lem = [token.lemma_ for token in doc]
print(lem)
>>> ['a', 'magnetic', 'monopole', 'be', 'a', 'hypothetical', 'elementary', 'particle', '.']

Stop words

stop_words = [token.is_stop for token in doc]
print(stop_words)
>>> [True, False, False, True, True, False, False, False, False]
# as you can see, the stop words in this magnetic-monopole example are "a" (both occurrences) and "is".
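A common use is filtering stop words (and punctuation) out before downstream processing. A minimal sketch; the stop-word flags come from the language's lexical data, so a blank pipeline is enough here:

```python
import spacy

# is_stop and is_punct are lexical attributes, available without a model
nlp_sw = spacy.blank("en")
doc = nlp_sw("A magnetic monopole is a hypothetical elementary particle.")
content = [t.text for t in doc if not t.is_stop and not t.is_punct]
print(content)
```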

Dependency Parsing

Dependency parsing labels each word's grammatical function: whether it is the subject, predicate, object, a modifier, and so on. Use token.dep_ to extract the label.

dep = [token.dep_ for token in doc]
print(dep)
>>> ['det', 'amod', 'nsubj', 'ROOT', 'det', 'amod', 'amod', 'attr', 'punct']

Noun Chunks

noun_chunks = [nc for nc in doc.noun_chunks]
print(noun_chunks)
>>> [A magnetic monopole, a hypothetical elementary particle]

Named Entity Recognition

txt = '''European authorities fined Google a record $5.1 billion
on Wednesday for abusing its power in the mobile phone market and
ordered the company to alter its practices'''
doc = nlp(txt)
ners = [(ent.text, ent.label_) for ent in doc.ents]
print(ners)
>>> [('European', 'NORP'), ('Google', 'ORG'), ('$5.1 billion', 'MONEY'), ('Wednesday', 'DATE')]
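If a label such as NORP is unfamiliar, spacy.explain() returns a short human-readable gloss for it (no model needed):

```python
import spacy

# spacy.explain maps tag/label names to plain-English descriptions
for label in ('NORP', 'ORG', 'MONEY', 'DATE'):
    print(label, '->', spacy.explain(label))
```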

Coreference Resolution

txt = "My sister has a son and she loves him."

# add the pretrained neural-network coreference resolver to the spaCy pipeline
import neuralcoref
neuralcoref.add_to_pipe(nlp)

doc = nlp(txt)
doc._.coref_clusters
>>> [My sister: [My sister, she], a son: [a son, him]]

Display

Visualization. This feature gets its own section because it is just that cool. Two quick examples: the first visualizes the dependency parse, and the second highlights the named entities in the doc from the NER section above.

from spacy import displacy

txt = '''In particle physics, a magnetic monopole is a 
hypothetical elementary particle.'''
displacy.render(nlp(txt), style='dep', jupyter=True,
                options={'distance': 90})

displacy.render(doc, style='ent', jupyter=True)

Knowledge Extraction

This part uses textacy, which was installed with pip above. The semistructured_statements() function in textacy.extract extracts every statement whose subject is the magnetic monopole and whose verb lemma is "be". First, copy the text of the Wikipedia article on magnetic monopoles into magnetic_monopole.txt.

import textacy.extract

nlp = spacy.load('en_core_web_sm')

with open("magnetic_monopole.txt", "r") as fin:
    txt = fin.read()

doc = nlp(txt)
statements = textacy.extract.semistructured_statements(doc, "monopole")
for statement in statements:
    subject, verb, fact = statement
    print(f" - {fact}")
- a singular solution of Maxwell's equation (because it requires removing the worldline from spacetime
- a [[topological defect]] in a compact U(1) gauge theory
- a new [[elementary particle]], and would violate [[Gauss's law for magnetism