Maybe I skipped part of the documentation, but what I am trying to find is a unique ID for each entity in a standard NER toolkit. For example:
import spacy
from spacy import displacy
import en_core_web_sm

nlp = en_core_web_sm.load()

text = "This is a text about Apple Inc based in San Fransisco. "\
       "And here is some text about Samsung Corp. "\
       "Now, here is some more text about Apple and its products for customers in Norway"
doc = nlp(text)

for ent in doc.ents:
    print('ID:{}\t{}\t"{}"'.format(ent.label, ent.label_, ent.text))

displacy.render(doc, jupyter=True, style='ent')
This returns:
ID:381 ORG "Apple Inc"
ID:382 GPE "San Fransisco"
ID:381 ORG "Samsung Corp."
ID:381 ORG "Apple"
ID:382 GPE "Norway"
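A note on why those IDs repeat: `ent.label` is the integer code of the label string ("ORG", "GPE"), not a per-entity identifier, which is why every ORG entity shares 381. As a minimal sketch (pure Python, with a hypothetical helper name), you could assign your own IDs by grouping identical surface forms, though this will not link "Apple Inc" and "Apple" to the same ID:

```python
from itertools import count

def assign_entity_ids(entities):
    """Map each distinct entity text to a sequential ID.
    `entities` is a list of (text, label) pairs, e.g. taken
    from (ent.text, ent.label_) of a spaCy doc."""
    ids = {}              # entity text -> assigned ID
    counter = count(1)
    result = []
    for text, label in entities:
        if text not in ids:
            ids[text] = next(counter)
        result.append((ids[text], label, text))
    return result

ents = [("Apple Inc", "ORG"), ("San Fransisco", "GPE"),
        ("Samsung Corp.", "ORG"), ("Apple", "ORG"), ("Norway", "GPE")]
print(assign_entity_ids(ents))
# [(1, 'ORG', 'Apple Inc'), (2, 'GPE', 'San Fransisco'),
#  (3, 'ORG', 'Samsung Corp.'), (4, 'ORG', 'Apple'), (5, 'GPE', 'Norway')]
```

Because string matching cannot resolve "Apple" back to "Apple Inc", getting GCP-style shared IDs across different mentions requires coreference resolution, which is what the question below is really asking for.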
I have been looking at ent.ent_id and ent.ent_id_, but according to the docs they are non-functional. I also could not find anything useful in ent.root.
For example, GCP NLP returns a number for each entity, which lets you identify multiple mentions of the same entity in the text:
This is a ⟨text⟩2 about ⟨Apple Inc⟩1 based in ⟨San Fransisco⟩4. And here is some ⟨text⟩3 about ⟨Samsung Corp⟩6. Now, here is some more ⟨text⟩8 about ⟨Apple⟩1 and its ⟨products⟩5 for ⟨customers⟩7 in ⟨Norway⟩9
Does spaCy support anything similar? Or is there a way to do this with NLTK or Stanford CoreNLP?
Best Answer
You can use the neuralcoref library to get coreference resolution on top of a spaCy model:
# Load your usual SpaCy model (one of SpaCy English models)
import spacy
nlp = spacy.load('en')
# Add neural coref to SpaCy's pipe
import neuralcoref
neuralcoref.add_to_pipe(nlp)
# You're done. You can now use NeuralCoref as you usually manipulate a SpaCy document annotations.
doc = nlp(u'My sister has a dog. She loves him.')
doc._.has_coref
doc._.coref_clusters
Installation and usage instructions can be found here: https://github.com/huggingface/neuralcoref
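Once neuralcoref has run, doc._.coref_clusters groups coreferent mentions, and each cluster can serve as one entity ID. Below is a minimal sketch of that mapping step, with the clusters mocked as plain strings (the real cluster objects contain spaCy Span mentions, and running the model requires a download); the function name is my own:

```python
def ids_from_clusters(clusters):
    """Assign one ID per coreference cluster; every mention
    in a cluster shares that ID.
    `clusters` mocks the shape of doc._.coref_clusters as a
    list of mention lists (here plain strings for clarity)."""
    mention_to_id = {}
    for cluster_id, mentions in enumerate(clusters, start=1):
        for mention in mentions:
            mention_to_id[mention] = cluster_id
    return mention_to_id

clusters = [["Apple Inc", "Apple"], ["My sister", "She"]]
print(ids_from_clusters(clusters))
# {'Apple Inc': 1, 'Apple': 1, 'My sister': 2, 'She': 2}
```

With the real pipeline you would iterate over the cluster mentions (spans) instead of strings and intersect them with doc.ents to tag each named entity with its cluster's ID.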
Regarding "python - spaCy coreference resolution - named entity recognition (NER) to return unique entity IDs?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/53750468/