I'm trying to add stock ticker symbols to the strings that get recognized as ORG entities. For each symbol I do:
nlp.matcher.add(symbol, u'ORG', {}, [[{u'orth': symbol}]])
I can see that the symbol gets added to the patterns:
print "Patterns:", nlp.matcher._patterns
But any symbol that wasn't recognized before the add is still not recognized afterwards. Apparently the tokens already exist in the vocabulary (which is why the vocab length doesn't change).
What should I be doing differently? What am I missing?
Thanks
Here's my sample code:
"""Short snippet to practice adding stock ticker symbols as ORG entities"""
from spacy.en import English
import spacy.en
from spacy.attrs import ORTH, TAG, LOWER, IS_ALPHA, FLAG63
import os
import csv
import sys
nlp = English()  # Load everything for the English model
print "Before nlp vocab length", len(nlp.matcher.vocab)
symbol_list = [u"CHK", u"JONE", u"NE", u"DO", u"ESV"]
txt = u"""drive double-digit rallies in Chesapeake Energy (NYSE: CHK), (NYSE: NE), (NYSE: DO), (NYSE: ESV), (NYSE: JONE)"""# u"""Drive double-digit rallies in Chesapeake Energy (NYSE: CHK), Noble Corporation (NYSE:NE), Diamond Offshore (NYSE:DO), Ensco (NYSE:ESV), and Jones Energy (NYSE: JONE)"""
before = nlp(txt)
for tok in before:  # Before adding entities
    print tok, tok.orth, tok.tag_, tok.ent_type_
for symbol in symbol_list:
    print "adding symbol:", symbol
    print "vocab length:", len(nlp.matcher.vocab)
    print "pattern length:", nlp.matcher.n_patterns
    nlp.matcher.add(symbol, u'ORG', {}, [[{u'orth': symbol}]])
print "Patterns:", nlp.matcher._patterns
print "Entities:", nlp.matcher._entities
for ent in nlp.matcher._entities:
    print ent.label
tokens = nlp(txt)
print "\n\nAfter:"
print "After nlp vocab length", len(nlp.matcher.vocab)
for tok in tokens:
    print tok, tok.orth, tok.tag_, tok.ent_type_
Best answer
Here is a working example based on the docs:
import spacy
nlp = spacy.load('en')
def merge_phrases(matcher, doc, i, matches):
    '''
    Merge a phrase. We have to be careful here because we'll change the token indices.
    To avoid problems, merge all the phrases once we're called on the last match.
    '''
    if i != len(matches) - 1:
        return None
    spans = [(ent_id, label, doc[start : end]) for ent_id, label, start, end in matches]
    for ent_id, label, span in spans:
        span.merge('NNP' if label else span.root.tag_, span.text, nlp.vocab.strings[label])
matcher = spacy.matcher.Matcher(nlp.vocab)
matcher.add(entity_key='stock-nyse', label='STOCK', attrs={}, specs=[[{spacy.attrs.ORTH: 'NYSE'}]], on_match=merge_phrases)
matcher.add(entity_key='stock-esv', label='STOCK', attrs={}, specs=[[{spacy.attrs.ORTH: 'ESV'}]], on_match=merge_phrases)
doc = nlp(u"""drive double-digit rallies in Chesapeake Energy (NYSE: CHK), (NYSE: NE), (NYSE: DO), (NYSE: ESV), (NYSE: JONE)""")
matcher(doc)
print(['%s|%s' % (t.orth_, t.ent_type_) for t in doc])
->
['drive|', 'double|', '-|', 'digit|', 'rallies|', 'in|', 'Chesapeake|ORG', 'Energy|ORG', '(|', 'NYSE|STOCK', ':|', 'CHK|', ')|', ',|', '(|', 'NYSE|STOCK', ':|', 'NE|GPE', ')|', ',|', '(|', 'NYSE|STOCK', ':|', 'DO|', ')|', ',|', '(|', 'NYSE|STOCK', ':|', 'ESV|STOCK', ')|', ',|', '(|', 'NYSE|STOCK', ':|', 'JONE|ORG', ')|']
NYSE and ESV are now tagged with the STOCK entity type. Basically, on each match you should manually merge the tokens and/or assign the entity type you want. There is also an acceptor function that lets you filter/reject matches as they are found.
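The docstring in merge_phrases above hints at why all merging is deferred to the last match: collapsing a span mutates the token sequence, so any (start, end) offsets recorded against the original doc become stale. The same pitfall can be illustrated with plain Python lists, no spaCy required (the token list and spans below are made up for illustration):

```python
# Why merging spans shifts later indices: spans recorded against the
# original token list go stale as soon as an earlier span is collapsed.

tokens = ["(", "NYSE", ":", "ESV", ")", ",", "(", "NYSE", ":", "JONE", ")"]
spans = [(1, 4), (7, 10)]  # half-open [start, end) spans over the original list


def merge_spans(tokens, spans):
    """Collapse each span into a single token. Processing right-to-left
    means a merge only ever shifts indices we have already consumed, so
    the remaining (smaller) span offsets stay valid."""
    out = list(tokens)
    for start, end in sorted(spans, reverse=True):
        out[start:end] = [" ".join(out[start:end])]
    return out


merged = merge_spans(tokens, spans)
# -> ['(', 'NYSE : ESV', ')', ',', '(', 'NYSE : JONE', ')']
```

Merging left-to-right with the raw offsets would break: after collapsing (1, 4), the list is two tokens shorter, so (7, 10) no longer points at "NYSE : JONE". Deferring the work to the last callback invocation (as the answer's merge_phrases does) sidesteps the problem the same way.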
On "python - how to add new entity (ORG) instances in spacy nlp", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/40345852/