python - How do I create a dictionary for spaCy NLP?

Tags: python dictionary nlp spacy

I plan to use the spaCy NLP engine, and I am starting from a dictionary. I have read this resource and this, but I can't get started.

I have this code:

from spacy.en import English
parser = English()

# Test Data
multiSentence = "There is an art, it says, or rather, a knack to flying. " \
                "The knack lies in learning how to throw yourself at the ground and miss. " \
                "In the beginning the Universe was created. This has made a lot of people " \
                "very angry and been widely regarded as a bad move."
parsedData = parser(multiSentence)
for i, token in enumerate(parsedData):
    print("original:", token.orth, token.orth_)
    print("lowercased:", token.lower, token.lower_)
    print("lemma:", token.lemma, token.lemma_)
    print("shape:", token.shape, token.shape_)
    print("prefix:", token.prefix, token.prefix_)
    print("suffix:", token.suffix, token.suffix_)
    print("log probability:", token.prob)
    print("Brown cluster id:", token.cluster)
    print("----------------------------------------")
    if i > 1:
        break

# Let's look at the sentences
sents = []
for span in parsedData.sents:
    # go from the start to the end of each span, returning each token in the sentence
    # combine each token using join()
    sent = ''.join(parsedData[i].string for i in range(span.start, span.end)).strip()
    sents.append(sent)

print('To show sentence')
for sentence in sents:
    print(sentence)


# Let's look at the part of speech tags of the first sentence
for span in parsedData.sents:
    sent = [parsedData[i] for i in range(span.start, span.end)]
    break

for token in sent:
    print(token.orth_, token.pos_)

# Let's look at the dependencies of this example:
example = "The boy with the spotted dog quickly ran after the firetruck."
parsedEx = parser(example)
# shown as: original token, dependency tag, head word, left dependents, right dependents
for token in parsedEx:
    print(token.orth_, token.dep_, token.head.orth_, [t.orth_ for t in token.lefts], [t.orth_ for t in token.rights])

# Let's look at the named entities of this example:
example = "Apple's stocks dropped dramatically after the death of Steve Jobs in October."
parsedEx = parser(example)
for token in parsedEx:
    print(token.orth_, token.ent_type_ if token.ent_type_ != "" else "(not an entity)")

print("-------------- entities only ---------------")
# if you just want the entities and nothing else, you can access the parsed example's "ents" property like this:
ents = list(parsedEx.ents)
for entity in ents:
    print(entity.label, entity.label_, ' '.join(t.orth_ for t in entity))

messyData = "lol that is rly funny :) This is gr8 i rate it 8/8!!!"
parsedData = parser(messyData)
for token in parsedData:
    print(token.orth_, token.pos_, token.lemma_)

Where can I change these tokens (token.orth, token.orth_, etc.):

print("original:", token.orth, token.orth_)
print("lowercased:", token.lower, token.lower_)
print("lemma:", token.lemma, token.lemma_)
print("shape:", token.shape, token.shape_)
print("prefix:", token.prefix, token.prefix_)
print("suffix:", token.suffix, token.suffix_)
print("log probability:", token.prob)
print("Brown cluster id:", token.cluster)

Can I save those tokens in my own dictionary? Thanks for your help.

Best Answer

It's not entirely clear what data structure you need, but let's try to answer some of the questions.

Q: Where can I change these tokens (token.orth, token.orth_, ...)?

Those tokens should not be changed, because they are annotations created by spaCy's English model. (See the definition of annotation.)

For details on what the individual annotations mean, see the spaCy documentation for orth, pos, tag, lemma and text.
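Even though these annotations are read-only views, nothing stops you from copying their values out into a plain Python dict. A minimal sketch, assuming `parsedData` is the parsed document from the question (the helper works on any object exposing these attributes):

```python
def token_to_dict(token):
    """Copy commonly used spaCy token annotations into a plain dict."""
    return {
        "orth": token.orth_,
        "lower": token.lower_,
        "lemma": token.lemma_,
        "shape": token.shape_,
        "prefix": token.prefix_,
        "suffix": token.suffix_,
        "prob": token.prob,
        "cluster": token.cluster,
    }

# With spaCy this would be used as:
#   annotations = [token_to_dict(t) for t in parsedData]
```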

Q: But can we change the annotations of these tokens?

Possibly. Yes and no.

Looking at the code, we see that the spacy.tokens.doc.Doc class is a fairly complex Cython object:

cdef class Doc:
    """
    A sequence of `Token` objects. Access sentences and named entities,
    export annotations to numpy arrays, losslessly serialize to compressed
    binary strings.
    Aside: Internals
        The `Doc` object holds an array of `TokenC` structs.
        The Python-level `Token` and `Span` objects are views of this
        array, i.e. they don't own the data themselves.
    Code: Construction 1
        doc = nlp.tokenizer(u'Some text')
    Code: Construction 2
        doc = Doc(nlp.vocab, orths_and_spaces=[(u'Some', True), (u'text', True)])
    """

But in general, it is a sequence of spacy.tokens.token.Token objects, and it is intrinsically tied to a spacy.Vocab object.

First, let's see whether some of these annotations are mutable. Let's start with the POS tag:

>>> import spacy
>>> nlp = spacy.load('en')
>>> doc = nlp('This is a foo bar sentence.')

>>> type(doc[0]) # First word. 
<class 'spacy.tokens.token.Token'>

>>> dir(doc[0]) # Properties/functions available for the Token object. 
['__bytes__', '__class__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__le__', '__len__', '__lt__', '__ne__', '__new__', '__pyx_vtable__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__unicode__', 'ancestors', 'check_flag', 'children', 'cluster', 'conjuncts', 'dep', 'dep_', 'doc', 'ent_id', 'ent_id_', 'ent_iob', 'ent_iob_', 'ent_type', 'ent_type_', 'has_repvec', 'has_vector', 'head', 'i', 'idx', 'is_alpha', 'is_ancestor', 'is_ancestor_of', 'is_ascii', 'is_bracket', 'is_digit', 'is_left_punct', 'is_lower', 'is_oov', 'is_punct', 'is_quote', 'is_right_punct', 'is_space', 'is_stop', 'is_title', 'lang', 'lang_', 'left_edge', 'lefts', 'lemma', 'lemma_', 'lex_id', 'like_email', 'like_num', 'like_url', 'lower', 'lower_', 'n_lefts', 'n_rights', 'nbor', 'norm', 'norm_', 'orth', 'orth_', 'pos', 'pos_', 'prefix', 'prefix_', 'prob', 'rank', 'repvec', 'right_edge', 'rights', 'sentiment', 'shape', 'shape_', 'similarity', 'string', 'subtree', 'suffix', 'suffix_', 'tag', 'tag_', 'text', 'text_with_ws', 'vector', 'vector_norm', 'vocab', 'whitespace_']

# The POS tag assigned by spacy's model.
>>> doc[0].tag_ 
'DT'

# Let's try to override it.
>>> doc[0].tag_ = 'NN'

# It works!!!
>>> doc[0].tag_
'NN'

# What if we overwrite index of the tag_ rather than the form?
>>> doc[0].tag
474
>>> doc[0].tag = 123
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "spacy/tokens/token.pyx", line 206, in spacy.tokens.token.Token.tag.__set__ (spacy/tokens/token.cpp:6755)
  File "spacy/morphology.pyx", line 64, in spacy.morphology.Morphology.assign_tag (spacy/morphology.cpp:4540)
KeyError: 123
>>> doc[0].tag = 352
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "spacy/tokens/token.pyx", line 206, in spacy.tokens.token.Token.tag.__set__ (spacy/tokens/token.cpp:6755)
  File "spacy/morphology.pyx", line 64, in spacy.morphology.Morphology.assign_tag (spacy/morphology.cpp:4540)
KeyError: 352

So somehow, if you change the string form of the POS tag (.tag_), the change sticks, but there is no principled way to find the correct integer key, because those keys are auto-generated from Cython properties.
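The principled route, rather than guessing integers like 123 or 352, is to go through the vocab's StringStore, which maps annotation strings to the integer ids spaCy uses internally. As a toy illustration of that two-way mapping (this is a simplified stand-in, not spaCy's actual implementation):

```python
class ToyStringStore:
    """A toy two-way string <-> id mapping, mimicking how spaCy's
    vocab.strings resolves 'NN' to an integer id and back."""

    def __init__(self):
        self._by_str = {}
        self._by_id = []

    def __getitem__(self, key):
        if isinstance(key, str):
            # String lookup: intern the string and return its id.
            if key not in self._by_str:
                self._by_str[key] = len(self._by_id)
                self._by_id.append(key)
            return self._by_str[key]
        # Integer lookup: map the id back to its string.
        return self._by_id[key]

# With a real model, the analogous call would be:
#   doc[0].tag = nlp.vocab.strings['NN']   # same effect as doc[0].tag_ = 'NN'
```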

Let's look at another annotation, .orth_:

>>> doc[0].orth_
'This'
>>> doc[0].orth_ = 'that'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: attribute 'orth_' of 'spacy.tokens.token.Token' objects is not writable

Now we see that some Token annotations like .orth_ are protected from being overwritten. That is most likely because changing them would break the way tokens map back to their original offsets in the input string.

Ans: It seems that some attributes of the Token object can be changed, while others cannot.

Q: So which Token attributes can be modified, and which cannot?

An easy way to check is to look for a __set__ function on the Cython properties in https://github.com/explosion/spaCy/blob/master/spacy/tokens/token.pyx#L32.

Properties that define __set__ are mutable, and those are most likely the Token attributes that can be overwritten/changed.

For example:

property lemma_:
    def __get__(self):
        return self.vocab.strings[self.c.lemma]
    def __set__(self, unicode lemma_):
        self.c.lemma = self.vocab.strings[lemma_]

property pos_:
    def __get__(self):
        return parts_of_speech.NAMES[self.c.pos]

property tag_:
    def __get__(self):
        return self.vocab.strings[self.c.tag]
    def __set__(self, tag):
        self.tag = self.vocab.strings[tag]

We can see that .tag_ and .lemma_ are mutable, but .pos_ is not:

>>> doc[0].lemma_
'this'
>>> doc[0].lemma_ = 'that'
>>> doc[0].lemma_
'that'

>>> doc[0].tag_ 
'DT'
>>> doc[0].tag_ = 'NN'
>>> doc[0].tag_
'NN'

>>> doc[0].pos_
'NOUN'
>>> doc[0].pos_ = 'VERB'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: attribute 'pos_' of 'spacy.tokens.token.Token' objects is not writable
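Instead of reading the Cython source, you can also probe writability at runtime: attempt the assignment and catch the error. A small helper sketch (the attribute names and the spaCy call in the trailing comment are illustrative):

```python
def writable_attrs(obj, names, value):
    """Return the subset of `names` that can actually be assigned on `obj`."""
    ok = []
    for name in names:
        try:
            setattr(obj, name, value)
            ok.append(name)
        except (AttributeError, KeyError):
            # Read-only Cython properties raise AttributeError;
            # bad integer keys raise KeyError (as seen with .tag above).
            pass
    return ok

# With spaCy, writable_attrs(doc[0], ['lemma_', 'tag_', 'pos_', 'orth_'], 'NN')
# should report ['lemma_', 'tag_'] on the version discussed here.
```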

Q: Can I save those tokens in my own dictionary?

I'm not quite sure what that means, but perhaps you mean pickle.

Somehow, the default pickle does strange things with Cython objects, so you may need other ways to save the spacy.tokens.doc.Doc and spacy.tokens.token.Token objects that spaCy creates, i.e.:

>>> import pickle
>>> import spacy

>>> nlp = spacy.load('en')
>>> doc = nlp('This is a foo bar sentence.')

>>> doc
This is a foo bar sentence.

# Pickle the Doc object.
>>> pickle.dump(doc, open('spacy_processed_doc.pkl', 'wb'))

# Now you see me.
>>> doc
This is a foo bar sentence.
# Now you don't
>>> doc = None
>>> doc

# Let's load the saved pickle.
>>> doc = pickle.load(open('spacy_processed_doc.pkl', 'rb'))
>>> doc

>>> type(doc)
<class 'spacy.tokens.doc.Doc'>
>>> doc[0]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "spacy/tokens/doc.pyx", line 185, in spacy.tokens.doc.Doc.__getitem__ (spacy/tokens/doc.cpp:5550)
TypeError: 'NoneType' object is not subscriptable
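Given that pickling the Doc loses the token data here, a more robust route is to extract plain-Python values first and serialize those, e.g. as JSON. A minimal sketch (`doc_to_records` is a hypothetical helper; it accepts any iterable of objects exposing these attributes):

```python
import json

def doc_to_records(tokens):
    """Extract a JSON-serializable list of per-token annotation dicts."""
    return [{"text": t.orth_, "lemma": t.lemma_, "tag": t.tag_}
            for t in tokens]

# With spaCy:
#   records = doc_to_records(doc)
#   with open('spacy_processed_doc.json', 'w') as f:
#       json.dump(records, f)
#   # ...and json.load() later gives the annotations back as plain dicts.
```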

Regarding "python - How do I create a dictionary for spaCy NLP?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/44096479/
