java - Why does the Stanford CoreNLP server split named entities into individual tokens?

Tags: java nlp stanford-nlp

I use this command to POST the data (copy-pasted from the Stanford website):

wget --post-data 'Barack Obama was President of the United States of America in 2016' 'localhost:9000/?properties={"annotators": "ner", "outputFormat": "json"}' -O out.json

The response looks like this:

{
    "sentences": [{
        "index": 0,
        "tokens": [{
            "index": 1,
            "word": "Barack",
            "originalText": "Barack",
            "lemma": "Barack",
            "characterOffsetBegin": 0,
            "characterOffsetEnd": 6,
            "pos": "NNP",
            "ner": "PERSON",
            "before": "",
            "after": " "
        }, {
            "index": 2,
            "word": "Obama",
            "originalText": "Obama",
            "lemma": "Obama",
            "characterOffsetBegin": 7,
            "characterOffsetEnd": 12,
            "pos": "NNP",
            "ner": "PERSON",
            "before": " ",
            "after": " "
        }, {
            "index": 3,
            "word": "was",
            "originalText": "was",
            "lemma": "be",
            "characterOffsetBegin": 13,
            "characterOffsetEnd": 16,
            "pos": "VBD",
            "ner": "O",
            "before": " ",
            "after": " "
        }, {
            "index": 4,
            "word": "President",
            "originalText": "President",
            "lemma": "President",
            "characterOffsetBegin": 17,
            "characterOffsetEnd": 26,
            "pos": "NNP",
            "ner": "O",
            "before": " ",
            "after": " "
        }, {
            "index": 5,
            "word": "of",
            "originalText": "of",
            "lemma": "of",
            "characterOffsetBegin": 27,
            "characterOffsetEnd": 29,
            "pos": "IN",
            "ner": "O",
            "before": " ",
            "after": " "
        }, {
            "index": 6,
            "word": "the",
            "originalText": "the",
            "lemma": "the",
            "characterOffsetBegin": 30,
            "characterOffsetEnd": 33,
            "pos": "DT",
            "ner": "O",
            "before": " ",
            "after": " "
        }, {
            "index": 7,
            "word": "United",
            "originalText": "United",
            "lemma": "United",
            "characterOffsetBegin": 34,
            "characterOffsetEnd": 40,
            "pos": "NNP",
            "ner": "LOCATION",
            "before": " ",
            "after": " "
        }, {
            "index": 8,
            "word": "States",
            "originalText": "States",
            "lemma": "States",
            "characterOffsetBegin": 41,
            "characterOffsetEnd": 47,
            "pos": "NNPS",
            "ner": "LOCATION",
            "before": " ",
            "after": " "
        }, {
            "index": 9,
            "word": "of",
            "originalText": "of",
            "lemma": "of",
            "characterOffsetBegin": 48,
            "characterOffsetEnd": 50,
            "pos": "IN",
            "ner": "LOCATION",
            "before": " ",
            "after": " "
        }, {
            "index": 10,
            "word": "America",
            "originalText": "America",
            "lemma": "America",
            "characterOffsetBegin": 51,
            "characterOffsetEnd": 58,
            "pos": "NNP",
            "ner": "LOCATION",
            "before": " ",
            "after": " "
        }, {
            "index": 11,
            "word": "in",
            "originalText": "in",
            "lemma": "in",
            "characterOffsetBegin": 59,
            "characterOffsetEnd": 61,
            "pos": "IN",
            "ner": "O",
            "before": " ",
            "after": " "
        }, {
            "index": 12,
            "word": "2016",
            "originalText": "2016",
            "lemma": "2016",
            "characterOffsetBegin": 62,
            "characterOffsetEnd": 66,
            "pos": "CD",
            "ner": "DATE",
            "normalizedNER": "2016",
            "before": " ",
            "after": "",
            "timex": {
                "tid": "t1",
                "type": "DATE",
                "value": "2016"
            }
        }]
    }]
}

Am I doing something wrong? I have Java client code that at least recognizes Barack Obama and United States of America as complete named entities, but the server seems to treat each token separately. Any ideas?

Best answer

You should add the entitymentions annotator to your list of annotators.
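For example, the same request with entitymentions appended to the annotators list (the server should pull in the annotators it depends on automatically, just as it did for ner above) might look like this:

wget --post-data 'Barack Obama was President of the United States of America in 2016' 'localhost:9000/?properties={"annotators": "ner,entitymentions", "outputFormat": "json"}' -O out.json

Each sentence in the response should then also carry an "entitymentions" array that groups consecutive tokens into complete mentions, roughly like this (abridged sketch; the exact field set depends on your CoreNLP version):

{
    "sentences": [{
        "index": 0,
        "tokens": [ ... ],
        "entitymentions": [{
            "text": "Barack Obama",
            "ner": "PERSON",
            "characterOffsetBegin": 0,
            "characterOffsetEnd": 12
        }, {
            "text": "United States of America",
            "ner": "LOCATION",
            "characterOffsetBegin": 34,
            "characterOffsetEnd": 58
        }, {
            "text": "2016",
            "ner": "DATE",
            "characterOffsetBegin": 62,
            "characterOffsetEnd": 66
        }]
    }]
}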

Regarding "java - Why does the Stanford CoreNLP server split named entities into individual tokens?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/43985744/
