I am trying to use the coreference module of the Stanford CoreNLP pipeline, but I keep running into an OutOfMemory error in Java. I already increased the heap size (via Run -> Run Configurations -> VM Arguments in Eclipse) and set it to -Xmx3g -Xms1g. I even tried -Xmx12g -Xms4g, but that did not help either. I am using Eclipse Juno on OS X 10.8.5 with Java 1.6 on a 64-bit machine. Does anyone know what else I could try?
I am using the example code from the website (http://nlp.stanford.edu/software/corenlp.shtml):
import java.util.*;
import edu.stanford.nlp.dcoref.CorefChain;
import edu.stanford.nlp.dcoref.CorefCoreAnnotations.CorefChainAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.*;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.*;
import edu.stanford.nlp.semgraph.SemanticGraph;
import edu.stanford.nlp.semgraph.SemanticGraphCoreAnnotations.CollapsedCCProcessedDependenciesAnnotation;
import edu.stanford.nlp.trees.Tree;
import edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation;
import edu.stanford.nlp.util.CoreMap;

// Build the full pipeline, including the deterministic coreference annotator (dcoref).
Properties props = new Properties();
props.put("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref");
StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

// Annotate a short example text.
String text = "Stanford University is located in California. It is a great university";
Annotation document = new Annotation(text);
pipeline.annotate(document);

// Read the per-token and per-sentence annotations for each sentence.
List<CoreMap> sentences = document.get(SentencesAnnotation.class);
for (CoreMap sentence : sentences) {
    for (CoreLabel token : sentence.get(TokensAnnotation.class)) {
        String word = token.get(TextAnnotation.class);
        String pos = token.get(PartOfSpeechAnnotation.class);
        String ne = token.get(NamedEntityTagAnnotation.class);
    }
    Tree tree = sentence.get(TreeAnnotation.class);
    SemanticGraph dependencies = sentence.get(CollapsedCCProcessedDependenciesAnnotation.class);
}
Map<Integer, CorefChain> graph = document.get(CorefChainAnnotation.class);
I get this error:
Adding annotator tokenize
Adding annotator ssplit
Adding annotator pos
Reading POS tagger model from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [0.9 sec].
Adding annotator lemma
Adding annotator ner
Loading classifier from edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz ... done [3.1 sec].
Initializing JollyDayHoliday for sutime
Reading TokensRegex rules from edu/stanford/nlp/models/sutime/defs.sutime.txt
Reading TokensRegex rules from edu/stanford/nlp/models/sutime/english.sutime.txt
Jan 9, 2014 10:39:37 AM edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor appendRules
INFO: Ignoring inactive rule: temporal-composite-8:ranges
Reading TokensRegex rules from edu/stanford/nlp/models/sutime/english.holidays.sutime.txt
Adding annotator dcoref
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.lang.String.substring(String.java:1939)
at java.lang.String.subSequence(String.java:1972)
at java.util.regex.Pattern.split(Pattern.java:1002)
at java.lang.String.split(String.java:2292)
at java.lang.String.split(String.java:2334)
at edu.stanford.nlp.dcoref.Dictionaries.loadGenderNumber(Dictionaries.java:382)
at edu.stanford.nlp.dcoref.Dictionaries.<init>(Dictionaries.java:553)
at edu.stanford.nlp.dcoref.Dictionaries.<init>(Dictionaries.java:463)
at edu.stanford.nlp.dcoref.SieveCoreferenceSystem.<init>(SieveCoreferenceSystem.java:282)
at edu.stanford.nlp.pipeline.DeterministicCorefAnnotator.<init>(DeterministicCorefAnnotator.java:52)
at edu.stanford.nlp.pipeline.StanfordCoreNLP$11.create(StanfordCoreNLP.java:775)
at edu.stanford.nlp.pipeline.AnnotatorPool.get(AnnotatorPool.java:81)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.construct(StanfordCoreNLP.java:260)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:127)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:123)
at extraction.BaselineApproach.main(BaselineApproach.java:88)
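The trace shows that the heap is exhausted while the dcoref annotator loads its gender/number dictionary (Dictionaries.loadGenderNumber), i.e. during pipeline construction, before any text is annotated. One way to narrow this down (a sketch of my own, not shown in the original question) is to build the same pipeline without dcoref and confirm that the remaining annotators fit in the configured heap:

// Narrowing test (assumption, not part of the original code): drop dcoref so that
// only the lighter annotators are loaded. If this runs, the OOM is specific to the
// dcoref dictionaries rather than to the rest of the pipeline.
Properties testProps = new Properties();
testProps.put("annotators", "tokenize, ssplit, pos, lemma, ner, parse");
StanfordCoreNLP testPipeline = new StanfordCoreNLP(testProps);
testPipeline.annotate(new Annotation("Stanford University is located in California."));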
Best answer
The problem turned out to be neither Stanford CoreNLP nor Java, but Eclipse. Here is what I tried:
It later turned out that Eclipse was not using the VM settings I had specified.
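A small check makes this kind of mismatch visible. The snippet below (an illustration, not quoted from the original answer) prints the maximum heap the running JVM will actually use, so you can see whether the -Xmx value from the run configuration was applied:

// Prints the effective maximum heap size in megabytes. If -Xmx3g had been applied,
// this would report roughly 3000 MB rather than the much smaller JVM default.
public class HeapCheck {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MB");
    }
}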
I then tried a few things to fix it; when those did not work, I tried a few more; and when that also failed, I reinstalled Eclipse. Now everything is back to normal: I can set default VM settings and override them for specific applications.
Regarding "java - Using Stanford CoreNLP - Java heap space", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/21018382/