I am trying to use NLTK to parse Russian text, but it does not handle abbreviations and initials such as А. И. Манташева and Я. Вышинский.
Instead, it breaks the text like this:
организовывал забастовки и демонстрации, поднимал рабочих на бакинских предприятиях А.
И.
Манташева.
It does the same thing when I use russian.pickle from https://github.com/mhq/train_punkt.
Is this a general NLTK limitation, or is it language-specific?
Best answer
As some of the comments hinted, what you want to use is the Punkt sentence segmenter/tokenizer.
NLTK or language-specific?
Neither. As you have realized, you cannot simply split on every period. NLTK ships with several Punkt segmenters trained on different languages. However, if you are running into problems, your best bet is to give the Punkt tokenizer a larger training corpus to learn from.
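If all you need is to stop Punkt from splitting after known initials, a lighter-weight option (a sketch of my own, not part of the original answer) is to seed a PunktSentenceTokenizer with an explicit abbreviation list via PunktParameters, with no training pass at all. The abbreviation set below (all lowercase Cyrillic single letters) is an assumption chosen to cover one-letter initials:

```python
# Sketch: treat single Cyrillic letters followed by a period ("А.", "И.")
# as abbreviations rather than sentence boundaries.
from nltk.tokenize.punkt import PunktParameters, PunktSentenceTokenizer

params = PunktParameters()
# Abbreviation types are stored lowercased and without the trailing period.
params.abbrev_types = set(u'абвгдежзиклмнопрстуфхцчшщэюя')

tokenizer = PunktSentenceTokenizer(params)
sentences = tokenizer.tokenize(
    u'Рабочих поднимал А. И. Манташева. Потом все разошлись.')
for s in sentences:
    print(s)
```

This keeps "А. И. Манташева" inside one sentence, at the cost of never breaking a sentence right after any single-letter word; a trained model is still the more robust fix.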
Documentation links
- https://nltk.googlecode.com/svn/trunk/doc/howto/tokenize.html
- https://nltk.googlecode.com/svn/trunk/doc/api/nltk.tokenize.punkt.PunktSentenceTokenizer-class.html
Implementation example
Here is some code to point you in the right direction. You should be able to do the same for yourself by supplying Russian text files. One possible source is the Russian Wikipedia database dump, but I will leave that to you as a potential secondary problem.
import logging
try:
    import cPickle as pickle
except ImportError:
    import pickle

import nltk


def create_punkt_sent_detector(fnames, punkt_fname, progress_count=None):
    """Makes a pass through the corpus to train a Punkt sentence segmenter.

    Args:
        fnames: List of filenames to be used for training.
        punkt_fname: Filename to save the trained Punkt sentence segmenter.
        progress_count: Display a progress count every this many documents.
    """
    logger = logging.getLogger('create_punkt_sent_detector')
    punkt = nltk.tokenize.punkt.PunktTrainer()
    logger.info("Training punkt sentence detector")

    doc_count = 0
    try:
        for fname in fnames:
            with open(fname, mode='rb') as f:
                # Punkt expects text, so decode the raw bytes.
                punkt.train(f.read().decode('utf-8'),
                            finalize=False, verbose=False)
            doc_count += 1
            if progress_count and doc_count % progress_count == 0:
                logger.debug('Documents processed: %i', doc_count)
    except KeyboardInterrupt:
        print('KeyboardInterrupt: Stopping the reading of the corpus early!')

    logger.info('Now finalizing Punkt training.')
    punkt.finalize_training(verbose=True)
    learned = punkt.get_params()
    sbd = nltk.tokenize.punkt.PunktSentenceTokenizer(learned)
    with open(punkt_fname, mode='wb') as f:
        pickle.dump(sbd, f, protocol=pickle.HIGHEST_PROTOCOL)
    return sbd


if __name__ == '__main__':
    punkt_fname = 'punkt_russian.pickle'
    try:
        with open(punkt_fname, mode='rb') as f:
            sent_detector = pickle.load(f)
    except (IOError, pickle.UnpicklingError):
        sent_detector = None

    if sent_detector is None:
        corpora = ['russian-1.txt', 'russian-2.txt']
        sent_detector = create_punkt_sent_detector(fnames=corpora,
                                                   punkt_fname=punkt_fname)

    tokenized_text = sent_detector.tokenize("some russian text.",
                                            realign_boundaries=True)
    print('\n'.join(tokenized_text))
On the question "python - Can NLTK recognize initials followed by a period?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/14088688/