python-2.7 - Testing an NLTK classifier on a specific file

Tags: python-2.7 nlp classification nltk text-classification

The following code runs a Naive Bayes movie review classifier.
The code generates a list of the most informative features.

Note: the **movie review** folder is in nltk.

import string
import nltk
from itertools import chain
from nltk.corpus import stopwords
from nltk.probability import FreqDist
from nltk.classify import NaiveBayesClassifier
from nltk.corpus import movie_reviews
stop = stopwords.words('english')

documents = [([w for w in movie_reviews.words(i) if w.lower() not in stop and w.lower() not in string.punctuation], i.split('/')[0]) for i in movie_reviews.fileids()]


word_features = FreqDist(chain(*[i for i,j in documents]))
word_features = word_features.keys()[:100]

numtrain = int(len(documents) * 90 / 100)
train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents[:numtrain]]
test_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag  in documents[numtrain:]]

classifier = NaiveBayesClassifier.train(train_set)
print nltk.classify.accuracy(classifier, test_set)
classifier.show_most_informative_features(5)

The code above is from alvas.

How do I test the classifier on a specific file?

Please let me know if my question is ambiguous or wrong.

Best Answer

First, read these answers carefully; they contain parts of the answer you need and also briefly explain what the classifier does and how it works in NLTK:

  • nltk NaiveBayesClassifier training for sentiment analysis
  • Using my own corpus instead of movie_reviews corpus for Classification in NLTK
  • http://www.nltk.org/book/ch06.html


  • Testing the classifier on annotated data

    Now to answer your question. We assume that your question is a follow-up of this question: Using my own corpus instead of movie_reviews corpus for Classification in NLTK

    If your test texts are structured the same way as the movie_review corpus, then you can simply read the test data in the same way as you read the training data:

    In case the explanation of the code is unclear, here's a walkthrough:
    from nltk.corpus.reader import CategorizedPlaintextCorpusReader

    traindir = '/home/alvas/my_movie_reviews'
    mr = CategorizedPlaintextCorpusReader(traindir, r'(?!\.).*\.txt', cat_pattern=r'(neg|pos)/.*', encoding='ascii')
    

    The two lines above read the directory my_movie_reviews, which has the following structure:
    \my_movie_reviews
        \pos
            123.txt
            234.txt
        \neg
            456.txt
            789.txt
        README
    
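
    Just to make the reader's behaviour concrete, here is a minimal sanity-check sketch (assuming the example directory layout above); the cat_pattern pulls the category out of the pos/neg part of each file path:
    from nltk.corpus.reader import CategorizedPlaintextCorpusReader

    mr = CategorizedPlaintextCorpusReader('/home/alvas/my_movie_reviews', r'(?!\.).*\.txt',
                                          cat_pattern=r'(neg|pos)/.*', encoding='ascii')
    print(mr.categories())               # ['neg', 'pos']
    print(mr.fileids('pos'))             # ['pos/123.txt', 'pos/234.txt']
    print(mr.words('pos/123.txt')[:10])  # first few tokens of one review
    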

    Then the next line extracts the documents together with the pos/neg tag that is part of the directory structure.
    documents = [([w for w in mr.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr.fileids()]
    

    Here is an explanation of the line above:
    # This extracts the pos/neg tag
    labels = [i.split('/')[0] for i in mr.fileids()]
    # Reads the words from the corpus through the CategorizedPlaintextCorpusReader object
    words = [w for w in mr.words(i)]
    # Removes the stopwords
    words = [w for w in mr.words(i) if w.lower() not in stop]
    # Removes the punctuation
    words = [w for w in mr.words(i) if w not in string.punctuation]
    # Removes the stopwords and punctuations
    words = [w for w in mr.words(i) if w.lower() not in stop and w not in string.punctuation]
    # Removes the stopwords and punctuations and puts them in a tuple with the pos/neg labels
    documents = [([w for w in mr.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr.fileids()]
    
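
    To be explicit about the shape of the data: each element of documents is a (token list, label) tuple, something like the following (the token values are illustrative only):
    # One element of `documents` (values are illustrative):
    # ([u'film', u'great', u'acting', ...], 'pos')
    tokens, tag = documents[0]
    print(tag)         # 'pos' or 'neg'
    print(tokens[:5])  # the first few kept (non-stopword, non-punctuation) tokens
    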

    The same process should be applied when you read the test data!!!

    Now on to the feature processing:

    The following lines extract the top 100 features for the classifier:
    # Extract the words features and put them into FreqDist
    # object which records the no. of times each unique word occurs
    word_features = FreqDist(chain(*[i for i,j in documents]))
    # Cuts the FreqDist to the top 100 words in terms of their counts.
    word_features = word_features.keys()[:100]
    
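
    One caveat that is not in the original answer: slicing FreqDist.keys() like this only yields the most frequent words on Python 2.7 with older NLTK, where keys() came back sorted by count. On NLTK 3+ (where FreqDist is a collections.Counter subclass), a safer, roughly equivalent sketch is:
    # Take the 100 most frequent words explicitly (works on newer NLTK too).
    word_features = FreqDist(chain(*[i for i, j in documents]))
    word_features = [word for word, count in word_features.most_common(100)]
    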

    Next, process the documents into a classifiable (featurized) format:
    # Splits the training data into training size and testing size
    numtrain = int(len(documents) * 90 / 100)
    # Process the documents for training data
    train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents[:numtrain]]
    # Process the documents for testing data
    test_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag  in documents[numtrain:]]
    

    Now to explain the long list comprehensions for train_set and test_set:
    # Take the first `numtrain` no. of documents
    # as training documents
    train_docs = documents[:numtrain]
    # Takes the rest of the documents as test documents.
    test_docs = documents[numtrain:]
    # These extract the feature sets for the classifier
    # please look at the full explanation on https://stackoverflow.com/questions/20827741/nltk-naivebayesclassifier-training-for-sentiment-analysis/
    train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag  in train_docs]
    
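
    To make the feature-set format concrete, each entry of train_set (and test_set) is a ({word: True/False, ...}, label) pair, where the dict records which of the 100 word_features occur in that document. A small illustrative check:
    feats, tag = train_set[0]
    print(tag)         # 'pos' or 'neg'
    print(len(feats))  # 100 -- one boolean per word feature
    # feats looks like {'film': True, 'awful': False, ...} (illustrative keys)
    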

    The feature extraction for the test documents needs to process the documents in exactly the same way as above!!!

    So here is how you can read the test data:
    import string
    from nltk.corpus import stopwords
    from nltk.corpus.reader import CategorizedPlaintextCorpusReader

    stop = stopwords.words('english')
    
    # Reads the training data.
    traindir = '/home/alvas/my_movie_reviews'
    mr = CategorizedPlaintextCorpusReader(traindir, r'(?!\.).*\.txt', cat_pattern=r'(neg|pos)/.*', encoding='ascii')
    
    # Converts training data into tuples of [(words,label), ...]
    documents = [([w for w in mr.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr.fileids()]
    
    # Now do the same for the testing data.
    testdir = '/home/alvas/test_reviews'
    mr_test = CategorizedPlaintextCorpusReader(testdir, r'(?!\.).*\.txt', cat_pattern=r'(neg|pos)/.*', encoding='ascii')
    # Converts testing data into tuples of [(words,label), ...]
    test_documents = [([w for w in mr_test.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr_test.fileids()]
    

    Then simply continue with the processing steps described above, and just do this to get the label for the test documents, as in @yvespeirsman's answer:
    #### FOR TRAINING DATA ####
    import string
    from itertools import chain
    from nltk.corpus import stopwords
    from nltk.corpus.reader import CategorizedPlaintextCorpusReader
    from nltk.probability import FreqDist
    from nltk.classify import NaiveBayesClassifier

    stop = stopwords.words('english')
    
    # Reads the training data.
    traindir = '/home/alvas/my_movie_reviews'
    mr = CategorizedPlaintextCorpusReader(traindir, r'(?!\.).*\.txt', cat_pattern=r'(neg|pos)/.*', encoding='ascii')
    
    # Converts training data into tuples of [(words,label), ...]
    documents = [([w for w in mr.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr.fileids()]
    # Extract training features.
    word_features = FreqDist(chain(*[i for i,j in documents]))
    word_features = word_features.keys()[:100]
    # Assuming that you're using full data set
    # since your test set is different.
    train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag  in documents]
    
    #### TRAINS THE TAGGER ####
    # Train the tagger
    classifier = NaiveBayesClassifier.train(train_set)
    
    #### FOR TESTING DATA ####
    # Now do the same reading and processing for the testing data.
    testdir = '/home/alvas/test_reviews'
    mr_test = CategorizedPlaintextCorpusReader(testdir, r'(?!\.).*\.txt', cat_pattern=r'(neg|pos)/.*', encoding='ascii')
    # Converts testing data into tuples of [(words,label), ...]
    test_documents = [([w for w in mr_test.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr_test.fileids()]
    # Reads test data into features:
    test_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag  in test_documents]
    
    #### Evaluate the classifier ####
    for doc, gold_label in test_set:
        tagged_label = classifier.classify(doc)
        if tagged_label == gold_label:
            print("Woohoo, correct")
        else:
            print("Boohoo, wrong")
    
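
    Since test_set above is already in the (features, label) format that NLTK expects, you can also get an overall score instead of the per-document print-outs, e.g.:
    import nltk

    # Overall accuracy on the annotated test set, plus the most informative features.
    print(nltk.classify.accuracy(classifier, test_set))
    classifier.show_most_informative_features(5)
    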

    If the code and explanation above make no sense to you, then you MUST read this tutorial before proceeding: http://www.nltk.org/howto/classify.html

    Now assume that your test data has no annotations, i.e. your test.txt is not in a directory structure like movie_review but is just a plain text file:
    \test_movie_reviews
        \1.txt
        \2.txt
    

    Then there is no point reading it into a categorized corpus; you can simply read and tag the documents, i.e.:
    for infile in os.listdir('test_movie_reviews'):
        for line in open(os.path.join('test_movie_reviews', infile), 'r'):
            # Tokenize and featurize each line before classifying it.
            doc = word_tokenize(line.lower())
            featurized_doc = {i: (i in doc) for i in word_features}
            tagged_label = classifier.classify(featurized_doc)
    

    But you CANNOT evaluate the results without annotations, so you can't check the tags with an if-else; also, you need to tokenize your text yourself if you are not using the CategorizedPlaintextCorpusReader.

    If you just want to tag a plain text file test.txt:
    import string
    from itertools import chain
    from nltk.corpus import stopwords
    from nltk.probability import FreqDist
    from nltk.classify import NaiveBayesClassifier
    from nltk.corpus import movie_reviews
    from nltk import word_tokenize
    
    stop = stopwords.words('english')
    
    # Extracts the documents.
    documents = [([w for w in movie_reviews.words(i) if w.lower() not in stop and w.lower() not in string.punctuation], i.split('/')[0]) for i in movie_reviews.fileids()]
    # Extract the features.
    word_features = FreqDist(chain(*[i for i,j in documents]))
    word_features = word_features.keys()[:100]
    # Converts documents to features.
    train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents]
    # Train the classifier.
    classifier = NaiveBayesClassifier.train(train_set)
    
    # Tag the test file.
    with open('test.txt', 'r') as fin:
        for test_sentence in fin:
            # Tokenize the line.
            doc = word_tokenize(test_sentence.lower())
            featurized_doc = {i:(i in doc) for i in word_features}
            tagged_label = classifier.classify(featurized_doc)
            print(tagged_label)
    
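
    The loop above tags test.txt line by line. If you instead want a single label for the whole file (closer to how each movie_reviews document is handled), a minimal sketch is to read the entire file as one document:
    # Tag the whole file as a single document instead of line by line.
    with open('test.txt', 'r') as fin:
        doc = word_tokenize(fin.read().lower())
    featurized_doc = {i: (i in doc) for i in word_features}
    print(classifier.classify(featurized_doc))
    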

    Once again, please don't just copy and paste the solution; try to understand how and why it works.

    Regarding python-2.7 - Testing an NLTK classifier on a specific file, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/29301952/
