python - How to vectorize bigrams with the hashing trick in scikit-learn?

Tags: python machine-learning scipy nlp scikit-learn

I have some bigrams, say: [('word','word'),('word','word'),...,('word','word')]. How can I use scikit's HashingVectorizer to create a feature vector that I can subsequently feed to a classification algorithm, e.g. SVC or Naive Bayes or any other kind of classifier?

Best Answer

Firstly, you have to understand what the different vectorizers do. Most vectorizers are based on the bag-of-words approach, where documents are tokenized and the tokens are mapped onto a matrix.

From the sklearn documentation, CountVectorizer and HashingVectorizer both:

Convert a collection of text documents to a matrix of token counts

For example, take these two sentences:

The Fulton County Grand Jury said Friday an investigation of Atlanta's recent primary election produced no evidence that any irregularities took place .

The jury further said in term-end presentments that the City Executive Committee , which had over-all charge of the election , `` deserves the praise and thanks of the City of Atlanta '' for the manner in which the election was conducted .

and this crude vectorizer:

from collections import Counter
from itertools import chain
from string import punctuation

from nltk.corpus import brown, stopwords

# Take the first two Brown corpus sentences as toy data (lists of words).
sentences = brown.sents()[:2]

# Extract the content words as features, i.e. columns.
vocabulary = list(chain(*sentences))
stops = stopwords.words('english') + list(punctuation)
vocab_nostop = [i.lower() for i in vocabulary if i not in stops]

# Create a matrix from the sentences
matrix = [Counter([w for w in words if w in vocab_nostop]) for words in sentences]

print(matrix)

they become:

[Counter({u"''": 1, u'``': 1, u'said': 1, u'took': 1, u'primary': 1, u'evidence': 1, u'produced': 1, u'investigation': 1, u'place': 1, u'election': 1, u'irregularities': 1, u'recent': 1}), Counter({u'the': 6, u'election': 2, u'presentments': 1, u'``': 1, u'said': 1, u'jury': 1, u'conducted': 1, u"''": 1, u'deserves': 1, u'charge': 1, u'over-all': 1, u'praise': 1, u'manner': 1, u'term-end': 1, u'thanks': 1})]

As you can see, given a very large dataset this would be rather inefficient, which is why the sklearn developers built more efficient code. One of the most important features of sklearn is that you don't even need to load the dataset into memory before vectorizing it.
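
As a point of comparison, here is a minimal sketch of what the crude vectorizer above boils down to in sklearn's API (assuming NLTK with its brown and stopwords corpora is installed, and a scikit-learn version recent enough to have get_feature_names_out()):

from nltk.corpus import brown
from sklearn.feature_extraction.text import CountVectorizer

# Rebuild the two Brown sentences as plain strings.
sentences = [' '.join(sent) for sent in brown.sents()[:2]]

# CountVectorizer tokenizes, drops English stop words and counts tokens,
# returning a scipy sparse matrix instead of a list of Counters.
vectorizer = CountVectorizer(stop_words='english')
matrix = vectorizer.fit_transform(sentences)

print(matrix.shape)  # (2, n_features)
print(vectorizer.get_feature_names_out())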

Since it's unclear what your task is, I assume you're after general usage, so let's say you're using it for language identification.

Let's say your input file with the training data is train.txt:

Pošto je EULEX obećao da će obaviti istragu o prošlosedmičnom izbijanju nasilja na sjeveru Kosova, taj incident predstavlja još jedan ispit kapaciteta misije da doprinese jačanju vladavine prava.
De todas as provações que teve de suplantar ao longo da vida, qual foi a mais difícil? O início. Qualquer começo apresenta dificuldades que parecem intransponíveis. Mas tive sempre a minha mãe do meu lado. Foi ela quem me ajudou a encontrar forças para enfrentar as situações mais decepcionantes, negativas, as que me punham mesmo furiosa.
Al parecer, Andrea Guasch pone que una relación a distancia es muy difícil de llevar como excusa. Algo con lo que, por lo visto, Alex Lequio no está nada de acuerdo. ¿O es que más bien ya ha conseguido la fama que andaba buscando?
Vo väčšine golfových rezortov ide o veľký komplex niekoľkých ihrísk blízko pri sebe spojených s hotelmi a ďalšími možnosťami trávenia voľného času – nie vždy sú manželky či deti nadšenými golfistami, a tak potrebujú iný druh vyžitia. Zaujímavé kombinácie ponúkajú aj rakúske, švajčiarske či talianske Alpy, kde sa dá v zime lyžovať a v lete hrať golf pod vysokými alpskými končiarmi.

And your corresponding labels are Bosnian, Portuguese, Spanish and Slovak, i.e.

[bs,pt,es,sr]

Here's one way to do it, using CountVectorizer and a Naive Bayes classifier. The following example is from https://github.com/alvations/bayesline , for the DSL shared task.

Let's start with the vectorizer. The vectorizer takes the input file, converts the training set into a vectorized matrix and initializes the vectorizer (i.e. the features):

import codecs

from sklearn.feature_extraction.text import CountVectorizer

trainfile = 'train.txt'

# Vectorizing data.
word_vectorizer = CountVectorizer(analyzer='word')
trainset = word_vectorizer.fit_transform(codecs.open(trainfile,'r','utf8'))
tags = ['bs','pt','es','sr']
print(word_vectorizer.get_feature_names())  # newer scikit-learn: get_feature_names_out()

[Output]:

[u'acuerdo', u'aj', u'ajudou', u'al', u'alex', u'algo', u'alpsk\xfdmi', u'alpy', u'andaba', u'andrea', u'ao', u'apresenta', u'as', u'bien', u'bl\xedzko', u'buscando', u'come\xe7o', u'como', u'con', u'conseguido', u'da', u'de', u'decepcionantes', u'deti', u'dificuldades', u'dif\xedcil', u'distancia', u'do', u'doprinese', u'druh', u'd\xe1', u'ela', u'encontrar', u'enfrentar', u'es', u'est\xe1', u'eulex', u'excusa', u'fama', u'foi', u'for\xe7as', u'furiosa', u'golf', u'golfistami', u'golfov\xfdch', u'guasch', u'ha', u'hotelmi', u'hra\u0165', u'ide', u'ihr\xedsk', u'incident', u'intranspon\xedveis', u'in\xedcio', u'in\xfd', u'ispit', u'istragu', u'izbijanju', u'ja\u010danju', u'je', u'jedan', u'jo\u0161', u'kapaciteta', u'kde', u'kombin\xe1cie', u'komplex', u'kon\u010diarmi', u'kosova', u'la', u'lado', u'lequio', u'lete', u'llevar', u'lo', u'longo', u'ly\u017eova\u0165', u'mais', u'man\u017eelky', u'mas', u'me', u'mesmo', u'meu', u'minha', u'misije', u'mo\u017enos\u0165ami', u'muy', u'm\xe1s', u'm\xe3e', u'na', u'nada', u'nad\u0161en\xfdmi', u'nasilja', u'negativas', u'nie', u'nieko\u013ek\xfdch', u'no', u'obaviti', u'obe\u0107ao', u'para', u'parecem', u'parecer', u'pod', u'pone', u'pon\xfakaj\xfa', u'por', u'potrebuj\xfa', u'po\u0161to', u'prava', u'predstavlja', u'pri', u'prova\xe7\xf5es', u'pro\u0161losedmi\u010dnom', u'punham', u'qual', u'qualquer', u'que', u'quem', u'rak\xfaske', u'relaci\xf3n', u'rezortov', u'sa', u'sebe', u'sempre', u'situa\xe7\xf5es', u'sjeveru', u'spojen\xfdch', u'suplantar', u's\xfa', u'taj', u'tak', u'talianske', u'teve', u'tive', u'todas', u'tr\xe1venia', u'una', u've\u013ek\xfd', u'vida', u'visto', u'vladavine', u'vo', u'vo\u013en\xe9ho', u'vysok\xfdmi', u'vy\u017eitia', u'v\xe4\u010d\u0161ine', u'v\u017edy', u'ya', u'zauj\xedmav\xe9', u'zime', u'\u0107e', u'\u010dasu', u'\u010di', u'\u010fal\u0161\xedmi', u'\u0161vaj\u010diarske']
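
Since the original question is about bigrams, note that the same vectorizer can emit bigram features instead of single words via the ngram_range parameter. A minimal sketch, reusing the train.txt above:

import codecs

from sklearn.feature_extraction.text import CountVectorizer

# ngram_range=(2, 2) makes every feature a two-word string, e.g. 'de todas'.
bigram_vectorizer = CountVectorizer(analyzer='word', ngram_range=(2, 2))
trainset = bigram_vectorizer.fit_transform(codecs.open('train.txt', 'r', 'utf8'))

print(bigram_vectorizer.get_feature_names())  # every feature name is now a bigram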

Let's say your test documents are in test.txt, with the labels Spanish es and Portuguese pt:

Por ello, ha insistido en que Europa tiene que darle un toque de atención porque Portugal esta incumpliendo la directiva del establecimiento del peaje
Estima-se que o mercado homossexual só na Cidade do México movimente cerca de oito mil milhões de dólares, aproximadamente seis mil milhões de euros

Now you can tag the test documents with the trained classifier:

import codecs

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

trainfile = 'train.txt'
testfile = 'test.txt'

# Vectorizing data.
word_vectorizer = CountVectorizer(analyzer='word')
trainset = word_vectorizer.fit_transform(codecs.open(trainfile,'r','utf8'))
tags = ['bs','pt','es','sr']

# Training NB
mnb = MultinomialNB()
mnb.fit(trainset, tags)

# Tagging the documents
testset = word_vectorizer.transform(codecs.open(testfile,'r','utf8'))
results = mnb.predict(testset)

print(results)

[Output]:

['es' 'pt']

For more on text classification, you may find this NLTK-related question/answer useful: nltk NaiveBayesClassifier training for sentiment analysis

To use the HashingVectorizer, note that it produces vectors with negative values by default, and the MultinomialNB classifier doesn't accept negative values, so you have to use a different classifier, e.g.:

import codecs

from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import Perceptron

trainfile = 'train.txt'
testfile = 'test.txt'

# Vectorizing data.
word_vectorizer = HashingVectorizer(analyzer='word')
trainset = word_vectorizer.fit_transform(codecs.open(trainfile,'r','utf8'))
tags = ['bs','pt','es','sr']

# Training Perceptron
pct = Perceptron(max_iter=100)  # older scikit-learn versions use n_iter=100
pct.fit(trainset, tags)

# Tagging the documents
testset = word_vectorizer.transform(codecs.open(testfile,'r','utf8'))
results = pct.predict(testset)

print(results)

[Output]:

['es' 'es']

But note that the Perceptron gives worse results in this small example. Different classifiers suit different tasks, different features suit different vectors, and different classifiers accept different kinds of vectors.
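
Finally, coming back to the question as asked: the hashing trick works on bigrams too. Below is a sketch, not a definitive recipe, assuming a scikit-learn version where HashingVectorizer accepts alternate_sign=False (older releases call this non_negative=True), which keeps all values non-negative so even MultinomialNB can consume them:

from sklearn.feature_extraction.text import HashingVectorizer

# Option 1: let the vectorizer extract the bigrams from raw text itself.
hv = HashingVectorizer(analyzer='word', ngram_range=(2, 2),
                       alternate_sign=False)
X_text = hv.transform(['The jury said the election was conducted .'])

# Option 2: your bigrams are already extracted as tuples, as in the question.
# The hasher needs string tokens, so join each tuple into a single string.
docs = [
    [('the', 'jury'), ('jury', 'said')],
    [('de', 'todas'), ('todas', 'as')],
]
hv_pretokenized = HashingVectorizer(
    analyzer=lambda doc: [' '.join(bigram) for bigram in doc],
    alternate_sign=False,
)

# Hashing is stateless, so transform() works without fitting a vocabulary.
X = hv_pretokenized.transform(docs)
print(X.shape)  # (2, 1048576): 2**20 hash buckets by default

The resulting matrix can then be fed to MultinomialNB, SVC or the Perceptron, exactly like trainset above.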

There is no perfect model, only better or worse ones.

Related question on Stack Overflow: https://stackoverflow.com/questions/26602794/
