r - How do I append to a document-term matrix in R?

Tags: r, tm

I want to append two document-term matrices together. I have a single row of data and want to apply different control functions to its parts (an n-gram tokenizer, stopword removal, and word-length bounds for the text, none of which apply to my non-text fields).

When I use tm_combine, i.e. c(dtm_text, dtm_inputs), it adds the second set as new rows. I want to append these attributes to the same row, i.e. as new columns.

library("tm")   

BigramTokenizer <- function(x)
  unlist(lapply(ngrams(words(x), 2), paste, collapse = " "),
         use.names = FALSE)
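
For intuition, NLP::ngrams() (attached along with tm) slides a window over a token vector; applied to the tokens of the first sentence, the tokenizer above produces bigrams like these (an illustration added for clarity, not part of the original question):

library(NLP)
unlist(lapply(ngrams(c("i", "like", "your", "store"), 2), paste, collapse = " "))
# [1] "i like"    "like your" "your store"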

# Data to be tokenized
txt_fields   <- paste("i like your store", "i love your products", "i am happy")
# Data not to be tokenized
other_inputs <- paste("cd1_ABC", "cd2_555", "cd3_7654")
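
Note that paste() with several arguments joins them into a single space-separated string (space is the default separator), so each corpus below holds exactly one document:

txt_fields
# [1] "i like your store i love your products i am happy"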

# NGram-tokenize the text data
dtm_text <- DocumentTermMatrix(Corpus(VectorSource(txt_fields)),
                               control = list(tokenize = BigramTokenizer,
                                              stopwords = TRUE,
                                              wordLengths = c(2, Inf),
                                              bounds = list(global = c(1, Inf))))

# Do not perform tokenization of the other inputs
dtm_inputs <- DocumentTermMatrix(Corpus(VectorSource(other_inputs)),
                                 control = list(bounds = list(global = c(1, Inf))))
# DESIRED OUTPUT
<<DocumentTermMatrix (documents: 1, terms: 15)>>
Non-/sparse entries: 15/0
Sparsity           : 0%
Maximal term length: 13
Weighting          : term frequency (tf)

     Terms
Docs am happy happy like like your love love your products products am store store love
   1        1     1    1         1    1         1        1           1     1          1
     Terms
Docs your products your store cd1_abc cd2_555 cd3_7654
   1             1          1       1       1        1
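
For reference, one way to get this column-wise combination within tm itself (a sketch, not from the original post: it assumes both DTMs cover the same documents in the same order) is to column-bind the underlying slam matrices:

library(slam)
# a DocumentTermMatrix is a simple_triplet_matrix underneath, so slam's
# cbind joins the term columns of the two matrices document by document
combined <- cbind(dtm_text, dtm_inputs)
# restore the DocumentTermMatrix class and term-frequency weighting
combined <- as.DocumentTermMatrix(combined, weighting = weightTf)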

Best Answer

I suggest using text2vec (but I'm biased, since I'm the author).

library(text2vec)
# Data to be tokenized
txt_fields   <- paste("i like your store","i love your products","i am happy")
# Data not to be tokenized
other_inputs <- paste("cd1_ABC","cd2_555","cd3_7654")
stopwords = tm::stopwords("en")

# tokenize by whitespace
txt_tokens = strsplit(txt_fields, ' ', fixed = TRUE)
vocab = create_vocabulary(itoken(txt_tokens), ngram = c(1, 2), stopwords = stopwords)
# if you need word-length bounds:
# vocab$vocab = vocab$vocab[nchar(terms) > 1]
# but note that this will not remove "i_am", etc.;
# you can add the word "i" to stopwords to remove such terms
txt_vectorizer = vocab_vectorizer(vocab)
dtm_text = create_dtm(itoken(txt_fields), vectorizer = txt_vectorizer)

# also tokenize by whitespace, but do not create bigrams in the next step
other_tokens = strsplit(other_inputs, ' ', fixed = TRUE)
vocab_other = create_vocabulary(itoken(other_tokens))
other_vectorizer = vocab_vectorizer(vocab_other)
dtm_other = create_dtm(itoken(other_inputs), vectorizer = other_vectorizer)

# combine column-wise: same documents, different term sets
result = cbind(dtm_text, dtm_other)
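
As a quick sanity check on the objects created above, the combined matrix should have a single row and the union of both term sets as columns:

dim(result)       # 1 document, text terms plus the three cd* terms
colnames(result)  # unigrams/bigrams from the text, then cd1_ABC, cd2_555, cd3_7654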

Regarding r - How do I append to a document-term matrix in R?, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/38641146/
