R: Tokenization that takes punctuation into account

Tags: r, tm, text-segmentation

I am using NGramTokenizer() to generate 1- to 3-grams, but it does not seem to take punctuation into account; it simply strips the punctuation out.

As a result, the tokens are not ideal for my purposes.

(For example, results such as "oxidant amino", "oxidant amino acid", and "pellet oxidant", which span the commas in the source text.)

Is there any segmentation method that keeps the punctuation? (I imagine I could use part-of-speech tagging to filter out the strings that contain punctuation after segmentation; see the sketch after the output below.)

Or is there some other way to tokenize that takes punctuation into account? That would suit me even better.

library(tm)
library(RWeka)

text <- "the slurry includes: attrition pellet, oxidant, amino acid and water."

corpus_text <- VCorpus(VectorSource(text))
content(corpus_text[[1]])

# Despite the name, this produces 1- to 3-grams (min = 1, max = 3)
BigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 1, max = 3))
dtm <- DocumentTermMatrix(corpus_text, control = list(tokenize = BigramTokenizer))
mat <- as.matrix(dtm)
colnames(mat)

 [1] "acid"                      "acid and"                  "acid and water"           
 [4] "amino"                     "amino acid"                "amino acid and"           
 [7] "and"                       "and water"                 "attrition"                
[10] "attrition pellet"          "attrition pellet oxidant"  "includes"                 
[13] "includes attrition"        "includes attrition pellet" "oxidant"                  
[16] "oxidant amino"             "oxidant amino acid"        "pellet"                   
[19] "pellet oxidant"            "pellet oxidant amino"      "slurry"                   
[22] "slurry includes"           "slurry includes attrition" "the"                      
[25] "the slurry"                "the slurry includes"       "water"    

Best Answer

You can use the tokens() function from the quanteda package, as follows:

library(quanteda)
text <- "some text, with commas, and semicolons; and even fullstop. to be toekinzed"
tokens(text, what = "word", remove_punct = FALSE, ngrams = 1:3)

Output:

tokens from 1 document.
text1 :
 [1] "some"              "text"              ","                 "with"             
 [5] "commas"            ","                 "and"               "semicolons"       
 [9] ";"                 "and"               "even"              "fullstop"         
[13] "."                 "to"                "be"                "toekinzed"        
[17] "some text"         "text ,"            ", with"            "with commas"      
[21] "commas ,"          ", and"             "and semicolons"    "semicolons ;"     
[25] "; and"             "and even"          "even fullstop"     "fullstop ."       
[29] ". to"              "to be"             "be toekinzed"      "some text ,"      
[33] "text , with"       ", with commas"     "with commas ,"     "commas , and"     
[37] ", and semicolons"  "and semicolons ;"  "semicolons ; and"  "; and even"       
[41] "and even fullstop" "even fullstop ."   "fullstop . to"     ". to be"          
[45] "to be tokeinzed"  

For details on each argument of the function, see the documentation.
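
A hedged note on newer quanteda releases (an assumption about versions released after this answer was written): the ngrams argument to tokens() has since been removed, and the equivalent is the separate tokens_ngrams() function. Combining that with the filtering idea from the question might look like this sketch:

library(quanteda)
text <- "the slurry includes: attrition pellet, oxidant, amino acid and water."
# Tokenize while keeping punctuation, then build 1- to 3-grams
toks <- tokens(text, what = "word", remove_punct = FALSE)
ngrams <- unlist(as.list(tokens_ngrams(toks, n = 1:3, concatenator = " ")))
# Drop any n-gram containing punctuation, so no term spans a comma or colon
ngrams[!grepl("[[:punct:]]", ngrams)]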

Update: for document-term frequencies, see Constructing a document-frequency matrix.

As an example, for bigrams, try the following (note that you do not need to tokenize first):

dfm(text, remove_punct = FALSE, ngrams = 2, concatenator = " ")
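
One caveat, again assuming a newer quanteda version: dfm() no longer accepts raw character input or an ngrams argument directly, so the tokens object is built first and then passed in:

# Equivalent of the dfm() call above in newer quanteda versions
dfm(tokens_ngrams(tokens(text, remove_punct = FALSE), n = 2, concatenator = " "))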

On the topic of tokenization in R that considers punctuation, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/46335845/
