java - StringTokenizer in Java

Tags: java, stringtokenizer

StringTokenizer is used to tokenize a string in Java. The string has been tagged with Stanford's Parts Of Speech MaxentTagger, and substrings of the tagged text are used to display just the POS tag and the word while iterating over the tokens.

Here is the text before tagging:

Man has always had this notion that brave deeds are manifest in physical actions. While it is not entirely erroneous, there doesn't lie the singular path to valor. From of old, it is a sign of strength to fight back a wild animal. It is understandable if fought in defense; however, to go the extra mile and instigate an animal and fight it is the lowest degree of civilization man can exhibit. More so, in this age of reasoning and knowledge. Tradition may call it, but adhering blindly to it is idiocy, be it the famed Jallikattu in Tamil Nadu (The Indian equivalent to the Spanish Bullfighting) or the cock-fights. Pelting stones at a dog and relishing it howl in pain is dreadful. If one only gave as much as a trickle of thought and conscience the issue would surface as deplorable in every aspect. Animals play a part along with us in our ecosystem. And, some animals are dearer: the stray dogs that guard our street, the intelligent crow, the beast of burden and the everyday animals of pasture. Literature has voiced in its own way: In The Lord of the Rings the fellowship treated Bill Ferny's pony with utmost care; in Harry Potter when they didn’t heed Hermione's advice on the treatment of house elves they learned the hard way that it caused their own undoing; and Jack London, writes all about animals.Indeed, Kindness to animals is a virtue.

Here is the POS-tagged text:

Man_NN has_VBZ always_RB had_VBN this_DT notion_NN that_IN brave_VBP deeds_NNS are_VBP manifest_JJ in_IN physical_JJ actions_NNS ._. While_IN it_PRP is_VBZ not_RB entirely_RB erroneous_JJ ,_, there_EX does_VBZ n't_RB lie_VB the_DT singular_JJ path_NN to_TO valor_NN ._. From_IN of_IN old_JJ ,_, it_PRP is_VBZ a_DT sign_NN of_IN strength_NN to_TO fight_VB back_RP a_DT wild_JJ animal_NN ._. It_PRP is_VBZ understandable_JJ if_IN fought_VBN in_IN defense_NN ;_: however_RB ,_, to_TO go_VB the_DT extra_JJ mile_NN and_CC instigate_VB an_DT animal_NN and_CC fight_VB it_PRP is_VBZ the_DT lowest_JJS degree_NN of_IN civilization_NN man_NN can_MD exhibit_VB ._. More_RBR so_RB ,_, in_IN this_DT age_NN of_IN reasoning_NN and_CC knowledge_NN ._. Tradition_NN may_MD call_VB it_PRP ,_, but_CC adhering_JJ blindly_RB to_TO it_PRP is_VBZ idiocy_NN ,_, be_VB it_PRP the_DT famed_JJ Jallikattu_NNP in_IN Tamil_NNP Nadu_NNP -LRB-_-LRB- The_DT Indian_JJ equivalent_NN to_TO the_DT Spanish_JJ Bullfighting_NN -RRB-_-RRB- or_CC the_DT cock-fights_NNS ._. Pelting_VBG stones_NNS at_IN a_DT dog_NN and_CC relishing_VBG it_PRP howl_NN in_IN pain_NN is_VBZ dreadful_JJ ._. If_IN one_CD only_RB gave_VBD as_RB much_JJ as_IN a_DT trickle_VB of_IN thought_NN and_CC conscience_NN the_DT issue_NN would_MD surface_VB as_IN deplorable_JJ in_IN every_DT aspect_NN ._. Animals_NNS play_VBP a_DT part_NN along_IN with_IN us_PRP in_IN our_PRP$ ecosystem_NN ._. And_CC ,_, some_DT animals_NNS are_VBP dearer_RBR :_: the_DT stray_JJ dogs_NNS that_WDT guard_VBP our_PRP$ street_NN ,_, the_DT intelligent_JJ crow_NN ,_, the_DT beast_NN of_IN burden_NN and_CC the_DT everyday_JJ animals_NNS of_IN pasture_NN ._. Literature_NN has_VBZ voiced_VBN in_IN its_PRP$ own_JJ way_NN :_: In_IN The_DT Lord_NN of_IN the_DT Rings_NNP the_DT fellowship_NN treated_VBN Bill_NNP Ferny_NNP 's_POS pony_NN with_IN utmost_JJ care_NN ;_: in_IN Harry_NNP Potter_NNP when_WRB they_PRP did_VBD n't_RB heed_VB Hermione_NNP 's_POS advice_NN on_IN the_DT treatment_NN of_IN house_NN elves_NNS they_PRP learned_VBD the_DT hard_JJ way_NN that_IN it_PRP caused_VBD their_PRP$ own_JJ undoing_NN ;_: and_CC Jack_NNP London_NNP ,_, writes_VBZ all_DT about_IN animals_NNS ._. Indeed_RB ,_, Kindness_NN to_TO animals_NNS is_VBZ a_DT virtue_NN ._.

Here is the code that extracts the substrings above:

String line;
StringBuilder sb = new StringBuilder();
try (FileInputStream input = new FileInputStream("E:\\D.txt"))
{
    int data = input.read();
    while (data != -1)
    {
        sb.append((char) data);
        data = input.read();
    }
}
catch (FileNotFoundException e)
{
    System.err.println("File Not Found Exception : " + e.getMessage());
}
line = sb.toString();
String line1 = line; //Copy for Tagger
line += " T";
List<String> sentenceList = new ArrayList<String>(); //TAGGED DOCUMENT
MaxentTagger tagger = new MaxentTagger("E:\\Installations\\Java\\Tagger\\english-left3words-distsim.tagger");
String tagged = tagger.tagString(line1);
File file = new File("A.txt");
BufferedWriter output = new BufferedWriter(new FileWriter(file));
output.write(tagged);
output.close();
DocumentPreprocessor dp = new DocumentPreprocessor("C:\\Users\\Admin\\workspace\\Project\\A.txt");
int largest = 50;
int m = 0;
StringTokenizer st1;
for (List<HasWord> sentence : dp)
{
    String sentenceString = Sentence.listToString(sentence);
    sentenceList.add(sentenceString.toString());
}
String[][] Gloss = new String[sentenceList.size()][largest];
String[] Adj = new String[largest];
String[] Adv = new String[largest];
String[] Noun = new String[largest];
String[] Verb = new String[largest];
int adj = 0, adv = 0, noun = 0, verb = 0;
for (int i = 0; i < sentenceList.size(); i++)
{
    st1 = new StringTokenizer(sentenceList.get(i), " ,(){}[]/.;:&?!");
    m = 0; //Count for Gloss 2nd dimension
    //GETTING THE POS's COMPARTMENTALISED
    while (st1.hasMoreTokens())
    {
        String token = st1.nextToken();
        if (token.length() > 1) //TO SKIP PAST TOKENS FOR PUNCTUATION MARKS
        {
            System.out.println(token);
            String s = token.substring(token.lastIndexOf("_") + 1, token.length());
            System.out.println(s);
            if (s.equals("JJ") || s.equals("JJR") || s.equals("JJS"))
            {
                Adj[adj] = token.substring(0, token.lastIndexOf("_"));
                System.out.println(Adj[adj]);
                adj++;
            }
            if (s.equals("NN") || s.equals("NNS"))
            {
                Noun[noun] = token.substring(0, token.lastIndexOf("_"));
                System.out.println(Noun[noun]);
                noun++;
            }
            if (s.equals("RB") || s.equals("RBR") || s.equals("RBS"))
            {
                Adv[adv] = token.substring(0, token.lastIndexOf("_"));
                System.out.println(Adv[adv]);
                adv++;
            }
            if (s.equals("VB") || s.equals("VBD") || s.equals("VBG") || s.equals("VBN") || s.equals("VBP") || s.equals("VBZ"))
            {
                Verb[verb] = token.substring(0, token.lastIndexOf("_"));
                System.out.println(Verb[verb]);
                verb++;
            }
        }
    }
    i++; //TO SKIP PAST THE LINES WHERE AN EXTRA UNDERSCORE OCCURS FOR FULLSTOP
}

D.txt contains the plain text.
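
As an aside, the byte-by-byte read above only round-trips single-byte (ASCII/Latin-1) characters. On Java 11+ the whole file can be read as text in one call; this is just a sketch, and the path and UTF-8 charset are assumptions:

// Sketch (Java 11+): read D.txt in one call instead of byte-by-byte.
// Requires java.nio.file.Files, java.nio.file.Path, java.nio.charset.StandardCharsets.
String line = Files.readString(Path.of("E:\\D.txt"), StandardCharsets.UTF_8);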

The problem:

Every word is tokenized at the spaces, except for 'n't_RB', which gets tokenized separately into n't and RB.

The output is as follows:

Man_NN
NN
Man
has_VBZ 
VBZ
has
always_RB
RB
always
had_VBN
VBN
had
this_DT
DT
notion_NN
NN
notion
that_IN
IN
brave_VBP
VBP
brave
deeds_NNS
NNS
deeds
are_VBP
VBP
are
manifest_JJ
JJ
manifest
in_IN
IN
physical_JJ
JJ
physical
actions_NNS
NNS
actions
While_IN
IN
it_PRP
PRP
is_VBZ
VBZ
is
not_RB
RB
not
entirely_RB
RB
entirely
erroneous_JJ
JJ
erroneous
there_EX
EX
does_VBZ
VBZ
does
n't
n't
RB
RB

However, if I run just "there_EX does_VBZ n't_RB lie_VB" through the tokenizer on its own, "n't_RB" is kept together as one token. When I run the program I get a StringIndexOutOfBounds exception, which is understandable, since there is no '_' in 'n't' or 'RB'. Could someone take a look? Thanks.
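
That observation is easy to reproduce in isolation. The following standalone sketch (not part of the program above) runs StringTokenizer over the tagged snippet with the same delimiter set and keeps n't_RB as a single token:

import java.util.StringTokenizer;

public class TokenizerCheck {
    public static void main(String[] args) {
        // Same delimiter set as in the program above; '_' and the apostrophe
        // are not delimiters, so "n't_RB" survives as one token.
        StringTokenizer st = new StringTokenizer(
                "there_EX does_VBZ n't_RB lie_VB", " ,(){}[]/.;:&?!");
        while (st.hasMoreTokens()) {
            System.out.println(st.nextToken()); // there_EX, does_VBZ, n't_RB, lie_VB
        }
    }
}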

Best answer

The documentation for DocumentPreprocessor says:

NOTE: If a null argument is used, then the document is assumed to be tokenized and DocumentPreprocessor performs no tokenization.

Since the document you load from the file has already been tokenized in the first step of your program, you should use:

DocumentPreprocessor dp = new DocumentPreprocessor("./data/stanford-nlp/A.txt");
dp.setTokenizerFactory(null);

It then outputs the n't tokens correctly, for example:

...
did_VBD
VBD
did
n't_RB
RB
n't
heed_VB
VB
heed
Hermione_NNP
NNP
's_POS
POS
...
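
For completeness, here is a minimal sketch of how that fix fits into the original loop. The file name and the defensive underscore check are assumptions added for illustration, not part of the accepted answer.

// Requires edu.stanford.nlp.process.DocumentPreprocessor,
// edu.stanford.nlp.ling.HasWord and edu.stanford.nlp.ling.Sentence.
DocumentPreprocessor dp = new DocumentPreprocessor("A.txt"); // path is an assumption
dp.setTokenizerFactory(null); // the file is already tokenized; do not re-tokenize

for (List<HasWord> sentence : dp) {
    StringTokenizer st = new StringTokenizer(
            Sentence.listToString(sentence), " ,(){}[]/.;:&?!");
    while (st.hasMoreTokens()) {
        String token = st.nextToken();
        int idx = token.lastIndexOf('_');
        if (idx < 0) {
            continue; // defensive: skip anything without a word_TAG separator
        }
        System.out.println(token.substring(0, idx) + " -> " + token.substring(idx + 1));
    }
}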

Regarding java - StringTokenizer in Java, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/29444966/
