I'm trying to process an HDFS file that contains non-printable characters, and I want to strip those characters out with MapReduce.
I've tried Pig's TextLoader and TextInputFormat (in an MR program); in both cases a record gets split into multiple records at the position where the non-printable character occurs. Here is some sample data:
=== Data (2 records) ===
4614:2011-12-20-08.45.08.169176^2011-12-20-18.15.08.100008^597^0^57^ZUKA^Grase^^^Grase,Dr^^^N^N^N^Dr^KG^ONLY INFORMATION ENTERED^UNKNOWN^0 ^ ^^^611190362
�^0^^^^^^0^Grase,Dr^^^, ,^^^^^^597^^^<fnm>DR</fnm><lnm>GRASE</lnm>^^^^^^^^SINGLE^0^0
6063:2010-07-04-04.00.00.100001^2010-07-04-04.01.00.180144^017^0^095^WEETE ^Wien^^^Wien,Colock^^^N^N^N^Colock^KG^ONLY INFORMATION ENTERED^UNKNOWN^0 ^ ^295111915^^������9905^0^^^^^^0^Wien,Colock^40001 KIN RD^300 CAMORE ST^500 BLACK AVE^Woesfield, HA, 43723.^John Ball^^^25719110^617^������9905^^<fnm>COLOCK</fnm><lnm>WIEN</lnm>^^^^^^^^SINGLE^0^0
[In less, the column containing the special characters in the first record looks like this:
611190362^M<EF><BF><BD>
]
In vi or less the first record sits on a single line, but when it is read by MR or Pig the record gets split because of these characters.
I want to avoid the records being split onto new lines when reading from HDFS, and then process the records further to remove these special characters.
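For context on why the record breaks: the ^M that less shows before the <EF><BF><BD> bytes is a carriage return (\r), and Hadoop's default TextInputFormat, like BufferedReader.readLine, treats \r, \n, and \r\n all as record terminators. A minimal sketch of the symptom and of a pre-cleaning step (plain Java with no Hadoop dependency; the class name and sample string are made up for illustration):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class SplitDemo {
    // Read a string line-by-line the way BufferedReader does:
    // \n, \r and \r\n all end a line, just as Hadoop's
    // LineRecordReader ends a record at those bytes.
    static List<String> readLines(String data) throws IOException {
        List<String> lines = new ArrayList<>();
        BufferedReader r = new BufferedReader(new StringReader(data));
        String line;
        while ((line = r.readLine()) != null) {
            lines.add(line);
        }
        return lines;
    }

    public static void main(String[] args) throws IOException {
        // One logical record with a stray \r inside a field value
        // (U+FFFD is what the <EF><BF><BD> bytes decode to).
        String record = "611190362\r\uFFFD^0";
        System.out.println(readLines(record).size());                              // 2: the \r splits it
        System.out.println(readLines(record.replaceAll("[\\r\\n]", "")).size());   // 1: pre-cleaned
    }
}
```

This suggests stripping \r and \n from the raw bytes (or setting a custom record delimiter) before the data reaches TextInputFormat, since by the time the UDF runs the split has already happened.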
Here is what I tried with a basic UDF (snippet below). The program does strip characters >= 0x80, but it runs on the already-split records.
Any help/pointers would be much appreciated!
/*
 * Pig code:
 *   register '/basepath/udf/DIF.jar';
 *   rel1 = LOAD '/user/home/test' USING TextLoader() AS (s:chararray);
 *   rel2 = FOREACH rel1 GENERATE StripNonPrintable(s) AS recordline;
 *   dump rel2;
 */
import java.io.IOException;

import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;

public class StripNonPrintable extends EvalFunc<String> {
    @Override
    public String exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0)
            return null;
        try {
            String s = (String) input.get(0);
            int length = s.length();
            char[] oldChars = new char[length];
            s.getChars(0, length, oldChars, 0);
            int newLen = 0;
            for (int j = 0; j < length; j++) {
                char ch = oldChars[j];
                if (ch < 0x80) {        // keep 7-bit ASCII only
                    oldChars[newLen] = ch;
                    newLen++;
                }
            }
            return new String(oldChars, 0, newLen);
        } catch (Exception e) {
            return null;
        }
    }
}
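The UDF's core loop can be exercised outside Pig; a standalone sketch of the same ASCII filter (the class name and sample string are mine, not from the post):

```java
public class StripAsciiDemo {
    // Same filter as the UDF's loop: keep only characters below 0x80.
    static String stripNonAscii(String s) {
        StringBuilder sb = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); i++) {
            char ch = s.charAt(i);
            if (ch < 0x80) {
                sb.append(ch);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // U+FFFD is the replacement character the <EF><BF><BD> bytes decode to.
        System.out.println(stripNonAscii("611190362\uFFFD^0"));  // prints 611190362^0
    }
}
```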
Best answer
The class java.lang.Character has a method getType which:
Returns a value indicating a character's general category
Import java.lang.Character and replace:
if (ch < 0x80 )
with the following (the character is kept only when its general category matches one of these):
int c = Character.getType(ch);
if (c == Character.CONNECTOR_PUNCTUATION ||
    c == Character.CURRENCY_SYMBOL ||
    c == Character.DASH_PUNCTUATION ||
    c == Character.DECIMAL_DIGIT_NUMBER ||
    c == Character.ENCLOSING_MARK ||
    c == Character.END_PUNCTUATION ||
    c == Character.FINAL_QUOTE_PUNCTUATION ||
    c == Character.INITIAL_QUOTE_PUNCTUATION ||
    c == Character.LETTER_NUMBER ||
    c == Character.LOWERCASE_LETTER ||
    c == Character.MATH_SYMBOL ||
    c == Character.MODIFIER_LETTER ||
    c == Character.MODIFIER_SYMBOL ||
    c == Character.OTHER_LETTER ||
    c == Character.OTHER_NUMBER || //remove it if you want to get rid of ½
    c == Character.OTHER_PUNCTUATION ||
    c == Character.OTHER_SYMBOL ||
    c == Character.START_PUNCTUATION ||
    c == Character.TITLECASE_LETTER ||
    c == Character.UPPERCASE_LETTER ||
    c == Character.SPACE_SEPARATOR) //keep ordinary spaces; CONTROL chars fall through and are dropped
Adjust this combination of categories to keep exactly the characters you need and drop the rest.
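Assembled into a self-contained class for quick testing (the class name is mine; note that a chain of `c != X || c != Y || …` is always true, so membership has to be tested with `==` joined by `||`):

```java
public class CategoryFilterDemo {
    // Keep a char when its Unicode general category is one of the
    // printable categories; control characters are dropped.
    static boolean keep(char ch) {
        int c = Character.getType(ch);
        return c == Character.CONNECTOR_PUNCTUATION
            || c == Character.CURRENCY_SYMBOL
            || c == Character.DASH_PUNCTUATION
            || c == Character.DECIMAL_DIGIT_NUMBER
            || c == Character.ENCLOSING_MARK
            || c == Character.END_PUNCTUATION
            || c == Character.FINAL_QUOTE_PUNCTUATION
            || c == Character.INITIAL_QUOTE_PUNCTUATION
            || c == Character.LETTER_NUMBER
            || c == Character.LOWERCASE_LETTER
            || c == Character.MATH_SYMBOL
            || c == Character.MODIFIER_LETTER
            || c == Character.MODIFIER_SYMBOL
            || c == Character.OTHER_LETTER
            || c == Character.OTHER_NUMBER      // remove to also drop chars like ½
            || c == Character.OTHER_PUNCTUATION
            || c == Character.OTHER_SYMBOL      // note: this category also keeps U+FFFD
            || c == Character.START_PUNCTUATION
            || c == Character.TITLECASE_LETTER
            || c == Character.UPPERCASE_LETTER
            || c == Character.SPACE_SEPARATOR;  // keep ordinary spaces
    }

    static String strip(String s) {
        StringBuilder sb = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); i++) {
            if (keep(s.charAt(i))) sb.append(s.charAt(i));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // \r is CONTROL and is dropped; '^', space and digits survive.
        System.out.println(strip("ONLY INFO\r^0"));  // prints ONLY INFO^0
    }
}
```

One caveat: U+FFFD (the replacement character shown as <EF><BF><BD> in less) has category OTHER_SYMBOL, so drop that category from the list if you want this filter to remove it as well.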
About "java - Stripping non-printable characters using Hadoop Map-Reduce": a similar question was found on Stack Overflow: https://stackoverflow.com/questions/20267922/