debugging - How do I get Hadoop with Cascading to show debug log output?

Tags: debugging, logging, hadoop, stdout, cascading

I'm having trouble getting Hadoop and Cascading 1.2.6 to show me the output that should come from using a Debug filter. The Cascading guide says this is how you can view the current tuples. I'm using this to try to see any debug output:

Debug debug = new Debug(Debug.Output.STDOUT, true);
debug.setPrintTupleEvery(1);
debug.setPrintFieldsEvery(1);
assembly = new Each( assembly, DebugLevel.VERBOSE, debug );

I'm new to both Hadoop and Cascading, so maybe I'm not looking in the right place, or maybe I'm missing some simple log4j setting (I haven't changed anything from the defaults you get with Cloudera hadoop-0.20.2-cdh3u3).
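For reference, by a "simple log4j setting" I mean something like the sketch below (an assumption on my part, using the standard log4j 1.2 API that CDH3 bundles; note that the Debug filter itself writes straight to the task's stdout/stderr rather than through log4j):

// needs: import org.apache.log4j.Level; and import org.apache.log4j.Logger;
// raise every logger under the "cascading" package to DEBUG
Logger.getLogger( "cascading" ).setLevel( Level.DEBUG );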

Here is the WordCount sample class I'm using (copied from the Cascading user guide) with the debug statements added:

package org.cascading.example;

import cascading.flow.Flow;
import cascading.flow.FlowConnector;
import cascading.operation.Aggregator;
import cascading.operation.Debug;
import cascading.operation.DebugLevel;
import cascading.operation.Function;
import cascading.operation.aggregator.Count;
import cascading.operation.regex.RegexGenerator;
import cascading.pipe.Each;
import cascading.pipe.Every;
import cascading.pipe.GroupBy;
import cascading.pipe.Pipe;
import cascading.scheme.Scheme;
import cascading.scheme.TextLine;
import cascading.tap.Hfs;
import cascading.tap.SinkMode;
import cascading.tap.Tap;
import cascading.tuple.Fields;

import java.util.Properties;

public class WordCount {
    public static void main(String[] args) {
        String inputPath = args[0];
        String outputPath = args[1];

        // define source and sink Taps.
        Scheme sourceScheme = new TextLine( new Fields( "line" ) );
        Tap source = new Hfs( sourceScheme, inputPath );

        Scheme sinkScheme = new TextLine( new Fields( "word", "count" ) );
        Tap sink = new Hfs( sinkScheme, outputPath, SinkMode.REPLACE );

        // the 'head' of the pipe assembly
        Pipe assembly = new Pipe( "wordcount" );

        // For each input Tuple
        // using a regular expression
        // parse out each word into a new Tuple with the field name "word"
        String regex = "(?<!\\pL)(?=\\pL)[^ ]*(?<=\\pL)(?!\\pL)";
        Function function = new RegexGenerator( new Fields( "word" ), regex );

        assembly = new Each( assembly, new Fields( "line" ), function );

        Debug debug = new Debug(Debug.Output.STDOUT, true);
        debug.setPrintTupleEvery(1);
        debug.setPrintFieldsEvery(1);
        assembly = new Each( assembly, DebugLevel.VERBOSE, debug );

        // group the Tuple stream by the "word" value
        assembly = new GroupBy( assembly, new Fields( "word" ) );

        // For every Tuple group
        // count the number of occurrences of "word" and store result in
        // a field named "count"
        Aggregator count = new Count( new Fields( "count" ) );
        assembly = new Every( assembly, count );

        // initialize app properties, tell Hadoop which jar file to use
        Properties properties = new Properties();
        FlowConnector.setApplicationJarClass( properties, WordCount.class );

        // plan a new Flow from the assembly using the source and sink Taps
        FlowConnector flowConnector = new FlowConnector();
        FlowConnector.setDebugLevel( properties, DebugLevel.VERBOSE );
        Flow flow = flowConnector.connect( "word-count", source, sink, assembly );

        // execute the flow, block until complete
        flow.complete();

        // Ask Cascading to create a GraphViz DOT file
        // brew install graphviz # install viewer to look at dot file
        flow.writeDOT("build/flow.dot");
    }
}

It runs fine; I just can't find any debug statements showing the words anywhere. I've looked through the HDFS filesystem via hadoop dfs -ls as well as through the jobtracker web UI. The log output for the mapper in the jobtracker doesn't have any STDOUT output:

Task Logs: 'attempt_201203131143_0022_m_000000_0'

stdout logs

stderr logs
2012-03-13 14:32:24.642 java[74752:1903] Unable to load realm info from SCDynamicStore


syslog logs
2012-03-13 14:32:24,786 INFO org.apache.hadoop.security.UserGroupInformation: JAAS Configuration already set up for Hadoop, not re-installing.
2012-03-13 14:32:25,278 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2012-03-13 14:32:25,617 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=MAP, sessionId=
2012-03-13 14:32:25,903 INFO org.apache.hadoop.mapred.Task:  Using ResourceCalculatorPlugin : null
2012-03-13 14:32:25,945 INFO cascading.tap.hadoop.MultiInputSplit: current split input path: hdfs://localhost/usr/tnaleid/shakespeare/input/comedies/cymbeline
2012-03-13 14:32:25,980 WARN org.apache.hadoop.io.compress.snappy.LoadSnappy: Snappy native library not loaded
2012-03-13 14:32:25,988 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 1
2012-03-13 14:32:26,002 INFO org.apache.hadoop.mapred.MapTask: io.sort.mb = 100
2012-03-13 14:32:26,246 INFO org.apache.hadoop.mapred.MapTask: data buffer = 79691776/99614720
2012-03-13 14:32:26,247 INFO org.apache.hadoop.mapred.MapTask: record buffer = 262144/327680
2012-03-13 14:32:27,623 INFO org.apache.hadoop.mapred.MapTask: Starting flush of map output
2012-03-13 14:32:28,274 INFO org.apache.hadoop.mapred.MapTask: Finished spill 0
2012-03-13 14:32:28,310 INFO org.apache.hadoop.mapred.Task: Task:attempt_201203131143_0022_m_000000_0 is done. And is in the process of commiting
2012-03-13 14:32:28,337 INFO org.apache.hadoop.mapred.Task: Task 'attempt_201203131143_0022_m_000000_0' done.
2012-03-13 14:32:28,361 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1

Finally, I'm also writing out the DOT file, which doesn't have the Debug statements in it that I'd expect (though maybe those get stripped out):

[image: word count flow diagram]

Is there a log file somewhere that I'm missing, or is there a configuration setting that I need to change?

Best Answer

I got an answer to this from the mailing list.

Changing it to this works:

assembly = new Each( assembly, new Fields( "line" ), function );

// simpler debug statement
assembly = new Each( assembly, new Debug("hello", true) );

assembly = new GroupBy( assembly, new Fields( "word" ) );

That outputs in the jobdetails UI under stderr:

Task Logs: 'attempt_201203131143_0028_m_000000_0'

stdout logs

stderr logs
2012-03-13 16:21:41.304 java[78617:1903] Unable to load realm info from SCDynamicStore
hello: ['word']
hello: ['CYMBELINE']
<SNIP>

I had tried this straight from the documentation, but it didn't work for me (even with the FlowConnector debugLevel also set to VERBOSE):

assembly = new Each( assembly, DebugLevel.VERBOSE, new Debug() );

It seems to be related to the DebugLevel.VERBOSE from the docs, because when I try this, I still get no output:

assembly = new Each( assembly, DebugLevel.VERBOSE, new Debug("hello", true) );

Changing it to drop the DebugLevel also gives me output:

assembly = new Each( assembly, new Debug() );

I can also get it to switch over to STDOUT by doing this:

assembly = new Each( assembly, new Debug(Debug.Output.STDOUT) );

I'd bet I still have something misconfigured with the VERBOSE log level, or that 1.2.6 no longer matches the documentation, but at least now I can see the output in the logs.
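For what it's worth, here is a sketch of the misconfiguration I suspect (purely an assumption on my part, not something confirmed on the mailing list): in my original class the Properties that carry setDebugLevel are never handed to the FlowConnector, so the planner may never see DebugLevel.VERBOSE and could plan the VERBOSE Debug pipes away. Passing the properties into the constructor (reusing the source, sink, and assembly variables from the WordCount class above) would look like this:

Properties properties = new Properties();
FlowConnector.setApplicationJarClass( properties, WordCount.class );
FlowConnector.setDebugLevel( properties, DebugLevel.VERBOSE );

// hand the properties to the connector instead of using the no-arg constructor
FlowConnector flowConnector = new FlowConnector( properties );
Flow flow = flowConnector.connect( "word-count", source, sink, assembly );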

The original question, "debugging - How do I get Hadoop with Cascading to show debug log output?", can be found on Stack Overflow: https://stackoverflow.com/questions/9691005/
