java - How to override log4j settings for a specific class

Tags: java hadoop log4j hbase

For a Hadoop HBase cluster, I want to override log4j so that the log output of one specific class, org.apache.hadoop.hbase.tool.Canary, goes to the console.

Currently, the log4j.properties of the HBase application looks like this:

hbase.root.logger=INFO,RFA,RFAE
hbase.log.dir=.
hbase.log.file=hbase.log

# Define the root logger to the system property "hbase.root.logger".
log4j.rootLogger=${hbase.root.logger}

# Logging Threshold
log4j.threshold=ALL

# Rolling File Appender properties
hbase.log.maxfilesize=128MB
hbase.log.maxbackupindex=10
hbase.log.layout=org.apache.log4j.PatternLayout
hbase.log.pattern=%d{ISO8601} %p %c: %m%n

#
# Daily Rolling File Appender
# Hacked to be the Rolling File Appender
# Rolling File Appender
log4j.appender.DRFA=org.apache.log4j.RollingFileAppender
log4j.appender.DRFA.File=${hbase.log.dir}/${hbase.log.file}

log4j.appender.DRFA.MaxFileSize=${hbase.log.maxfilesize}
log4j.appender.DRFA.MaxBackupIndex=${hbase.log.maxbackupindex}

log4j.appender.DRFA.layout=${hbase.log.layout}
log4j.appender.DRFA.layout.ConversionPattern=${hbase.log.pattern}
log4j.appender.DRFA.Append=true

# Rolling File Appender
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hbase.log.dir}/${hbase.log.file}

log4j.appender.RFA.MaxFileSize=${hbase.log.maxfilesize}
log4j.appender.RFA.MaxBackupIndex=${hbase.log.maxbackupindex}

log4j.appender.RFA.layout=${hbase.log.layout}
log4j.appender.RFA.layout.ConversionPattern=${hbase.log.pattern}
log4j.appender.RFA.Append=true

#
# console
# Add "console" to rootlogger above if you want to use this
#
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n

#
# Error log appender, each log event will include hostname
#
hbase.error.log.file=hbase_error.log
log4j.appender.RFAE=org.apache.log4j.RollingFileAppender
log4j.appender.RFAE.File=${hbase.log.dir}/${hbase.error.log.file}
log4j.appender.RFAE.MaxFileSize=${hbase.log.maxfilesize}
log4j.appender.RFAE.MaxBackupIndex=${hbase.log.maxbackupindex}

log4j.appender.RFAE.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAE.layout.ConversionPattern=%d{ISO8601} data-analytics1-data-namenode-dev-001 %p %c: %m%n

log4j.appender.RFAE.Threshold=ERROR
log4j.appender.RFAE.Append=true

# Custom Logging levels
org.apache.hadoop.hbase.regionserver.compactions.CompactionProgress=DEBUG
log4j.logger.org.apache.zookeeper=WARN
#log4j.logger.org.apache.hadoop.fs.FSNamesystem=DEBUG
log4j.logger.org.apache.hadoop.hbase=INFO
# Make these two classes INFO-level. Make them DEBUG to see more zk debug.
log4j.logger.org.apache.hadoop.hbase.zookeeper.ZKUtil=WARN
log4j.logger.org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher=WARN
# Snapshot Debugging
log4j.logger.org.apache.hadoop.hbase.regionserver.snapshot=DEBUG
#log4j.logger.org.apache.hadoop.dfs=DEBUG
# Set this class to log INFO only otherwise its OTT

# Uncomment this line to enable tracing on _every_ RPC call (this can be a lot of output)
#log4j.logger.org.apache.hadoop.ipc.HBaseServer.trace=DEBUG

# Uncomment the below if you want to remove logging of client region caching
# and scan of .META. messages
# log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation=INFO
# log4j.logger.org.apache.hadoop.hbase.client.MetaScanner=INFO

Please advise.
Thanks!

Best Answer

Add the following line to log4j.properties:

log4j.logger.org.apache.hadoop.hbase.tool.Canary=INFO, console

You can change INFO to whatever logging level you need for this class.
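For example, a minimal sketch if you wanted more detail from Canary on the console (DEBUG here is just an assumed level; "console" refers to the ConsoleAppender already defined earlier in the file, so no extra appender definition is needed):

log4j.logger.org.apache.hadoop.hbase.tool.Canary=DEBUG, console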

If you also want to prevent this class from logging through the other appenders, change its additivity by adding the following line:
log4j.additivity.org.apache.hadoop.hbase.tool.Canary = false
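As a quick way to verify the routing, here is a minimal Java sketch (the class name and the path to log4j.properties are assumptions; adjust them to your cluster layout). With additivity set to false, the message should appear only on the console (stderr), not in hbase.log:

import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;

// Hypothetical check: load the edited log4j.properties and log through
// the Canary logger name to see which appenders receive the event.
public class CanaryLoggingCheck {
    public static void main(String[] args) {
        // Assumed path; point this at your cluster's HBase conf directory.
        PropertyConfigurator.configure("/etc/hbase/conf/log4j.properties");

        Logger canaryLogger = Logger.getLogger("org.apache.hadoop.hbase.tool.Canary");
        // With log4j.additivity...=false this event goes only to the console appender.
        canaryLogger.info("Canary logger routing check");
    }
}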

Regarding "java - How to override log4j settings for a specific class", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/38560379/
