Changing Spark's Console Log Level

2020-04-15  程序媛啊

1. Modify conf/log4j.properties

Background: too much INFO output makes it hard to spot errors and execution results, so the log output level needs to be raised.
Copy the template and open it for editing:
cp log4j.properties.template log4j.properties
vi log4j.properties
Find the line:
log4j.rootCategory=INFO, console
and change it to:
log4j.rootCategory=WARN, console
The modified file in full:
# Set everything to be logged to the console
log4j.rootCategory=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# Set the default spark-shell log level to WARN. When running the spark-shell, the
# log level for this class is used to overwrite the root logger's log level, so that
# the user can have different defaults for the shell and regular Spark apps.
log4j.logger.org.apache.spark.repl.Main=WARN

# Settings to quiet third party logs that are too verbose
log4j.logger.org.spark_project.jetty=WARN
log4j.logger.org.spark_project.jetty.util.component.AbstractLifeCycle=ERROR
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO
log4j.logger.org.apache.parquet=ERROR
log4j.logger.parquet=ERROR

# SPARK-9183: Settings to avoid annoying messages when looking up nonexistent UDFs in SparkSQL with Hive support
log4j.logger.org.apache.hadoop.hive.metastore.RetryingHMSHandler=FATAL
log4j.logger.org.apache.hadoop.hive.ql.exec.FunctionRegistry=ERROR
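
If changing the cluster-wide default is undesirable, the same file can instead be supplied per application. A minimal sketch, assuming Spark's bundled log4j 1.x; the file path, main class, and jar name are placeholders:

spark-submit \
  --conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=file:/path/to/log4j.properties" \
  --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=file:/path/to/log4j.properties" \
  --class com.example.Main your-app.jar

For the executor setting to take effect, the properties file must exist at that path on every worker node.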

2. Restart the cluster
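
For example, on a standalone deployment (an assumption; adjust for your cluster manager), from $SPARK_HOME:

sbin/stop-all.sh
sbin/start-all.sh

The new conf/log4j.properties takes effect once the daemons and any new applications start up.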

After the restart, both the spark-sql and spark-shell consoles show only WARN-and-above output on startup (screenshots omitted).
Note: alternatively, the log level can be changed from application code:
SparkSession.builder.getOrCreate().sparkContext.setLogLevel("WARN")
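
For context, a minimal self-contained sketch in Scala (the object name and the small job are illustrative):

import org.apache.spark.sql.SparkSession

object QuietApp {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().getOrCreate()
    // Raise the level for this application only; setLogLevel
    // overrides log4j.properties at runtime, so no restart is needed.
    spark.sparkContext.setLogLevel("WARN")

    spark.range(5).show()  // runs without the usual INFO chatter
    spark.stop()
  }
}

Valid levels are ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE, and WARN.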