Spark: ERROR netty.Inbox: Ignoring error

2018-03-07  步闲

Quick note:

I recently hit the following error while running a Spark job:

18/03/07 19:14:36 ERROR netty.Inbox: Ignoring error
org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped.
        at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161)
        at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131)
        at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:188)
        at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:514)
        at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:364)
        at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497)
        at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290)
        at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:134)
        at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:133)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:133)
        at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142)
        at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204)
        at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
        at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

A Baidu search turned up an article describing an almost identical error; link below:
SparkException: Could not find CoarseGrainedScheduler or it has been stopped.

It mentioned the following solution approach:

["解决方案" (Solution) — the details appear only in the linked article and are not reproduced here]

However, that fix did not work in my case. Still, a single error can have multiple root causes, so it is worth keeping on record.

Solution

After some debugging, the real cause turned out to be a mismatch: the Spark job generated only a small number of tasks, while the submit command requested far more executors than that. The surplus executors had nothing to run, and when they were released their disconnect events apparently arrived after the driver's CoarseGrainedScheduler endpoint had already stopped, producing the error above. Reducing the --num-executors parameter resolved the problem.
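
For illustration, here is a minimal sketch of what the adjusted submit command might look like. The master, deploy mode, resource sizes, class name, and jar are placeholders for this example, not values from the original job:

    # --num-executors is reduced so executors no longer far outnumber tasks;
    # all names and sizes below are illustrative placeholders.
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --num-executors 4 \
      --executor-cores 2 \
      --executor-memory 4g \
      --class com.example.MyJob \
      my-job.jar

As a rule of thumb, a stage runs one task per partition, so --num-executors × --executor-cores should not greatly exceed the partition count of the input; you can check the actual task count in the Spark UI or via rdd.getNumPartitions. Extra executors beyond that simply sit idle.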
