Spark Optimization and Practice

Spark tasks stuck due to network issues

2017-05-10  breeze_lsw

Hostname mapping error

Background:

After a new batch of Spark machines was added to the YARN cluster, some tasks would hang indefinitely when Spark jobs ran, with no hint of the problem on the driver side.


Solution:

Logging into a node where a task was stuck and checking the container's stderr log showed that, while fetching block information from other nodes, the executor could not connect to them and kept retrying.



We suspected that the operations team had missed some of the old nodes when updating /etc/hosts. Checking /etc/hosts confirmed that the addresses of the new nodes had not been added; after adding them, the problem was resolved.

As the number of cluster nodes keeps growing, using DNS avoids the errors caused by some nodes' /etc/hosts files not being updated.
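For illustration, the fix on each old node amounts to appending the new machines to /etc/hosts; the IPs and hostnames below are made up for the example:

# hypothetical entries for the newly added Spark nodes
10.0.1.21  spark-new-01
10.0.1.22  spark-new-02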

Root cause:

When reading shuffle data, local blocks are read from the local BlockManager, while remote blocks are fetched through BlockTransferService, whose address information includes the hostname. Without a hostname-to-IP mapping, the fetch fails and RetryingBlockFetcher is invoked to retry; if it keeps failing, an exception is thrown: Exception while beginning fetch ...

NettyBlockTransferService

override def fetchBlocks(
    host: String,
    port: Int,
    execId: String,
    blockIds: Array[String],
    listener: BlockFetchingListener): Unit = {
  logTrace(s"Fetch blocks from $host:$port (executor id $execId)")
  try {
    val blockFetchStarter = new RetryingBlockFetcher.BlockFetchStarter {
      override def createAndStart(blockIds: Array[String], listener: BlockFetchingListener) {
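        // createClient(host, port) is where the remote executor's hostname must resolve;
        // a missing hosts entry makes this call fail, and control falls into the catch block below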
        val client = clientFactory.createClient(host, port)
        new OneForOneBlockFetcher(client, appId, execId, blockIds.toArray, listener).start()
      }
    }

    val maxRetries = transportConf.maxIORetries()
    if (maxRetries > 0) {
      // Note this Fetcher will correctly handle maxRetries == 0; we avoid it just in case there's
      // a bug in this code. We should remove the if statement once we're sure of the stability.
      new RetryingBlockFetcher(transportConf, blockFetchStarter, blockIds, listener).start()
    } else {
      blockFetchStarter.createAndStart(blockIds, listener)
    }
  } catch {
    case e: Exception =>
      logError("Exception while beginning fetchBlocks", e)
      blockIds.foreach(listener.onBlockFetchFailure(_, e))
  }
}

RetryingBlockFetcher

private void fetchAllOutstanding() {
  // Start by retrieving our shared state within a synchronized block.
  String[] blockIdsToFetch;
  int numRetries;
  RetryingBlockFetchListener myListener;
  synchronized (this) {
    blockIdsToFetch = outstandingBlocksIds.toArray(new String[outstandingBlocksIds.size()]);
    numRetries = retryCount;
    myListener = currentListener;
  }

  // Now initiate the fetch on all outstanding blocks, possibly initiating a retry if that fails.
  try {
    fetchStarter.createAndStart(blockIdsToFetch, myListener);
  } catch (Exception e) {
    logger.error(String.format("Exception while beginning fetch of %s outstanding blocks %s",
      blockIdsToFetch.length, numRetries > 0 ? "(after " + numRetries + " retries)" : ""), e);
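    // shouldRetry() is true only for IOExceptions while retryCount < maxRetries;
    // an unresolvable hostname keeps failing this way, so the fetch is retried over and over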

    if (shouldRetry(e)) {
      initiateRetry();
    } else {
      for (String bid : blockIdsToFetch) {
        listener.onBlockFetchFailure(bid, e);
      }
    }
  }
}
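A quick way to confirm this kind of failure is to check, from the node running the stuck task, whether the peer's hostname resolves at all. A minimal sketch (the default hostname below is hypothetical):

import java.net.InetAddress

object ResolveCheck {
  def main(args: Array[String]): Unit = {
    // hostname of the node we fail to fetch blocks from; "spark-new-01" is a made-up default
    val host = if (args.nonEmpty) args(0) else "spark-new-01"
    try {
      // resolves the hostname the same way the block-transfer client ultimately must
      println(s"$host -> ${InetAddress.getByName(host).getHostAddress}")
    } catch {
      case e: java.net.UnknownHostException =>
        println(s"$host does not resolve - add it to /etc/hosts or DNS ($e)")
    }
  }
}

If the hostname does not resolve, every fetch attempt fails with an IOException, and the retry loop above is exactly what shows up in the executor log.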

Reads from Kafka fail and retry endlessly


Cause:

A firewall blocked connections to port 9092, the Kafka broker port.
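To tell a firewall problem apart from a hostname problem, a plain TCP reachability check from the consumer host toward the broker is enough. A minimal sketch (the broker hostname below is hypothetical):

import java.net.{InetSocketAddress, Socket}

object PortCheck {
  def main(args: Array[String]): Unit = {
    // "kafka-broker-1" is a made-up default; pass your broker host as the first argument
    val host = if (args.nonEmpty) args(0) else "kafka-broker-1"
    val port = 9092
    val socket = new Socket()
    try {
      // a firewall rule typically shows up here as a connection timeout or "connection refused"
      socket.connect(new InetSocketAddress(host, port), 3000)
      println(s"$host:$port is reachable")
    } catch {
      case e: Exception => println(s"cannot reach $host:$port: $e")
    } finally {
      socket.close()
    }
  }
}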



Summary:

For problems like these, the executor logs usually contain the answer.
