Notes on a Hive Task Hang (1)
2020-03-12
井地儿
1 Background
Tasks submitted through beeline occasionally get stuck, waiting for a long time with no response. Below we try to analyze the problem and pinpoint where in the code it happens.
2 Analysis Steps
2.1 Locate the process <pid>
Here we already know it is the HiveServer2 process:
ps -ef | grep hiveserver2
or
jps
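For scripting the later steps, here is a minimal sketch (assuming jps is on the PATH and only one HiveServer2 instance runs on this host; the PID variable name is just for illustration) that captures the pid:

# jps -l prints "<pid> <main class>"; the HiveServer2 main class name contains "HiveServer2".
PID=$(jps -l | awk '/HiveServer2/ {print $1}')
echo "$PID"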
2.2 Locate the busy thread
top -Hp <pid>
As shown below, the PID of the busy thread is 117426:
top - 17:10:29 up 69 days, 39 min, 3 users, load average: 8.70, 11.67, 10.29
Threads: 604 total, 5 running, 599 sleeping, 0 stopped, 0 zombie
%Cpu(s): 14.7 us, 1.6 sy, 0.0 ni, 83.5 id, 0.0 wa, 0.0 hi, 0.2 si, 0.0 st
KiB Mem : 26389854+total, 60735220 free, 12119641+used, 81966912 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 13862755+avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
117426 hadoop 20 0 68.4g 54.0g 15208 R 99.7 21.5 131:39.68 java
46085 hadoop 20 0 68.4g 54.0g 15208 S 5.0 21.5 0:00.32 java
49034 hadoop 20 0 68.4g 54.0g 15208 S 5.0 21.5 0:00.15 java
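To capture the same information non-interactively (for example from a diagnostic script), a hedged alternative is top's batch mode, or ps with per-thread output sorted by CPU:

# One batch-mode snapshot of the threads in the process (same data as the interactive view above).
top -b -n 1 -Hp "$PID" | head -n 20
# Or list threads sorted by CPU usage, highest first.
ps -Lp "$PID" -o tid,pcpu,comm --sort=-pcpu | head -n 10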
2.3 Capture and analyze a jstack dump of the server process
Capture the jstack output into the file jstack.js:
jstack -l <pid> > jstack.js
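One practical detail, stated here as an assumption rather than something from the original run: jstack normally has to attach as the same user that owns the JVM, which is hadoop according to the top output, so the capture may need to run as that user:

# Attach as the process owner; the shell doing the redirect writes jstack.js to the current directory.
sudo -u hadoop jstack -l "$PID" > jstack.js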
The busy thread ID found in step 2.2 needs to be converted to hexadecimal: printf "%x\n" 117426
[hadoop@hadoop-server /tmp]$ printf "%x\n" 117426
1cab2
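The conversion and the lookup can be combined into a small sketch (the NID variable name is just illustrative):

# Convert the decimal thread id to hex and pull that thread's stack out of the dump.
NID=$(printf "%x" 117426)
grep -A 30 "nid=0x$NID" jstack.js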
Then search jstack.js for that thread (nid=0x1cab2):
The trace below shows it is hung in CombineFileInputFormat.getSplits, so we can now go to that code and analyze why. The preliminary judgment is that the input files are too large, which makes the split computation hang.
"Thread-17999" #19884 prio=5 os_prio=0 tid=0x00007f9021c4f800 nid=0x1cab2 runnable [0x00007f9001e74000]
java.lang.Thread.State: RUNNABLE
at org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:237)
at org.apache.hadoop.mapred.lib.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:76)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:309)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getCombineSplits(CombineHiveInputFormat.java:470)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:571)
at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:347)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:338)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1923)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:575)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:570)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1923)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:570)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:561)
at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:434)
at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:138)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:80)
Locked ownable synchronizers:
- None
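Before digging into the getSplits code itself, one quick way to test the "input files are too large / too many" hypothesis is to look at the file count and total size under the table's HDFS location. This is an illustrative sketch; the warehouse path and names are placeholders, not taken from the original job:

# DIR_COUNT, FILE_COUNT and CONTENT_SIZE for the table (or partition) being read.
hdfs dfs -count /user/hive/warehouse/<db>.db/<table>
# Human-readable total size.
hdfs dfs -du -s -h /user/hive/warehouse/<db>.db/<table>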