A Kubernetes error caused by a misconfigured memory limit
Today, after adjusting the memory resources of a pod, it remained stuck in the ContainerCreating state. Running describe against the Pod showed the following.
[root@master-01 ~]# kubectl describe pod busybox-deploy-b9b6d4ff9-vwfnq
...
Events:
Type     Reason                  Age                 From                      Message
----     ------                  ----                ----                      -------
Normal   Scheduled               40s                 default-scheduler         Successfully assigned default/busybox-deploy-b9b6d4ff9-vwfnq to 192.168.209.132
Warning  FailedCreatePodSandBox  28s (x4 over 37s)   kubelet, 192.168.209.132  Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "busybox-deploy-b9b6d4ff9-vwfnq": Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:297: getting the final child's pid from pipe caused \"read init-p: connection reset by peer\"": unknown
Warning  FailedCreatePodSandBox  26s (x9 over 38s)   kubelet, 192.168.209.132  Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "busybox-deploy-b9b6d4ff9-vwfnq": Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:293: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown
Normal   SandboxChanged          26s (x12 over 37s)  kubelet, 192.168.209.132  Pod sandbox changed, it will be killed and re-created.
The amount of information that can be read out of the Events above is quite limited:
connection reset by peer: the connection was reset by the other end.
Pod sandbox changed, it will be killed and re-created: the Pod environment bootstrapped by the pause container has changed, so the pause bootstrap for the Pod is recreated.
copying bootstrap data to pipe caused "write init-p: broken pipe"": unknown: according to Google, this points to an incompatibility between Docker and the kernel.
See https://github.com/docker/for-linux/issues/595
These errors alone do not reveal the root cause; they only tell us that creating the SandBox keeps failing. The next step is to look at the kubelet logs.
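On this systemd-managed node (the excerpts below look like journalctl output), the kubelet log can typically be followed with something like the following; this is only a sketch, and the unit name may differ in other setups:

journalctl -u kubelet -f                                         # follow the kubelet service log live
journalctl -u kubelet --since "10 min ago" | grep -i sandbox     # narrow down to sandbox-related entries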
4月 01 16:22:06 master-01 kubelet[69210]: I0401 16:22:06.994710 69210 operation_generator.go:567] MountVolume.SetUp succeeded for volume "default-token-tb5bg" (UniqueName: "kubernetes.io/secret/968128e3-5456-11e9-8874-000c29721372-default-token-tb5bg") pod "busybox-deploy-bcd9cc57f-k2pzt" (UID: "968128e3-5456-11e9-8874-000c29721372")
4月 01 16:22:07 master-01 kubelet[69210]: I0401 16:22:07.009632 69210 handler.go:325] Added event &{/kubepods/pod968128e3-5456-11e9-8874-000c29721372/88060715d0bfa605b05ba1ec0dfc637b112f328be76e38537f31a778c52d54ca 2019-04-01 16:22:07.381370221 +0800 CST m=+9485.400866715 oom {<nil>}}
4月 01 16:22:07 master-01 kubelet[69210]: I0401 16:22:07.009673 69210 manager.go:1242] Created an OOM event in container "/kubepods/pod968128e3-5456-11e9-8874-000c29721372/88060715d0bfa605b05ba1ec0dfc637b112f328be76e38537f31a778c52d54ca" at 2019-04-01 16:22:07.381370221 +0800 CST m=+9485.400866715
4月 01 16:22:07 master-01 kubelet[69210]: I0401 16:22:07.009682 69210 handler.go:325] Added event &{/kubepods/pod968128e3-5456-11e9-8874-000c29721372 2019-04-01 16:22:07.381370221 +0800 CST m=+9485.400866715 oomKill {0xc000807480}}
4月 01 16:22:07 master-01 kubelet[69210]: E0401 16:22:07.087320 69210 remote_runtime.go:96] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "busybox-deploy-bcd9cc57f-k2pzt": Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:293: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown
4月 01 16:22:07 master-01 kubelet[69210]: E0401 16:22:07.087372 69210 kuberuntime_sandbox.go:65] CreatePodSandbox for pod "busybox-deploy-bcd9cc57f-k2pzt_default(968128e3-5456-11e9-8874-000c29721372)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "busybox-deploy-bcd9cc57f-k2pzt": Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:293: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown
4月 01 16:22:07 master-01 kubelet[69210]: E0401 16:22:07.087389 69210 kuberuntime_manager.go:662] createPodSandbox for pod "busybox-deploy-bcd9cc57f-k2pzt_default(968128e3-5456-11e9-8874-000c29721372)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "busybox-deploy-bcd9cc57f-k2pzt": Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:293: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown
The kubelet log tells much the same story as the describe output, although tailing it makes the constant Sandbox re-creation much more tangible. It also shows OCI runtime errors, so the only place left to look is the Docker logs.
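Docker on this node also runs under systemd, so its log can likely be followed the same way; a sketch, assuming the service unit is named docker:

journalctl -u docker -f                                             # follow dockerd log entries live
journalctl -u docker --since "10 min ago" | grep -i "OCI runtime"   # focus on runtime errors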
4月 01 16:43:03 master-01 dockerd[68740]: time="2019-04-01T16:43:03.379216541+08:00" level=error msg="Handler for POST /v1.38/containers/33e5014e5a63ebd3a07d0f238fe5ef4dd23c1a28490ef59f16a1f6e3efd5c050/start returned error: OCI runtime create failed: container_linux.go:344: starting container process caused \"process_linux.go:293: copying bootstrap data to pipe caused \\\"write init-p: broken pipe\\\"\": unknown"
4月 01 16:43:04 master-01 dockerd[68740]: time="2019-04-01T16:43:04.421003016+08:00" level=error msg="stream copy error: reading from a closed fifo"
4月 01 16:43:04 master-01 dockerd[68740]: time="2019-04-01T16:43:04.421724442+08:00" level=error msg="stream copy error: reading from a closed fifo"
4月 01 16:43:04 master-01 dockerd[68740]: time="2019-04-01T16:43:04.459995603+08:00" level=error msg="f51e1feaed0f43d47447802bee4b356a0ce3866263dd3573d1dae88a84e77d91 cleanup: failed to delete container from containerd: no such container"
4月 01 16:43:04 master-01 dockerd[68740]: time="2019-04-01T16:43:04.460063726+08:00" level=error msg="Handler for POST /v1.38/containers/f51e1feaed0f43d47447802bee4b356a0ce3866263dd3573d1dae88a84e77d91/start returned error: OCI runtime create failed: container_linux.go:344: starting container process caused \"process_linux.go:297: getting the final child's pid from pipe caused \\\"read init-p: connection reset by peer\\\"\": unknown"
4月 01 16:43:05 master-01 dockerd[68740]: time="2019-04-01T16:43:05.508063189+08:00" level=error msg="stream copy error: reading from a closed fifo"
4月 01 16:43:05 master-01 dockerd[68740]: time="2019-04-01T16:43:05.508756604+08:00" level=error msg="stream copy error: reading from a closed fifo"
4月 01 16:43:05 master-01 dockerd[68740]: time="2019-04-01T16:43:05.556766523+08:00" level=error msg="a25dcbe45d6cde5e8bcf141c8a3e88accb1a2660390f2effa1ca8d6364c0d504 cleanup: failed to delete container from containerd: no such container"
4月 01 16:43:05 master-01 dockerd[68740]: time="2019-04-01T16:43:05.556842037+08:00" level=error msg="Handler for POST /v1.38/containers/a25dcbe45d6cde5e8bcf141c8a3e88accb1a2660390f2effa1ca8d6364c0d504/start returned error: OCI runtime create failed: container_linux.go:344: starting container process caused \"process_linux.go:297: getting the final child's pid from pipe caused \\\"read init-p: connection reset by peer\\\"\": unknown"
4月 01 16:43:06 master-01 dockerd[68740]: time="2019-04-01T16:43:06.579456571+08:00" level=error msg="stream copy error: reading from a closed fifo"
4月 01 16:43:06 master-01 dockerd[68740]: time="2019-04-01T16:43:06.579428794+08:00" level=error msg="stream copy error: reading from a closed fifo"
4月 01 16:43:06 master-01 dockerd[68740]: time="2019-04-01T16:43:06.618244855+08:00" level=error msg="df537cafea28982180b226574bc0534416042001653829017f514576efd96a3c cleanup: failed to delete container from containerd: no such container"
4月 01 16:43:06 master-01 dockerd[68740]: time="2019-04-01T16:43:06.618342042+08:00" level=error msg="Handler for POST /v1.38/containers/df537cafea28982180b226574bc0534416042001653829017f514576efd96a3c/start returned error: OCI runtime create failed: container_linux.go:344: starting container process caused \"process_linux.go:297: getting the final child's pid from pipe caused \\\"read init-p: connection reset by peer\\\"\": unknown"
4月 01 16:43:07 master-01 dockerd[68740]: time="2019-04-01T16:43:07.721668002+08:00" level=error msg="stream copy error: reading from a closed fifo"
4月 01 16:43:07 master-01 dockerd[68740]: time="2019-04-01T16:43:07.724387734+08:00" level=error msg="stream copy error: reading from a closed fifo"
4月 01 16:43:07 master-01 dockerd[68740]: time="2019-04-01T16:43:07.768836577+08:00" level=error msg="de574260f24893b4385ce0cb4e530aadc153fac4b8549b85ba6da47070f5091a cleanup: failed to delete container from containerd: no such container"
4月 01 16:43:07 master-01 dockerd[68740]: time="2019-04-01T16:43:07.768881236+08:00" level=error msg="Handler for POST /v1.38/containers/de574260f24893b4385ce0cb4e530aadc153fac4b8549b85ba6da47070f5091a/start returned error: OCI runtime create failed: container_linux.go:344: starting container process caused \"process_linux.go:297: getting the final child's pid from pipe caused \\\"read init-p: connection reset by peer\\\"\": unknown"
4月 01 16:43:08 master-01 dockerd[68740]: time="2019-04-01T16:43:08.747728984+08:00" level=error msg="stream copy error: reading from a closed fifo"
4月 01 16:43:08 master-01 dockerd[68740]: time="2019-04-01T16:43:08.747780229+08:00" level=error msg="stream copy error: reading from a closed fifo"
The Docker log offers little beyond the connection reset by peer message we have already seen many times. What the Docker log and the kubelet log have in common is that the kubelet keeps recreating the Sandbox over and over.
Running docker ps -a reveals a large number of pause containers stuck in the Created state (a quick way to count them is shown after the listing):
CONTAINER ID   IMAGE                                             COMMAND   CREATED                  STATUS    PORTS   NAMES
fdfec614a029   hub.test.tech/library/pod-infrastructure:latest   "/pod"    Less than a second ago   Created           k8s_POD_busybox-deploy-bcd9cc57f-k2pzt_default_968128e3-5456-11e9-8874-000c29721372_3660
56be0890536c   hub.test.tech/library/pod-infrastructure:latest   "/pod"    1 second ago             Created           k8s_POD_busybox-deploy-bcd9cc57f-k2pzt_default_968128e3-5456-11e9-8874-000c29721372_3659
cf8cb3311a60   hub.test.tech/library/pod-infrastructure:latest   "/pod"    2 seconds ago            Created           k8s_POD_busybox-deploy-bcd9cc57f-k2pzt_default_968128e3-5456-11e9-8874-000c29721372_3658
bedd3580b8a2   hub.test.tech/library/pod-infrastructure:latest   "/pod"    4 seconds ago            Created           k8s_POD_busybox-deploy-bcd9cc57f-k2pzt_default_968128e3-5456-11e9-8874-000c29721372_3657
11969320abcf   hub.test.tech/library/pod-infrastructure:latest   "/pod"    5 seconds ago            Created           k8s_POD_busybox-deploy-bcd9cc57f-k2pzt_default_968128e3-5456-11e9-8874-000c29721372_3656
c7facaef9671   hub.test.tech/library/pod-infrastructure:latest   "/pod"    6 seconds ago            Created           k8s_POD_busybox-deploy-bcd9cc57f-k2pzt_default_968128e3-5456-11e9-8874-000c29721372_3655
38a23131e0e2   hub.test.tech/library/pod-infrastructure:latest   "/pod"    7 seconds ago            Created           k8s_POD_busybox-deploy-bcd9cc57f-k2pzt_default_968128e3-5456-11e9-8874-000c29721372_3654
1951dd1736ee   hub.test.tech/library/pod-infrastructure:latest   "/pod"    8 seconds ago            Created           k8s_POD_busybox-deploy-bcd9cc57f-k2pzt_default_968128e3-5456-11e9-8874-000c29721372_3653
dc3b3037c100   hub.test.tech/library/pod-infrastructure:latest   "/pod"    9 seconds ago            Created           k8s_POD_busybox-deploy-bcd9cc57f-k2pzt_default_968128e3-5456-11e9-8874-000c29721372_3652
266f48961a4e   hub.test.tech/library/pod-infrastructure:latest   "/pod"    10 seconds ago           Created           k8s_POD_busybox-deploy-bcd9cc57f-k2pzt_default_968128e3-5456-11e9-8874-000c29721372_3651
c56159f99738   hub.test.tech/library/pod-infrastructure:latest   "/pod"    11 seconds ago           Created           k8s_POD_busybox-deploy-bcd9cc57f-k2pzt_default_968128e3-5456-11e9-8874-000c29721372_3650
35fb00ce7522   hub.test.tech/library/pod-infrastructure:latest   "/pod"    12 seconds ago           Created           k8s_POD_busybox-deploy-bcd9cc57f-k2pzt_default_968128e3-5456-11e9-8874-000c29721372_3649
4445f2ce25e6   hub.test.tech/library/pod-infrastructure:latest   "/pod"    13 seconds ago           Created           k8s_POD_busybox-deploy-bcd9cc57f-k2pzt_default_968128e3-5456-11e9-8874-000c29721372_3648
890e1a7dfd53   hub.test.tech/library/pod-infrastructure:latest   "/pod"    14 seconds ago           Created           k8s_POD_busybox-deploy-bcd9cc57f-k2pzt_default_968128e3-5456-11e9-8874-000c29721372_3647
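To count these leftover sandbox containers, a filter along these lines can help; this is only a sketch, and the pod-infrastructure image name is specific to this cluster's private registry:

docker ps -a --filter status=created | grep pod-infrastructure | wc -l   # number of pause containers stuck in Created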
Checking dmesg -T turned up a large number of entries mentioning oom-killer, so the initial judgment was that memory was being exhausted and the kernel was actively killing processes (a filter for pulling these entries out is shown after the excerpt):
[一 4月 1 17:22:49 2019] Task in /kubepods/pod968128e3-5456-11e9-8874-000c29721372/0c04b9baa40f781fbeb4d53e0ca67101e6e6d588ecdcbec5874e753f1a42e457 killed as a result of limit of /kubepods/pod968128e3-5456-11e9-8874-000c29721372
[一 4月 1 17:22:49 2019] memory: usage 0kB, limit 0kB, failcnt 48032
[一 4月 1 17:22:49 2019] memory+swap: usage 0kB, limit 9007199254740988kB, failcnt 0
[一 4月 1 17:22:49 2019] kmem: usage 0kB, limit 9007199254740988kB, failcnt 0
[一 4月 1 17:22:49 2019] Memory cgroup stats for /kubepods/pod968128e3-5456-11e9-8874-000c29721372: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
[一 4月 1 17:22:49 2019] Memory cgroup stats for /kubepods/pod968128e3-5456-11e9-8874-000c29721372/0c04b9baa40f781fbeb4d53e0ca67101e6e6d588ecdcbec5874e753f1a42e457: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
[一 4月 1 17:22:49 2019] [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name
[一 4月 1 17:22:49 2019] [121875] 0 121875 5749 1418 16 0 -999 5
[一 4月 1 17:22:49 2019] Memory cgroup out of memory: Kill process 121875 (5) score 0 or sacrifice child
[一 4月 1 17:22:49 2019] Killed process 121875 (5) total-vm:22996kB, anon-rss:3028kB, file-rss:432kB, shmem-rss:2212kB
[一 4月 1 17:22:49 2019] docker0: port 4(veth0a5cbbd) entered disabled state
[一 4月 1 17:22:49 2019] device veth0a5cbbd left promiscuous mode
[一 4月 1 17:22:49 2019] docker0: port 4(veth0a5cbbd) entered disabled state
[一 4月 1 17:22:50 2019] docker0: port 4(veth3bbadf8) entered blocking state
[一 4月 1 17:22:50 2019] docker0: port 4(veth3bbadf8) entered disabled state
[一 4月 1 17:22:50 2019] device veth3bbadf8 entered promiscuous mode
[一 4月 1 17:22:50 2019] IPv6: ADDRCONF(NETDEV_UP): veth3bbadf8: link is not ready
[一 4月 1 17:22:50 2019] docker0: port 4(veth3bbadf8) entered blocking state
[一 4月 1 17:22:50 2019] docker0: port 4(veth3bbadf8) entered forwarding state
[一 4月 1 17:22:50 2019] docker0: port 4(veth3bbadf8) entered disabled state
[一 4月 1 17:22:50 2019] SLUB: Unable to allocate memory on node -1 (gfp=0xd0)
[一 4月 1 17:22:50 2019] cache: kmalloc-4096(19:1db6c350afbbf07ad909f2c9f896a69333e182d83d513fc283d728cd7d6cf0b5), object size: 4096, buffer size: 4096, default order: 3, min order: 0
[一 4月 1 17:22:50 2019] node 0: slabs: 0, objs: 0, free: 0
[一 4月 1 17:22:50 2019] exe invoked oom-killer: gfp_mask=0xd0, order=0, oom_score_adj=-999
[一 4月 1 17:22:50 2019] exe cpuset=1db6c350afbbf07ad909f2c9f896a69333e182d83d513fc283d728cd7d6cf0b5 mems_allowed=0
[一 4月 1 17:22:50 2019] CPU: 1 PID: 121918 Comm: exe Kdump: loaded Tainted: G ------------ T 3.10.0-957.el7.x86_64 #1
[一 4月 1 17:22:50 2019] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 04/13/2018
[一 4月 1 17:22:50 2019] Call Trace:
[一 4月 1 17:22:50 2019] [<ffffffff94f61dc1>] dump_stack+0x19/0x1b
[一 4月 1 17:22:50 2019] [<ffffffff94f5c7ea>] dump_header+0x90/0x229
[一 4月 1 17:22:50 2019] [<ffffffff949b9dc6>] ? find_lock_task_mm+0x56/0xc0
[一 4月 1 17:22:50 2019] [<ffffffff94a31118>] ? try_get_mem_cgroup_from_mm+0x28/0x60
[一 4月 1 17:22:50 2019] [<ffffffff949ba274>] oom_kill_process+0x254/0x3d0
[一 4月 1 17:22:50 2019] [<ffffffff94a34f16>] mem_cgroup_oom_synchronize+0x546/0x570
[一 4月 1 17:22:50 2019] [<ffffffff94a34390>] ? mem_cgroup_charge_common+0xc0/0xc0
[一 4月 1 17:22:50 2019] [<ffffffff949bab04>] pagefault_out_of_memory+0x14/0x90
[一 4月 1 17:22:50 2019] [<ffffffff94f5acf2>] mm_fault_error+0x6a/0x157
[一 4月 1 17:22:50 2019] [<ffffffff94f6f7a8>] __do_page_fault+0x3c8/0x500
[一 4月 1 17:22:50 2019] [<ffffffff94f6f915>] do_page_fault+0x35/0x90
[一 4月 1 17:22:50 2019] [<ffffffff94f6b758>] page_fault+0x28/0x30
[一 4月 1 17:22:50 2019] Task in /kubepods/pod968128e3-5456-11e9-8874-000c29721372/1db6c350afbbf07ad909f2c9f896a69333e182d83d513fc283d728cd7d6cf0b5 killed as a result of limit of /kubepods/pod968128e3-5456-11e9-8874-000c29721372
[一 4月 1 17:22:50 2019] memory: usage 0kB, limit 0kB, failcnt 48062
[一 4月 1 17:22:50 2019] memory+swap: usage 0kB, limit 9007199254740988kB, failcnt 0
[一 4月 1 17:22:50 2019] kmem: usage 0kB, limit 9007199254740988kB, failcnt 0
[一 4月 1 17:22:50 2019] Memory cgroup stats for /kubepods/pod968128e3-5456-11e9-8874-000c29721372: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
[一 4月 1 17:22:50 2019] Memory cgroup stats for /kubepods/pod968128e3-5456-11e9-8874-000c29721372/1db6c350afbbf07ad909f2c9f896a69333e182d83d513fc283d728cd7d6cf0b5: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
[一 4月 1 17:22:50 2019] [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name
[一 4月 1 17:22:50 2019] [121918] 0 121918 5749 1481 15 0 -999 exe
[一 4月 1 17:22:50 2019] Memory cgroup out of memory: Kill process 121918 (exe) score 0 or sacrifice child
[一 4月 1 17:22:50 2019] Killed process 121918 (exe) total-vm:22996kB, anon-rss:3052kB, file-rss:2872kB, shmem-rss:0kB
[一 4月 1 17:22:50 2019] docker0: port 4(veth3bbadf8) entered disabled state
[一 4月 1 17:22:50 2019] device veth3bbadf8 left promiscuous mode
[一 4月 1 17:22:50 2019] docker0: port 4(veth3bbadf8) entered disabled state
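To pull only the OOM-related entries out of a long kernel log buffer, something like the following filter can be used (a sketch):

dmesg -T | grep -iE "oom-killer|Memory cgroup out of memory"   # show only the OOM kill evidence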
This kind of situation is not very common. Generally speaking there are two types of OOM kill:
1. A process inside the pod exceeds the value specified in the pod's Limit, triggering an OOM kill; in this case the pod's exit Reason shows OOMKilled.
2. A process inside the pod sets its own memory ceiling, for example a JVM memory limit of 2G while the pod Limit is 6G; if the program then drives memory usage above 2G, this can also trigger an OOM kill.
The difference between these two kinds of OOM kill is that the first shows up directly in the pod's Events, whereas the second cannot be found in the Events at all; only the host's dmesg contains the invoked oom-killer entries. One way to check both places from the command line is sketched below.
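A rough sketch for telling the two cases apart; the <pod-name> placeholder is illustrative:

# Case 1: the OOMKilled reason is recorded on the pod itself
kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
kubectl describe pod <pod-name> | grep -i oomkilled
# Case 2: nothing on the pod; only the host's dmesg (see the filter above) mentions "invoked oom-killer"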
This time it looks like the second case, so I immediately describe the pod again to check the Limit settings:
[root@master-01 ~]# kubectl describe pod busybox-deploy-b9b6d4ff9-vwfnq
And there is the cause: the memory Limit. The output shows a memory Limit of 300m, because the resource was created with memory: 300m. The unit suffixes are not interchangeable, though: for CPU, m means millicores, while memory should use byte-based units such as Mi. A memory quantity of 300m is read as 300 milli-bytes, i.e. 0.3 bytes, so the pod's effective memory limit is essentially zero. A corrected spec is sketched below.
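For reference, a minimal Pod sketch showing a corrected resources block; the names and image are illustrative, only the unit suffixes matter:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-mem-demo            # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox:latest
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: 100m                   # for CPU, "m" = millicores, so 100m = 0.1 core
        memory: 100Mi
      limits:
        cpu: 300m
        memory: 300Mi               # the bug: this had been written as "300m",
                                    # which Kubernetes reads as 0.3 bytes, not 300 MiB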
In theory, going by past experience, this situation (actual memory usage exceeding the memory Limit) should belong to the first category and show up in the Events as a Terminated state with reason OOMKilled. In practice, however, there is no OOM kill entry in the Events and the pod sits in ContainerCreating.
A pod that ends up Terminated with OOMKilled was bootstrapped successfully first and only then ran out of memory.
A pod stuck in ContainerCreating because the system invoked oom-killer was never created successfully at all: even the pause container that bootstraps the Pod is killed by the cgroup memory limit before it can come up. The cgroup evidence for this can also be read directly on the node, as sketched below.
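A sketch for this particular host, assuming cgroup v1 with the cgroupfs driver; with the systemd cgroup driver the path would look different (e.g. under kubepods.slice), so treat the path as an assumption:

cat /sys/fs/cgroup/memory/kubepods/pod968128e3-5456-11e9-8874-000c29721372/memory.limit_in_bytes
# a tiny, page-rounded value here would be consistent with the "memory: usage 0kB, limit 0kB" lines in dmesg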
Note: get the units right when adjusting resources, otherwise you can run into very puzzling problems. A quick way to double-check the value that actually got applied is sketched below.
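After correcting the unit and re-applying the manifest, the effective limit can be verified; a sketch, reusing the illustrative names from the example above:

kubectl apply -f busybox-mem-demo.yaml
kubectl get pod busybox-mem-demo -o jsonpath='{.spec.containers[0].resources.limits.memory}'   # expect: 300Mi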