Hadoop 3.1.3 (Part 2): Commands

2021-07-22  by _大叔_

The previous posts covered installing Hadoop in pseudo-distributed and fully distributed mode; next, let's look at the commands Hadoop provides.

Create a test directory

[root@node113 hadoop-3.1.3]# hadoop fs -mkdir /test

List the files in the root directory

[root@node113 hadoop-3.1.3]# hadoop fs -ls /
Found 1 items
drwxr-xr-x   - root supergroup          0 2021-05-08 13:22 /test

Copy the file /home/tmp/biguncle from the Linux filesystem into the HDFS /test directory

[root@node113 tmp]# hadoop fs -put /home/tmp/biguncle /test

Check how many blocks the file was split into (-blocks) and where each block is stored (-locations)

[root@node113 home]# hadoop fsck /test/biguncle -files -blocks -locations
Connecting to namenode via http://node113:9870/fsck?ugi=root&files=1&blocks=1&locations=1&path=%2Ftest%2Fbiguncle
FSCK started by root (auth:SIMPLE) from /127.0.0.1 for path /test/biguncle at Sat May 08 13:50:40 CST 2021
# biguncle is 0 bytes, with a replication factor of 1 and 0 blocks
/test/biguncle 0 bytes, replicated: replication=1, 0 block(s):  OK


Status: HEALTHY
 Number of data-nodes:  1
 Number of racks:               1
 Total dirs:                    0
 Total symlinks:                0

Replicated Blocks:
 Total size:    0 B
 Total files:   1
 Total blocks (validated):      0
 Minimally replicated blocks:   0
 Over-replicated blocks:        0
 Under-replicated blocks:       0
 Mis-replicated blocks:         0
 Default replication factor:    1
 Average block replication:     0.0
 Missing blocks:                0
 Corrupt blocks:                0
 Missing replicas:              0

Erasure Coded Block Groups:
 Total size:    0 B
 Total files:   0
 Total block groups (validated):        0
 Minimally erasure-coded block groups:  0
 Over-erasure-coded block groups:       0
 Under-erasure-coded block groups:      0
 Unsatisfactory placement block groups: 0
 Average block group size:      0.0
 Missing block groups:          0
 Corrupt block groups:          0
 Missing internal blocks:       0
FSCK ended at Sat May 08 13:50:40 CST 2021 in 9 milliseconds


The filesystem under path '/test/biguncle' is HEALTHY

Download the biguncle file from HDFS into the Linux /home directory

[root@node113 home]# hadoop fs -get /test/biguncle /home
[root@node113 home]# ls
biguncle  tmp

Delete a specific file inside the HDFS /test directory

[root@node113 home]# hadoop fs -rm /test/biguncle
Deleted /test/biguncle

Delete the test directory; this only succeeds if the directory is empty

[root@node113 home]# hadoop fs -rmdir /test

Delete the test directory even if it still contains files

[root@node113 home]# hadoop fs -rmr /test
rmr: DEPRECATED: Please use '-rm -r' instead.
Deleted /test

View the contents of the biguncle file in the test directory

[root@node113 home]# hadoop fs -cat /test/biguncle
我再做一个测试

View the end of the biguncle file in the test directory; -tail shows the last kilobyte of the file (not a line count)

[root@node113 home]# hadoop fs -tail /test/biguncle
我再做一个测试

Move a file in HDFS, or rename it

[root@node113 home]# hadoop fs -mv /test/biguncle /test1/bu

Run a jar file with Hadoop

hadoop jar xxx.jar

Create an empty file in HDFS; it must be created inside an existing directory

[root@node113 home]# hadoop fs -touchz /test/bbb.txt

Merge all files under a directory into a single file and download it to Linux

[root@node113 home]# hadoop fs -ls /test
Found 3 items
-rw-r--r--   1 root supergroup          0 2021-05-08 14:19 /test/bbb.txt
-rw-r--r--   1 root supergroup          0 2021-05-08 14:22 /test/ccc.txt
-rw-r--r--   1 root supergroup          0 2021-05-08 14:23 /test/ddd.txt
[root@node113 home]# hadoop fs -getmerge /test /home/tmp/fff.txt
[root@node113 tmp]# ls
fff.txt

Copy a file to another directory within HDFS

[root@node113 tmp]# hadoop fs -cp /test/bbb.txt /test1

Check the size of a file, or of every entry under a directory; the first column is the raw size, the second the space consumed across all replicas

[root@node113 tmp]# hadoop fs -du /test
0  0  /test/bbb.txt
0  0  /test/ccc.txt
0  0  /test/ddd.txt

Recursively list all files under a directory (-lsr is deprecated in Hadoop 3.x; prefer -ls -R)

[root@node113 tmp]# hadoop fs -lsr /
drwxr-xr-x   - root supergroup          0 2021-05-08 14:23 /test
-rw-r--r--   1 root supergroup          0 2021-05-08 14:19 /test/bbb.txt
-rw-r--r--   1 root supergroup          0 2021-05-08 14:22 /test/ccc.txt
-rw-r--r--   1 root supergroup          0 2021-05-08 14:23 /test/ddd.txt
drwxr-xr-x   - root supergroup          0 2021-05-08 14:26 /test1
-rw-r--r--   1 root supergroup          0 2021-05-08 14:26 /test1/bbb.txt
-rw-r--r--   1 root supergroup         22 2021-05-08 14:13 /test1/bu

Manually merge the fsimage and edits metadata files. Note that -rollEdits only rolls the current edit log segment (closes it and starts a new one); the merge itself happens at the next checkpoint. In Hadoop 3.x, use hdfs dfsadmin, as the hadoop dfsadmin form is deprecated.

hdfs dfsadmin -rollEdits