03. Hadoop: HDFS deployment with a second name node (secondarynamenode added)
This section covers:
Deploying HDFS with a second name node (adding a secondarynamenode node)
1. System environment:
OS: CentOS Linux release 7.5.1804 (Core)
CPU: 2 cores
Memory: 1 GB
Run-as user: root
JDK version: 1.8.0_252
Hadoop version: cdh5.16.2
2. Cluster node roles:
172.26.37.245 node1.hadoop.com namenode
172.26.37.246 node2.hadoop.com datanode
172.26.37.247 node3.hadoop.com datanode
172.26.37.248 node4.hadoop.com secondarynamenode
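Each node must be able to resolve the others' hostnames. Typical /etc/hosts entries for this cluster (an assumption; skip if DNS already provides these names):

```
172.26.37.245 node1.hadoop.com
172.26.37.246 node2.hadoop.com
172.26.37.247 node3.hadoop.com
172.26.37.248 node4.hadoop.com
```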
I. Installation
1. On the secondarynamenode node
# yum install hadoop-hdfs-secondarynamenode -y
II. Configuration files
1. namenode node
Changes to hdfs-site.xml
# vi /etc/hadoop/conf/hdfs-site.xml
Append the following:
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>node4.hadoop.com:50090</value>
</property>
Full contents after the change:
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///data/hdfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///data/hdfs/data</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>node4.hadoop.com:50090</value>
</property>
</configuration>
2. secondarynamenode node
core-site.xml settings
# vi /etc/hadoop/conf/core-site.xml
Set the following (full contents shown):
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://node1.hadoop.com:8020</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/tmp</value>
</property>
<property>
<name>fs.checkpoint.period</name>
<value>60</value>
</property>
<property>
<name>fs.checkpoint.size</name>
<value>67108864</value>
</property>
<property>
<name>fs.checkpoint.dir</name>
<value>/app/user/hdfs/secondaryname</value>
</property>
</configuration>
Explanation of the core-site.xml settings
Declares the default HDFS filesystem:
<property>
<name>fs.defaultFS</name>
<value>hdfs://node1.hadoop.com:8020</value>
</property>
Base directory for Hadoop temporary data, /home/hadoop/tmp:
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/tmp</value>
</property>
Interval at which the secondarynamenode checkpoints the namenode metadata; the default is 3600 seconds (set to 60 here for quick testing):
<property>
<name>fs.checkpoint.period</name>
<value>60</value>
</property>
Maximum size of the edits log; once exceeded, a checkpoint is forced even if the checkpoint interval has not yet elapsed. The default is 64 MB:
<property>
<name>fs.checkpoint.size</name>
<value>67108864</value>
</property>
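The value 67108864 is simply the 64 MB default expressed in bytes; a quick check:

```shell
# fs.checkpoint.size is given in bytes; 64 MB = 64 * 1024 * 1024.
SIZE=$((64 * 1024 * 1024))
echo "$SIZE"   # 67108864
```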
Directory where checkpoint data is stored; if unset, it defaults to a path under hadoop.tmp.dir:
<property>
<name>fs.checkpoint.dir</name>
<value>/app/user/hdfs/secondaryname</value>
</property>
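Note that the fs.checkpoint.* keys are the Hadoop 1 names. CDH5 ships the Hadoop 2 line, where they are deprecated in favor of dfs.namenode.checkpoint.* equivalents (the old names are still accepted; verify the mapping against your version's deprecated-properties list):

```
fs.checkpoint.period  ->  dfs.namenode.checkpoint.period
fs.checkpoint.dir     ->  dfs.namenode.checkpoint.dir
```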
hdfs-site.xml settings
# vi /etc/hadoop/conf/hdfs-site.xml
Set the following:
<configuration>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>node4.hadoop.com:50090</value>
</property>
<property>
<name>dfs.namenode.http-address</name>
<value>0.0.0.0:50070</value>
</property>
</configuration>
Create the checkpoint directory (it must match the fs.checkpoint.dir value above) and set ownership:
# mkdir -p /app/user/hdfs/secondaryname
# chown -R hdfs:hdfs /app/user/hdfs/secondaryname
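A small pre-flight sketch, shown against a scratch directory (on the real node, substitute the configured fs.checkpoint.dir), to confirm the directory exists and is writable before starting the service:

```shell
# Scratch-directory illustration; the real path would be the configured
# fs.checkpoint.dir (e.g. /app/user/hdfs/secondaryname).
DIR=$(mktemp -d)/secondaryname
mkdir -p "$DIR"
if [ -d "$DIR" ] && [ -w "$DIR" ]; then
  STATE=writable
else
  STATE=not-writable
fi
echo "$STATE"
```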
III. Start the services and verify
1. namenode node
# systemctl restart hadoop-hdfs-namenode
# systemctl status hadoop-hdfs-namenode
2. secondarynamenode node
# systemctl start hadoop-hdfs-secondarynamenode
# systemctl status hadoop-hdfs-secondarynamenode
3. Verification
On the secondarynamenode node, enter the checkpoint directory under /home/hadoop/tmp (here the data lands under the default location beneath hadoop.tmp.dir):
# cd /home/hadoop/tmp/dfs/namesecondary
# ls
current in_use.lock
# ll current/
total 24
-rw-r--r-- 1 hdfs hdfs 42 Jun 14 23:08 edits_0000000000000000004-0000000000000000005
-rw-r--r-- 1 hdfs hdfs 395 Jun 14 23:08 fsimage_0000000000000000003
-rw-r--r-- 1 hdfs hdfs 62 Jun 14 23:08 fsimage_0000000000000000003.md5
-rw-r--r-- 1 hdfs hdfs 395 Jun 14 23:08 fsimage_0000000000000000005
-rw-r--r-- 1 hdfs hdfs 62 Jun 14 23:08 fsimage_0000000000000000005.md5
-rw-r--r-- 1 hdfs hdfs 205 Jun 14 23:08 VERSION
Seeing these files confirms the checkpoint data has been synced over from the namenode.
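The numeric suffix on each fsimage file is the last transaction id it covers, so the newest checkpoint can be picked out by sorting those suffixes. A sketch using the listing above as hard-coded sample data:

```shell
# Sample file names taken from the listing above; in practice this would
# come from `ls` in the checkpoint current/ directory.
FILES="fsimage_0000000000000000003
fsimage_0000000000000000005"
# Strip the prefix and sort numerically to find the latest checkpoint txid.
LATEST=$(printf '%s\n' "$FILES" | sed 's/^fsimage_//' | sort -n | tail -1)
echo "$LATEST"   # 0000000000000000005
```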
IV. Create a directory to verify HDFS is working
From any host (namenode, secondarynamenode, or datanode):
# sudo -u hdfs hadoop fs -mkdir /secondary
# sudo -u hdfs hadoop fs -ls /
Found 3 items
drwxr-xr-x - hdfs supergroup 0 2020-06-14 23:31 /secondary
drwxr-xr-x - hdfs supergroup 0 2020-06-14 23:32 /secondaryt
drwxrwxrwt - hdfs supergroup 0 2020-06-14 01:48 /tmp
V. Web check on the secondarynamenode node
http://172.26.37.248:50090/ — the page shows no information.