Hadoop: adding a new server to the cluster
2018-12-02  宁君26
Goal: add a new server, hadoop4, to the existing cluster. Current role layout:
| | hadoop1 | hadoop2 | hadoop3 |
| --- | --- | --- | --- |
| HDFS | NameNode, DataNode | SecondaryNameNode, DataNode | DataNode |
| YARN | NodeManager | ResourceManager | NodeManager |
1) Environment preparation
(1) Clone a virtual machine.
(2) Change its IP address and hostname, and update the hosts file.
(3) Add the new node to the xcall and xsync scripts so it is covered by cluster-wide commands and syncs.
(4) Distribute the hosts and profile files with xsync.
(5) Configure passwordless SSH login to the new node.
(6) Delete the files left over on the clone from the original HDFS file system:
/opt/module/hadoop-2.7.2/data
(The matching commands are sketched right after this list.)
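A minimal sketch of the matching commands (hostnamectl assumes CentOS 7, xcall/xsync are the cluster's own helper scripts, and syncing files under /etc needs root privileges; adjust to your environment):
[root@hadoop4 ~]# hostnamectl set-hostname hadoop4            # (2) set the new hostname
[root@hadoop4 ~]# vim /etc/hosts                              # (2) add the hadoop4 mapping (on every node)
[root@hadoop1 ~]# xsync /etc/hosts                            # (4) distribute hosts ...
[root@hadoop1 ~]# xsync /etc/profile                          # (4) ... and profile to all nodes
[atguigu@hadoop1 ~]$ ssh-copy-id hadoop4                      # (5) passwordless SSH from hadoop1 to the new node
[atguigu@hadoop4 ~]$ rm -rf /opt/module/hadoop-2.7.2/data     # (6) drop HDFS data inherited from the clone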
2) Steps to commission the new node
(1) On the NameNode host hadoop1, create a dfs.hosts file in the /opt/module/hadoop-2.7.2/etc/hadoop directory
[atguigu@hadoop1 hadoop]$ pwd
/opt/module/hadoop-2.7.2/etc/hadoop
[atguigu@hadoop1 hadoop]$ touch dfs.hosts
[atguigu@hadoop1 hadoop]$ vi dfs.hosts
Add the following hostnames (including the newly commissioned node):
hadoop1
hadoop2
hadoop3
hadoop4
(2) In the hdfs-site.xml configuration file on the NameNode host hadoop1, add the dfs.hosts property:
<property>
    <name>dfs.hosts</name>
    <value>/opt/module/hadoop-2.7.2/etc/hadoop/dfs.hosts</value>
</property>
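Before refreshing, the value the NameNode will read can be double-checked against the local configuration with the standard hdfs getconf command (output assumes the property above):
[atguigu@hadoop1 hadoop-2.7.2]$ hdfs getconf -confKey dfs.hosts
/opt/module/hadoop-2.7.2/etc/hadoop/dfs.hosts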
(3) Refresh the NameNode (on hadoop1):
[atguigu@hadoop1 hadoop-2.7.2]$ hdfs dfsadmin -refreshNodes
Refresh nodes successful
(4) Refresh the ResourceManager's list of nodes (on hadoop1):
[atguigu@hadoop1 hadoop-2.7.2]$ yarn rmadmin -refreshNodes
17/06/24 14:17:11 INFO client.RMProxy: Connecting to ResourceManager at hadoop103/192.168.1.103:8033
(5) Add the new hostname to the slaves file on the NameNode host
Add hadoop4; the file does not need to be distributed, since slaves is only read by the cluster start scripts, not by the running daemons:
hadoop1
hadoop2
hadoop3
hadoop4
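For reference, the append itself is a one-liner on hadoop1 (sketch; paths as used throughout this post):
[atguigu@hadoop1 hadoop-2.7.2]$ echo hadoop4 >> etc/hadoop/slaves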
(6) Start the new DataNode and NodeManager individually (on hadoop4):
[atguigu@hadoop4 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-atguigu-datanode-hadoop4.out
[atguigu@hadoop4 hadoop-2.7.2]$ sbin/yarn-daemon.sh start nodemanager
starting nodemanager, logging to /opt/module/hadoop-2.7.2/logs/yarn-atguigu-nodemanager-hadoop4.out
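A quick sanity check on hadoop4 is jps; both DataNode and NodeManager should appear in its output:
[atguigu@hadoop4 hadoop-2.7.2]$ jps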
(7) Check in a web browser that the new node shows up and everything looks OK.
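Assuming the default Hadoop 2.x web ports, the pages to check are the NameNode UI for HDFS and the ResourceManager UI for YARN:
http://hadoop1:50070   (NameNode UI, the Datanodes tab should now list hadoop4)
http://hadoop2:8088    (ResourceManager UI, the Nodes page should now list hadoop4)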
3) If data is not evenly distributed, the cluster can be rebalanced with the balancer
On hadoop1:
[atguigu@hadoop1 sbin]$ ./start-balancer.sh
starting balancer, logging to /opt/module/hadoop-2.7.2/logs/hadoop-atguigu-balancer-hadoop1.out
Time Stamp Iteration# Bytes Already Moved Bytes Left To Move Bytes Being Moved
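start-balancer.sh also accepts a -threshold parameter (the allowed disk-usage difference between DataNodes, in percent, default 10); for a tighter balance a smaller value can be passed, for example:
[atguigu@hadoop1 sbin]$ ./start-balancer.sh -threshold 5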