Hadoop 2.6.0-cdh5.7 Pseudo-Distributed Installation (2018-12)

2019-04-16  程序猿TT

1. Install the JDK

Unpack the JDK and add it to the environment (e.g. in ~/.bash_profile):

export JAVA_HOME=/home/hadoop/app/jdk1.8.0_191
export PATH=$JAVA_HOME/bin:$PATH
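After editing the profile, reload it and check that the JDK is picked up. A minimal check, assuming the JDK was unpacked to the path above:

```shell
# Reload the profile so the new JAVA_HOME takes effect in this shell
source ~/.bash_profile

# Verify the environment points at the unpacked JDK
echo $JAVA_HOME
java -version
```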

2. Install SSH and set up passwordless login (the start/stop scripts launch the daemons over ssh)
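The usual key setup looks like the following; this is a sketch for the hadoop user, assuming sshd is already installed and running:

```shell
# Generate a key pair with no passphrase and authorize it for local logins
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# Verify: this should log in without prompting for a password
ssh localhost
```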

3. Download and unpack Hadoop
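For example, from Cloudera's archive; the tarball URL and the install directory below are assumptions chosen to match the paths used later in this post:

```shell
cd /home/hadoop/app
# CDH 5.7.0 tarball from Cloudera's archive (URL may change over time)
wget http://archive.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.7.0.tar.gz
tar -zxvf hadoop-2.6.0-cdh5.7.0.tar.gz
```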

4. Edit the Hadoop configuration files (under $HADOOP_HOME/etc/hadoop)

vim hadoop-env.sh
// ----------------------------------------------------------------------- (modify)
export JAVA_HOME=/home/hadoop/app/jdk1.8.0_191

// =========================================
vim core-site.xml 
// ------------  fs.defaultFS uses this machine's hostname (hadoop), which must resolve via /etc/hosts  ------------
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/app/tmp</value>
    </property>
</configuration>
// =========================================
vim hdfs-site.xml 
// ------------------------  replication factor: pseudo-distributed has only one node, so set it to 1  -------------------
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

// ------------------------  configure slaves  -------------------
vim slaves
// list the DataNode hosts, one per line; for pseudo-distributed this is a single
// line with this machine's hostname (the file's default content is localhost)
hadoop

5. Start HDFS
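On first use the NameNode must be formatted before the daemons are started; a sketch, assuming $HADOOP_HOME points at the unpacked directory:

```shell
# Format the NameNode (first start only -- reformatting wipes HDFS metadata)
$HADOOP_HOME/bin/hdfs namenode -format

# Start NameNode, DataNode and SecondaryNameNode
$HADOOP_HOME/sbin/start-dfs.sh
```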

# jps
7796 SecondaryNameNode
7529 NameNode
7615 DataNode

Browser access: http://192.168.80.83:50070

6. Stop HDFS (run $HADOOP_HOME/sbin/stop-dfs.sh)

7. Operating on HDFS files with the Java API
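The listFiles() method below assumes a FileSystem instance has already been created. A minimal setup sketch, requiring the hadoop-client dependency on the classpath; the class name HDFSApp and the HDFS_PATH constant are my naming assumptions, while the URI and user match the cluster above:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class HDFSApp {
    // NameNode address from core-site.xml / the listing output below
    public static final String HDFS_PATH = "hdfs://192.168.80.83:8020";

    FileSystem fileSystem;

    public void setUp() throws Exception {
        Configuration configuration = new Configuration();
        // Connect as user "hadoop" so permissions match the install user
        fileSystem = FileSystem.get(new URI(HDFS_PATH), configuration, "hadoop");
    }
}
```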

public void listFiles() throws Exception {
        Path filePath = new Path("/hdfsapi/test/");
        FileStatus[] fileStatuses = fileSystem.listStatus(filePath);
        for (FileStatus fileStatus : fileStatuses) {
            boolean isDir = fileStatus.isDirectory();
            String status = isDir ? "directory" : "file";
            short replication = fileStatus.getReplication();
            long len = fileStatus.getLen();
            String path = fileStatus.getPath().toString();
            // Print type, replication factor, length and full path
            System.out.println(status + "\t" + replication + "\t" + len + "\t" + path);
        }
    }

file 3 243793920 hdfs://192.168.80.83:8020/hdfsapi/test/MySQL-5.6.tar
file 3 12 hdfs://192.168.80.83:8020/hdfsapi/test/b.txt
file 3 983911 hdfs://192.168.80.83:8020/hdfsapi/test/mysql-connector-java-5.1.38.jar

    public void listFiles() throws Exception {
        Path filePath = new Path("/");
        FileStatus[] fileStatuses = fileSystem.listStatus(filePath);
        for (FileStatus fileStatus : fileStatuses) {
            boolean isDir = fileStatus.isDirectory();
            String status = isDir ? "directory" : "file";
            short replication = fileStatus.getReplication();
            long len = fileStatus.getLen();
            String path = fileStatus.getPath().toString();

            System.out.println(status + "\t" + replication + "\t" + len + "\t" + path);
        }
    }

file 1 311585484 hdfs://192.168.80.83:8020/hadoop-2.6.0-cdh5.7.0.tar.gz
directory 0 0 hdfs://192.168.80.83:8020/hdfsapi
file 1 51 hdfs://192.168.80.83:8020/hello.txt

Note the discrepancy between the two listings: files uploaded through the Java API show a replication factor of 3, while files put there with the hdfs shell show 1. dfs.replication is a client-side setting, and the Java client does not read the server's hdfs-site.xml, so it falls back to the default of 3 unless it is set in the client's Configuration.

8. In Hadoop 1.x: MapReduce 1 used a single JobTracker for both cluster resource management and job scheduling/monitoring, with TaskTrackers executing fixed map/reduce slots on each node. The JobTracker was a single point of failure and a scalability bottleneck, which is what YARN was designed to fix.

9. YARN architecture: a global ResourceManager arbitrates resources across the cluster; a NodeManager on each node launches and monitors containers; and a per-application ApplicationMaster negotiates containers from the ResourceManager and drives its job. The container (a bundle of memory/CPU) is the unit of resource allocation.

10. YARN setup

vim mapred-site.xml
// ------------------------  run MapReduce jobs on YARN (copy from mapred-site.xml.template if the file does not exist)  -------------------
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
// =========================================
vim yarn-site.xml
// ------------------------  auxiliary shuffle service for MapReduce  -------------------
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

11. Test YARN
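Start YARN, then submit one of the bundled example jobs as a smoke test; a sketch, where the examples jar path is an assumption based on the CDH 5.7.0 tarball layout:

```shell
# Start ResourceManager and NodeManager
$HADOOP_HOME/sbin/start-yarn.sh

# jps should now also show ResourceManager and NodeManager;
# the YARN web UI is at http://192.168.80.83:8088

# Submit the bundled pi estimator (2 maps, 3 samples each)
hadoop jar \
  $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.7.0.jar \
  pi 2 3
```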
