Compiling and Installing Apache Ranger, the Permission Management Component for Big Data Platforms

2020-11-11  端碗吹水

Official documentation:

Compiling the Ranger Source Code

First, make sure the Java and Maven environments are ready:

[root@hadoop01 ~]# java -version
java version "1.8.0_261"
Java(TM) SE Runtime Environment (build 1.8.0_261-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.261-b12, mixed mode)
[root@hadoop01 ~]# mvn -v
Apache Maven 3.6.3 (cecedd343002696d0abb50b32b541b8a6ba2883f)
Maven home: /usr/local/maven
Java version: 1.8.0_261, vendor: Oracle Corporation, runtime: /usr/local/jdk/1.8/jre
Default locale: zh_CN, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-1062.el7.x86_64", arch: "amd64", family: "unix"
[root@hadoop01 ~]# 

Clone the Ranger source directly from GitHub, or download the source archive for the corresponding version from the official website:

[root@hadoop01 ~]# cd /usr/local/src
[root@hadoop01 /usr/local/src]# git clone https://github.com/apache/ranger

Enter the source directory with cd ranger and edit the pom.xml in that directory. Two changes are needed. First, comment out the repository-related configuration:

<!--
    <repositories>
        <repository>
            <id>apache.snapshots.https</id>
            <name>Apache Development Snapshot Repository</name>
            <url>https://repository.apache.org/content/repositories/snapshots</url>
            <snapshots>
                <enabled>true</enabled>
            </snapshots>
        </repository>
        <repository>
            <id>apache.public.https</id>
            <name>Apache Development Snapshot Repository</name>
            <url>https://repository.apache.org/content/repositories/public</url>
            <releases>
                <enabled>true</enabled>
            </releases>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
        </repository>
        <repository>
            <id>repo</id>
            <url>file://${basedir}/local-repo</url>
            <snapshots>
                <enabled>true</enabled>
            </snapshots>
        </repository>
    </repositories>
-->

Second, change the versions of the Hadoop-related components to the versions you actually have installed:

<hadoop.version>3.3.0</hadoop.version>
<hbase.version>2.2.6</hbase.version>
<hive.version>3.1.2</hive.version>
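
If you are not sure which versions are deployed, each component can report its own version (a quick check, assuming the binaries are on the PATH):

[root@hadoop01 ~]# hadoop version
[root@hadoop01 ~]# hive --version
[root@hadoop01 ~]# hbase version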

Next, edit the Node.js-related configuration in the security-admin/pom.xml file. Search for <id>install node and npm</id> and change the content of its configuration tag as follows:

<configuration>
    <nodeVersion>v10.13.0</nodeVersion>
    <!--<npmVersion>6.4.1</npmVersion>-->
</configuration>

Then search the same file for <id>npm install for packaging</id> and change the content of its configuration tag as follows:

<configuration>
    <workingDirectory>${project.build.directory}/jsmain</workingDirectory>
    <arguments>install -registry=https://registry.npm.taobao.org --cache-max=0 --no-save</arguments>
</configuration>

Continue searching for <id>npm install for tests</id> and change the content of its configuration tag as follows:

<configuration>
    <skip>${skipJSTests}</skip>
    <workingDirectory>${project.build.directory}/jstest</workingDirectory>
    <arguments>install -registry=https://registry.npm.taobao.org --cache-max=0 --no-save</arguments>
</configuration>
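
These changes point npm at the Taobao registry mirror during the build. An alternative (or complement) is to set the mirror globally in ~/.npmrc, which is normally also read by the npm instance that the build downloads; a minimal sketch:

[root@hadoop01 ~]# echo "registry=https://registry.npm.taobao.org" >> ~/.npmrc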

With the above modifications in place, build and package with Maven:

[root@hadoop01 /usr/local/src]# cd ranger/
[root@hadoop01 /usr/local/src/ranger]# mvn -DskipTests=true clean package
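
The Ranger build is fairly memory-hungry; if Maven fails with an OutOfMemoryError, raising its heap before rerunning the build may help (a sketch, the exact values depend on your machine):

[root@hadoop01 /usr/local/src/ranger]# export MAVEN_OPTS="-Xms512m -Xmx2g"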

After a fairly long wait, a successful build ends with output like the following:

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for ranger 3.0.0-SNAPSHOT:
[INFO] 
[INFO] ranger ............................................. SUCCESS [  0.763 s]
[INFO] Jdbc SQL Connector ................................. SUCCESS [  0.903 s]
[INFO] Credential Support ................................. SUCCESS [ 35.119 s]
[INFO] Audit Component .................................... SUCCESS [ 24.206 s]
[INFO] ranger-plugin-classloader .......................... SUCCESS [  0.823 s]
[INFO] Common library for Plugins ......................... SUCCESS [  4.650 s]
[INFO] ranger-intg ........................................ SUCCESS [  1.672 s]
[INFO] Installer Support Component ........................ SUCCESS [  0.494 s]
[INFO] Credential Builder ................................. SUCCESS [  2.276 s]
[INFO] Embedded Web Server Invoker ........................ SUCCESS [  4.777 s]
[INFO] Key Management Service ............................. SUCCESS [ 27.430 s]
[INFO] HBase Security Plugin Shim ......................... SUCCESS [01:47 min]
[INFO] HBase Security Plugin .............................. SUCCESS [ 25.536 s]
[INFO] Hdfs Security Plugin ............................... SUCCESS [ 13.548 s]
[INFO] Hive Security Plugin ............................... SUCCESS [01:41 min]
[INFO] Knox Security Plugin Shim .......................... SUCCESS [ 12.290 s]
[INFO] Knox Security Plugin ............................... SUCCESS [02:12 min]
[INFO] Storm Security Plugin .............................. SUCCESS [  3.999 s]
[INFO] YARN Security Plugin ............................... SUCCESS [  1.452 s]
[INFO] Ozone Security Plugin .............................. SUCCESS [ 16.509 s]
[INFO] Ranger Util ........................................ SUCCESS [  1.000 s]
[INFO] Unix Authentication Client ......................... SUCCESS [  0.590 s]
[INFO] User Group Synchronizer Util ....................... SUCCESS [  0.457 s]
[INFO] Security Admin Web Application ..................... SUCCESS [01:15 min]
[INFO] KAFKA Security Plugin .............................. SUCCESS [ 13.393 s]
[INFO] SOLR Security Plugin ............................... SUCCESS [ 19.696 s]
[INFO] NiFi Security Plugin ............................... SUCCESS [  1.556 s]
[INFO] NiFi Registry Security Plugin ...................... SUCCESS [  1.586 s]
[INFO] Kudu Security Plugin ............................... SUCCESS [  0.809 s]
[INFO] Unix User Group Synchronizer ....................... SUCCESS [ 34.854 s]
[INFO] Ldap Config Check Tool ............................. SUCCESS [  0.643 s]
[INFO] Unix Authentication Service ........................ SUCCESS [  0.917 s]
[INFO] Unix Native Authenticator .......................... SUCCESS [  0.475 s]
[INFO] KMS Security Plugin ................................ SUCCESS [  7.668 s]
[INFO] Tag Synchronizer ................................... SUCCESS [02:24 min]
[INFO] Hdfs Security Plugin Shim .......................... SUCCESS [  0.906 s]
[INFO] Hive Security Plugin Shim .......................... SUCCESS [  5.423 s]
[INFO] YARN Security Plugin Shim .......................... SUCCESS [  0.914 s]
[INFO] OZONE Security Plugin Shim ......................... SUCCESS [  0.944 s]
[INFO] Storm Security Plugin shim ......................... SUCCESS [  0.961 s]
[INFO] KAFKA Security Plugin Shim ......................... SUCCESS [  0.881 s]
[INFO] SOLR Security Plugin Shim .......................... SUCCESS [  1.096 s]
[INFO] Atlas Security Plugin Shim ......................... SUCCESS [ 12.065 s]
[INFO] KMS Security Plugin Shim ........................... SUCCESS [  7.139 s]
[INFO] ranger-examples .................................... SUCCESS [  0.017 s]
[INFO] Ranger Examples - Conditions and ContextEnrichers .. SUCCESS [  1.479 s]
[INFO] Ranger Examples - SampleApp ........................ SUCCESS [  0.384 s]
[INFO] Ranger Examples - Ranger Plugin for SampleApp ...... SUCCESS [  0.831 s]
[INFO] sample-client ...................................... SUCCESS [  0.865 s]
[INFO] Apache Ranger Examples Distribution ................ SUCCESS [  1.262 s]
[INFO] Ranger Tools ....................................... SUCCESS [  3.747 s]
[INFO] Atlas Security Plugin .............................. SUCCESS [  1.149 s]
[INFO] SchemaRegistry Security Plugin ..................... SUCCESS [ 32.873 s]
[INFO] Sqoop Security Plugin .............................. SUCCESS [  6.273 s]
[INFO] Sqoop Security Plugin Shim ......................... SUCCESS [  0.810 s]
[INFO] Kylin Security Plugin .............................. SUCCESS [03:13 min]
[INFO] Kylin Security Plugin Shim ......................... SUCCESS [  9.244 s]
[INFO] Presto Security Plugin ............................. SUCCESS [ 21.863 s]
[INFO] Presto Security Plugin Shim ........................ SUCCESS [01:42 min]
[INFO] Elasticsearch Security Plugin Shim ................. SUCCESS [  3.510 s]
[INFO] Elasticsearch Security Plugin ...................... SUCCESS [  1.047 s]
[INFO] Apache Ranger Distribution ......................... SUCCESS [03:07 min]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------

The packaged plugin installers can now be found in the target directory:

[root@hadoop01 /usr/local/src/ranger]# ls target/
antrun                                             ranger-3.0.0-SNAPSHOT-hive-plugin.tar.gz     ranger-3.0.0-SNAPSHOT-presto-plugin.tar.gz        ranger-3.0.0-SNAPSHOT-storm-plugin.tar.gz
maven-shared-archive-resources                     ranger-3.0.0-SNAPSHOT-kafka-plugin.tar.gz    ranger-3.0.0-SNAPSHOT-ranger-tools.tar.gz         ranger-3.0.0-SNAPSHOT-tagsync.tar.gz
ranger-3.0.0-SNAPSHOT-admin.tar.gz                 ranger-3.0.0-SNAPSHOT-kms.tar.gz             ranger-3.0.0-SNAPSHOT-schema-registry-plugin.jar  ranger-3.0.0-SNAPSHOT-usersync.tar.gz
ranger-3.0.0-SNAPSHOT-atlas-plugin.tar.gz          ranger-3.0.0-SNAPSHOT-knox-plugin.tar.gz     ranger-3.0.0-SNAPSHOT-solr_audit_conf.tar.gz      ranger-3.0.0-SNAPSHOT-yarn-plugin.tar.gz
ranger-3.0.0-SNAPSHOT-elasticsearch-plugin.tar.gz  ranger-3.0.0-SNAPSHOT-kylin-plugin.tar.gz    ranger-3.0.0-SNAPSHOT-solr-plugin.tar.gz          version
ranger-3.0.0-SNAPSHOT-hbase-plugin.tar.gz          ranger-3.0.0-SNAPSHOT-migration-util.tar.gz  ranger-3.0.0-SNAPSHOT-sqoop-plugin.tar.gz
ranger-3.0.0-SNAPSHOT-hdfs-plugin.tar.gz           ranger-3.0.0-SNAPSHOT-ozone-plugin.tar.gz    ranger-3.0.0-SNAPSHOT-src.tar.gz
[root@hadoop01 /usr/local/src/ranger]# 

Fixing Node.js Download Failures

If Node.js cannot be downloaded, or the download is very slow, you can download the tarball for the required version manually and place it in the corresponding local Maven repository directory. For example, from the following output:

[INFO] Installing node version v10.13.0
[INFO] Downloading https://nodejs.org/dist/v10.13.0/node-v10.13.0-linux-x64.tar.gz to /root/.m2/repository/com/github/eirslett/node/10.13.0/node-10.13.0-linux-x64.tar.gz
[INFO] No proxies configured
[INFO] No proxy was configured, downloading directly

you can see that the target path is /root/.m2/repository/com/github/eirslett/node/10.13.0/node-10.13.0-linux-x64.tar.gz, so create the directory:

$ mkdir -p /root/.m2/repository/com/github/eirslett/node/10.13.0/

and copy the tarball you downloaded into it:

$ cp node-v10.13.0-linux-x64.tar.gz /root/.m2/repository/com/github/eirslett/node/10.13.0/node-10.13.0-linux-x64.tar.gz
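
It is also worth verifying the manually downloaded tarball against the official checksum list before relying on it (a sketch, run in the directory containing node-v10.13.0-linux-x64.tar.gz):

$ wget https://nodejs.org/dist/v10.13.0/SHASUMS256.txt
$ grep "node-v10.13.0-linux-x64.tar.gz" SHASUMS256.txt | sha256sum -c -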

Deploying Ranger Admin

Extract the Ranger Admin package to a suitable directory; I usually put it under /usr/local:

[root@hadoop01 /usr/local/src/ranger]# tar -zxvf target/ranger-3.0.0-SNAPSHOT-admin.tar.gz -C /usr/local/

Enter the extracted directory; its structure looks like this:

[root@hadoop01 /usr/local/src/ranger]# cd /usr/local/ranger-3.0.0-SNAPSHOT-admin/
[root@hadoop01 /usr/local/ranger-3.0.0-SNAPSHOT-admin]# ls
bin                    contrib  dba_script.py           ews                 ranger_credential_helper.py  set_globals.sh           templates-upgrade                 upgrade_admin.py
changepasswordutil.py  cred     db_setup.py             install.properties  restrict_permissions.py      setup_authentication.sh  update_property.py                upgrade.sh
changeusernameutil.py  db       deleteUserGroupUtil.py  jisql               rolebasedusersearchutil.py   setup.sh                 updateUserAndGroupNamesInJson.py  version
[root@hadoop01 /usr/local/ranger-3.0.0-SNAPSHOT-admin]# 

Configure the installation options:

[root@hadoop01 /usr/local/ranger-3.0.0-SNAPSHOT-admin]# vim install.properties
# Path to the MySQL driver jar
SQL_CONNECTOR_JAR=/usr/local/src/mysql-connector-java-8.0.21.jar

# Root credentials and the address of the MySQL instance
db_root_user=root
db_root_password=123456a.
db_host=192.168.1.11

# Credentials used to access the ranger database
db_name=ranger
db_user=root
db_password=123456a.

# Where to store audit logs
audit_store=db
audit_db_user=root
audit_db_name=ranger
audit_db_password=123456a.

Create the ranger database in MySQL:

create database ranger;
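
The install.properties above reuses root as db_user. If you prefer a dedicated account whose privileges are limited to the ranger database, something along these lines works (a sketch; the user name and password are placeholders, and db_user/db_password in install.properties must then be changed to match):

create user 'rangeradmin'@'%' identified by 'YourPassword123.';
grant all privileges on ranger.* to 'rangeradmin'@'%';
flush privileges;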

Since I am using MySQL 8.x, the database scripts need a small change. Open the dba_script.py and db_setup.py files and search for the following:

-cstring jdbc:mysql://%s/%s%s

and change every occurrence to the following, i.e. append the serverTimezone JDBC connection parameter:

-cstring jdbc:mysql://%s/%s%s?serverTimezone=Asia/Shanghai
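
If you would rather not edit the two scripts by hand, a single sed call can append the parameter (a sketch; it keeps .bak backups and should be run only once, otherwise the parameter gets appended twice):

[root@hadoop01 /usr/local/ranger-3.0.0-SNAPSHOT-admin]# sed -i.bak 's|jdbc:mysql://%s/%s%s|jdbc:mysql://%s/%s%s?serverTimezone=Asia/Shanghai|g' dba_script.py db_setup.py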

Then run the following command to install Ranger Admin:

[root@hadoop01 /usr/local/ranger-3.0.0-SNAPSHOT-admin]# ./setup.sh

Troubleshooting Errors

If the installation reports errors like the following:

SQLException : SQL state: HY000 java.sql.SQLException: Operation CREATE USER failed for 'root'@'localhost' ErrorCode: 1396

SQLException : SQL state: 42000 java.sql.SQLSyntaxErrorException: Access denied for user 'root'@'192.168.1.11' to database 'mysql' ErrorCode: 1044

the fix is to run the following statements in MySQL:

flush privileges;
grant system_user on *.* to 'root';
drop user 'root'@'localhost';
create user 'root'@'localhost' identified by '123456a.';
grant all privileges on *.* to 'root'@'localhost' with grant option;

drop user 'root'@'192.168.1.11';
create user 'root'@'192.168.1.11' identified by '123456a.';
grant all privileges on *.* to 'root'@'192.168.1.11' with grant option;
flush privileges;

If you hit the following error:

SQLException : SQL state: HY000 java.sql.SQLException: This function has none of DETERMINISTIC, NO SQL, or READS SQL DATA in its declaration and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable) ErrorCode: 1418

Solution:

set global log_bin_trust_function_creators=TRUE;
flush privileges;
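
Note that SET GLOBAL only lasts until the MySQL server restarts; to make the setting permanent it can also be added to the server configuration file (assuming /etc/my.cnf is your config file):

[mysqld]
log_bin_trust_function_creators = 1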

If you hit the following error:

SQLException : SQL state: HY000 java.sql.SQLException: Cannot drop table 'x_policy' referenced by a foreign key constraint 'x_policy_ref_role_FK_policy_id' on table 'x_policy_ref_role'. ErrorCode: 3730

Solution: drop all tables in the ranger database and run ./setup.sh again.

When the installation completes, the final output is:

Installation of Ranger PolicyManager Web Application is completed.

Starting Ranger Admin

Edit the configuration file to set the database password and add the timezone parameter to the JDBC URL:

[root@hadoop01 /usr/local/ranger-3.0.0-SNAPSHOT-admin]# vim conf/ranger-admin-site.xml
...

<property>
        <name>ranger.jpa.jdbc.url</name>
        <value>jdbc:log4jdbc:mysql://192.168.1.11/ranger?serverTimezone=Asia/Shanghai</value>
        <description />
</property>
<property>
        <name>ranger.jpa.jdbc.password</name>
        <value>123456a.</value>
        <description />
</property>

...

The start command is as follows:

[root@hadoop01 /usr/local/ranger-3.0.0-SNAPSHOT-admin]# ranger-admin start 
Starting Apache Ranger Admin Service
Apache Ranger Admin Service failed to start!
[root@hadoop01 /usr/local/ranger-3.0.0-SNAPSHOT-admin]#

As you can see, startup failed; the exact cause has to be found in the logs. The Ranger Admin log directory is configured in conf/ranger-admin-env-logdir.sh and defaults to $RANGER_ADMIN_HOME/ews/logs/. The key error in the log file is:

[root@hadoop01 /usr/local/ranger-3.0.0-SNAPSHOT-admin]# cat /usr/local/ranger-3.0.0-SNAPSHOT-admin/ews/logs/catalina.out
...
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/commons/compress/archivers/tar/TarArchiveInputStream
...

Clearly the TarArchiveInputStream class cannot be found. It lives in Apache's commons-compress package, so the fix is simple: first download the jar from Maven Central:
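
For example, assuming version 1.20 (the version shown in the listing below), the jar can be fetched directly from Maven Central:

[root@hadoop01 /usr/local/ranger-3.0.0-SNAPSHOT-admin]# wget https://repo1.maven.org/maven2/org/apache/commons/commons-compress/1.20/commons-compress-1.20.jar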

then place it under the ews/lib/ directory:

[root@hadoop01 /usr/local/ranger-3.0.0-SNAPSHOT-admin]# ls ews/lib/ |grep commons-compress
commons-compress-1.20.jar
[root@hadoop01 /usr/local/ranger-3.0.0-SNAPSHOT-admin]# 

Restart Ranger Admin; this time it starts successfully:

[root@hadoop01 /usr/local/ranger-3.0.0-SNAPSHOT-admin]# ranger-admin start 
Starting Apache Ranger Admin Service
Apache Ranger Admin Service with pid 52505 has started.
[root@hadoop01 /usr/local/ranger-3.0.0-SNAPSHOT-admin]# 

Check that the process and ports are up:

[root@hadoop01 /usr/local/ranger-3.0.0-SNAPSHOT-admin]# jps
52626 Jps
52505 EmbeddedServer
[root@hadoop01 /usr/local/ranger-3.0.0-SNAPSHOT-admin]# netstat -lntp |grep 52505
tcp6       0      0 :::6080                 :::*             LISTEN      52505/java          
tcp6       0      0 127.0.0.1:6085          :::*             LISTEN      52505/java          
[root@hadoop01 /usr/local/ranger-3.0.0-SNAPSHOT-admin]# 

Open port 6080 in a browser to reach the login page; the default username and password are both admin.
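
Besides the browser, the Admin REST API offers a quick way to confirm from the shell that the service is responding (a sketch, assuming the default admin/admin credentials; it returns an empty JSON list until services are defined):

[root@hadoop01 ~]# curl -u admin:admin http://localhost:6080/service/public/v2/api/service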

After logging in, you land on the home page.


Installing the Ranger HDFS Plugin

Extract the HDFS plugin package to a suitable directory:

[root@hadoop01 ~]# mkdir /usr/local/ranger-plugin
[root@hadoop01 ~]# tar -zxvf /usr/local/src/ranger/target/ranger-3.0.0-SNAPSHOT-hdfs-plugin.tar.gz -C /usr/local/ranger-plugin
[root@hadoop01 ~]# cd /usr/local/ranger-plugin/
[root@hadoop01 /usr/local/ranger-plugin]# mv ranger-3.0.0-SNAPSHOT-hdfs-plugin/ hdfs-plugin

Enter the extracted directory; its structure looks like this:

[root@hadoop01 /usr/local/ranger-plugin/hdfs-plugin]# ls
disable-hdfs-plugin.sh  enable-hdfs-plugin.sh  install  install.properties  lib  ranger_credential_helper.py  upgrade-hdfs-plugin.sh  upgrade-plugin.py
[root@hadoop01 /usr/local/ranger-plugin/hdfs-plugin]# 

Configure the installation options:

[root@hadoop01 /usr/local/ranger-plugin/hdfs-plugin]# vim install.properties
# Address of the Ranger Admin service
POLICY_MGR_URL=http://192.168.243.142:6080
# Repository (service) name; can be customized
REPOSITORY_NAME=dev_hdfs
# Hadoop installation directory
COMPONENT_INSTALL_DIR_NAME=/usr/local/hadoop-2.6.0-cdh5.16.2
# HDFS directories for audit logs
XAAUDIT.HDFS.HDFS_DIR=hdfs://__REPLACE__NAME_NODE_HOST:8020/ranger/audit
XAAUDIT.HDFS.DESTINATION_DIRECTORY=hdfs://__REPLACE__NAME_NODE_HOST:8020/ranger/audit/%app-type%/%time:yyyyMMdd%

# User and group
CUSTOM_USER=root
CUSTOM_GROUP=root
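
The XAAUDIT.HDFS entries only matter if HDFS audit logging is enabled; in that case the __REPLACE__NAME_NODE_HOST placeholder has to be filled in and the audit directory must already exist in HDFS (a sketch, assuming the default /ranger/audit path):

[root@hadoop01 ~]# hdfs dfs -mkdir -p /ranger/audit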

Run the following script to enable the hdfs-plugin:

[root@hadoop01 /usr/local/ranger-plugin/hdfs-plugin]# ./enable-hdfs-plugin.sh 

If errors like the following appear:

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/commons/lang3/StringUtils

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/commons/compress/archivers/tar/TarArchiveInputStream

copy the missing jars into the install/lib/ directory:

[root@hadoop01 /usr/local/ranger-plugin/hdfs-plugin]# cp /usr/local/ranger-3.0.0-SNAPSHOT-admin/ews/lib/commons-lang3-3.3.2.jar ./install/lib/
[root@hadoop01 /usr/local/ranger-plugin/hdfs-plugin]# cp /usr/local/ranger-3.0.0-SNAPSHOT-admin/ews/lib/commons-compress-1.20.jar ./install/lib/

When the script finishes successfully, it outputs:

Ranger Plugin for hadoop has been enabled. Please restart hadoop to ensure that changes are effective.

Restart Hadoop:

[root@hadoop01 ~]# stop-all.sh 
[root@hadoop01 ~]# start-all.sh
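
Before moving on, it can be worth confirming that the enable script actually wired the plugin into Hadoop: it should have dropped ranger-hdfs-*.xml configuration files into the Hadoop conf directory and registered the Ranger authorizer in hdfs-site.xml (a sketch, assuming the conf directory is etc/hadoop under the install path configured above):

[root@hadoop01 ~]# ls /usr/local/hadoop-2.6.0-cdh5.16.2/etc/hadoop/ | grep ranger
[root@hadoop01 ~]# grep -A1 "dfs.namenode.inode.attributes.provider.class" /usr/local/hadoop-2.6.0-cdh5.16.2/etc/hadoop/hdfs-site.xml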

Verifying Access Control

In Ranger Admin, add an HDFS service. The Service Name here must match the REPOSITORY_NAME set in the plugin's install.properties.


Fill in the relevant information.


After filling everything in, scroll to the bottom of the page and click "Test Connection" to make sure the connection works, then click "Add" to finish creating the service.


Create some test directories and files in HDFS:

[root@hadoop01 ~]# hdfs dfs -mkdir /rangertest1
[root@hadoop01 ~]# hdfs dfs -mkdir /rangertest2
[root@hadoop01 ~]# echo "ranger test" > testfile
[root@hadoop01 ~]# hdfs dfs -put testfile /rangertest1
[root@hadoop01 ~]# hdfs dfs -put testfile /rangertest2

Then add an internal Ranger user in Ranger Admin via "Settings" -> "Add New User" and fill in the user's information.


Next, add a permission policy via "Access Manager" -> "dev_hdfs" -> "Add New Policy", specifying the users, directories, and other details the policy applies to.


Scroll to the bottom and click "Add"; the new policy now appears in the policy list.
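
For reference, the same kind of policy can also be created through the Admin REST API rather than the UI. A rough sketch, assuming the dev_hdfs service and the hive user created above (the policy name is a placeholder):

[root@hadoop01 ~]# curl -u admin:admin -H "Content-Type: application/json" -X POST \
  http://localhost:6080/service/public/v2/api/policy -d '{
    "service": "dev_hdfs",
    "name": "rangertest1_policy",
    "resources": { "path": { "values": ["/rangertest1"], "isRecursive": true } },
    "policyItems": [ {
      "users": ["hive"],
      "accesses": [
        { "type": "read", "isAllowed": true },
        { "type": "write", "isAllowed": true },
        { "type": "execute", "isAllowed": true }
      ]
    } ]
  }'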


Back on the operating system, switch to the hive user and check that the directory and file can be read normally:

[root@hadoop01 ~]# sudo su - hive
上一次登录:一 11月  9 21:08:34 CST 2020pts/3 上
[hive@hadoop01 ~]$ hdfs dfs -ls /rangertest1
Found 1 items
-rw-r--r--   1 root supergroup         12 2020-11-11 16:26 /rangertest1/testfile
[hive@hadoop01 ~]$ hdfs dfs -cat /rangertest1/testfile
ranger test
[hive@hadoop01 ~]$ 

Now test writes. You will find that files can be added to the rangertest1 directory, but adding a file to rangertest2 fails, because we only granted read/write permission on rangertest1:

[hive@hadoop01 ~]$ hdfs dfs -put testfile2 /rangertest1
[hive@hadoop01 ~]$ hdfs dfs -put testfile2 /rangertest2
put: Permission denied: user=hive, access=WRITE, inode="/rangertest2":root:supergroup:drwxr-xr-x
[hive@hadoop01 ~]$ 

With that, Ranger's access control over HDFS has been verified. You can of course run further tests, and the Ranger plugins for the other components work in much the same way; they are not demonstrated one by one in this article.
