# Installing Hue
[TOC]
References:

- http://blog.csdn.net/bluishglc/article/details/48393291
- https://github.com/cloudera/hue#development-prerequisites
## Installing and Configuring Hue
### Installation
#### Installing the JDK

Installing `yum install java-1.8.0-openjdk` alone leaves you without `javac` and the other JDK tools. Install the devel package instead:

```bash
yum install java-1.8.0-openjdk-devel.x86_64
```
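You can verify that the compiler is now available:

```bash
javac -version
```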
#### Installing dependencies

```bash
yum install -y ant asciidoc cyrus-sasl-plain gcc libffi-devel make mysql gcc-c++ cyrus-sasl-devel cyrus-sasl-gssapi krb5-devel libxml2-devel libxslt-devel mysql-devel openldap-devel python-devel sqlite-devel openssl-devel gmp-devel
```
#### Installing Maven

First download it:

```bash
wget http://mirrors.ocf.berkeley.edu/apache/maven/maven-3/3.5.2/binaries/apache-maven-3.5.2-bin.tar.gz
```

Then extract it and configure the environment variables:

```bash
vim /etc/profile
```

Append at the end:

```bash
export MAVEN_HOME=/opt/maven/apache-maven-3.5.2
export PATH=$MAVEN_HOME/bin:$PATH
```

Finally, run `source /etc/profile` so the changes take effect.
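A quick check that Maven is installed and on the PATH:

```bash
mvn -version
```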
### Building and Running Hue
Create a user and group:

```bash
$ groupadd hue
$ useradd -g hue hue
```
Grant the user sudo privileges. Make /etc/sudoers writable, edit it, and add a line for `hue` under the `root` entry as shown:

```bash
[root@test04 etc]# chmod 660 sudoers
[root@test04 etc]# vim sudoers
```

```text
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
hue     ALL=(ALL)       ALL
```

Then restore the permissions:

```bash
[root@test04 etc]# ll | grep sudo
-rw-r-----. 1 root root 1786 Jun  7  2017 sudo.conf
-rwxr-----. 1 root root 3938 Jun  7  2017 sudoers
drwxr-x---. 2 root root    6 Aug  4 22:38 sudoers.d
-rw-r-----. 1 root root 3181 Jun  7  2017 sudo-ldap.conf
[root@test04 etc]# vim sudoers
[root@test04 etc]# chmod 440 sudoers
```
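To confirm the new entry works:

```bash
[root@test04 etc]# su - hue
[hue@test04 ~]$ sudo whoami
root
```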
Adjust ownership and permissions on /opt/hue, extract the tarball, and build:

```bash
sudo chmod -R 775 hue-4.1.0
[root@test04 hue]# ll
total 46860
drwxrwxr-x 8 hue hue 231 Oct  4 23:30 hue-4.1.0
[root@test04 etc]# su hue
[hue@test04 etc]$ sudo chown -R hue:hue hue
[hue@test04 etc]$ cd /opt/
[hue@test04 opt]$ sudo chmod -R 771 hue
[hue@test04 opt]$ cd hue
[hue@test04 hue]$ tar -vxf hue-4.1.0.tgz
[hue@test04 hue]$ cd hue-4.1.0
[hue@test04 hue-4.1.0]$ make apps
```
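Once `make apps` finishes, Hue can be started with the supervisor script (the same entry point that appears in the startup traceback later in these notes); by default the web UI listens on port 8888:

```bash
[hue@test04 hue-4.1.0]$ ./build/env/bin/supervisor
```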
### Configuration

The configuration is worth writing down as well.
#### Configuring cluster access permissions

We manage Hue under the `hue` Linux user. For it to reach the Hadoop cluster, `hue` must be configured as a proxy user in Hadoop, in /usr/local/hadoop/etc/hadoop/core-site.xml:
```xml
<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>
```
Then restart the Hadoop cluster.
#### Configuring HDFS

```bash
vim $HUE_HOME/desktop/conf/hue.ini
```
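A minimal sketch of the relevant hue.ini section, assuming a single NameNode on test04 (the hostname and ports are example values, not from these notes):

```ini
[hadoop]
  [[hdfs_clusters]]
    [[[default]]]
      # HDFS filesystem URI; test04:8020 is an example NameNode address
      fs_defaultfs=hdfs://test04:8020
      # WebHDFS endpoint Hue uses to browse HDFS
      webhdfs_url=http://test04:50070/webhdfs/v1
```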
## Errors and Fixes
### "Name or service not known" during make apps

```text
[Errno -2] Name or service not known
```

This kind of error comes from name resolution failing. We first pinged the server domain the build was connecting to when it failed, and it was reachable. We then checked the DNS settings in /etc/resolv.conf on the Hue host and added another `nameserver 114.114.114.114` after the existing one. We also noticed a `search bigdata.hbh` entry there, but the hosts file had no entry for that domain; pinging it produced exactly `Name or service not known`. So we added `127.0.0.1 test04.bigdata.hbh bigdata.hbh` to /etc/hosts, deleted the previously extracted Hue directory, and re-extracted it. After that, `make` ran without problems.
### Error at startup

```text
Traceback (most recent call last):
  File "build/env/bin/supervisor", line 9, in <module>
    load_entry_point('desktop==4.1.0', 'console_scripts', 'supervisor')()
  File "/opt/hue/hue-4.1.0/desktop/core/src/desktop/supervisor.py", line 319, in main
    setup_user_info()
  File "/opt/hue/hue-4.1.0/desktop/core/src/desktop/supervisor.py", line 257, in setup_user_info
    desktop.lib.daemon_utils.get_uid_gid(SETUID_USER, SETGID_GROUP)
  File "/opt/hue/hue-4.1.0/desktop/core/src/desktop/lib/daemon_utils.py", line 45, in get_uid_gid
    raise KeyError("Couldn't get user id for user %s" % (username,))
KeyError: "Couldn't get user id for user hue"
```

This means the `hue` user does not exist on the machine running the supervisor; create the user and group as in the setup steps above.
### Enable the HBase Thrift 1 service

HBase's Thrift 1 service needs to be started (Hue's HBase app connects through it):

```bash
/usr/hdp/current/hbase-master/bin/hbase thrift start
```

I'm not sure what port the command above uses by default; alternatively, use the following, which lets you specify the port and runs in the background:

```bash
/usr/hdp/current/hbase-master/bin/hbase-daemon.sh start thrift -p 9090
```
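Hue then needs to be pointed at this Thrift server in hue.ini; a sketch, where the cluster display name and host are example values:

```ini
[hbase]
  # Comma-separated list of (name|host:thrift-port) entries;
  # "Cluster" and test04 are example values
  hbase_clusters=(Cluster|test04:9090)
```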
### HTTP 401 from Oozie

```html
<html>
<head>
<title>Apache Tomcat/6.0.48 - Error report</title>
</head>
<body>
<h1>HTTP Status 401 - </h1>
<p><b>type</b> Status report</p>
<p><b>message</b> <u></u></p>
<p><b>description</b> <u>This request requires HTTP authentication.</u></p>
<h3>Apache Tomcat/6.0.48</h3> (error 401)
</body>
</html>
```
Configure the proxy user in Oozie's oozie-site.xml by adding:

```properties
oozie.service.ProxyUserService.proxyuser.hue.groups=*
oozie.service.ProxyUserService.proxyuser.hue.hosts=*
```
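If your oozie-site.xml uses plain XML rather than Ambari-managed properties, the equivalent entries look like this (restart Oozie afterward):

```xml
<property>
  <name>oozie.service.ProxyUserService.proxyuser.hue.groups</name>
  <value>*</value>
</property>
<property>
  <name>oozie.service.ProxyUserService.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
```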
### MySQL JDBC driver not found

The YARN job history logs showed that the MySQL JDBC driver could not be found:

```text
Action failed, error message[Main class [org.apache.oozie.action.hadoop.SqoopMain], exit code [1]]
```

Create the Sqoop lib directory:

```bash
[root@test05 bin]# mkdir -p /var/lib/sqoop
[root@test05 bin]# chown sqoop:sqoop /var/lib/sqoop
[root@test05 bin]# chmod 755 /var/lib/sqoop
```
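Presumably the MySQL connector JAR then goes into that directory so Sqoop can find it; a sketch, assuming the jar has already been downloaded locally (the filename is an example):

```bash
# Place the MySQL JDBC driver where Sqoop looks for it
cp mysql-connector-java-5.1.44-bin.jar /var/lib/sqoop/
```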
### "You are a Hue admin but not a HDFS superuser"

Configure a proxy user in HDFS's core-site.xml. For example, I run hadoop-httpfs, whose default user is httpfs, so I configure:

```properties
hadoop.proxyuser.httpfs.groups=*
hadoop.proxyuser.httpfs.hosts=*
```

Instead of `*`, the `hosts` value can be restricted to the name of the host where hadoop-httpfs runs.
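In core-site.xml these take the same XML form as the hue proxy-user entries shown earlier:

```xml
<property>
  <name>hadoop.proxyuser.httpfs.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.httpfs.hosts</name>
  <value>*</value>
</property>
```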
### Oozie submission fails because the NameNode is standby

Submitting to Oozie reports "Could not perform authorization operation, Operation category READ is not supported in state standby":

```text
Error: E0501 : E0501: Could not perform authorization operation, Operation category READ is not supported in state standby
	at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1932)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1313)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3928)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1109)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:851)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2206)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2202)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2200)
```
Cause and fix: HDFS has HA enabled, but Hue was not configured for HA.
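A sketch of an HA-aware hue.ini, assuming the logical nameservice is called `mycluster` and HttpFS runs on test04 (both are example values); pointing `webhdfs_url` at HttpFS lets Hue follow NameNode failover instead of talking to a single NameNode:

```ini
[hadoop]
  [[hdfs_clusters]]
    [[[default]]]
      # Logical HA nameservice from hdfs-site.xml, not a single NameNode host
      fs_defaultfs=hdfs://mycluster
      # HttpFS endpoint (14000 is the HttpFS default port); HttpFS handles failover
      webhdfs_url=http://test04:14000/webhdfs/v1
```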