
Problems Encountered with CDH

2018-06-27  阿甘骑士
After enabling HDFS High Availability, Hive queries fail because the metastore still records the old NameNode address instead of the nameservice:

0: jdbc:hive2://localhost:10000/> select count(1) from person;
Error: Error while compiling statement: FAILED: SemanticException Unable to determine if hdfs://bi-master:8020/user/hive/warehouse/gtp.db/person is encrypted: java.lang.IllegalArgumentException: Wrong FS: hdfs://bi-master:8020/user/hive/warehouse/gtp.db/person, expected: hdfs://nameservice1 (state=42000,code=40000)

Solution:
1. Go to the Hive service and stop all of its roles.
2. Click "Actions" => "Update Hive Metastore NameNode" => then restart Hive.
3. Restart Impala.
4. If Hue is installed, restart it as well.
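
If you prefer the command line, CM's "Update Hive Metastore NameNode" action should be equivalent to rewriting the stored locations with Hive's metatool (a minimal sketch, assuming the nameservice is nameservice1 as in the error above; run with Hive stopped, as in step 1):

# show the filesystem roots currently recorded in the metastore
hive --service metatool -listFSRoot

# rewrite the old NameNode URI to the HA nameservice
# (argument order is <new-location> <old-location>)
hive --service metatool -updateLocation hdfs://nameservice1 hdfs://bi-master:8020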

Failures like this are usually because the hbase:acl table has been lost.

Solution:
1. Check the ACL on the /hbase znode with zookeeper-client:

[root@bi-master ~]# zookeeper-client -server bi-master:2181
Connecting to bi-master:2181
....
[zk: bi-master:2181(CONNECTED) 1]
[zk: bi-master:2181(CONNECTED) 1] ls /
[cluster, controller, brokers, znode001, zookeeper, hadoop-ha, admin, isr_change_notification, dubbo, otter, controller_epoch, consumers, hive_zookeeper_namespace_hive, latest_producer_id_block, config, hbase]
[zk: bi-master:2181(CONNECTED) 2] getAcl /hbase
'world,'anyone
: r
'sasl,'hbase
: cdrwa

2. Only the SASL-authenticated hbase principal has full rights (cdrwa) on /hbase, so create a jaas-zk-keytab.conf that authenticates as that principal:

[root@bi-master ~]# vi /usr/deng_yb/jaas-zk-keytab.conf 
Client {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   keyTab="/opt/cm-5.11.0/run/cloudera-scm-agent/process/1247-hbase-HBASERESTSERVER/hbase.keytab"
   storeKey=true
   useTicketCache=false
   principal="hbase/bi-master@WONHIGH.COM";
};

3. Before operating on ZooKeeper, load jaas-zk-keytab.conf into the client JVM flags, then delete the /hbase znode:

[root@bi-master ~]# export CLIENT_JVMFLAGS="-Djava.security.auth.login.config=/usr/deng_yb/jaas-zk-keytab.conf"
[root@bi-master ~]# zookeeper-client -server bi-master:2181
[zk: bi-master:2181(CONNECTED) 9] rmr /hbase

4. Restart HBase; the master recreates the /hbase znodes on startup.
5. If that still does not work, delete the HBase service in CM and re-add it.
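
To verify the recovery (a quick sketch; zookeeper-client is CDH's wrapper around zkCli.sh, so it accepts a trailing command):

# /hbase should exist again with its usual children once the master is up
zookeeper-client -server bi-master:2181 ls /hbase

# the acl table should be listed among the hbase system tables again
echo "list_namespace_tables 'hbase'" | hbase shell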

YARN ACL management

Solution:

[image: yarn_acl管理.png — YARN ACL management settings in CM]

Either disable ACL management altogether, or set yarn.admin.acl=*.
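
On a CM-managed cluster these are set in the YARN configuration page shown above; outside CM, the corresponding yarn-site.xml properties should be (pick one of the two):

<!-- option 1: disable ACL checks entirely -->
<property>
  <name>yarn.acl.enable</name>
  <value>false</value>
</property>

<!-- option 2: keep ACLs enabled but allow everyone as YARN admin -->
<property>
  <name>yarn.admin.acl</name>
  <value>*</value>
</property>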

A batch msck repair script works on dev and prod but not on the test cluster

The script (driven from a Kettle repository) loops over a table list and, for each table, ssh-es to the cluster and runs the repair through beeline, with expect supplying the ssh password:

#!/bin/sh
.....
for file in `cat $KETTLE_REPOSITORY/gtp/gtp_msck_repair/gtp_tables.txt`
do
   sql_msck=" msck repair table $file;"
   echo $file
   echo $sql_msck
expect<<-END
set timeout 10000
   spawn ssh $gtp_user@$gtp_ip "beeline -u 'jdbc:hive2://$gtp_ip:$gtp_port/gtp;principal=$gtp_principal' --hiveconf mapreduce.job.queuename=datacenter  -e 'msck repair table $file;' "
   expect "password: "
   send "$gtp_password\n"
expect eof
exit
END
echo "finish"
done
# In short: ssh to the remote host via expect, log in to beeline, and run the table-repair command

1. This script ran fine on the dev and production clusters but refused to run on the test cluster, even though the ssh versions and configuration were identical, the KDC was healthy, and HiveServer2 was up.
2. Unable to find the real difference, and suspecting the embedded ";" semicolon in the remote command, I split the script in two:

vi /usr/deng_yb/repair.sh

#!/bin/sh
# run a single msck repair on the remote HiveServer2 host;
# the ssh password is supplied by the expect wrapper that calls this script
gtp_ip=$1
gtp_port=$2
gtp_principal=$3
table=$4
ssh wms_test@test-gtp-cdh-node01  << eeooff
beeline -u 'jdbc:hive2://${gtp_ip}:${gtp_port}/gtp;principal=${gtp_principal}' --hiveconf mapreduce.job.queuename=datacenter  -e 'msck repair table ${table};'
eeooff
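
Called by hand it looks like this (port, principal, and table name are illustrative):

sh /usr/deng_yb/repair.sh test-gtp-cdh-node01 10000 hive/test-gtp-cdh-node01@WONHIGH.COM ods_stock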

and the original script's spawn line becomes:

...
spawn sh /usr/deng_yb/repair.sh $gtp_ip $gtp_port $gtp_principal $table
...
Separately, krb5.conf was given a second KDC (bi-slave1) in the [realms] section so that clients can fail over if bi-master is down; only bi-master runs the admin_server:

[realms]
 WONHIGH.COM = {
  kdc = bi-master
  admin_server = bi-master
  default_realm = WONHIGH.COM
  kdc = bi-slave1
#admin_server = bi-slave1
 }

Spark Streaming fails consuming from Kafka

Exception in thread "streaming-start" java.lang.NoSuchMethodError: org.apache.kafka.clients.consumer.KafkaConsumer.subscribe(Ljava/util/Collection;)V
    at org.apache.spark.streaming.kafka010.Subscribe.onStart(ConsumerStrategy.scala:85)
    at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.consumer(DirectKafkaInputDStream.scala:70)
    at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.start(DirectKafkaInputDStream.scala:240)
    at org.apache.spark.streaming.DStreamGraph$$anonfun$start$5.apply(DStreamGraph.scala:49)
    at org.apache.spark.streaming.DStreamGraph$$anonfun$start$5.apply(DStreamGraph.scala:49)
    at scala.collection.parallel.mutable.ParArray$ParArrayIterator.foreach_quick(ParArray.scala:143)
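
This NoSuchMethodError almost always means an old kafka-clients jar (0.9.x, whose subscribe() takes a List rather than a Collection) on the cluster classpath is shadowing the 0.10+ client that spark-streaming-kafka-0-10 compiles against. A minimal sketch of one common workaround, shipping the right client and preferring the application's jars (class name, jar names, and version are illustrative):

# ship a 0.10+ kafka-clients with the job and make Spark prefer it over
# the older jar bundled on the cluster classpath
spark-submit \
  --class com.example.StreamingJob \
  --conf spark.driver.userClassPathFirst=true \
  --conf spark.executor.userClassPathFirst=true \
  --jars kafka-clients-0.10.0.1.jar \
  streaming-job.jar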

Sqoop import problems

1. A database-to-Hive import job fails partway through.

2. Importing from an RDBMS to Hive, the columns come out shifted and many become NULL (see the sketch below).

3. Importing from an RDBMS to Hive, the row count differs: the Hive count comes out higher than the source's.
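
Symptoms 2 and 3 usually share one root cause: row or field delimiters (\n, \r, or Hive's default \001) embedded in the source's string columns, so one source row splits into several Hive rows and the trailing columns shift to NULL. A hedged sketch of the usual guard flags (connection string, credentials, and table names are illustrative):

# drop embedded delimiter characters during the import;
# use --hive-delims-replacement ' ' instead if they must be preserved in some form
sqoop import \
  --connect jdbc:mysql://db-host:3306/gtp \
  --username etl \
  --password-file /user/etl/.db_pwd \
  --table t_order \
  --hive-import --hive-table gtp.t_order \
  --hive-drop-import-delims \
  --null-string '\\N' --null-non-string '\\N' \
  -m 4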
