error

2017-10-29  satyrs_sh
Reset the (expired) MySQL root password, then log back in:

SET PASSWORD = PASSWORD('your new password');
ALTER USER 'root'@'localhost' PASSWORD EXPIRE NEVER;
flush privileges;

$ mysql -u root -p
Create a hive account for the metastore:

mysql> create user 'hive' identified by '123456';
Query OK, 0 rows affected (0.00 sec)
mysql> grant all privileges on *.* to 'hive' with grant option;
Query OK, 0 rows affected (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.01 sec)

The same account is also created for remote hosts so connections over the network work:

create user 'hive'@'%' identified by 'hive';
grant all privileges on *.* to 'hive'@'%' with grant option;
flush privileges;
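With the account in place, hive-site.xml points the metastore at MySQL. A minimal sketch (the metastore database name, the port 3306, and the password are assumptions and must match your setup):

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/metastore?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
</property>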

Format HDFS and start Hadoop:

hadoop namenode -format
start-all.sh
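Once start-all.sh returns, jps should list the Hadoop daemons (a sketch; the exact set depends on the Hadoop version and configuration):

$ jps
# expect, among others: NameNode, DataNode, SecondaryNameNode,
# ResourceManager, NodeManager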

  1. schemaTool failed
    Error: Duplicate key name 'PCS_STATS_IDX' (state=42000,code=1061)
    MySQL error 1061 means the index already exists, i.e. an earlier -initSchema run left metastore tables behind; typically the leftover metastore database has to be dropped before re-running schematool. Shut down MySQL:
sudo /usr/local/mysql/support-files/mysql.server start
sudo /usr/local/mysql/support-files/mysql.server stop
sudo /usr/local/mysql/support-files/mysql.server restart
  2. MySQL server PID file could not be found!
    Kill the stale mysqld process (6334 is the PID reported by ps here):
    ps -ef | grep mysqld; kill 6334
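    If the error persists, a stale .pid file may remain in the data directory (a sketch; /usr/local/mysql/data is an assumption based on the install path above):
    ls /usr/local/mysql/data/*.pid
    sudo rm /usr/local/mysql/data/*.pid    # only once mysqld is confirmed dead
    sudo /usr/local/mysql/support-files/mysql.server start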

  3. schemaTool failed
    Underlying cause: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException : Communications link failure
    Caused by a firewall blocking the MySQL port or by the MySQL user's host settings; see the check below.
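    To rule out the user setup, check which hosts the hive account may connect from (run as root in a mysql session):
    mysql> SELECT user, host FROM mysql.user WHERE user = 'hive';
    If only 'localhost' is listed but Hive connects via a hostname or IP, recreate the user as 'hive'@'%' as shown earlier and flush privileges.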

  4. java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
    Run the schema initialization:
    schematool -initSchema -dbType mysql
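    Afterwards the same tool can confirm the schema version (a sketch using schematool's -info option):
    schematool -info -dbType mysql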

  5. hadoop.ipc.RpcException: RPC response exceeds maximum data length; Host Details : local host is: "wyq/192.168.."; destination host is: "localhost":8088;
    Check what is listening on the port and kill the conflicting process. (8088 is YARN's web UI port by default, so this error often means the client is talking to a port that is not the NameNode's RPC port.)
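    A sketch of the port check (lsof here; netstat works as well):
    lsof -i :8088
    kill <pid>    # the PID reported by lsof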

  6. Exception in thread "main" java.lang.RuntimeException: java.net.ConnectException: Call From wyq/192.168.. to localhost:8088 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:
    The Hadoop NameNode did not start.
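    Confirm the process and restart HDFS if it is missing (a sketch; the log name follows Hadoop's hadoop-<user>-namenode-<host>.log pattern):
    jps | grep NameNode
    stop-all.sh && start-all.sh
    tail -n 50 $HADOOP_HOME/logs/hadoop-*-namenode-*.log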

  7. Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/":wyq:supergroup:drwxr-xr-x
    Switch to the wyq user (the owner of / in HDFS) instead of running as root.
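    Either log in as wyq, or override the client-side user name (a sketch; HADOOP_USER_NAME is honored only with simple authentication, i.e. when Kerberos is off):
    su - wyq
    # or
    export HADOOP_USER_NAME=wyq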

Another failure, hit while loading a dynamically partitioned table:

insert overwrite table test partition (pt)
select userid, substring(addts,0,10) as pt from testb f where pt>='2012-09-01';
[Fatal Error] Operator FS_3 (id=3): Number of dynamic partitions exceeded hive.exec.max.dynamic.partitions.pernode. Killing the job.

Cause:

hive.exec.max.dynamic.partitions.pernode (default 100):
the maximum number of dynamic partitions each mapper or reducer node may create; exceeding it fails the job.
hive.exec.max.dynamic.partitions (default 1000): the maximum number of dynamic partitions one DML statement may create in total.
hive.exec.max.created.files (default 100000): the maximum number of files all the mappers and reducers in a job may create.

Fix: have each mapper or reducer create as few new partition directories as possible. distribute by sends all rows with the same partition value to the same reducer, as in the sketch below:

distribute by substring(addts,0,10)
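Putting it together (a sketch; the raised pernode limit is illustrative, and nonstrict mode is required because the statement supplies no static partition value):

set hive.exec.dynamic.partition.mode=nonstrict;
set hive.exec.max.dynamic.partitions.pernode=1000;

insert overwrite table test partition (pt)
select userid, substring(addts,0,10) as pt
from testb f
where pt >= '2012-09-01'
distribute by substring(addts,0,10);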