Building Apache Flink from Source Against CDH 6.3.2
-
Configure the Maven mirror repository
[root@node01 cloudera]# cat /usr/share/maven/conf/settings.xml
...
<mirrors>
  <mirror>
    <id>alimaven</id>
    <name>aliyun maven</name>
    <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
    <mirrorOf>central</mirrorOf>
  </mirror>
</mirrors>
...
-
Download and extract the Flink 1.9.2 source, and check which flink-shaded version it depends on
[root@node01 cloudera]# wget https://mirrors.tuna.tsinghua.edu.cn/apache/flink/flink-1.9.2/flink-1.9.2-src.tgz
[root@node01 cloudera]# tar -zxvf flink-1.9.2-src.tgz
[root@node01 cloudera]# cat flink-1.9.2/pom.xml | grep flink.shaded.version
<flink.shaded.version>7.0</flink.shaded.version>
<version>6.2.1-${flink.shaded.version}</version>
<version>18.0-${flink.shaded.version}</version>
<version>4.1.32.Final-${flink.shaded.version}</version>
<version>2.0.25.Final-${flink.shaded.version}</version>
...
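The version lookup above can also be scripted. A minimal sketch that pulls the property out with sed; the sample fragment below stands in for flink-1.9.2/pom.xml so the snippet is self-contained:

```shell
# Extract the <flink.shaded.version> property from a pom.xml.
# A sample fragment stands in for flink-1.9.2/pom.xml here.
cat > /tmp/sample-pom.xml <<'EOF'
<properties>
  <flink.shaded.version>7.0</flink.shaded.version>
</properties>
EOF
SHADED_VERSION=$(sed -n 's|.*<flink.shaded.version>\(.*\)</flink.shaded.version>.*|\1|p' /tmp/sample-pom.xml)
echo "flink-shaded version: ${SHADED_VERSION}"
```

Point the sed at the real flink-1.9.2/pom.xml to get the version your checkout actually needs.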
-
Download and extract the flink-shaded 7.0 source
[root@node01 cloudera]# wget https://archive.apache.org/dist/flink/flink-shaded-7.0/flink-shaded-7.0-src.tgz
[root@node01 cloudera]# tar -zxvf flink-shaded-7.0-src.tgz
-
Add the Cloudera repository to the pom.xml of both flink and flink-shaded
<repositories>
  <repository>
    <id>cloudera</id>
    <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
    <name>Cloudera Repositories</name>
    <releases>
      <enabled>true</enabled>
    </releases>
    <snapshots>
      <enabled>false</enabled>
    </snapshots>
  </repository>
</repositories>
-
Build flink-shaded and install it into the local repository
mvn -T4C clean install -DskipTests -Dhadoop.version=3.0.0-cdh6.3.2 -Dzookeeper.version=3.4.5-cdh6.3.2
Note:
Add the following dependency inside the dependencies section of flink-shaded-7.0/flink-shaded-hadoop-2-uber/pom.xml; otherwise Flink fails at startup with a java.lang.NoSuchMethodError: org.apache.commons.cli.Option.builder(Ljava/lang/String;)Lorg/apache/commons/cli/Option$Builder; exception.
Cause:
commons-cli 1.2 is pulled in by default, and that version's Option class does not contain the Option$Builder inner class.
<dependency>
  <groupId>commons-cli</groupId>
  <artifactId>commons-cli</artifactId>
  <version>1.3.1</version>
</dependency>
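To confirm the rebuilt uber jar really bundles a commons-cli with the Option$Builder inner class, the jar listing can be grepped. A sketch: the jar path is from this build's local repository and may differ on your machine, and it assumes commons-cli classes are not relocated inside the shaded jar:

```shell
# Look for Option$Builder inside the shaded uber jar; its absence means
# commons-cli 1.2 was bundled and the NoSuchMethodError will reappear.
JAR="$HOME/.m2/repository/org/apache/flink/flink-shaded-hadoop-2-uber/3.0.0-cdh6.3.2-7.0/flink-shaded-hadoop-2-uber-3.0.0-cdh6.3.2-7.0.jar"
if [ -f "$JAR" ] && unzip -l "$JAR" | grep -q 'cli/Option\$Builder.class'; then
  echo "commons-cli 1.3+ bundled"
else
  echo "Option\$Builder not found (jar missing or commons-cli too old)"
fi
```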
Build output:
[root@node01 flink-shaded-7.0]# mvn -T4C clean install -DskipTests -Dhadoop.version=3.0.0-cdh6.3.2 -Dzookeeper.version=3.4.5-cdh6.3.2
...
[INFO] Reactor Summary:
[INFO]
[INFO] flink-shaded ...................................... SUCCESS [2.562s]
[INFO] flink-shaded-force-shading ........................ SUCCESS [1.550s]
[INFO] flink-shaded-asm-6 ................................ SUCCESS [2.945s]
[INFO] flink-shaded-guava-18 ............................. SUCCESS [4.558s]
[INFO] flink-shaded-netty-4 .............................. SUCCESS [7.088s]
[INFO] flink-shaded-netty-tcnative-dynamic ............... SUCCESS [4.131s]
[INFO] flink-shaded-jackson-parent ....................... SUCCESS [0.380s]
[INFO] flink-shaded-jackson-2 ............................ SUCCESS [4.371s]
[INFO] flink-shaded-jackson-module-jsonSchema-2 .......... SUCCESS [4.132s]
[INFO] flink-shaded-hadoop-2 ............................. SUCCESS [25.811s]
[INFO] flink-shaded-hadoop-2-uber ........................ SUCCESS [27.971s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 56.584s (Wall Clock)
[INFO] Finished at: Wed Apr 22 13:55:07 CST 2020
[INFO] Final Memory: 34M/3161M
[INFO] ------------------------------------------------------------------------
Problem: if the build fails with For artifact {org.apache.zookeeper:zookeeper:null:jar}, the error reads:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (process-resource-bundles) on project flink-shaded-hadoop-2-uber: Execution process-resource-bundles of goal org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process failed: For artifact {org.apache.zookeeper:zookeeper:null:jar}: The version cannot be empty. -> [Help 1]
Fix:
Edit the pom.xml under flink-shaded-hadoop-2-uber to add the zookeeper version, then rebuild.
...
<properties>
  <hadoop.version>2.4.1</hadoop.version>
  <!-- add the zookeeper version -->
  <zookeeper.version>3.4.5-cdh6.3.2</zookeeper.version>
</properties>
...
<groupId>org.apache.zookeeper</groupId>
<artifactId>zookeeper</artifactId>
<!-- add the zookeeper version -->
<version>${zookeeper.version}</version>
<exclusions>
...
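A quick grep confirms the property change took effect before rebuilding. A sketch only; the sample pom below stands in for the real flink-shaded-hadoop-2-uber/pom.xml so it runs anywhere:

```shell
# Verify that the uber module's pom now pins a zookeeper version.
cat > /tmp/uber-pom.xml <<'EOF'
<properties>
  <hadoop.version>2.4.1</hadoop.version>
  <zookeeper.version>3.4.5-cdh6.3.2</zookeeper.version>
</properties>
EOF
grep -n '<zookeeper.version>' /tmp/uber-pom.xml
```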
-
Build the Flink source
Edit flink-connectors/flink-hbase/pom.xml
<properties>
  <!--<hbase.version>1.4.3</hbase.version>-->
  <hbase.version>2.1.0-cdh6.3.2</hbase.version>
</properties>
mvn -T4C clean package -DskipTests -Pvendor-repos -Pinclude-hadoop -Dhadoop.version=3.0.0-cdh6.3.2 -Dhive.version=2.1.1-cdh6.3.2 -Dscala-2.11
Flag reference:
-T4C: build with 4 threads per CPU core
-Pinclude-hadoop: bundle the hadoop jars into lib/
-Pvendor-repos: required when building against CDH or HDP hadoop
-Dhadoop.version=3.0.0-cdh6.3.2: the hadoop version to build against
-Dhive.version=2.1.1-cdh6.3.2: the hive version to build against
-Dscala-2.11: build for Scala 2.11
Build log:
[INFO] Reactor Summary for flink 1.9.2:
[INFO]
[INFO] force-shading ...................................... SUCCESS [ 2.810 s]
[INFO] flink .............................................. SUCCESS [ 2.978 s]
[INFO] flink-annotations .................................. SUCCESS [ 5.721 s]
[INFO] flink-shaded-curator ............................... SUCCESS [ 5.782 s]
[INFO] flink-metrics ...................................... SUCCESS [ 0.671 s]
[INFO] flink-metrics-core ................................. SUCCESS [ 5.349 s]
[INFO] flink-test-utils-parent ............................ SUCCESS [ 1.483 s]
[INFO] flink-test-utils-junit ............................. SUCCESS [ 4.502 s]
[INFO] flink-core ......................................... SUCCESS [ 40.413 s]
[INFO] flink-java ......................................... SUCCESS [ 6.292 s]
[INFO] flink-queryable-state .............................. SUCCESS [ 0.254 s]
[INFO] flink-queryable-state-client-java .................. SUCCESS [ 1.418 s]
[INFO] flink-filesystems .................................. SUCCESS [ 0.750 s]
[INFO] flink-hadoop-fs .................................... SUCCESS [ 6.068 s]
[INFO] flink-runtime ...................................... SUCCESS [02:21 min]
[INFO] flink-scala ........................................ SUCCESS [ 54.247 s]
[INFO] flink-mapr-fs ...................................... SUCCESS [ 1.321 s]
[INFO] flink-filesystems :: flink-fs-hadoop-shaded ........ SUCCESS [ 8.563 s]
[INFO] flink-s3-fs-base ................................... SUCCESS [ 14.335 s]
[INFO] flink-s3-fs-hadoop ................................. SUCCESS [ 16.905 s]
[INFO] flink-s3-fs-presto ................................. SUCCESS [ 23.099 s]
[INFO] flink-swift-fs-hadoop .............................. SUCCESS [ 30.542 s]
[INFO] flink-oss-fs-hadoop ................................ SUCCESS [ 13.162 s]
[INFO] flink-azure-fs-hadoop .............................. SUCCESS [ 16.330 s]
[INFO] flink-optimizer .................................... SUCCESS [ 19.099 s]
[INFO] flink-clients ...................................... SUCCESS [ 2.122 s]
[INFO] flink-streaming-java ............................... SUCCESS [ 14.139 s]
[INFO] flink-test-utils ................................... SUCCESS [ 6.110 s]
[INFO] flink-runtime-web .................................. SUCCESS [02:37 min]
[INFO] flink-examples ..................................... SUCCESS [ 0.438 s]
[INFO] flink-examples-batch ............................... SUCCESS [ 27.812 s]
[INFO] flink-connectors ................................... SUCCESS [ 0.705 s]
[INFO] flink-hadoop-compatibility ......................... SUCCESS [ 14.950 s]
[INFO] flink-state-backends ............................... SUCCESS [ 1.644 s]
[INFO] flink-statebackend-rocksdb ......................... SUCCESS [ 5.861 s]
[INFO] flink-tests ........................................ SUCCESS [ 44.591 s]
[INFO] flink-streaming-scala .............................. SUCCESS [ 43.502 s]
[INFO] flink-table ........................................ SUCCESS [ 1.830 s]
[INFO] flink-table-common ................................. SUCCESS [ 4.247 s]
[INFO] flink-table-api-java ............................... SUCCESS [ 3.593 s]
[INFO] flink-table-api-java-bridge ........................ SUCCESS [ 4.231 s]
[INFO] flink-table-api-scala .............................. SUCCESS [ 14.861 s]
[INFO] flink-table-api-scala-bridge ....................... SUCCESS [ 13.659 s]
[INFO] flink-sql-parser ................................... SUCCESS [ 12.202 s]
[INFO] flink-libraries .................................... SUCCESS [ 2.007 s]
[INFO] flink-cep .......................................... SUCCESS [ 7.588 s]
[INFO] flink-table-planner ................................ SUCCESS [02:46 min]
[INFO] flink-orc .......................................... SUCCESS [ 4.112 s]
[INFO] flink-jdbc ......................................... SUCCESS [ 4.117 s]
[INFO] flink-table-runtime-blink .......................... SUCCESS [ 11.370 s]
[INFO] flink-table-planner-blink .......................... SUCCESS [04:00 min]
[INFO] flink-hbase ........................................ SUCCESS [ 28.810 s]
[INFO] flink-hcatalog ..................................... SUCCESS [ 10.915 s]
[INFO] flink-metrics-jmx .................................. SUCCESS [ 1.355 s]
[INFO] flink-connector-kafka-base ......................... SUCCESS [ 9.463 s]
[INFO] flink-connector-kafka-0.9 .......................... SUCCESS [ 15.682 s]
[INFO] flink-connector-kafka-0.10 ......................... SUCCESS [ 4.199 s]
[INFO] flink-connector-kafka-0.11 ......................... SUCCESS [ 7.567 s]
[INFO] flink-formats ...................................... SUCCESS [ 2.251 s]
[INFO] flink-json ......................................... SUCCESS [ 1.543 s]
[INFO] flink-connector-elasticsearch-base ................. SUCCESS [ 5.216 s]
[INFO] flink-connector-elasticsearch2 ..................... SUCCESS [ 49.831 s]
[INFO] flink-connector-elasticsearch5 ..................... SUCCESS [ 53.684 s]
[INFO] flink-connector-elasticsearch6 ..................... SUCCESS [ 11.141 s]
[INFO] flink-csv .......................................... SUCCESS [ 0.553 s]
[INFO] flink-connector-hive ............................... SUCCESS [ 27.144 s]
[INFO] flink-connector-rabbitmq ........................... SUCCESS [ 3.961 s]
[INFO] flink-connector-twitter ............................ SUCCESS [ 5.598 s]
[INFO] flink-connector-nifi ............................... SUCCESS [ 0.744 s]
[INFO] flink-connector-cassandra .......................... SUCCESS [ 14.668 s]
[INFO] flink-avro ......................................... SUCCESS [ 9.614 s]
[INFO] flink-connector-filesystem ......................... SUCCESS [ 9.759 s]
[INFO] flink-connector-kafka .............................. SUCCESS [ 15.728 s]
[INFO] flink-connector-gcp-pubsub ......................... SUCCESS [ 5.617 s]
[INFO] flink-sql-connector-elasticsearch6 ................. SUCCESS [ 30.449 s]
[INFO] flink-sql-connector-kafka-0.9 ...................... SUCCESS [ 1.645 s]
[INFO] flink-sql-connector-kafka-0.10 ..................... SUCCESS [ 2.836 s]
[INFO] flink-sql-connector-kafka-0.11 ..................... SUCCESS [ 2.787 s]
[INFO] flink-sql-connector-kafka .......................... SUCCESS [ 5.766 s]
[INFO] flink-connector-kafka-0.8 .......................... SUCCESS [ 7.800 s]
[INFO] flink-avro-confluent-registry ...................... SUCCESS [ 11.056 s]
[INFO] flink-parquet ...................................... SUCCESS [ 10.743 s]
[INFO] flink-sequence-file ................................ SUCCESS [ 0.686 s]
[INFO] flink-examples-streaming ........................... SUCCESS [ 38.346 s]
[INFO] flink-examples-table ............................... SUCCESS [ 33.490 s]
[INFO] flink-examples-build-helper ........................ SUCCESS [ 0.525 s]
[INFO] flink-examples-streaming-twitter ................... SUCCESS [ 1.058 s]
[INFO] flink-examples-streaming-state-machine ............. SUCCESS [ 0.659 s]
[INFO] flink-examples-streaming-gcp-pubsub ................ SUCCESS [ 5.456 s]
[INFO] flink-container .................................... SUCCESS [ 3.473 s]
[INFO] flink-queryable-state-runtime ...................... SUCCESS [ 3.480 s]
[INFO] flink-end-to-end-tests ............................. SUCCESS [ 1.483 s]
[INFO] flink-cli-test ..................................... SUCCESS [ 2.964 s]
[INFO] flink-parent-child-classloading-test-program ....... SUCCESS [ 3.633 s]
[INFO] flink-parent-child-classloading-test-lib-package ... SUCCESS [ 4.444 s]
[INFO] flink-dataset-allround-test ........................ SUCCESS [ 0.579 s]
[INFO] flink-dataset-fine-grained-recovery-test ........... SUCCESS [ 0.462 s]
[INFO] flink-datastream-allround-test ..................... SUCCESS [ 11.859 s]
[INFO] flink-batch-sql-test ............................... SUCCESS [ 1.319 s]
[INFO] flink-stream-sql-test .............................. SUCCESS [ 0.323 s]
[INFO] flink-bucketing-sink-test .......................... SUCCESS [ 5.689 s]
[INFO] flink-distributed-cache-via-blob ................... SUCCESS [ 3.021 s]
[INFO] flink-high-parallelism-iterations-test ............. SUCCESS [ 9.755 s]
[INFO] flink-stream-stateful-job-upgrade-test ............. SUCCESS [ 6.941 s]
[INFO] flink-queryable-state-test ......................... SUCCESS [ 4.807 s]
[INFO] flink-local-recovery-and-allocation-test ........... SUCCESS [ 0.973 s]
[INFO] flink-elasticsearch2-test .......................... SUCCESS [ 6.464 s]
[INFO] flink-elasticsearch5-test .......................... SUCCESS [ 8.548 s]
[INFO] flink-elasticsearch6-test .......................... SUCCESS [ 22.338 s]
[INFO] flink-quickstart ................................... SUCCESS [ 4.529 s]
[INFO] flink-quickstart-java .............................. SUCCESS [ 4.234 s]
[INFO] flink-quickstart-scala ............................. SUCCESS [ 4.197 s]
[INFO] flink-quickstart-test .............................. SUCCESS [ 1.227 s]
[INFO] flink-confluent-schema-registry .................... SUCCESS [ 9.498 s]
[INFO] flink-stream-state-ttl-test ........................ SUCCESS [ 21.483 s]
[INFO] flink-sql-client-test .............................. SUCCESS [ 2.769 s]
[INFO] flink-streaming-file-sink-test ..................... SUCCESS [ 3.807 s]
[INFO] flink-state-evolution-test ......................... SUCCESS [ 11.670 s]
[INFO] flink-mesos ........................................ SUCCESS [ 47.011 s]
[INFO] flink-yarn ......................................... SUCCESS [ 4.402 s]
[INFO] flink-gelly ........................................ SUCCESS [ 7.358 s]
[INFO] flink-gelly-scala .................................. SUCCESS [ 23.522 s]
[INFO] flink-gelly-examples ............................... SUCCESS [ 14.272 s]
[INFO] flink-metrics-dropwizard ........................... SUCCESS [ 0.699 s]
[INFO] flink-metrics-graphite ............................. SUCCESS [ 0.350 s]
[INFO] flink-metrics-influxdb ............................. SUCCESS [ 1.329 s]
[INFO] flink-metrics-prometheus ........................... SUCCESS [ 1.064 s]
[INFO] flink-metrics-statsd ............................... SUCCESS [ 0.784 s]
[INFO] flink-metrics-datadog .............................. SUCCESS [ 3.023 s]
[INFO] flink-metrics-slf4j ................................ SUCCESS [ 0.731 s]
[INFO] flink-cep-scala .................................... SUCCESS [ 15.660 s]
[INFO] flink-table-uber ................................... SUCCESS [ 6.177 s]
[INFO] flink-table-uber-blink ............................. SUCCESS [ 3.510 s]
[INFO] flink-sql-client ................................... SUCCESS [ 2.423 s]
[INFO] flink-state-processor-api .......................... SUCCESS [ 1.121 s]
[INFO] flink-python ....................................... SUCCESS [ 4.363 s]
[INFO] flink-scala-shell .................................. SUCCESS [ 49.588 s]
[INFO] flink-dist ......................................... SUCCESS [ 17.690 s]
[INFO] flink-end-to-end-tests-common ...................... SUCCESS [ 0.552 s]
[INFO] flink-metrics-availability-test .................... SUCCESS [ 0.299 s]
[INFO] flink-metrics-reporter-prometheus-test ............. SUCCESS [ 0.261 s]
[INFO] flink-heavy-deployment-stress-test ................. SUCCESS [ 31.274 s]
[INFO] flink-connector-gcp-pubsub-emulator-tests .......... SUCCESS [ 3.946 s]
[INFO] flink-streaming-kafka-test-base .................... SUCCESS [ 2.275 s]
[INFO] flink-streaming-kafka-test ......................... SUCCESS [ 28.913 s]
[INFO] flink-streaming-kafka011-test ...................... SUCCESS [ 18.241 s]
[INFO] flink-streaming-kafka010-test ...................... SUCCESS [ 24.459 s]
[INFO] flink-plugins-test ................................. SUCCESS [ 0.757 s]
[INFO] dummy-fs ........................................... SUCCESS [ 0.739 s]
[INFO] another-dummy-fs ................................... SUCCESS [ 0.735 s]
[INFO] flink-tpch-test .................................... SUCCESS [ 6.803 s]
[INFO] flink-contrib ...................................... SUCCESS [ 0.807 s]
[INFO] flink-connector-wikiedits .......................... SUCCESS [ 3.988 s]
[INFO] flink-yarn-tests ................................... SUCCESS [ 5.258 s]
[INFO] flink-fs-tests ..................................... SUCCESS [ 8.230 s]
[INFO] flink-docs ......................................... SUCCESS [ 1.313 s]
[INFO] flink-ml-parent .................................... SUCCESS [ 1.400 s]
[INFO] flink-ml-api ....................................... SUCCESS [ 0.977 s]
[INFO] flink-ml-lib ....................................... SUCCESS [ 0.593 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 13:16 min (Wall Clock)
[INFO] Finished at: 2020-04-24T07:27:54+08:00
[INFO] ------------------------------------------------------------------------
Problem: compiling the flink-yarn test code fails because no matching overload of org.apache.hadoop.yarn.api.records.ApplicationReport.newInstance exists in the CDH hadoop version:
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 08:18 min (Wall Clock)
[INFO] Finished at: 2019-07-27T13:18:30+08:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.8.0:testCompile (default-testCompile) on project flink-yarn_2.11: Compilation failure
[ERROR] /opt/flink2/flink-yarn/src/test/java/org/apache/flink/yarn/AbstractYarnClusterTest.java:[89,41] no suitable method found for newInstance(org.apache.hadoop.yarn.api.records.ApplicationId,org.apache.hadoop.yarn.api.records.ApplicationAttemptId,java.lang.String,java.lang.String,java.lang.String,java.lang.String,int,<nulltype>,org.apache.hadoop.yarn.api.records.YarnApplicationState,<nulltype>,<nulltype>,long,long,org.apache.hadoop.yarn.api.records.FinalApplicationStatus,<nulltype>,<nulltype>,float,<nulltype>,<nulltype>)
[ERROR] method org.apache.hadoop.yarn.api.records.ApplicationReport.newInstance(org.apache.hadoop.yarn.api.records.ApplicationId,org.apache.hadoop.yarn.api.records.ApplicationAttemptId,java.lang.String,java.lang.String,java.lang.String,java.lang.String,int,org.apache.hadoop.yarn.api.records.Token,org.apache.hadoop.yarn.api.records.YarnApplicationState,java.lang.String,java.lang.String,long,long,long,org.apache.hadoop.yarn.api.records.FinalApplicationStatus,org.apache.hadoop.yarn.api.records.ApplicationResourceUsageReport,java.lang.String,float,java.lang.String,org.apache.hadoop.yarn.api.records.Token) is not applicable
[ERROR] (actual and formal argument lists differ in length)
[ERROR] method org.apache.hadoop.yarn.api.records.ApplicationReport.newInstance(org.apache.hadoop.yarn.api.records.ApplicationId,org.apache.hadoop.yarn.api.records.ApplicationAttemptId,java.lang.String,java.lang.String,java.lang.String,java.lang.String,int,org.apache.hadoop.yarn.api.records.Token,org.apache.hadoop.yarn.api.records.YarnApplicationState,java.lang.String,java.lang.String,long,long,org.apache.hadoop.yarn.api.records.FinalApplicationStatus,org.apache.hadoop.yarn.api.records.ApplicationResourceUsageReport,java.lang.String,float,java.lang.String,org.apache.hadoop.yarn.api.records.Token,java.util.Set<java.lang.String>,boolean,org.apache.hadoop.yarn.api.records.Priority,java.lang.String,java.lang.String) is not applicable
[ERROR] (actual and formal argument lists differ in length)
[ERROR] method org.apache.hadoop.yarn.api.records.ApplicationReport.newInstance(org.apache.hadoop.yarn.api.records.ApplicationId,org.apache.hadoop.yarn.api.records.ApplicationAttemptId,java.lang.String,java.lang.String,java.lang.String,java.lang.String,int,org.apache.hadoop.yarn.api.records.Token,org.apache.hadoop.yarn.api.records.YarnApplicationState,java.lang.String,java.lang.String,long,long,long,org.apache.hadoop.yarn.api.records.FinalApplicationStatus,org.apache.hadoop.yarn.api.records.ApplicationResourceUsageReport,java.lang.String,float,java.lang.String,org.apache.hadoop.yarn.api.records.Token,java.util.Set<java.lang.String>,boolean,org.apache.hadoop.yarn.api.records.Priority,java.lang.String,java.lang.String) is not applicable
[ERROR] (actual and formal argument lists differ in length)
[ERROR]
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <goals> -rf :flink-yarn_2.11
Fix:
Add the following plugin to the build section of flink-yarn/pom.xml to skip compiling this module's test code. (Note that -DskipTests only skips running tests, not compiling them, which is why the failure occurs without this change.)
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <version>3.8.0</version>
  <configuration>
    <source>${java.version}</source>
    <target>${java.version}</target>
    <!-- skip compiling test code -->
    <skip>true</skip>
    <!-- The semantics of this option are reversed, see MCOMPILER-209. -->
    <useIncrementalCompilation>false</useIncrementalCompilation>
    <compilerArgs>
      <!-- Prevents recompilation due to missing package-info.class, see MCOMPILER-205 -->
      <arg>-Xpkginfo:always</arg>
    </compilerArgs>
  </configuration>
</plugin>
Problem: the flink-hbase module fails to compile with /AbstractTableInputFormat.java:[235,99] cannot find symbol:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.8.0:compile (default-compile) on project flink-hbase_2.11: Compilation failure: Compilation failure:
[ERROR] /data/github/cloudera/flink-1.9.2/flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/AbstractTableInputFormat.java:[235,99] cannot find symbol
[ERROR] symbol: method getTableName()
[ERROR] location: variable table of type org.apache.hadoop.hbase.client.HTable
[ERROR] /data/github/cloudera/flink-1.9.2/flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/TableInputFormat.java:[84,32] constructor HTable in class org.apache.hadoop.hbase.client.HTable cannot be applied to given types;
[ERROR] required: org.apache.hadoop.hbase.client.ClusterConnection,org.apache.hadoop.hbase.client.TableBuilderBase,org.apache.hadoop.hbase.client.RpcRetryingCallerFactory,org.apache.hadoop.hbase.ipc.RpcControllerFactory,java.util.concurrent.ExecutorService
[ERROR] found: org.apache.hadoop.conf.Configuration,java.lang.String
[ERROR] reason: actual and formal argument lists differ in length
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <args> -rf :flink-hbase_2.11
Fix:
Edit line 235 of flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/AbstractTableInputFormat.java, changing
final TableInputSplit split = new TableInputSplit(id, hosts, table.getTableName(), splitStart, splitStop);
to
final TableInputSplit split = new TableInputSplit(id, hosts, table.getName().getName(), splitStart, splitStop);
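The one-line change can also be applied mechanically with sed. A sketch, demonstrated on a temp copy here; in a real checkout, point sed at AbstractTableInputFormat.java and review the diff before building:

```shell
# Demonstrate the substitution on a sample copy of line 235; for the real fix,
# run the same sed against flink-connectors/flink-hbase/.../AbstractTableInputFormat.java
printf 'final TableInputSplit split = new TableInputSplit(id, hosts, table.getTableName(), splitStart, splitStop);\n' > /tmp/line235.java
sed -i 's/table\.getTableName()/table.getName().getName()/' /tmp/line235.java
cat /tmp/line235.java
```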
Problem: the flink-hbase module still fails with /TableInputFormat.java:[84,32] constructor HTable cannot be applied to given types:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.8.0:compile (default-compile) on project flink-hbase_2.11: Compilation failure
[ERROR] /data/github/cloudera/flink-1.9.2/flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/TableInputFormat.java:[84,32] constructor HTable in class org.apache.hadoop.hbase.client.HTable cannot be applied to given types;
[ERROR] required: org.apache.hadoop.hbase.client.ClusterConnection,org.apache.hadoop.hbase.client.TableBuilderBase,org.apache.hadoop.hbase.client.RpcRetryingCallerFactory,org.apache.hadoop.hbase.ipc.RpcControllerFactory,java.util.concurrent.ExecutorService
[ERROR] found: org.apache.hadoop.conf.Configuration,java.lang.String
[ERROR] reason: actual and formal argument lists differ in length
[ERROR]
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <args> -rf :flink-hbase_2.11
Fix:
Rewrite the createTable() method around line 84 of flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/TableInputFormat.java:
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;
...
private HTable createTable() {
	LOG.info("Initializing HBaseConfiguration");
	// use files found in the classpath
	org.apache.hadoop.conf.Configuration hConf = HBaseConfiguration.create();
	/*
	try {
		return new HTable(hConf, getTableName());
	}
	*/
	try {
		// the connection must stay open as long as the returned table is used,
		// so it is deliberately not created in a try-with-resources block
		Connection connection = ConnectionFactory.createConnection(hConf);
		// the Table returned by getTable is lightweight
		Table connTable = connection.getTable(TableName.valueOf(getTableName()));
		if (connTable instanceof HTable) {
			return (HTable) connTable;
		}
	} catch (Exception e) {
		LOG.error("Error instantiating a new HTable instance", e);
	}
	return null;
}
If other modules fail while compiling test code, apply the same fix as for the flink-yarn module above: add the maven-compiler-plugin snippet that skips compilation of test code.
-
Packaging
Inspect the dependency jars under Flink's lib directory
[root@node01 cloudera]# cd flink-1.9.2/flink-dist/target/flink-1.9.2-bin
[root@node01 flink-1.9.2-bin]# pwd
/data/github/cloudera/flink-1.9.2/flink-dist/target/flink-1.9.2-bin
[root@node01 flink-1.9.2-bin]# cd flink-1.9.2/lib/
[root@node01 lib]# ll
total 201652
-rw-r--r-- 1 root root 105437196 Apr 24 07:27 flink-dist_2.11-1.9.2.jar
-rw-r--r-- 1 root root 59612259 Apr 22 13:55 flink-shaded-hadoop-2-uber-3.0.0-cdh6.3.2-7.0.jar
-rw-r--r-- 1 root root 18751140 Apr 24 07:25 flink-table_2.11-1.9.2.jar
-rw-r--r-- 1 root root 22182832 Apr 24 07:27 flink-table-blink_2.11-1.9.2.jar
-rw-r--r-- 1 root root 489884 Apr 22 11:54 log4j-1.2.17.jar
-rw-r--r-- 1 root root 9931 Apr 22 11:54 slf4j-log4j12-1.7.15.jar
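Before packaging, it is worth checking that -Pinclude-hadoop actually placed the shaded hadoop jar in lib/. A sketch; the path assumes the build layout shown above:

```shell
# Fail fast if the hadoop uber jar is absent from lib/.
LIB_DIR="flink-1.9.2/flink-dist/target/flink-1.9.2-bin/flink-1.9.2/lib"
if ls "$LIB_DIR"/flink-shaded-hadoop-2-uber-*.jar >/dev/null 2>&1; then
  echo "hadoop uber jar present"
else
  echo "hadoop uber jar missing: rebuild with -Pinclude-hadoop or copy it into lib/ manually"
fi
```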
Create the tarball. Because the build targeted Scala 2.11, the archive must be named flink-1.9.2-bin-scala_2.11.tgz.
[root@node01 flink-1.9.2-bin]# tar -zcf flink-1.9.2-bin-scala_2.11.tgz flink-1.9.2
[root@node01 flink-1.9.2-bin]# ll
total 310688
drwxr-xr-x 9 root root 126 Apr 24 07:27 flink-1.9.2
-rw-r--r-- 1 root root 318142073 Apr 24 08:41 flink-1.9.2-bin-scala_2.11.tgz
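The expected tarball name can be derived from the Flink and Scala versions rather than typed by hand; a small sketch:

```shell
# Compose the distribution name the tooling expects:
# flink-<version>-bin-scala_<scala binary version>.tgz
FLINK_VERSION=1.9.2
SCALA_BINARY_VERSION=2.11
TARBALL="flink-${FLINK_VERSION}-bin-scala_${SCALA_BINARY_VERSION}.tgz"
echo "$TARBALL"
# then: tar -zcf "$TARBALL" "flink-${FLINK_VERSION}"
```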
-
Building against the flink-shaded 9.0 dependency versions
The following changes are required before rerunning the build and packaging steps:
-
Edit the flink-1.9.2/pom.xml file:
Change flink-shaded-asm-6 to flink-shaded-asm-7.
Change 6.2.1-${flink.shaded.version} to 7.1-${flink.shaded.version}.
Change 4.1.32.Final to 4.1.39.Final.
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-shaded-asm-7</artifactId>
  <version>7.1-${flink.shaded.version}</version>
</dependency>
...
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-shaded-netty</artifactId>
  <version>4.1.39.Final-${flink.shaded.version}</version>
</dependency>
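The three substitutions above can be applied with sed. A sketch, demonstrated on a sample fragment here; in a real checkout, run the same sed against flink-1.9.2/pom.xml and review the diff before building:

```shell
# Demonstrate the substitutions on a sample pom fragment.
cat > /tmp/pom-fragment.xml <<'EOF'
<artifactId>flink-shaded-asm-6</artifactId>
<version>6.2.1-${flink.shaded.version}</version>
<version>4.1.32.Final-${flink.shaded.version}</version>
EOF
sed -i \
  -e 's/flink-shaded-asm-6/flink-shaded-asm-7/' \
  -e 's/6\.2\.1-/7.1-/' \
  -e 's/4\.1\.32\.Final/4.1.39.Final/' \
  /tmp/pom-fragment.xml
cat /tmp/pom-fragment.xml
```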
-
Edit flink-core/pom.xml, flink-java/pom.xml, flink-scala/pom.xml, and flink-runtime/pom.xml, changing flink-shaded-asm-6 to flink-shaded-asm-7:
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-shaded-asm-7</artifactId>
</dependency>
-
In the following source files, change the imported package name org.apache.flink.shaded.asm6 to org.apache.flink.shaded.asm7:
flink-1.9.2/flink-core/src/main/java/org/apache/flink/api/java/typeutils/TypeExtractionUtils.java
flink-1.9.2/flink-java/src/main/java/org/apache/flink/api/java/ClosureCleaner.java
flink-1.9.2/flink-scala/src/main/scala/org/apache/flink/api/scala/ClosureCleaner.scala
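The package rename can be scripted with sed. A sketch, demonstrated on an illustrative sample import line (the exact imports in each file may differ); in a real checkout, run the same sed over the three files listed above:

```shell
# Demonstrate the package rename on a sample import line.
printf 'import org.apache.flink.shaded.asm6.org.objectweb.asm.Type;\n' > /tmp/asm-import.java
sed -i 's/shaded\.asm6/shaded.asm7/g' /tmp/asm-import.java
cat /tmp/asm-import.java
```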
-