Hive Field-Level Lineage Implementation
2021-02-18
烂泥_119c
## Background
- To make it easier to manage the upstream and downstream data of Hive tables (assessing the impact of logic changes, quickly tracing where data comes from), we need to build field-level data lineage for Hive. Hive itself ships a hook class that prints data lineage, and we can build on it for our implementation.
## Preparation
This hook class outputs the lineage relations as log messages, so to capture them the first thing we need is a log4j configuration file.
- hive-log4j2.properties
status = INFO
name = HiveLog4j2
packages = org.apache.hadoop.hive.ql.log
property.hive.log.level = INFO
property.hive.root.logger = DRFA
property.hive.log.dir = .
property.hive.log.file = hive.log
appenders = console, DRFA, lineage
# The console and DRFA appender configs are omitted here; they are the usual boilerplate
# ......
loggers = lineageLogger
# lineage
logger.lineageLogger.name = org.apache.hadoop.hive.ql.hooks.LineageLogger
logger.lineageLogger.level = INFO
logger.lineageLogger.additivity = false
logger.lineageLogger.appenderRefs = lineage
logger.lineageLogger.appenderRef.lineage.ref = lineage
appender.lineage.type = RollingRandomAccessFile
appender.lineage.name = lineage
appender.lineage.fileName = ${sys:hive.log.dir}/hive_lineage.log
appender.lineage.filePattern = ${sys:hive.log.dir}/hive_lineage.log.%d{yyyy-MM-dd}
appender.lineage.layout.type = PatternLayout
appender.lineage.layout.pattern = %m%n
appender.lineage.policies.type = Policies
appender.lineage.policies.time.type = TimeBasedTriggeringPolicy
- Before the Hive script runs, point Hive at this logging configuration file and register the hook:
set hive.log4j.file=hive-log4j2.properties;
set hive.exec.post.hooks=org.apache.hadoop.hive.ql.hooks.LineageLogger;
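- One note, not from the original article: since log4j2 is initialized when Hive starts up, the logging configuration can also be supplied at launch time, e.g. `hive --hiveconf hive.log4j.file=/path/to/hive-log4j2.properties -f your_script.sql` (paths and script name are illustrative).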
## Running
- With the configuration above in place, a log file named hive_lineage.log is produced locally on the server after the Hive script finishes.
- Parsing this log file gives us the field-level lineage; a minimal reading sketch follows, and a fuller parsing sketch closes the Example section below.
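Because the lineage appender uses the `%m%n` pattern, the hook writes each lineage record as a single JSON document on its own line. A minimal reading sketch, assuming the log file sits in the working directory as configured above (path and helper name are illustrative):

```python
import json

# Illustrative path; matches ${sys:hive.log.dir}/hive_lineage.log from the config above.
LINEAGE_LOG = "hive_lineage.log"

def read_lineage_records(path):
    """Yield one parsed lineage record per non-empty line of the lineage log."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

for record in read_lineage_records(LINEAGE_LOG):
    print(record["queryText"], "->", len(record.get("edges", [])), "edges")
```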
## Example
- For instance, run the following HiveQL:
CREATE TABLE tmp_zone_info AS
SELECT z.zoneid AS zone_id,
z.zonename AS zone_name,
c.cityid AS city_id,
c.cityname AS city_name
FROM dict_zoneinfo z
LEFT JOIN dict_cityinfo c
ON z.cityid = c.cityid
AND z.dt='20210218'
AND c.dt='20210218'
WHERE z.dt='20210218'
AND c.dt='20210218';
- The resulting log record, pretty-printed, looks like the following (this sample is excerpted from the web):
{
"version": "1.0",
"user": "hadoop",
"timestamp": 1510307578,
"duration": 30629,
"jobIds": [
"job_1509088410884_16739"
],
"engine": "mr",
"database": "cxy7_dw",
"hash": "4484378cebc5e2b0b55fb34368d861b0",
"queryText": "CREATE TABLE tmp_zone_info AS SELECT z.zoneid AS zone_id,z.zonename AS zone_name, c.cityid AS city_id, c.cityname AS city_name FROM dict_zoneinfo z LEFT JOIN dict_cityinfo c ON z.cityid = c.cityid AND z.dt='20171109' AND c.dt='20171109' WHERE z.dt='20171109' AND c.dt='20171109'",
"edges": [
{
"sources": [
4
],
"targets": [
0
],
"edgeType": "PROJECTION"
},
{
"sources": [
5
],
"targets": [
1
],
"edgeType": "PROJECTION"
},
{
"sources": [
6
],
"targets": [
2
],
"edgeType": "PROJECTION"
},
{
"sources": [
7
],
"targets": [
3
],
"edgeType": "PROJECTION"
},
{
"sources": [
8,
6
],
"targets": [
0,
1,
2,
3
],
"expression": "(z.cityid = c.cityid)",
"edgeType": "PREDICATE"
},
{
"sources": [
9
],
"targets": [
0,
1,
2,
3
],
"expression": "(c.dt = '20171109')",
"edgeType": "PREDICATE"
},
{
"sources": [
10,
9
],
"targets": [
0,
1,
2,
3
],
"expression": "((z.dt = '20171109') and (c.dt = '20171109'))",
"edgeType": "PREDICATE"
}
],
"vertices": [
{
"id": 0,
"vertexType": "COLUMN",
"vertexId": "cxy7_dw.tmp_zone_info.zone_id"
},
{
"id": 1,
"vertexType": "COLUMN",
"vertexId": "cxy7_dw.tmp_zone_info.zone_name"
},
{
"id": 2,
"vertexType": "COLUMN",
"vertexId": "cxy7_dw.tmp_zone_info.city_id"
},
{
"id": 3,
"vertexType": "COLUMN",
"vertexId": "cxy7_dw.tmp_zone_info.city_name"
},
{
"id": 4,
"vertexType": "COLUMN",
"vertexId": "cxy7_dw.dict_zoneinfo.zoneid"
},
{
"id": 5,
"vertexType": "COLUMN",
"vertexId": "cxy7_dw.dict_zoneinfo.zonename"
},
{
"id": 6,
"vertexType": "COLUMN",
"vertexId": "cxy7_dw.dict_cityinfo.cityid"
},
{
"id": 7,
"vertexType": "COLUMN",
"vertexId": "cxy7_dw.dict_cityinfo.cityname"
},
{
"id": 8,
"vertexType": "COLUMN",
"vertexId": "cxy7_dw.dict_zoneinfo.cityid"
},
{
"id": 9,
"vertexType": "COLUMN",
"vertexId": "cxy7_dw.dict_cityinfo.dt"
},
{
"id": 10,
"vertexType": "COLUMN",
"vertexId": "cxy7_dw.dict_zoneinfo.dt"
}
]
}
- In the log record, the columns of the tables involved are encoded as numbered vertices, and each edge's sources/targets express the lineage between them, so the format is straightforward. Note that edgeType takes two values, PROJECTION and PREDICATE: PROJECTION edges are the column-level lineage we are after, while PREDICATE edges only capture filter/join logic. A small parsing sketch follows this list.
- Also be aware that when the query uses the WITH (CTE) syntax, no lineage is emitted.
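To make the parsing step concrete, here is a small sketch (not part of Hive or the original article; all names are illustrative) that keeps only PROJECTION edges and resolves their source/target ids against the vertices list, yielding source-column to target-column pairs:

```python
import json

def extract_column_lineage(record):
    """Map PROJECTION edges of one lineage record to (source column, target column) pairs.

    PREDICATE edges are skipped: they only record filter/join conditions,
    not how the target columns are derived.
    """
    vertices = {v["id"]: v["vertexId"] for v in record.get("vertices", [])}
    pairs = []
    for edge in record.get("edges", []):
        if edge.get("edgeType") != "PROJECTION":
            continue
        for src in edge["sources"]:
            for tgt in edge["targets"]:
                pairs.append((vertices[src], vertices[tgt]))
    return pairs

# Assumed usage: hive_lineage.log holds one JSON record per line, as shown earlier.
with open("hive_lineage.log", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        for src, tgt in extract_column_lineage(json.loads(line)):
            print(f"{src} -> {tgt}")
```

For the sample record above, this prints pairs such as cxy7_dw.dict_zoneinfo.zoneid -> cxy7_dw.tmp_zone_info.zone_id.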