
An Analysis of DataX's Execution Flow

2018-10-05  晴天哥_王志

Introduction

I first heard of DataX when a former Alibaba colleague introduced it at my current company, and I had long wanted to find time to read this part of the codebase properly, because DataX's framework is well designed and lends itself to customization and extension. While getting familiar with the code I did not have time to study the reader/writer implementations for each individual data source (that code is well worth reading — you can see how virtually every mainstream data source is read and written), so this article focuses on DataX's startup and execution code.

As I see it, the core of the DataX framework comes down to two things: configuration threads through the entire system ("all in configuration" — JSON configuration pushed to the extreme), and plugins are hot-loaded via URLClassLoader.

DataX on GitHub: https://github.com/alibaba/DataX

About DataX

DataX is an offline data synchronization tool/platform widely used inside Alibaba Group. It implements efficient data synchronization between heterogeneous data sources including MySQL, SQL Server, Oracle, PostgreSQL, HDFS, Hive, HBase, OTS, and ODPS.
As a synchronization framework, DataX abstracts each data source as a Reader plugin that reads from the source and a Writer plugin that writes to the target, so in principle the framework can synchronize between arbitrary data source types. The plugin system also works as an ecosystem: every newly added data source immediately becomes interoperable with all existing ones.

Job & Task Concepts

DataX's logical model has two levels, job and task: a job is split into tasks, and the tasks are then grouped into taskGroups for execution. For example, a job that needs 20 channels with 5 channels per task group runs as 4 task groups.


Enabling Debug for DataX

The best way to read source code is to debug the whole project. Working out how to debug DataX took me some effort, so I am sharing the setup here for anyone interested.
The debug setup proceeds through the following steps:

(1) Download the DataX source:

$ git clone git@github.com:alibaba/DataX.git
(2) Build it with Maven:

$ cd  {DataX_source_code_home}
$ mvn -U clean package assembly:assembly -Dmaven.test.skip=true
When the build succeeds, the log ends like this:

[INFO] BUILD SUCCESS
[INFO] -----------------------------------------------------------------
[INFO] Total time: 08:12 min
[INFO] Finished at: 2015-12-13T16:26:48+08:00
[INFO] Final Memory: 133M/960M
[INFO] -----------------------------------------------------------------
The packaged DataX lives under {DataX_source_code_home}/target/datax/datax/ with the following layout:

$ cd  {DataX_source_code_home}
$ ls ./target/datax/datax/
bin     conf        job     lib     log     log_perf    plugin      script      tmp

The bin/datax.py script is a thin Python wrapper that assembles and launches the actual Java command; the tail of the script looks like this:

if __name__ == "__main__":
    printCopyright()
    parser = getOptionParser()
    options, args = parser.parse_args(sys.argv[1:])
    if options.reader is not None and options.writer is not None:
        generateJobConfigTemplate(options.reader,options.writer)
        sys.exit(RET_STATE['OK'])
    if len(args) != 1:
        parser.print_help()
        sys.exit(RET_STATE['FAIL'])

    startCommand = buildStartCommand(options, args)
    print startCommand

    child_process = subprocess.Popen(startCommand, shell=True)
    register_signal()
    (stdout, stderr) = child_process.communicate()

    sys.exit(child_process.returncode)

Running bin/datax.py with a job file prints the assembled start command, along these lines:

java -server  -Xms1g -Xmx1g -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=//Users/lebron374/Documents/github/DataX/target/datax/datax/log 
-Dloglevel=info -Dfile.encoding=UTF-8 
-Dlogback.statusListenerClass=ch.qos.logback.core.status.NopStatusListener 
-Djava.security.egd=file:///dev/urandom -Ddatax.home=//Users/lebron374/Documents/github/DataX/target/datax/datax 
-Dlogback.configurationFile=//Users/lebron374/Documents/github/DataX/target/datax/datax/conf/logback.xml 
-classpath //Users/lebron374/Documents/github/DataX/target/datax/datax/lib/*:.
-Dlog.file.name=s_datax_job_job_json 
com.alibaba.datax.core.Engine 

-mode standalone -jobid -1 
-job //Users/lebron374/Documents/github/DataX/target/datax/datax/job/job.json

To debug the Engine directly in an IDE instead, put the following into VM options:
-server  -Xms1g -Xmx1g -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=//Users/lebron374/Documents/github/DataX/target/datax/datax/log 
-Dloglevel=info -Dfile.encoding=UTF-8 
-Dlogback.statusListenerClass=ch.qos.logback.core.status.NopStatusListener 
-Djava.security.egd=file:///dev/urandom -Ddatax.home=//Users/lebron374/Documents/github/DataX/target/datax/datax 
-Dlogback.configurationFile=//Users/lebron374/Documents/github/DataX/target/datax/datax/conf/logback.xml 
-classpath //Users/lebron374/Documents/github/DataX/target/datax/datax/lib/*:.
-Dlog.file.name=s_datax_job_job_json 
com.alibaba.datax.core.Engine 

And the following into Program arguments:
-mode standalone -jobid -1 
-job //Users/lebron374/Documents/github/DataX/target/datax/datax/job/job.json
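
Alternatively, you can attach a debugger to the packaged build itself by adding the standard JVM remote-debug flags to the java command above and connecting your IDE to the chosen port (these are plain JDWP options, not anything DataX-specific):

-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005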

Source Code Walkthrough of the Startup Process

Entry Point: the main Function

public class Engine {

    public static void main(String[] args) throws Exception {
        int exitCode = 0;
        try {
            Engine.entry(args);
        } catch (Throwable e) {
            exitCode = 1;   // a failed job must not exit with code 0
            System.exit(exitCode);
        }
    }

    public static void entry(final String[] args) throws Throwable {

        // argument parsing elided

        // path of the job configuration file
        String jobPath = cl.getOptionValue("job");

        // if the user did not specify a jobid, datax.py passes the default -1
        String jobIdString = cl.getOptionValue("jobid");
        RUNTIME_MODE = cl.getOptionValue("mode");

        // parse and merge the configuration (job + core + plugins)
        Configuration configuration = ConfigParser.parse(jobPath);

        // code elided (jobId is derived from jobIdString here)
        boolean isStandAloneMode = "standalone".equalsIgnoreCase(RUNTIME_MODE);
        configuration.set(CoreConstant.DATAX_CORE_CONTAINER_JOB_ID, jobId);

        // start the engine with the merged configuration
        Engine engine = new Engine();
        engine.start(configuration);
    }
}

Notes:
The main function does two things:

1. entry() parses the command-line arguments (-job, -jobid, -mode) and builds the global Configuration through ConfigParser.parse;
2. it then creates an Engine and hands the merged configuration to engine.start().

The configuration Parsing Process

public final class ConfigParser {
    private static final Logger LOG = LoggerFactory.getLogger(ConfigParser.class);
    /**
     * Given the job config path, ConfigParser parses the Job, Plugin and Core
     * configuration and returns everything as a single Configuration.
     */
    public static Configuration parse(final String jobPath) {
        // load the job's own configuration file, which follows a fixed json template
        Configuration configuration = ConfigParser.parseJobConfig(jobPath);

        // merge in the framework configuration from conf/core.json
        configuration.merge(
                ConfigParser.parseCoreConfig(CoreConstant.DATAX_CONF_PATH),
                false);

        // todo: config optimization — only load the plugins actually needed
        // fixed node path: job.content[0].reader.name
        String readerPluginName = configuration.getString(
                CoreConstant.DATAX_JOB_CONTENT_READER_NAME);
        // fixed node path: job.content[0].writer.name
        String writerPluginName = configuration.getString(
                CoreConstant.DATAX_JOB_CONTENT_WRITER_NAME);

        // fixed node path: job.preHandler.pluginName
        String preHandlerName = configuration.getString(
                CoreConstant.DATAX_JOB_PREHANDLER_PLUGINNAME);

        // fixed node path: job.postHandler.pluginName
        String postHandlerName = configuration.getString(
                CoreConstant.DATAX_JOB_POSTHANDLER_PLUGINNAME);

        // collect the reader/writer plugins that must be loaded
        Set<String> pluginList = new HashSet<String>();
        pluginList.add(readerPluginName);
        pluginList.add(writerPluginName);

        if(StringUtils.isNotEmpty(preHandlerName)) {
            pluginList.add(preHandlerName);
        }
        if(StringUtils.isNotEmpty(postHandlerName)) {
            pluginList.add(postHandlerName);
        }
        try {
            // parsePluginConfig loads the configuration of the requested plugins
            // and merges it into the global configuration
            configuration.merge(parsePluginConfig(new ArrayList<String>(pluginList)), false);
        }catch (Exception e){
            // exception handling elided in this excerpt
        }

        // configuration now integrates three sources: the job config,
        // the core config, and the configs of the referenced plugins
        return configuration;
    }


    // scan the reader/writer plugin directories and parse the configs
    // of the requested plugins
    public static Configuration parsePluginConfig(List<String> wantPluginNames) {
        // start from an empty configuration object
        Configuration configuration = Configuration.newDefault();

        Set<String> replicaCheckPluginSet = new HashSet<String>();
        int complete = 0;
        // all readers live under {datax.home}/plugin/reader; iterate over every
        // reader directory and merge the wanted plugins' configs into the
        // empty configuration created above
        for (final String each : ConfigParser
                .getDirAsList(CoreConstant.DATAX_PLUGIN_READER_HOME)) {

            // parse one reader directory; eachReaderConfig maps the key
            // plugin.reader.<pluginname> to the content of its plugin.json
            Configuration eachReaderConfig = ConfigParser.parseOnePluginConfig(each, "reader", replicaCheckPluginSet, wantPluginNames);
            if(eachReaderConfig!=null) {
                // merge with overwrite semantics
                configuration.merge(eachReaderConfig, true);
                complete += 1;
            }
        }

        // the same for writers under {datax.home}/plugin/writer
        for (final String each : ConfigParser
                .getDirAsList(CoreConstant.DATAX_PLUGIN_WRITER_HOME)) {
            Configuration eachWriterConfig = ConfigParser.parseOnePluginConfig(each, "writer", replicaCheckPluginSet, wantPluginNames);
            if(eachWriterConfig!=null) {
                configuration.merge(eachWriterConfig, true);
                complete += 1;
            }
        }

        if (wantPluginNames != null && wantPluginNames.size() > 0 && wantPluginNames.size() != complete) {
            // message: "plugin load failed — some requested plugins were not loaded"
            throw DataXException.asDataXException(FrameworkErrorCode.PLUGIN_INIT_ERROR, "插件加载失败,未完成指定插件加载:" + wantPluginNames);
        }

        return configuration;
    }
}

Notes:
Configuration parsing merges three sources and returns the combined result:

1. the job configuration passed on the command line (job.json);
2. the framework core configuration (conf/core.json);
3. the plugin.json of every plugin the job references (reader, writer, and the optional pre/post handlers).

The Configuration Pieces

The job.json configuration

{
    "job": {
        "setting": {
            "speed": {
                "byte":10485760,
                "record":1000
            },
            "errorLimit": {
                "record": 0,
                "percentage": 0.02
            }
        },
        "content": [
            {
                "reader": {
                    "name": "streamreader",
                    "parameter": {
                        "column" : [
                            {
                                "value": "DataX",
                                "type": "string"
                            },
                            {
                                "value": 19890604,
                                "type": "long"
                            },
                            {
                                "value": "1989-06-04 00:00:00",
                                "type": "date"
                            },
                            {
                                "value": true,
                                "type": "bool"
                            },
                            {
                                "value": "test",
                                "type": "bytes"
                            }
                        ],
                        "sliceRecordCount": 100000
                    }
                },
                "writer": {
                    "name": "streamwriter",
                    "parameter": {
                        "print": false,
                        "encoding": "UTF-8"
                    }
                }
            }
        ]
    }
}



The core.json configuration


{
    "entry": {
        "jvm": "-Xms1G -Xmx1G",
        "environment": {}
    },
    "common": {
        "column": {
            "datetimeFormat": "yyyy-MM-dd HH:mm:ss",
            "timeFormat": "HH:mm:ss",
            "dateFormat": "yyyy-MM-dd",
            "extraFormats":["yyyyMMdd"],
            "timeZone": "GMT+8",
            "encoding": "utf-8"
        }
    },
    "core": {
        "dataXServer": {
            "address": "http://localhost:7001/api",
            "timeout": 10000,
            "reportDataxLog": false,
            "reportPerfLog": false
        },
        "transport": {
            "channel": {
                "class": "com.alibaba.datax.core.transport.channel.memory.MemoryChannel",
                "speed": {
                    "byte": 100,
                    "record": 10
                },
                "flowControlInterval": 20,
                "capacity": 512,
                "byteCapacity": 67108864
            },
            "exchanger": {
                "class": "com.alibaba.datax.core.plugin.BufferedRecordExchanger",
                "bufferSize": 32
            }
        },
        "container": {
            "job": {
                "reportInterval": 10000
            },
            "taskGroup": {
                "channel": 5
            },
            "trace": {
                "enable": "false"
            }

        },
        "statistics": {
            "collector": {
                "plugin": {
                    "taskClass": "com.alibaba.datax.core.statistics.plugin.task.StdoutPluginCollector",
                    "maxDirtyNumber": 10
                }
            }
        }
    }
}



The plugin.json configurations

{
    "name": "streamreader",
    "class": "com.alibaba.datax.plugin.reader.streamreader.StreamReader",
    "description": {
        "useScene": "only for developer test.",
        "mechanism": "use datax framework to transport data from stream.",
        "warn": "Never use it in your real job."
    },
    "developer": "alibaba"
}


{
    "name": "streamwriter",
    "class": "com.alibaba.datax.plugin.writer.streamwriter.StreamWriter",
    "description": {
        "useScene": "only for developer test.",
        "mechanism": "use datax framework to transport data to stream.",
        "warn": "Never use it in your real job."
    },
    "developer": "alibaba"
}



The merged configuration

{
    "common": {
        "column": {
            "dateFormat": "yyyy-MM-dd",
            "datetimeFormat": "yyyy-MM-dd HH:mm:ss",
            "encoding": "utf-8",
            "extraFormats": ["yyyyMMdd"],
            "timeFormat": "HH:mm:ss",
            "timeZone": "GMT+8"
        }
    },
    "core": {
        "container": {
            "job": {
                "id": -1,
                "reportInterval": 10000
            },
            "taskGroup": {
                "channel": 5
            },
            "trace": {
                "enable": "false"
            }
        },
        "dataXServer": {
            "address": "http://localhost:7001/api",
            "reportDataxLog": false,
            "reportPerfLog": false,
            "timeout": 10000
        },
        "statistics": {
            "collector": {
                "plugin": {
                    "maxDirtyNumber": 10,
                    "taskClass": "com.alibaba.datax.core.statistics.plugin.task.StdoutPluginCollector"
                }
            }
        },
        "transport": {
            "channel": {
                "byteCapacity": 67108864,
                "capacity": 512,
                "class": "com.alibaba.datax.core.transport.channel.memory.MemoryChannel",
                "flowControlInterval": 20,
                "speed": {
                    "byte": -1,
                    "record": -1
                }
            },
            "exchanger": {
                "bufferSize": 32,
                "class": "com.alibaba.datax.core.plugin.BufferedRecordExchanger"
            }
        }
    },
    "entry": {
        "jvm": "-Xms1G -Xmx1G"
    },
    "job": {
        "content": [{
            "reader": {
                "name": "streamreader",
                "parameter": {
                    "column": [{
                        "type": "string",
                        "value": "DataX"
                    }, {
                        "type": "long",
                        "value": 19890604
                    }, {
                        "type": "date",
                        "value": "1989-06-04 00:00:00"
                    }, {
                        "type": "bool",
                        "value": true
                    }, {
                        "type": "bytes",
                        "value": "test"
                    }],
                    "sliceRecordCount": 100000
                }
            },
            "writer": {
                "name": "streamwriter",
                "parameter": {
                    "encoding": "UTF-8",
                    "print": false
                }
            }
        }],
        "setting": {
            "errorLimit": {
                "percentage": 0.02,
                "record": 0
            },
            "speed": {
                "byte": 10485760
            }
        }
    },
    "plugin": {
        "reader": {
            "streamreader": {
                "class": "com.alibaba.datax.plugin.reader.streamreader.StreamReader",
                "description": {
                    "mechanism": "use datax framework to transport data from stream.",
                    "useScene": "only for developer test.",
                    "warn": "Never use it in your real job."
                },
                "developer": "alibaba",
                "name": "streamreader",
                "path": "//Users/lebron374/Documents/github/DataX/target/datax/datax/plugin/reader/streamreader"
            }
        },
        "writer": {
            "streamwriter": {
                "class": "com.alibaba.datax.plugin.writer.streamwriter.StreamWriter",
                "description": {
                    "mechanism": "use datax framework to transport data to stream.",
                    "useScene": "only for developer test.",
                    "warn": "Never use it in your real job."
                },
                "developer": "alibaba",
                "name": "streamwriter",
                "path": "//Users/lebron374/Documents/github/DataX/target/datax/datax/plugin/writer/streamwriter"
            }
        }
    }
}
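
Every key in this tree is addressed with a json-path style string; the CoreConstant fields used throughout the code are just such paths. As a hedged illustration (the getter methods appear in the code above; the Configuration.from factory method is assumed here, so treat this as a sketch rather than the canonical API), reading a configuration looks like:

import java.io.File;
import com.alibaba.datax.common.util.Configuration;

public class ConfigurationSketch {
    public static void main(String[] args) {
        // build a Configuration from a json file (factory method assumed)
        Configuration conf = Configuration.from(new File("target/datax/datax/job/job.json"));

        // json-path style access, as in CoreConstant.DATAX_JOB_CONTENT_READER_NAME
        String readerName = conf.getString("job.content[0].reader.name"); // "streamreader"
        System.out.println(readerName);
    }
}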

The Engine start Process

public class Engine {
    private static final Logger LOG = LoggerFactory.getLogger(Engine.class);

    private static String RUNTIME_MODE;

    /* check job model (job/task) first */
    public void start(Configuration allConf) {

        // code elided

        boolean isJob = !("taskGroup".equalsIgnoreCase(allConf
                .getString(CoreConstant.DATAX_CORE_CONTAINER_MODEL)));
        AbstractContainer container;
        if (isJob) {
            allConf.set(CoreConstant.DATAX_CORE_CONTAINER_JOB_MODE, RUNTIME_MODE);
            // the key object here is the JobContainer
            container = new JobContainer(allConf);
            instanceId = allConf.getLong(
                    CoreConstant.DATAX_CORE_CONTAINER_JOB_ID, 0);

        }
        // the taskGroup branch is elided

        Configuration jobInfoConfig = allConf.getConfiguration(CoreConstant.DATAX_JOB_JOBINFO);

        // start the core container
        container.start();

    }
}

Notes:
start() does two things:

1. it decides from core.container.model whether this process hosts a job or a taskGroup and, in the standalone job case, builds a JobContainer from the full configuration;
2. it calls container.start() to hand control over to that container.

The JobContainer Startup Process

public class JobContainer extends AbstractContainer {
    /**
     * All of jobContainer's work happens inside start(): init, prepare, split,
     * scheduler, post, plus destroy and statistics.
     */
    @Override
    public void start() {
        try {
            isDryRun = configuration.getBool(CoreConstant.DATAX_JOB_SETTING_DRYRUN, false);
            if(isDryRun) {
                // dry-run branch elided
            } else {

                // clone the configuration to keep it thread-safe
                userConf = configuration.clone();

                // run preHandle()
                LOG.debug("jobContainer starts to do preHandle ...");
                this.preHandle();

                // initialize reader, transformer and writer
                LOG.debug("jobContainer starts to do init ...");
                this.init();

                // run the plugins' prepare phase
                LOG.info("jobContainer starts to do prepare ...");
                this.prepare();

                // split the job into tasks
                LOG.info("jobContainer starts to do split ...");
                this.totalStage = this.split();

                // schedule the tasks
                LOG.info("jobContainer starts to do schedule ...");
                this.schedule();

                // run the post phase
                LOG.debug("jobContainer starts to do post ...");
                this.post();

                // run postHandle()
                LOG.debug("jobContainer starts to do postHandle ...");
                this.postHandle();

                LOG.info("DataX jobId [{}] completed successfully.", this.jobId);

                this.invokeHooks();
            }
        } catch (Throwable e) {
            // error handling elided
        } finally {
             // cleanup elided
        }
    }
}

Notes:
JobContainer.start() drives the whole job lifecycle in order:

1. preHandle(): run the job-level pre-handler plugin, if one is configured;
2. init(): load and initialize the reader and writer job plugins;
3. prepare(): run the plugins' prepare phase;
4. split(): split the job into tasks and record totalStage;
5. schedule(): assign the tasks to task groups and run them;
6. post() and postHandle(): run the post phase and the post-handler;
7. invokeHooks(): invoke any registered hooks.

The Job init Process

public class JobContainer extends AbstractContainer {
    private void init() {
        this.jobId = this.configuration.getLong(
                CoreConstant.DATAX_CORE_CONTAINER_JOB_ID, -1);

        if (this.jobId < 0) {
            LOG.info("Set jobId = 0");
            this.jobId = 0;
            this.configuration.set(CoreConstant.DATAX_CORE_CONTAINER_JOB_ID,
                    this.jobId);
        }

        Thread.currentThread().setName("job-" + this.jobId);

        // set up the plugin collector
        JobPluginCollector jobPluginCollector = new DefaultJobPluginCollector(
                this.getContainerCommunicator());

        // the reader must be initialized before the writer
        this.jobReader = this.initJobReader(jobPluginCollector);
        this.jobWriter = this.initJobWriter(jobPluginCollector);
    }


    private Reader.Job initJobReader(
            JobPluginCollector jobPluginCollector) {

        // look up the plugin name
        this.readerPluginName = this.configuration.getString(
                CoreConstant.DATAX_JOB_CONTENT_READER_NAME);

        classLoaderSwapper.setCurrentThreadClassLoader(LoadUtil.getJarLoader(
                PluginType.READER, this.readerPluginName));

        Reader.Job jobReader = (Reader.Job) LoadUtil.loadJobPlugin(
                PluginType.READER, this.readerPluginName);

        // give the reader its own job configuration
        jobReader.setPluginJobConf(this.configuration.getConfiguration(
                CoreConstant.DATAX_JOB_CONTENT_READER_PARAMETER));

        // give the reader a view of its peer writer's configuration
        jobReader.setPeerPluginJobConf(this.configuration.getConfiguration(
                CoreConstant.DATAX_JOB_CONTENT_WRITER_PARAMETER));

        jobReader.setJobPluginCollector(jobPluginCollector);

        // from here on we are inside the concrete plugin's own initialization
        jobReader.init();

        classLoaderSwapper.restoreCurrentThreadClassLoader();
        return jobReader;
    }


    private Writer.Job initJobWriter(
            JobPluginCollector jobPluginCollector) {
        this.writerPluginName = this.configuration.getString(
                CoreConstant.DATAX_JOB_CONTENT_WRITER_NAME);
        classLoaderSwapper.setCurrentThreadClassLoader(LoadUtil.getJarLoader(
                PluginType.WRITER, this.writerPluginName));

        Writer.Job jobWriter = (Writer.Job) LoadUtil.loadJobPlugin(
                PluginType.WRITER, this.writerPluginName);

        // give the writer its own job configuration
        jobWriter.setPluginJobConf(this.configuration.getConfiguration(
                CoreConstant.DATAX_JOB_CONTENT_WRITER_PARAMETER));

        // give the writer a view of its peer reader's configuration
        jobWriter.setPeerPluginJobConf(this.configuration.getConfiguration(
                CoreConstant.DATAX_JOB_CONTENT_READER_PARAMETER));

        jobWriter.setPeerPluginName(this.readerPluginName);
        jobWriter.setJobPluginCollector(jobPluginCollector);
        jobWriter.init();
        classLoaderSwapper.restoreCurrentThreadClassLoader();

        return jobWriter;
    }
}

Notes:
Job init() does two things, in a fixed order (reader first, then writer):

1. initJobReader(): swap the thread context classloader to the reader plugin's JarLoader, instantiate the Reader.Job, inject its own parameter plus a view of the peer writer's parameter, call init(), then restore the classloader;
2. initJobWriter(): the same steps for the Writer.Job, which additionally learns the reader's name via setPeerPluginName.

The classLoaderSwapper / LoadUtil.getJarLoader pair is the URLClassLoader-based plugin hot-loading mentioned at the start; a sketch of the idea follows.
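
The snippet below is a minimal, self-contained sketch of that hot-loading idea — not DataX's actual LoadUtil/JarLoader code; the class and method names here are illustrative only. Each plugin directory becomes its own URLClassLoader, and the thread context classloader is swapped around the plugin call so the plugin resolves classes from its own jars:

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;

public class PluginLoaderSketch {

    // Load and instantiate a plugin class from a standalone jar.
    public static Object loadPlugin(String jarPath, String className) throws Exception {
        URL[] jarUrls = new URL[]{ new File(jarPath).toURI().toURL() };
        URLClassLoader pluginLoader =
                new URLClassLoader(jarUrls, PluginLoaderSketch.class.getClassLoader());

        ClassLoader original = Thread.currentThread().getContextClassLoader();
        Thread.currentThread().setContextClassLoader(pluginLoader); // swap in
        try {
            return pluginLoader.loadClass(className).newInstance();
        } finally {
            Thread.currentThread().setContextClassLoader(original); // restore
        }
    }
}

Because every plugin gets its own loader, two plugins can ship conflicting versions of the same dependency without clashing, which is what makes drop-in plugin directories possible.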

The Job split Process

public class JobContainer extends AbstractContainer {
    private int split() {
        this.adjustChannelNumber();

        if (this.needChannelNumber <= 0) {
            this.needChannelNumber = 1;
        }

        List<Configuration> readerTaskConfigs = this
                .doReaderSplit(this.needChannelNumber);
        int taskNumber = readerTaskConfigs.size();
        List<Configuration> writerTaskConfigs = this
                .doWriterSplit(taskNumber);

        List<Configuration> transformerList = this.configuration.getListConfiguration(CoreConstant.DATAX_JOB_CONTENT_TRANSFORMER);

        LOG.debug("transformer configuration: "+ JSON.toJSONString(transformerList));
        /**
         * input: the reader and writer parameter lists;
         * output: the list of elements under job.content
         */
        List<Configuration> contentConfig = mergeReaderAndWriterTaskConfigs(
                readerTaskConfigs, writerTaskConfigs, transformerList);


        LOG.debug("contentConfig configuration: "+ JSON.toJSONString(contentConfig));

        this.configuration.set(CoreConstant.DATAX_JOB_CONTENT, contentConfig);

        return contentConfig.size();
    }


    private void adjustChannelNumber() {
        int needChannelNumberByByte = Integer.MAX_VALUE;
        int needChannelNumberByRecord = Integer.MAX_VALUE;

        boolean isByteLimit = (this.configuration.getInt(
                CoreConstant.DATAX_JOB_SETTING_SPEED_BYTE, 0) > 0);
        if (isByteLimit) {
            long globalLimitedByteSpeed = this.configuration.getInt(
                    CoreConstant.DATAX_JOB_SETTING_SPEED_BYTE, 10 * 1024 * 1024);
            Long channelLimitedByteSpeed = this.configuration
                    .getLong(CoreConstant.DATAX_CORE_TRANSPORT_CHANNEL_SPEED_BYTE);
            needChannelNumberByByte =
                    (int) (globalLimitedByteSpeed / channelLimitedByteSpeed);
            needChannelNumberByByte =
                    needChannelNumberByByte > 0 ? needChannelNumberByByte : 1;
            LOG.info("Job set Max-Byte-Speed to " + globalLimitedByteSpeed + " bytes.");
        }

        boolean isRecordLimit = (this.configuration.getInt(
                CoreConstant.DATAX_JOB_SETTING_SPEED_RECORD, 0)) > 0;
        if (isRecordLimit) {
            long globalLimitedRecordSpeed = this.configuration.getInt(
                    CoreConstant.DATAX_JOB_SETTING_SPEED_RECORD, 100000);

            Long channelLimitedRecordSpeed = this.configuration.getLong(
                    CoreConstant.DATAX_CORE_TRANSPORT_CHANNEL_SPEED_RECORD);

            needChannelNumberByRecord =
                    (int) (globalLimitedRecordSpeed / channelLimitedRecordSpeed);
            needChannelNumberByRecord =
                    needChannelNumberByRecord > 0 ? needChannelNumberByRecord : 1;
        }

        // take the smaller of the two
        this.needChannelNumber = needChannelNumberByByte < needChannelNumberByRecord ?
                needChannelNumberByByte : needChannelNumberByRecord;

        // if a byte or record limit already determined the channel count, we are
        // done (this early return was elided from the original excerpt)
        if (this.needChannelNumber < Integer.MAX_VALUE) {
            return;
        }

        boolean isChannelLimit = (this.configuration.getInt(
                CoreConstant.DATAX_JOB_SETTING_SPEED_CHANNEL, 0) > 0);
        if (isChannelLimit) {
            this.needChannelNumber = this.configuration.getInt(
                    CoreConstant.DATAX_JOB_SETTING_SPEED_CHANNEL);

            LOG.info("Job set Channel-Number to " + this.needChannelNumber
                    + " channels.");

            return;
        }

        // message: "a job speed limit must be configured"
        throw DataXException.asDataXException(
                FrameworkErrorCode.CONFIG_ERROR,
                "Job运行速度必须设置");
    }
}

Notes:
The job split process turns the throttling configuration into a channel count and then into tasks:

1. adjustChannelNumber() derives needChannelNumber from the limits: with a byte limit it is globalLimitedByteSpeed / channelLimitedByteSpeed, likewise for the record limit, and the smaller of the two wins; an explicit channel setting is used only when neither limit is present, and if nothing is configured at all the job fails fast.
2. doReaderSplit(needChannelNumber) asks the reader plugin to split itself into task configurations; the writer is then split into exactly the same number of tasks.
3. mergeReaderAndWriterTaskConfigs zips the reader, writer and transformer configurations into the job.content list, whose size becomes totalStage.
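
As a concrete example of that arithmetic (plain Java mirroring adjustChannelNumber; the speeds are example values, not DataX defaults):

public class ChannelMathSketch {
    public static void main(String[] args) {
        long globalLimitedByteSpeed  = 10L * 1024 * 1024; // job limit: 10 MB/s (job.setting.speed.byte)
        long channelLimitedByteSpeed = 1024 * 1024;       // per-channel limit: 1 MB/s (core.transport.channel.speed.byte)
        int needChannelNumberByByte  = (int) (globalLimitedByteSpeed / channelLimitedByteSpeed);
        System.out.println(needChannelNumberByByte);      // 10 -> the job runs with 10 channels
    }
}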

The Job schedule Process

public class JobContainer extends AbstractContainer {

    private void schedule() {
        /**
         * work out, from the configuration, which tasks each taskGroup runs
         */
        List<Configuration> taskGroupConfigs = JobAssignUtil.assignFairly(this.configuration,
                this.needChannelNumber, channelsPerTaskGroup);

        ExecuteMode executeMode = null;
        AbstractScheduler scheduler;
        try {
            executeMode = ExecuteMode.STANDALONE;
            scheduler = initStandaloneScheduler(this.configuration);

            // stamp the executeMode onto every task group
            for (Configuration taskGroupConfig : taskGroupConfigs) {
                taskGroupConfig.set(CoreConstant.DATAX_CORE_CONTAINER_JOB_MODE, executeMode.getValue());
            }

            // start scheduling all taskGroups
            scheduler.schedule(taskGroupConfigs);

        } catch (Exception e) {
            // error handling elided
        }
    }

}

Notes:
The job schedule process does two things:

1. JobAssignUtil.assignFairly distributes the tasks evenly across task groups based on needChannelNumber and channelsPerTaskGroup (5 by default, from core.container.taskGroup.channel), so the group count is roughly ceil(needChannelNumber / channelsPerTaskGroup);
2. it builds a standalone scheduler and calls scheduler.schedule(taskGroupConfigs) to run every task group.

The TaskGroup schedule Process

public abstract class AbstractScheduler {

    public void schedule(List<Configuration> configurations) {

        int totalTasks = calculateTaskCount(configurations);

        // start every TaskGroup
        startAllTaskGroup(configurations);
        try {
            while (true) {
                // status polling loop elided
            }
        } catch (InterruptedException e) {
            // interrupt handling elided
        }
    }
}



public abstract class ProcessInnerScheduler extends AbstractScheduler {

    private ExecutorService taskGroupContainerExecutorService;

    @Override
    public void startAllTaskGroup(List<Configuration> configurations) {

        // a fixed-size pool with one thread per taskGroup
        this.taskGroupContainerExecutorService = Executors
                .newFixedThreadPool(configurations.size());

        // one TaskGroupContainerRunner per TaskGroup
        for (Configuration taskGroupConfiguration : configurations) {
            // create the runner and submit it to the pool
            TaskGroupContainerRunner taskGroupContainerRunner = newTaskGroupContainerRunner(taskGroupConfiguration);
            this.taskGroupContainerExecutorService.execute(taskGroupContainerRunner);
        }

        // shut down once all submitted work finishes; no new tasks are accepted
        this.taskGroupContainerExecutorService.shutdown();
    }
}



public class TaskGroupContainerRunner implements Runnable {

    private TaskGroupContainer taskGroupContainer;

    private State state;

    public TaskGroupContainerRunner(TaskGroupContainer taskGroup) {
        this.taskGroupContainer = taskGroup;
        this.state = State.SUCCEEDED;
    }

    @Override
    public void run() {
        try {
            Thread.currentThread().setName(
                    String.format("taskGroup-%d", this.taskGroupContainer.getTaskGroupId()));
            this.taskGroupContainer.start();
            this.state = State.SUCCEEDED;
        } catch (Throwable e) {
            // failure handling elided
        }
    }
}

Notes:
The TaskGroup schedule method works as follows:

1. it computes the total task count across all task group configurations;
2. startAllTaskGroup() creates a fixed thread pool with one thread per task group, submits one TaskGroupContainerRunner per group, and then calls shutdown() so the pool drains after the submitted work completes;
3. each TaskGroupContainerRunner names its thread taskGroup-<id> and calls taskGroupContainer.start();
4. the while(true) loop elided above polls the groups' state until the job succeeds or fails.

Starting the TaskGroupContainer

public class TaskGroupContainer extends AbstractContainer {

    @Override
    public void start() {
        try {
            // setup code elided

            int taskCountInThisTaskGroup = taskConfigs.size();
            Map<Integer, Configuration> taskConfigMap = buildTaskConfigMap(taskConfigs); // taskId -> task config
            List<Configuration> taskQueue = buildRemainTasks(taskConfigs); // tasks waiting to run
            Map<Integer, TaskExecutor> taskFailedExecutorMap = new HashMap<Integer, TaskExecutor>(); // taskId -> last failed attempt
            List<TaskExecutor> runTasks = new ArrayList<TaskExecutor>(channelNumber); // tasks currently running
            Map<Integer, Long> taskStartTimeMap = new HashMap<Integer, Long>(); // task start times

            while (true) {
                // status checking and reporting elided

                // new tasks are started here
                Iterator<Configuration> iterator = taskQueue.iterator();
                while(iterator.hasNext() && runTasks.size() < channelNumber){
                    Configuration taskConfig = iterator.next();
                    Integer taskId = taskConfig.getInt(CoreConstant.TASK_ID);
                    int attemptCount = 1;
                    TaskExecutor lastExecutor = taskFailedExecutorMap.get(taskId);

                    // clone the config when retries are possible, so a retry starts clean
                    Configuration taskConfigForRun = taskMaxRetryTimes > 1 ? taskConfig.clone() : taskConfig;

                    // the TaskExecutor is the new task to run
                    TaskExecutor taskExecutor = new TaskExecutor(taskConfigForRun, attemptCount);
                    taskStartTimeMap.put(taskId, System.currentTimeMillis());
                    taskExecutor.doStart();

                    iterator.remove();
                    runTasks.add(taskExecutor);

                }
            } // end of while(true); exit conditions elided
        } catch (Throwable e) {
            // failure handling elided
        } finally {
            // cleanup elided
        }
    }
}

Notes:
Internally, TaskGroupContainer does the following:

1. it indexes the group's task configurations by taskId and queues them as the to-run list;
2. the main loop keeps at most channelNumber tasks running at once: it takes configurations off the queue, wraps each in a TaskExecutor, records the start time, and calls doStart();
3. when taskMaxRetryTimes > 1 the configuration is cloned first, so a failed task can be retried from a clean config;
4. the elided parts of the loop monitor running tasks, collect statistics, and handle retries and completion.

Starting a Task (TaskExecutor)

    class TaskExecutor {

        private Channel channel;
        private Thread readerThread;
        private Thread writerThread;
        private ReaderRunner readerRunner;
        private WriterRunner writerRunner;

        /**
         * taskCommunication here is used in several places:
         * 1. the channel
         * 2. readerRunner and writerRunner
         * 3. the reader's and writer's taskPluginCollector
         */
        public TaskExecutor(Configuration taskConf, int attemptCount) {
            // the configuration for this taskExecutor
            this.taskConfig = taskConf;

            // resolve the taskId
            this.taskId = this.taskConfig.getInt(CoreConstant.TASK_ID);
            this.attemptCount = attemptCount;

            /**
             * the Communication for this taskExecutor is looked up by taskId;
             * it is passed to readerRunner and writerRunner, and also to the
             * channel for statistics
             */
            this.channel = ClassUtil.instantiate(channelClazz,
                    Channel.class, configuration);
            // the channel is created here — one channel per task — connecting the
            // reader and writer that generateRunner builds below
            this.channel.setCommunication(this.taskCommunication);

            /**
             * fetch the transformer parameters
             */

            List<TransformerExecution> transformerInfoExecs = TransformerUtil.buildTransformerInfo(taskConfig);

            /**
             * build the writerThread
             */
            writerRunner = (WriterRunner) generateRunner(PluginType.WRITER);
            this.writerThread = new Thread(writerRunner,
                    String.format("%d-%d-%d-writer",
                            jobId, taskGroupId, this.taskId));
            // setting the thread's contextClassLoader gives the plugin a
            // classloader different from the main program's
            this.writerThread.setContextClassLoader(LoadUtil.getJarLoader(
                    PluginType.WRITER, this.taskConfig.getString(
                            CoreConstant.JOB_WRITER_NAME)));

            /**
             * build the readerThread
             */
            readerRunner = (ReaderRunner) generateRunner(PluginType.READER,transformerInfoExecs);
            this.readerThread = new Thread(readerRunner,
                    String.format("%d-%d-%d-reader",
                            jobId, taskGroupId, this.taskId));
            /**
             * setting the thread's contextClassLoader gives the plugin a
             * classloader different from the main program's
             */
            this.readerThread.setContextClassLoader(LoadUtil.getJarLoader(
                    PluginType.READER, this.taskConfig.getString(
                            CoreConstant.JOB_READER_NAME)));
        }

        public void doStart() {
            this.writerThread.start();
            this.readerThread.start();
        }
}

Notes:
The TaskExecutor startup does the following:

1. it instantiates the Channel (the concrete class comes from core.transport.channel.class, MemoryChannel by default) and attaches the task's Communication to it for statistics;
2. it builds the transformer chain from the task configuration;
3. generateRunner produces a WriterRunner and a ReaderRunner, each wrapped in its own thread whose context classloader is the corresponding plugin's JarLoader;
4. doStart() starts the writer thread first and then the reader thread; the reader pushes records into the channel while the writer drains them.

Data Transfer in DataX

Like any producer-consumer setup, the Reader and Writer plugins exchange data through a channel. The channel may be in-memory or persistent; plugins do not need to care. A plugin writes data into the channel through a RecordSender and reads data out of it through a RecordReceiver.

Each piece of data in the channel is a Record object, and a Record holds multiple Column objects — roughly analogous to a row and its columns in a database.
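
As a hedged sketch of the plugin side of this contract (RecordSender, RecordReceiver, Record and the Column classes are DataX plugin-API types; the two methods below are illustrative task implementations, not real plugins):

import com.alibaba.datax.common.element.LongColumn;
import com.alibaba.datax.common.element.Record;
import com.alibaba.datax.common.element.StringColumn;
import com.alibaba.datax.common.plugin.RecordReceiver;
import com.alibaba.datax.common.plugin.RecordSender;

public class ChannelContractSketch {

    // Reader task side: create records and push them into the channel.
    static void startRead(RecordSender recordSender) {
        Record record = recordSender.createRecord();
        record.addColumn(new StringColumn("DataX"));
        record.addColumn(new LongColumn(19890604L));
        recordSender.sendToWriter(record);
        recordSender.flush();
    }

    // Writer task side: drain records from the channel until it is exhausted.
    static void startWrite(RecordReceiver recordReceiver) {
        Record record;
        while ((record = recordReceiver.getFromReader()) != null) {
            System.out.println(record.getColumn(0).asString());
        }
    }
}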

public class DefaultRecord implements Record {

    private static final int RECORD_AVERGAE_COLUMN_NUMBER = 16;

    private List<Column> columns;

    private int byteSize;

    // memory needed by the Record object itself
    private int memorySize = ClassSize.DefaultRecordHead;

    public DefaultRecord() {
        this.columns = new ArrayList<Column>(RECORD_AVERGAE_COLUMN_NUMBER);
    }

    @Override
    public void addColumn(Column column) {
        columns.add(column);
        incrByteSize(column);
    }

    @Override
    public Column getColumn(int i) {
        if (i < 0 || i >= columns.size()) {
            return null;
        }
        return columns.get(i);
    }

    @Override
    public void setColumn(int i, final Column column) {
        if (i < 0) {
            // message: "cannot set a column at an index below 0"
            throw DataXException.asDataXException(FrameworkErrorCode.ARGUMENT_ERROR,
                    "不能给index小于0的column设置值");
        }

        if (i >= columns.size()) {
            expandCapacity(i + 1);
        }

        decrByteSize(getColumn(i));
        this.columns.set(i, column);
        incrByteSize(getColumn(i));
    }
}

DataX Plugin Development

For writing your own plugins, see the official guide in the DataX repository: 《DataX插件开发宝典》 (the DataX plugin development guide).
