Survey: The Current State and Future Trends of Multi-Task Learning
lirainbow0 · 2019-05-30
Taxonomy: Different Views
By Methodology
- Instance-based
  - Identify useful data instances in one task for the other tasks
- Feature-based
  - Feature learning approach
    - The original features may not be fully exploitable by the other tasks, so a shared representation is learned
    - Feature Transformation Approach
      - Learn a new representation from the original features via linear or nonlinear transformations
    - Feature Selection Approach (a small sketch follows this taxonomy)
      - Select a subset of the original features as the learned representation
      - Eliminate useless features according to different criteria
  - Deep learning approach (see the hard/soft parameter sharing sketch after this taxonomy)
    - Hard Parameter Sharing
      - The hidden layers are shared across all tasks
      - Keeps several task-specific output layers
    - Soft Parameter Sharing
      - Each task has its own model and parameters
      - Constraints between the parameters encourage them to stay similar
- Parameter-based
  - Dirty Approach
  - Low-rank approach (a regularizer sketch follows this taxonomy)
    - Relatedness among the tasks implies that the parameter matrix is low-rank, so the loss can be constrained in various ways to encourage this structure
  - Decomposition approach
    - Assume the parameter matrix can be decomposed into several components, each constrained with its own penalty
  - Task clustering approach (a clustering sketch follows this taxonomy)
    - Assume all tasks fall into a small number of clusters; tasks in the same cluster are related and share parameters
    - First, cluster the tasks into groups
      - Learn a task transfer matrix
      - Minimizing pairwise within-class distances
      - Maximizing pairwise between-class distances
    - Second, learn a classifier on the training data of the tasks in a cluster
      - A weighted nearest neighbor classifier is proposed
  - Task relation learning approach
    - Most multi-task methods simply assume that the tasks are related; when the relations are not known in advance, they have to be learned automatically from the data
    - Task relations are assumed to be known as a priori information
      - Similar task parameters are expected to be close
      - Utilize task similarities to design regularizers
    - Learn task relations automatically from data
      - Global learning model
        - Multi-task Gaussian process (defined as a prior on the function values of the training data)
        - Keep the task covariance matrix positive definite
      - Local learning model
        - E.g., a kNN classifier (the learning function is a weighted voting over neighbors)
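To make a few of the approaches above concrete, the sketches below are minimal toy implementations, not the exact formulations from the surveys; all function names, data, and hyperparameters are illustrative. First, the Feature Selection Approach: one common formulation puts a group-sparse L2,1 penalty on the d x T task parameter matrix so that entire feature rows are zeroed out for all tasks at once. A minimal NumPy sketch with proximal gradient descent, assuming squared loss:

```python
import numpy as np

def l21_prox(W, t):
    """Row-wise soft-thresholding: the proximal operator of t * ||W||_{2,1}.
    Rows whose norm falls below t are zeroed, i.e. that feature is dropped for all tasks."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)            # (d, 1) row norms
    scale = np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))
    return W * scale

def fit_l21_mtl(Xs, ys, lam=0.1, lr=0.1, iters=1000):
    """Joint least-squares regression for T tasks with an L2,1 penalty on W (d x T).
    Xs, ys are lists of per-task design matrices / target vectors."""
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))
    for _ in range(iters):
        G = np.zeros_like(W)
        for t, (X, y) in enumerate(zip(Xs, ys)):
            G[:, t] = X.T @ (X @ W[:, t] - y) / len(y)          # per-task squared-loss gradient
        W = l21_prox(W - lr * G, lr * lam)                      # proximal gradient step
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, T = 20, 3
    true_rows = rng.choice(d, size=5, replace=False)            # 5 features shared by all tasks
    W_true = np.zeros((d, T)); W_true[true_rows] = rng.normal(size=(5, T))
    Xs = [rng.normal(size=(100, d)) for _ in range(T)]
    ys = [X @ W_true[:, t] + 0.1 * rng.normal(size=100) for t, X in enumerate(Xs)]
    W_hat = fit_l21_mtl(Xs, ys)
    print("non-zero feature rows:", np.flatnonzero(np.linalg.norm(W_hat, axis=1) > 1e-3))
```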
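Next, hard versus soft parameter sharing in deep multi-task learning. Hard sharing keeps one shared trunk with task-specific output heads; soft sharing gives each task its own network and penalizes the distance between corresponding parameters. A minimal PyTorch sketch; the class names and the L2 distance penalty are my own illustrative choices:

```python
import torch
import torch.nn as nn

class HardSharingNet(nn.Module):
    """Hard parameter sharing: one shared trunk, one output head per task."""
    def __init__(self, in_dim=16, hidden=32, n_tasks=2):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, hidden), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(n_tasks)])

    def forward(self, x, task_id):
        return self.heads[task_id](self.shared(x))

class SoftSharingNets(nn.Module):
    """Soft parameter sharing: every task keeps its own network; an L2 penalty
    between corresponding parameters pulls the models toward each other."""
    def __init__(self, in_dim=16, hidden=32, n_tasks=2):
        super().__init__()
        self.nets = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_tasks)
        ])

    def forward(self, x, task_id):
        return self.nets[task_id](x)

    def sharing_penalty(self):
        ref = list(self.nets[0].parameters())
        penalty = 0.0
        for net in self.nets[1:]:
            for p, q in zip(ref, net.parameters()):
                penalty = penalty + ((p - q) ** 2).sum()
        return penalty

if __name__ == "__main__":
    x = torch.randn(8, 16)
    y = [torch.randn(8, 1), torch.randn(8, 1)]           # toy targets for two tasks
    model = SoftSharingNets()
    loss = sum(nn.functional.mse_loss(model(x, t), y[t]) for t in range(2))
    loss = loss + 1e-3 * model.sharing_penalty()          # soft-sharing regularizer
    loss.backward()
    print("total loss:", float(loss))
```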
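For the low-rank and decomposition approaches, the key objects are the regularizers placed on the d x T parameter matrix: the trace (nuclear) norm encourages a low-rank W, while a dirty-model style decomposition W = P + Q puts different penalties on a shared component and a task-private component. A small NumPy sketch that just evaluates these penalties; the specific norms chosen here are one common instantiation, not the only one:

```python
import numpy as np

def trace_norm(W):
    """Low-rank approach: penalize the trace (nuclear) norm of the d x T parameter
    matrix, i.e. the sum of its singular values; related tasks then share a
    low-dimensional parameter subspace."""
    return np.linalg.svd(W, compute_uv=False).sum()

def dirty_penalty(P, Q, lam_p=1.0, lam_q=1.0):
    """Decomposition ("dirty") style: W = P + Q, where P carries structure shared
    by all tasks (group-sparse rows) and Q absorbs task-private deviations
    (entry-wise sparse); each component gets its own penalty."""
    shared = np.linalg.norm(P, axis=1).sum()   # L2,1 norm on the shared part
    private = np.abs(Q).sum()                  # L1 norm on the task-private part
    return lam_p * shared + lam_q * private

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    basis = rng.normal(size=(20, 2))
    W = basis @ rng.normal(size=(2, 4))        # rank-2 matrix for 4 related tasks
    print("trace norm:", trace_norm(W))
    P, Q = W, 0.05 * rng.normal(size=W.shape)
    print("decomposition penalty:", dirty_penalty(P, Q))
```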
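Finally, a generic task clustering sketch: fit each task independently, cluster the resulting weight vectors, then refit one shared model per cluster so that tasks in the same cluster share parameters. This is a simplified stand-in for the transfer-matrix / weighted nearest neighbor formulation mentioned above; the choice of Ridge and KMeans is an assumption of mine:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

def cluster_then_share(Xs, ys, n_clusters=2):
    """Task clustering sketch: (1) fit each task independently, (2) cluster the
    per-task weight vectors, (3) refit one model per cluster on the pooled data
    of its tasks, so tasks in the same cluster share parameters."""
    # step 1: independent per-task fits
    W = np.stack([Ridge(alpha=1.0).fit(X, y).coef_ for X, y in zip(Xs, ys)])
    # step 2: group tasks whose parameters look alike
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(W)
    # step 3: one shared model per cluster, trained on the pooled data of that cluster
    models = {}
    for c in range(n_clusters):
        idx = [t for t, l in enumerate(labels) if l == c]
        X_pool = np.vstack([Xs[t] for t in idx])
        y_pool = np.concatenate([ys[t] for t in idx])
        models[c] = Ridge(alpha=1.0).fit(X_pool, y_pool)
    return labels, models

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    w_a, w_b = rng.normal(size=10), rng.normal(size=10)     # two "true" task groups
    Xs = [rng.normal(size=(80, 10)) for _ in range(4)]
    ys = [X @ w + 0.1 * rng.normal(size=80) for X, w in zip(Xs, [w_a, w_a, w_b, w_b])]
    labels, _ = cluster_then_share(Xs, ys)
    print("task cluster labels:", labels)
```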
Auxiliary Tasks
- unsupervised learning
- semi-supervised learning
- active learning
- reinforcement learning
- online learning
- multi-view learning
Key Areas of Focus
Feature-based and parameter-based methods are where most effort currently goes, and the two directions cover different application scenarios: 1) feature-based methods fit cases where the original features must be transformed before the target domain can exploit them well; 2) parameter-based methods fit cases where the domains are related and model parameters can be shared across them.
- Feature-based
  - Feature learning approach
    - The Feature Transformation Approach works better than the Feature Selection Approach and generalizes better; the Feature Selection Approach is more interpretable, but interpretability is a lower priority for us right now
    - Fits scenarios where the original features cannot be applied to the target domain directly
  - Deep learning approach
    - Hard Parameter Sharing
    - Soft Parameter Sharing
- Parameter-based
  - Fits scenarios where the domains are related and model parameters can be shared across models
  - Low-rank approach
  - Decomposition approach
  - Task relation learning approach
When we build multi-task systems today, the relatedness between tasks is mostly judged by hand. If we could estimate task relatedness automatically within a domain, building multi-task systems would take less effort and the overall framework would be more complete. A small sketch of one way to learn task relations from data follows.
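As one concrete direction, an MTRL-style formulation (one of the task relation learning methods covered in the survey) learns a task covariance matrix Omega jointly with the task weights W by alternating two updates: solve each task's regularized least-squares problem given Omega, then set Omega to the normalized square root of W^T W, which keeps it positive definite. The sketch below assumes squared loss and toy data; hyperparameters and names are illustrative:

```python
import numpy as np

def learn_task_relations(Xs, ys, lam=0.1, iters=30, eps=1e-6):
    """Alternate between (a) updating each task's weights given the task covariance
    Omega, where tr(W Omega^{-1} W^T) couples the tasks, and (b) a closed-form
    Omega update that keeps it positive definite."""
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))
    Omega = np.eye(T) / T                                   # start from unrelated tasks
    for _ in range(iters):
        Om_inv = np.linalg.inv(Omega)
        # (a) per-task weight update; off-diagonal entries of Om_inv couple the tasks
        for t, (X, y) in enumerate(zip(Xs, ys)):
            n = len(y)
            A = X.T @ X / n + lam * Om_inv[t, t] * np.eye(d)
            b = X.T @ y / n - lam * (W @ Om_inv[:, t] - Om_inv[t, t] * W[:, t])
            W[:, t] = np.linalg.solve(A, b)
        # (b) closed-form Omega update: (W^T W)^{1/2}, normalized to unit trace
        vals, vecs = np.linalg.eigh(W.T @ W)
        vals = np.sqrt(np.maximum(vals, eps))               # keep it positive definite
        Omega = (vecs * vals) @ vecs.T
        Omega /= np.trace(Omega)
    return W, Omega

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    w = rng.normal(size=10)
    true_ws = [w, w, rng.normal(size=10)]                   # tasks 0 and 1 related, task 2 not
    Xs = [rng.normal(size=(60, 10)) for _ in range(3)]
    ys = [X @ tw + 0.1 * rng.normal(size=60) for X, tw in zip(Xs, true_ws)]
    _, Omega = learn_task_relations(Xs, ys)
    print("learned task covariance:\n", np.round(Omega, 3))
```

In the toy run, the learned Omega should show a noticeably larger covariance between the two tasks that share the same underlying weights than between either of them and the unrelated third task.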
References
- Yu Zhang, Qiang Yang. A Survey on Multi-Task Learning
- Sebastian Ruder. An Overview of Multi-Task Learning in Deep Neural Networks