Getting Started with Machine Learning and Artificial Intelligence

[Machine Learning] Coursera ML Notes: Basic Concepts of Machine Learning

2017-02-14  mmmwhy

When learning, it is very important to grasp the basic concepts. This keeps you from getting bogged down in details you cannot climb out of, and lets you view the subject from a more macro level. More at: iii.run


Machine Learning

Supervised Learning

**Example:**
Given data about the size of houses on the real estate market, try to predict their price. Price as a function of size is a continuous output, so this is a regression problem.
We could turn this example into a classification problem by instead making our output about whether the house “sells for more or less than the asking price.” Here we are classifying the houses based on price into two discrete categories.
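To make the regression/classification distinction concrete, here is a minimal Python sketch on made-up house sizes and prices (all numbers are invented for illustration): the same data supports a continuous prediction (regression) or a discrete label (classification), depending on what we ask for.

```python
import numpy as np

# Hypothetical data: house sizes (square feet) and sale prices (in $1000s).
sizes = np.array([850, 900, 1200, 1500, 1800, 2100, 2400])
prices = np.array([180, 195, 260, 310, 365, 420, 470])

# Regression: predict price as a continuous function of size
# (a simple least-squares line, price ~ w * size + b).
w, b = np.polyfit(sizes, prices, deg=1)
print(f"predicted price for 1600 sq ft: ${w * 1600 + b:.0f}k")

# Classification: predict a discrete label instead, e.g. whether the
# house sells for more than a (hypothetical) asking price of $300k.
asking = 300
labels = (prices > asking).astype(int)   # 1 = sells for more, 0 = for less
print("labels:", labels)
```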

Unsupervised Learning

Wikipedia:
Unsupervised learning is the machine learning task of inferring a function to describe hidden structure from unlabeled data. Since the examples given to the learner are unlabeled, there is no error or reward signal to evaluate a potential solution. This distinguishes unsupervised learning from supervised learning and reinforcement learning. Unsupervised learning is closely related to the problem of density estimation in statistics. However, unsupervised learning also encompasses many other techniques that seek to summarize and explain key features of the data. Many methods employed in unsupervised learning are based on data mining methods used to preprocess data.

Example:
Clustering: Take a collection of 1,000 essays written on the US economy, and find a way to automatically group these essays into a small number of groups that are somehow similar or related by different variables, such as word frequency, sentence length, page count, and so on.
Associative: Suppose a doctor, over years of experience, forms associations in his mind between patient characteristics and the illnesses they have. If a new patient shows up, then based on this patient's characteristics (symptoms, family medical history, physical attributes, mental outlook, etc.) the doctor associates a possible illness or illnesses with what he has seen before in similar patients. This is not the same as rule-based reasoning as in expert systems. In this case we would like to estimate a mapping function from patient characteristics to illnesses.

An explanation:

First, what is learning? A single idiom sums it up: 举一反三, drawing inferences about new cases from ones you have already seen. Take the gaokao (college entrance exam) as an example: we have probably never seen the exact exam questions before entering the exam hall, but over three years of high school we have worked through a great many problems and learned the solution methods, so we can still work out answers to unfamiliar questions. Machine learning follows the same idea: can we use some training data (the problems we have already done) so that a machine can use them (the solution methods) to analyze unknown data (the exam questions)?

The simplest and most common class of machine learning algorithms is classification. For classification, the input training data has features and labels. Learning, in essence, is finding the relationship (mapping) between features and labels. Then, when unknown data with features but no labels comes in, we can use the learned relationship to obtain labels for it.
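As a rough illustration of "learning a mapping from features to labels", here is a tiny sketch using a 1-nearest-neighbour rule on made-up data (not the course's algorithm, just about the simplest feature-to-label mapping imaginable):

```python
import numpy as np

# Toy training data (invented): each row is a feature vector, paired with a label.
X_train = np.array([[1.0, 1.2], [0.9, 1.0], [3.1, 3.0], [3.3, 2.8]])
y_train = np.array([0, 0, 1, 1])

def predict(x):
    """1-nearest-neighbour: the 'mapping' from features to a label is simply
    'copy the label of the closest training example'."""
    distances = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(distances)]

# An unseen example with features but no label gets a label from the
# relationship learned (here: memorised) from the training data.
print(predict(np.array([3.0, 3.1])))   # -> 1
```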

In the classification process above, if all the training data has labels, it is supervised learning. If the data has no labels, it is clearly unsupervised learning, i.e., clustering.
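For contrast, here is a bare-bones clustering sketch: a few iterations of k-means on unlabeled, made-up 2-D points. No labels are ever supplied; the algorithm only groups points by proximity (a minimal sketch, not a production implementation):

```python
import numpy as np

# Unlabeled toy data (invented): two loose groups of 2-D points.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(3, 0.5, (20, 2))])

# A bare-bones k-means loop.
k = 2
centroids = X[rng.choice(len(X), k, replace=False)]
for _ in range(10):
    # assign each point to its nearest centroid
    assign = np.argmin(np.linalg.norm(X[:, None] - centroids, axis=2), axis=1)
    # move each centroid to the mean of its assigned points (keep it if empty)
    centroids = np.array([X[assign == i].mean(axis=0) if (assign == i).any() else centroids[i]
                          for i in range(k)])

print("cluster sizes:", np.bincount(assign))
print("centroids:\n", centroids)
```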

Current classification algorithms perform quite well, but clustering algorithms, by comparison, are rather dismal. Indeed, the very nature of unsupervised learning makes it hard to achieve results as close to perfect as classification. It is just like doing problems in high school: the answers (labels) matter a great deal. Suppose two identical people enter high school, one studies normally, and the other does only problems that have no answers. The first will surely do better on the gaokao, and the second will go mad.

At this point you may ask: if classification is so good and clustering so unreliable, why do we tolerate clustering at all? Because in real applications, obtaining labels often requires enormous manual effort and is sometimes extremely difficult. For example, in natural language processing (NLP), the Penn Chinese Treebank managed to label only 4,000 sentences in two years…

You might then wonder: is the relationship between supervised and unsupervised learning really black and white? Is there a gray area? Good question. The gray area does exist: the middle ground between the two is semi-supervised learning. In semi-supervised learning, part of the training data is labeled and the rest is not, and the amount of unlabeled data is usually far greater than the amount of labeled data (which matches reality). The basic principle underlying semi-supervised learning is that the distribution of the data cannot be completely random; by combining the local features of a small amount of labeled data with the overall distribution of a much larger amount of unlabeled data, we can obtain acceptable or even very good classification results. (Many details are omitted here.)
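One simple way to act on that principle is self-training (pseudo-labeling): start from the few labeled points, repeatedly label the unlabeled points we are most confident about, and fold them back into the labeled set. The sketch below uses invented data and a nearest-neighbour rule purely for illustration; it is one possible semi-supervised approach, not the standard method.

```python
import numpy as np

def nn_predict(X_lab, y_lab, X_query):
    """Label each query point with the label of its nearest labeled point."""
    d = np.linalg.norm(X_query[:, None] - X_lab[None, :], axis=2)
    return y_lab[np.argmin(d, axis=1)], d.min(axis=1)

# Made-up data: a handful of labeled points and many unlabeled ones.
X_lab = np.array([[0.0, 0.0], [4.0, 4.0]])
y_lab = np.array([0, 1])
rng = np.random.default_rng(1)
X_unl = np.vstack([rng.normal(0, 0.8, (30, 2)), rng.normal(4, 0.8, (30, 2))])

# Self-training: repeatedly pseudo-label the unlabeled points we are most
# confident about (closest to already-labeled points) and treat them as
# labeled, so labels spread gradually along the data's distribution.
for _ in range(5):
    if len(X_unl) == 0:
        break
    pred, dist = nn_predict(X_lab, y_lab, X_unl)
    keep = dist <= np.quantile(dist, 0.3)          # most confident 30%
    X_lab = np.vstack([X_lab, X_unl[keep]])
    y_lab = np.concatenate([y_lab, pred[keep]])
    X_unl = X_unl[~keep]

print("points labeled so far:", len(y_lab))
```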

The learning family as a whole is therefore structured like this:

- Supervised learning (classification, regression)
- Semi-supervised learning (classification, regression); transductive learning (classification, regression)
- Semi-supervised clustering (the labels on the labeled data are not definite; more like: definitely not xxx, very likely yyy)
- Unsupervised learning (clustering)

Summary

Andrew Ng's course on Coursera mainly covers supervised learning (regression and classification) and unsupervised learning (clustering), the concepts outlined above.

References

- Coursera - Machine Learning (Andrew Ng): https://www.coursera.org/learn/machine-learning
- 什么是无监督学习? (What is unsupervised learning?), Zhihu: https://www.zhihu.com/question/23194489
