Machine Learning

Linear regression with one variable

2020-01-27  spraysss

Housing Prices

Hypothesis:

h_{\theta}(x)=\theta_0+\theta_1x

Parameters:

\theta_0, \theta_1

Cost function:

Squared error
J(\theta_0,\theta_1)=\frac{1}{2m}\sum_{i=1}^{m}(h_\theta(x^{(i)})-y^{(i)})^2

Goal

\min_{\theta_0,\theta_1} J(\theta_0,\theta_1)
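As a minimal sketch of the cost function above (the data values below are hypothetical, just to show the calculation):

```python
import numpy as np

def compute_cost(x, y, theta0, theta1):
    """Squared-error cost J(theta0, theta1) = (1/2m) * sum (h(x_i) - y_i)^2."""
    m = len(x)
    predictions = theta0 + theta1 * x          # h_theta(x) for every sample
    return np.sum((predictions - y) ** 2) / (2 * m)

# Toy housing data (hypothetical): size vs. price, scaled units
x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])

print(compute_cost(x, y, 0.0, 1.0))  # perfect fit -> 0.0
print(compute_cost(x, y, 0.0, 0.5))  # worse fit -> larger cost
```

The factor 1/2 exists only to cancel the exponent 2 when differentiating; it does not change where the minimum is.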

Gradient descent

Gradient descent algorithm

repeat until convergence {
\quad \theta_j := \theta_j - \alpha\frac{\partial}{\partial\theta_j}J(\theta_0,\theta_1) \quad (\text{for } j=0 \text{ and } j=1)
}
Note: the correct way is a simultaneous update — compute the new values of \theta_0 and \theta_1 from the current parameters, then assign both at once. Updating \theta_0 first would change the hypothesis used in \theta_1's update.
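A sketch of the update rule with the simultaneous update made explicit (the learning rate, iteration count, and toy data are assumptions for illustration; for h(x) = \theta_0 + \theta_1 x, the partial derivatives are the mean error and the mean error weighted by x):

```python
import numpy as np

def gradient_descent(x, y, alpha=0.1, iters=1000):
    """Batch gradient descent for h(x) = theta0 + theta1 * x.

    Both parameters are updated simultaneously: the gradients are
    computed from the current (theta0, theta1) before either changes.
    """
    m = len(x)
    theta0, theta1 = 0.0, 0.0
    for _ in range(iters):
        error = theta0 + theta1 * x - y        # h_theta(x^(i)) - y^(i)
        grad0 = np.sum(error) / m              # dJ/dtheta0
        grad1 = np.sum(error * x) / m          # dJ/dtheta1
        # Tuple assignment applies both updates at once
        theta0, theta1 = theta0 - alpha * grad0, theta1 - alpha * grad1
    return theta0, theta1

# Data lying exactly on y = 1 + 2x, so descent should recover (1, 2)
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 1.0 + 2.0 * x
theta0, theta1 = gradient_descent(x, y)
print(round(theta0, 3), round(theta1, 3))  # -> 1.0 2.0
```

The tuple assignment on the last line of the loop is what makes the update simultaneous; writing `theta0 = ...` and then `theta1 = ...` on separate lines, with `grad1` recomputed in between, would be the incorrect sequential variant.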
