Andrew Ng's Machine Learning: Course Summary

2018-12-03  YANWeichuan

Supervised Learning

| Algorithm | Hypothesis $h_\theta(x)$ | Cost function | Optimization | Notes |
| --- | --- | --- | --- | --- |
| Linear regression | $h_\theta(x) = \theta_0 + \theta_1 x$ | $J(\theta_0, \theta_1) = \frac{1}{2m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2$ | $\theta_j := \theta_j - \alpha\frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)}$ | |
| Logistic regression | $0 \le h_\theta(x) \le 1$; $h_\theta(x) = g(\theta^T x)$, $g(z) = \frac{1}{1 + e^{-z}}$ | $J(\theta) = -\frac{1}{m}\left[\sum_{i=1}^{m} y^{(i)}\log h_\theta(x^{(i)}) + (1 - y^{(i)})\log\left(1 - h_\theta(x^{(i)})\right)\right]$ | $\theta_j := \theta_j - \alpha\frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)}$ | |
| Neural network | sigmoid | $J(\Theta) = -\frac{1}{m}\left[\sum_{i=1}^{m}\sum_{k=1}^{K} y_k^{(i)}\log\left(h_\Theta(x^{(i)})\right)_k + (1 - y_k^{(i)})\log\left(1 - h_\Theta(x^{(i)})\right)_k\right] + \frac{\lambda}{2m}\sum_{l=1}^{L-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}}\left(\Theta_{ji}^{(l)}\right)^2$ | Backpropagation | |
| SVM | | $\min_\theta\; C\sum_{i=1}^{m}\left[y^{(i)}\,\mathrm{cost}_1(\theta^T x^{(i)}) + (1 - y^{(i)})\,\mathrm{cost}_0(\theta^T x^{(i)})\right] + \frac{1}{2}\sum_{j=1}^{n}\theta_j^2$ | | Large-margin decision boundary |
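The linear-regression row above can be sketched as a short batch gradient-descent loop. This is a minimal illustration rather than course code; the toy data, learning rate, and iteration count are assumptions chosen so the loop converges.

```python
import numpy as np

def gradient_descent(X, y, alpha=0.5, iters=1000):
    """Batch gradient descent for linear regression.

    X: (m, n) design matrix with a leading column of ones for theta_0.
    y: (m,) target vector.  Returns the learned theta of shape (n,).
    """
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iters):
        # Simultaneous update: theta_j -= alpha * (1/m) * sum_i (h(x^i) - y^i) * x_j^i
        grad = X.T @ (X @ theta - y) / m
        theta -= alpha * grad
    return theta

# Recover theta_0 = 1, theta_1 = 2 from noiseless data y = 1 + 2x.
x = np.linspace(0.0, 1.0, 20)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x
theta = gradient_descent(X, y)
```

Because the update writes the whole gradient at once, every $\theta_j$ is updated simultaneously, as the course requires.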
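The logistic-regression row uses the same update form, with the sigmoid hypothesis plugged in and the cross-entropy cost from the table. A minimal sketch on a tiny hand-made separable dataset (the data and step size are assumptions):

```python
import numpy as np

def sigmoid(z):
    # g(z) = 1 / (1 + e^{-z})
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    # Cross-entropy cost J(theta) for logistic regression
    h = sigmoid(X @ theta)
    return -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / len(y)

def step(theta, X, y, alpha=1.0):
    # Same gradient form as linear regression, but h is the sigmoid
    grad = X.T @ (sigmoid(X @ theta) - y) / len(y)
    return theta - alpha * grad

# Tiny separable 1-D example: bias column plus one feature.
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
theta = np.zeros(2)
initial = cost(theta, X, y)          # log(2) at theta = 0
for _ in range(500):
    theta = step(theta, X, y)
final = cost(theta, X, y)
preds = (sigmoid(X @ theta) >= 0.5).astype(float)
```

Since $J(\theta)$ is convex for logistic regression, plain gradient descent with a small enough step size drives the cost down monotonically and separates the two classes.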