Machine Learning: Gradient Descent
2023-08-29
飞猪的浪漫
![](https://img.haomeiwen.com/i9149867/26bb7e905c0f7ad2.png)
![](https://img.haomeiwen.com/i9149867/09ff44ccb6822748.png)
![](https://img.haomeiwen.com/i9149867/3ce8c6ba70935de1.png)
![](https://img.haomeiwen.com/i9149867/5467d052fdf597d7.png)
![](https://img.haomeiwen.com/i9149867/c4e4dd73394ad968.png)
![](https://img.haomeiwen.com/i9149867/c89dae5e8c7263cf.png)
Summary:

- Batch gradient descent: uses all m training examples in each iteration.
- Stochastic gradient descent: uses a single example in each iteration.
- Mini-batch gradient descent: uses b examples (1 < b < m) in each iteration.
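The three variants above differ only in how many examples feed each parameter update. A minimal sketch (linear regression with MSE loss, NumPy; the function name, learning rate, and synthetic data are illustrative assumptions, not from the original post) shows how a single batch-size parameter b covers all three cases:

```python
import numpy as np

def gradient_descent(X, y, b, lr=0.1, epochs=100, seed=0):
    """Gradient descent for linear regression (MSE loss).

    b = m      -> batch gradient descent (all examples per iteration)
    b = 1      -> stochastic gradient descent (one example per iteration)
    1 < b < m  -> mini-batch gradient descent
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(epochs):
        idx = rng.permutation(m)              # reshuffle each epoch
        for start in range(0, m, b):
            batch = idx[start:start + b]
            Xb, yb = X[batch], y[batch]
            # gradient of (1/2k) * ||Xb @ theta - yb||^2, k = batch size
            grad = Xb.T @ (Xb @ theta - yb) / len(batch)
            theta -= lr * grad
    return theta

# Synthetic data: y = 2 + 3*x, noiseless, so all variants recover [2, 3]
X = np.column_stack([np.ones(100), np.linspace(0, 1, 100)])
y = X @ np.array([2.0, 3.0])
theta = gradient_descent(X, y, b=10)   # mini-batch with b = 10
```

Setting `b=len(X)` or `b=1` reproduces the batch and stochastic variants with no other code changes, which is why mini-batch is usually presented as the general form.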