17. Large scale machine learning

2021-01-30  玄语梨落


Learning with large datasets

Stochastic gradient descent

Batch gradient descent:

J_{train}(\theta)=\frac{1}{2m}\sum\limits_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})^2
Repeat{
\theta_j:=\theta_j-\alpha\frac{1}{m}\sum\limits_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})x_j^{(i)}
}
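
Below is a minimal NumPy sketch of a single batch update, assuming a design matrix X (m examples, including a bias column of ones), a label vector y, a parameter vector theta, and a learning rate alpha; these names are illustrative, not from the original notes.

```python
import numpy as np

def batch_gradient_step(theta, X, y, alpha):
    """One batch update: uses all m examples to compute the gradient."""
    m = len(y)
    errors = X @ theta - y            # h_theta(x^(i)) - y^(i) for every i
    gradient = (X.T @ errors) / m     # (1/m) * sum over the whole training set
    return theta - alpha * gradient
```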

Stochastic gradient descent:

cost(\theta,(x^{(i)},y^{(i)}))=\frac{1}{2}(h_\theta(x^{(i)})-y^{(i)})^2
J_{train}(\theta)=\frac{1}{m}\sum\limits_{i=1}^m cost(\theta,(x^{(i)},y^{(i)}))

  1. Randomly shuffle the dataset.
  2. Repeat{
       for i = 1,\dots,m {
         \theta_j:=\theta_j-\alpha(h_\theta(x^{(i)})-y^{(i)})x_j^{(i)} (for every j=0,\dots,n)
       }
     } (a minimal code sketch follows below)
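
A minimal sketch of the loop above, under the same hypothetical X, y, and alpha, with a fixed number of outer passes:

```python
import numpy as np

def stochastic_gradient_descent(X, y, alpha=0.01, passes=1):
    """Shuffle the data, then update theta one example at a time."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(passes):                 # Repeat { ... }
        order = np.random.permutation(m)    # randomly shuffle the dataset
        for i in order:                     # for i = 1, ..., m
            error = X[i] @ theta - y[i]     # h_theta(x^(i)) - y^(i)
            theta -= alpha * error * X[i]   # update every theta_j at once
    return theta
```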

Mini-batch gradient descent

Mini-batch gradient descent: Use b examples in each iteration.

b = mini-batch size (say b = 10)

Get the next b examples and perform the update (shown here with b = 10):

\theta_j:=\theta_j-\alpha\frac{1}{10}\sum\limits_{k=i}^{i+9}(h_\theta(x^{(k)})-y^{(k)})x_j^{(k)}
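
A sketch of the same idea with a hypothetical batch size b, defaulting to the b = 10 of the formula above:

```python
import numpy as np

def minibatch_gradient_descent(X, y, alpha=0.01, b=10, passes=1):
    """Update theta using b examples per step instead of 1 or m."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(passes):
        for start in range(0, m, b):
            Xb, yb = X[start:start + b], y[start:start + b]
            errors = Xb @ theta - yb
            theta -= alpha * (Xb.T @ errors) / len(yb)  # (1/b) * sum over the batch
    return theta
```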

Stochastic gradient descent convergence

Checking for convergence: before each update, compute cost(\theta,(x^{(i)},y^{(i)})) on the example about to be used; every 1000 iterations or so, plot that cost averaged over the last 1000 examples to see whether it is still decreasing.

For stochastic gradient descent the learning rate \alpha is typically held constant. It can be slowly decreased over time if we want \theta to converge (e.g. \alpha = \frac{const1}{iterationNumber + const2}).
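
A small sketch of both ideas: a decaying learning rate of the form const1/(iterationNumber + const2), and averaging the last ~1000 per-example costs to monitor convergence; the constants and the window size are illustrative choices, not prescribed by the notes.

```python
def decayed_alpha(iteration, const1=1.0, const2=100.0):
    """alpha = const1 / (iterationNumber + const2): shrinks as training proceeds."""
    return const1 / (iteration + const2)

def averaged_cost(costs, window=1000):
    """Average the last `window` per-example costs for a smoother convergence curve."""
    recent = costs[-window:]
    return sum(recent) / len(recent)
```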

Online learning

Process each example once as it arrives: update \theta using that single example, then discard it. This fits settings with a continuous stream of data, such as a large website.

Example: predicting CTR (click-through rate), i.e. whether a user will click on an item shown to them.
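
A sketch of an online update for CTR prediction using a logistic-style hypothesis: each incoming (feature vector, clicked?) event updates theta once and is then thrown away. The sigmoid hypothesis and the feature encoding are assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def online_update(theta, x, clicked, alpha=0.1):
    """Update theta from a single (features, 0/1 click) event, then forget it."""
    error = sigmoid(x @ theta) - clicked   # prediction error for this one event
    return theta - alpha * error * x
```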

Map-reduce and data parallelism

Divide the work into many parts, compute them in parallel on different machines, then combine the results.

Map-reduce and summation over the training set:

Many learning algorithms can be expressed as computing sums of functions over the training set, so each machine can compute the partial sum over its own subset of the data.

Multi-core machines: the same idea applies on a single machine by splitting the training set across the machine's cores.
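
A sketch of splitting the gradient sum over the training set across worker processes with Python's multiprocessing (standing in for separate machines or cores); the four-way split and the helper names are assumptions for illustration.

```python
import numpy as np
from multiprocessing import Pool

def partial_gradient(args):
    """Map step: each worker sums (h_theta(x) - y) * x over its slice of the data."""
    theta, X_part, y_part = args
    errors = X_part @ theta - y_part
    return X_part.T @ errors

def mapreduce_gradient(theta, X, y, workers=4):
    """Reduce step: add the partial sums and divide by m on the central machine."""
    X_parts = np.array_split(X, workers)
    y_parts = np.array_split(y, workers)
    with Pool(workers) as pool:
        partials = pool.map(partial_gradient,
                            [(theta, Xp, yp) for Xp, yp in zip(X_parts, y_parts)])
    return sum(partials) / len(y)
```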
