[Machine Learning] - Week 4.1 Model Representation I

2019-12-29  Kitty_风花

Model Representation I

Let's examine how we will represent a hypothesis function using neural networks. At a very simple level, neurons are basically computational units that take inputs (dendrites) as electrical signals (called "spikes") that are channeled to outputs (axons).

In our model, our dendrites are like the input features x_{1}, \dots, x_{n}, and the output is the result of our hypothesis function. In this model, our x_{0} input node is sometimes called the "bias unit." It is always equal to 1. In neural networks, we use the same logistic function as in classification, yet we sometimes call it a sigmoid (logistic) activation function. In this situation, our "theta" parameters are sometimes called "weights".

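Written out, the activation function in question is just the sigmoid applied to a linear combination of the inputs (restated here in the course's notation):

h_\theta(x) = g(\theta^{T}x), \qquad g(z) = \frac{1}{1 + e^{-z}}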
Visually, a simplistic representation looks like:
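(The original figure is an image; a rough sketch of it in the same notation:)

\begin{bmatrix} x_0 \\ x_1 \\ x_2 \end{bmatrix} \rightarrow [\ \ ] \rightarrow h_\theta(x)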

Our input nodes (layer 1), also known as the "input layer", go into another node (layer 2), which finally outputs the hypothesis function, known as the "output layer".

We can have intermediate layers of nodes between the input and output layers called the "hidden layers."

In this example, we label these intermediate or "hidden" layer nodes a_{0}^{(2)}, \dots, a_{n}^{(2)} and call them "activation units."

If we had one hidden layer, it would look like:
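(Again the figure is an image; roughly, with one hidden layer of three activation units:)

\begin{bmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \end{bmatrix} \rightarrow \begin{bmatrix} a_1^{(2)} \\ a_2^{(2)} \\ a_3^{(2)} \end{bmatrix} \rightarrow h_\theta(x)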

The value of each of the "activation" nodes is obtained as follows:
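(The equations appear as an image in the original; written out, with g the sigmoid function:)

a_1^{(2)} = g(\Theta_{10}^{(1)} x_0 + \Theta_{11}^{(1)} x_1 + \Theta_{12}^{(1)} x_2 + \Theta_{13}^{(1)} x_3)
a_2^{(2)} = g(\Theta_{20}^{(1)} x_0 + \Theta_{21}^{(1)} x_1 + \Theta_{22}^{(1)} x_2 + \Theta_{23}^{(1)} x_3)
a_3^{(2)} = g(\Theta_{30}^{(1)} x_0 + \Theta_{31}^{(1)} x_1 + \Theta_{32}^{(1)} x_2 + \Theta_{33}^{(1)} x_3)
h_\Theta(x) = a_1^{(3)} = g(\Theta_{10}^{(2)} a_0^{(2)} + \Theta_{11}^{(2)} a_1^{(2)} + \Theta_{12}^{(2)} a_2^{(2)} + \Theta_{13}^{(2)} a_3^{(2)})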

This is saying that we compute our activation nodes by using a 3×4 matrix of parameters. We apply each row of the parameters to our inputs to obtain the value for one activation node. Our hypothesis output is the logistic function applied to the sum of the values of our activation nodes, which have been multiplied by yet another parameter matrix \Theta^{(2)} containing the weights for our second layer of nodes.

Each layer gets its own matrix of weights, \Theta^{(j)}.
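To make the computation concrete, here is a minimal NumPy sketch of this forward pass. It is not from the course; the weight values are random placeholders and the names Theta1, Theta2 are just illustrative, but the shapes follow the 3-input, 3-hidden-unit example above.

```python
import numpy as np

def sigmoid(z):
    """Logistic activation g(z) = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical weights: Theta1 maps layer 1 -> layer 2, Theta2 maps layer 2 -> layer 3.
# With 3 input features and 3 hidden units, Theta1 is 3 x 4 and Theta2 is 1 x 4
# (the extra column in each matrix multiplies the bias unit, which is always 1).
Theta1 = np.random.randn(3, 4)
Theta2 = np.random.randn(1, 4)

x = np.array([0.5, -1.2, 3.0])       # input features x1, x2, x3

a1 = np.concatenate(([1.0], x))      # add bias unit x0 = 1 -> layer 1 activations
a2 = sigmoid(Theta1 @ a1)            # hidden layer activations a1^(2), a2^(2), a3^(2)
a2 = np.concatenate(([1.0], a2))     # add bias unit a0^(2) = 1
h = sigmoid(Theta2 @ a2)             # hypothesis h_Theta(x)

print(h)                             # a single value in (0, 1)
```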

The dimensions of these matrices of weights are determined as follows:
If a network has s_{j} units in layer j and s_{j+1} units in layer j+1, then \Theta^{(j)} will be of dimension s_{j+1} \times (s_{j} + 1).

The +1 comes from the addition in \Theta^{(j)} of the "bias nodes," x_{0} and \Theta_{0}^{(j)}. In other words, the output nodes will not include the bias nodes, while the inputs will. The following image summarizes our model representation:
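(The image itself is not reproduced here; the notation it summarizes, as used throughout these notes, is: a_{i}^{(j)} denotes the "activation" of unit i in layer j, and \Theta^{(j)} is the matrix of weights controlling the function mapping from layer j to layer j+1.)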

Example: If layer 1 has 2 input nodes and layer 2 has 4 activation nodes, then the dimension of \Theta^{(1)} is going to be 4×3, where s_{j} = 2 and s_{j+1} = 4, so s_{j+1} \times (s_{j} + 1) = 4 \times 3.
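As a quick sanity check of this dimension rule, a short sketch (the layer sizes extend the example above with an assumed single output unit; none of this code comes from the course):

```python
# Theta(j) has shape s_{j+1} x (s_j + 1); the +1 accounts for the bias unit.
layer_sizes = [2, 4, 1]   # s1 = 2 inputs, s2 = 4 activation units, assumed s3 = 1 output
theta_shapes = [(layer_sizes[j + 1], layer_sizes[j] + 1)
                for j in range(len(layer_sizes) - 1)]
print(theta_shapes)       # [(4, 3), (1, 5)] -> Theta(1) is 4x3, Theta(2) is 1x5
```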

Source: Coursera, Stanford, Andrew Ng, Machine Learning
