Forward Propagation
Forward propagation is the process by which a neural network produces an output. It's called "forward" because the data moves in one direction, from the input layer to the output layer. The input layer receives the raw data and passes it to the first hidden layer. Each neuron in a hidden layer computes the weighted sum of its inputs plus a bias, then applies an activation function to introduce non-linearity so that the model can learn complex patterns. The result of each layer is passed to the next layer, and the process repeats until the data reaches the output layer after passing through all the hidden layers.
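As a minimal sketch (assuming fully connected layers and sigmoid activations throughout, matching the example below), the whole forward pass is just a loop over the layers:

```python
import numpy as np

def sigmoid(z):
    # Squashes any real value into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, layers):
    # layers: list of (weight_matrix, bias_vector) pairs, one per layer
    a = x
    for W, b in layers:
        z = W @ a + b   # weighted sum of the inputs plus the bias
        a = sigmoid(z)  # activation introduces non-linearity
    return a            # activation of the output layer
```

Let's consider an example: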
For one input, let the features be $x_1$, $x_2$ and $x_3$. Let the first hidden neuron have weights $w_1$, $w_2$, $w_3$ and bias $b_1$, the second hidden neuron have weight $w_4$ and bias $b_2$, and the output neuron have weight $w_5$ and bias $b_3$.
This input is passed through the first hidden layer.
Hidden Layer One:
In the first step, the weighted sum is calculated.
- Weighted sum: $z_1 = w_1 x_1 + w_2 x_2 + w_3 x_3 + b_1$
- Activation (Sigmoid function): $a_1 = \sigma(z_1) = \frac{1}{1 + e^{-z_1}}$
In the second step, $z_1$ is passed through the sigmoid activation function. The main aim of the sigmoid curve is to squash the value of $z_1$ into the range between 0 and 1.
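To make this concrete in code, here is a small sketch of hidden layer one (the feature, weight and bias values below are illustrative placeholders, not values from the text):

```python
import numpy as np

def sigmoid(z):
    # Converts the weighted sum into a value between 0 and 1
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative values only (assumed for this sketch)
x1, x2, x3 = 0.5, 0.8, 0.2           # input features
w1, w2, w3, b1 = 0.4, 0.3, 0.6, 0.1  # weights and bias of the first hidden neuron

z1 = w1 * x1 + w2 * x2 + w3 * x3 + b1  # step 1: weighted sum
a1 = sigmoid(z1)                       # step 2: sigmoid activation
print(z1, a1)  # z1 = 0.66, a1 ≈ 0.659
```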
The activation $a_1$ now passes through the second hidden layer.
Hidden Layer Two:
- Weighted sum: $z_2 = w_4 a_1 + b_2$
- Activation (Sigmoid function): $a_2 = \sigma(z_2) = \frac{1}{1 + e^{-z_2}}$
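Continuing the sketch above with an assumed weight $w_4 = 0.7$ and bias $b_2 = 0.05$ for this neuron:

```python
w4, b2 = 0.7, 0.05   # illustrative weight and bias (assumed)

z2 = w4 * a1 + b2    # weighted sum of the previous activation
a2 = sigmoid(z2)     # z2 ≈ 0.511, a2 ≈ 0.625
```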
The output neuron takes the activation from the second hidden layer as its input.
Output:
- Weighted sum: $z_3 = w_5 a_2 + b_3$
- Activation (Sigmoid function): $\hat{y} = \sigma(z_3) = \frac{1}{1 + e^{-z_3}}$
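And finally the output neuron, again with assumed values $w_5 = 0.9$ and $b_3 = -0.2$:

```python
w5, b3 = 0.9, -0.2    # illustrative weight and bias (assumed)

z3 = w5 * a2 + b3     # weighted sum at the output neuron
y_hat = sigmoid(z3)   # predicted output; y_hat ≈ 0.59 here
```

For the values above, $\hat{y} \approx 0.59$, a prediction between 0 and 1 that can be compared against the actual target.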
The loss function measures the error, that is, the difference between the actual value $y$ and the predicted value $\hat{y}$. We need to reduce this error by updating the weights, and the process of updating the weights moves backward through the network. This process is called backward propagation (backpropagation).
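As a sketch, with a squared-error loss (one common choice; the text does not name a specific loss function), the error for the single prediction above could be computed as:

```python
y_true = 1.0                   # actual target value (assumed for illustration)
loss = (y_true - y_hat) ** 2   # squared difference between actual and predicted
# Backpropagation uses the gradient of this loss to update each weight.
```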