Perceptron
- The perceptron is a fundamental concept in deep learning and a building block of neural networks, introduced by Frank Rosenblatt in 1957.
Structure

- Input: The perceptron takes multiple input values, denoted as x₁, x₂, ..., xₙ.
- Weights: Each input is associated with a weight w. Weights represent the importance of the input signal passing through a connection. They control the flow of information from one layer to the next and determine how much influence a particular input has on the neuron's output.
- Hidden Layer: A hidden layer is a layer of neurons between the input layer and the output layer. It performs the intermediate computations the network uses to reach the output.
- Output: The final output is a binary value indicating the class to which the input data belongs.
There are two steps in the hidden layer.
In step 1, we take the weighted sum of the inputs and add a bias: z = w₁x₁ + w₂x₂ + ... + wₙxₙ + b.

Weights determine the contribution of each input feature, and the bias helps the network adjust its decision boundary to make more accurate predictions. Even if all weights are zero, the bias allows the neuron to produce a non-zero output.
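Step 1 can be sketched in a few lines of Python; the input values, weights, and bias below are illustrative choices, not values from the text:

```python
# Step 1: weighted sum of inputs plus a bias (illustrative values)
inputs = [1.0, 0.5, -1.0]    # x1, x2, x3
weights = [0.4, -0.2, 0.6]   # w1, w2, w3
bias = 0.1                   # b

# z = w1*x1 + w2*x2 + ... + wn*xn + b
z = sum(w * x for w, x in zip(weights, inputs)) + bias
print(z)
```

This value z is then passed to the activation function in step 2.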
In step 2, we apply an activation function, whose main purpose is to transform the value from step 1.
An activation function determines whether a neuron should be activated based on the input it receives. There are two types of activation functions: linear and non-linear.
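Putting both steps together, a minimal perceptron forward pass might look like the sketch below. The step (Heaviside) activation and the hand-picked weights are assumptions chosen for illustration; they make the unit behave like a logical AND gate:

```python
def perceptron(inputs, weights, bias):
    """Single perceptron: weighted sum (step 1) followed by a step activation (step 2)."""
    # Step 1: weighted sum plus bias
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Step 2: step activation -> binary output (class 0 or 1)
    return 1 if z >= 0 else 0

# Weights chosen by hand so the perceptron acts like AND
weights = [1.0, 1.0]
bias = -1.5
print(perceptron([1, 1], weights, bias))  # 2.0 - 1.5 = 0.5  -> 1
print(perceptron([0, 1], weights, bias))  # 1.0 - 1.5 = -0.5 -> 0
```

In practice the weights and bias are not hand-picked but learned from data, which is what the perceptron learning rule provides.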