Perceptron Learning Algorithm in Deep Learning

Perceptron

    The perceptron is a fundamental concept in deep learning and a building block of neural networks, introduced by Frank Rosenblatt in 1957.

Structure


    Input: A perceptron takes multiple input values, denoted x_1, x_2, ..., x_n, each corresponding to a feature in the input data.
    Weights: Each input is associated with a weight w. Weights represent the importance of the signal passing through a connection: they control the flow of information from one layer to the next and determine how much influence a particular input has on the neuron's output.
    Hidden Layer: A hidden layer is a layer of neurons that sits between the input layer and the output layer; it performs the intermediate computations the network uses to reach its output.
    Output: The final output is a binary value indicating the class to which the input data belongs.
    There are two steps in the hidden layer.
    In step 1, we take the weighted sum of the products w_ix_i and add the bias term:

        \[z = \sum_{i=1}^{n} w_i \cdot x_i + b\]

        \[z = x_1 w_1 + x_2 w_2 + \dots + x_n w_n + b\]

    Weights determine the contribution of each input feature, and the bias lets the network shift its decision boundary to make more accurate predictions. Even if all weights are zero, the bias allows the neuron to produce a non-zero output.
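
As a concrete illustration, step 1 can be computed in a few lines of NumPy. This is a minimal sketch; the feature values, weights, and bias below are made-up example numbers, not values from the text.

```python
import numpy as np

# Hypothetical example values: three input features, their weights, and a bias
x = np.array([1.0, 0.5, -2.0])   # inputs x_1, x_2, x_3
w = np.array([0.4, -0.6, 0.3])   # weights w_1, w_2, w_3
b = 0.1                          # bias term

# Step 1: weighted sum z = w_1*x_1 + w_2*x_2 + w_3*x_3 + b
z = np.dot(w, x) + b
print(z)  # 0.4*1.0 + (-0.6)*0.5 + 0.3*(-2.0) + 0.1 = -0.4
```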
    In step 2, we apply an activation function to z; its role is to transform the weighted sum into the neuron's output:

        \[Act(z)\]

    An activation function determines whether a neuron should be activated based on the input it receives. There are two types of activation functions: linear and non-linear.
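
For the classic perceptron, the activation is the unit step function, which maps z to a binary class label. The sketch below combines step 1 and step 2 into a single forward pass; the function names `step_activation` and `predict` are illustrative choices, not part of any standard API, and the input values are made up.

```python
import numpy as np

def step_activation(z: float) -> int:
    """Unit step function: fires (1) when z >= 0, stays off (0) otherwise."""
    return 1 if z >= 0 else 0

def predict(x: np.ndarray, w: np.ndarray, b: float) -> int:
    """Full perceptron forward pass: weighted sum (step 1) + activation (step 2)."""
    z = np.dot(w, x) + b          # step 1: z = sum(w_i * x_i) + b
    return step_activation(z)     # step 2: binary class label

# Example usage with the same made-up values as above
x = np.array([1.0, 0.5, -2.0])
w = np.array([0.4, -0.6, 0.3])
print(predict(x, w, b=0.1))       # -> 0, since z = -0.4 < 0
```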
