The first neural network was introduced in 1957 by Frank Rosenblatt as a simplified model of a neuron called the perceptron. The perceptron is a linear binary classifier with a simple input-output relationship: it receives multiple input signals a_i, multiplies each by its weight W_i, sums the products, and feeds the result through an activation function g(x) that produces the output y. During the learning phase, the perceptron adjusts the weights to reduce the error until all the input records are correctly classified.
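As a concrete illustration, here is a minimal sketch of a perceptron in Python with a step activation and the classic perceptron weight-update rule; the class name, learning rate, bias term, and the AND-function training data are illustrative choices rather than details from the text above.

```python
import numpy as np

class Perceptron:
    """Minimal perceptron: y = g(sum_i W_i * a_i + b) with a step activation g."""

    def __init__(self, n_inputs, learning_rate=0.1):
        self.w = np.zeros(n_inputs)   # weights W_i
        self.b = 0.0                  # bias term (assumed here for convenience)
        self.lr = learning_rate

    def activation(self, x):
        # Step function g(x): 1 if the weighted sum is non-negative, else 0
        return 1 if x >= 0 else 0

    def predict(self, a):
        # Weighted sum of the inputs a_i, passed through the activation
        return self.activation(np.dot(self.w, a) + self.b)

    def fit(self, X, y, epochs=100):
        # Perceptron learning rule: nudge the weights by the prediction error,
        # repeating until every training record is classified correctly
        # (or the epoch budget runs out)
        for _ in range(epochs):
            errors = 0
            for a, target in zip(X, y):
                error = target - self.predict(a)
                if error != 0:
                    self.w += self.lr * error * a
                    self.b += self.lr * error
                    errors += 1
            if errors == 0:
                break


# Usage: learn the linearly separable AND function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
p = Perceptron(n_inputs=2)
p.fit(X, y)
print([p.predict(a) for a in X])  # expected: [0, 0, 0, 1]
```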