Training multilayer neural networks was not practical until the popularization of the back-propagation algorithm by Rumelhart, Hinton, and Williams in 1986. Backpropagation is nowadays the most common learning algorithm for neural networks. The algorithm adjusts the weights of the network only when the output does not match the label; the "blame" for the error is divided across the contributing weights. In a feed-forward multilayer network this is challenging, because many weights connect each input to the output. The error is defined as the difference between the network output and the target output value for the training example. The key question is how to distribute the blame across the different layers of the network. Each hidden node is responsible for a portion of the error in each of the output neurons to which it has a forward connection, and each portion is proportional to the weight of the connection between the hidden node and that output node. These portions are summed over the neurons of each layer, and the weights are updated layer by layer, propagating the error backward from the output toward the input.
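To make the blame-assignment concrete, the following is a minimal sketch of one backpropagation update for a network with a single hidden layer. It assumes sigmoid activations and a squared-error loss; the function and variable names (train_step, W1, W2, lr) are illustrative choices, not part of the original text.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, y, W1, W2, lr=0.1):
    """One backpropagation update on a single training example.

    x: input vector, y: target (label) vector,
    W1: input-to-hidden weights, W2: hidden-to-output weights.
    """
    # Forward pass: compute hidden activations and network output.
    h = sigmoid(W1 @ x)
    y_hat = sigmoid(W2 @ h)

    # Output error: difference between network output and the target,
    # scaled by the sigmoid derivative at the output.
    delta_out = (y_hat - y) * y_hat * (1.0 - y_hat)

    # Blame assignment: each hidden node receives a share of the output
    # error proportional to the weight of its forward connection (W2.T
    # distributes the error), summed over all output neurons.
    delta_hidden = (W2.T @ delta_out) * h * (1.0 - h)

    # Update the weights layer by layer, moving against the gradient.
    W2 -= lr * np.outer(delta_out, h)
    W1 -= lr * np.outer(delta_hidden, x)
    return W1, W2

# Example usage with small random weights (3 inputs, 4 hidden, 2 outputs):
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(4, 3))
W2 = rng.normal(scale=0.5, size=(2, 4))
x = np.array([0.5, -0.2, 0.1])
y = np.array([1.0, 0.0])
W1, W2 = train_step(x, y, W1, W2)
```

The line computing delta_hidden is where the error is distributed: multiplying by the transposed weight matrix gives each hidden node a share of every output neuron's error in proportion to the connecting weight, exactly the division of blame described above.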