A characteristic of neural networks is their iterative learning process. At each step, the weights and biases applied to the inputs are adjusted based on the correct label, which is known in advance (as in supervised learning). The biases are added to the weighted inputs and shift each node's activation threshold, so that some nodes can activate even when the incoming signal is weak. Initially, random weights and biases are assigned; the output is computed using the activation functions in the hidden layers, and the result is then compared with the desired output. The errors are then propagated back through the network, allowing the system to adjust the weights and biases, thereby increasing the influence of some pieces of information and reducing that of others. During this phase, the neural network learns whether relationships exist across different features, and the connection weights and biases are continually refined until the error is minimized. A loss function quantifies how far the network's guesses fall from the correct labels, and an optimization algorithm such as stochastic gradient descent (SGD) acts on the parameters to minimize that loss.
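The following is a minimal sketch of that loop in NumPy: random initialization, a forward pass through an activation function, error propagated back as gradients, and a gradient-descent update on the weights and biases. All specifics here (the XOR toy data, layer sizes, learning rate, mean squared error as the loss) are illustrative assumptions, not details taken from the text; for simplicity it uses full-batch gradient descent rather than the stochastic mini-batch variant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised dataset (assumed for illustration): XOR, labels known in advance.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Initially, random weights and biases are assigned.
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5  # learning rate (assumed value)
for step in range(5000):
    # Forward pass: biases shift each node's activation threshold.
    h = sigmoid(X @ W1 + b1)    # hidden-layer activations
    out = sigmoid(h @ W2 + b2)  # network output

    # Loss function: mean squared error between output and desired labels.
    loss = np.mean((out - y) ** 2)

    # Backward pass: propagate the error to get a gradient for each parameter.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    dW2 = h.T @ d_out;  db2 = d_out.sum(axis=0, keepdims=True)
    d_h = d_out @ W2.T * h * (1 - h)
    dW1 = X.T @ d_h;    db1 = d_h.sum(axis=0, keepdims=True)

    # Gradient-descent update: adjust weights and biases to reduce the loss.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(out, 2))  # typically approaches [[0], [1], [1], [0]] as the error shrinks
```

Note how each refinement of W1, b1, W2, and b2 moves against the gradient of the loss, which is the sense in which the network is "penalized" for bad guesses: parameters that contributed to the error are weakened, and those that reduced it are strengthened.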