The length of these learning steps is known as the learning rate. The learning rate is a configurable hyperparameter that controls how much to change the model in response to the estimated error each time the model weights are updated. It is typically a positive value between 0.0 and 1.0. Choosing a learning rate is as important as it is challenging: a value that is too small can lead to a long training process that may get stuck, whereas a value that is too large can cause the model to converge too quickly to a sub-optimal set of weights or make the training process unstable.
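These effects can be seen with a minimal sketch (not from the source): plain gradient descent on the toy function f(w) = w², whose gradient is 2w. The function name, starting point, and step counts are illustrative assumptions.

```python
def gradient_descent(learning_rate, steps=50, w=5.0):
    """Illustrative sketch: minimize f(w) = w**2 starting from w = 5.0,
    returning the final weight after `steps` updates."""
    for _ in range(steps):
        grad = 2 * w                  # gradient of f(w) = w**2
        w = w - learning_rate * grad  # update scaled by the learning rate
    return w

# A moderate learning rate converges toward the minimum at w = 0.
print(gradient_descent(0.1))    # → very close to 0

# A too-small learning rate barely moves the weight: training is slow.
print(gradient_descent(0.001))  # → still near the starting point, w ≈ 4.5

# A too-large learning rate overshoots the minimum and diverges.
print(gradient_descent(1.1))    # → |w| grows with every step
```

Each update multiplies the weight by (1 − 2 × learning_rate), so the process converges only when that factor has magnitude below 1, which is why the large step size oscillates and blows up instead of settling.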