Here \(\mathbf{\nabla}f(\mathbf{x}_{t})\) is the gradient vector calculated at \(\mathbf{x}_{t}\).
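To make the first-order update concrete, the following is a minimal sketch of a gradient descent step; the objective, the step size \(\alpha\), and the iteration count are illustrative assumptions, not taken from the text.

```python
import numpy as np

def gradient_descent(grad_f, x0, alpha=0.1, n_iters=100):
    """First-order update: x_{t+1} = x_t - alpha * grad f(x_t)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        x = x - alpha * grad_f(x)  # step in the opposite direction of the gradient
    return x

# Illustrative quadratic f(x) = x^T A x / 2, whose gradient is A x.
A = np.array([[3.0, 0.0], [0.0, 1.0]])
x_min = gradient_descent(lambda v: A @ v, x0=[1.0, 1.0])
```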
To improve on the convergence of first-order methods, second-order methods correct the opposite direction of the gradient vector with curvature information from the Hessian matrix, which provides second-order information. The Modified Newton method, for example, adopts a unit step size, and the method is described by Eq. (3).
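As an illustration, here is a minimal sketch of a Newton-type update with unit step size, assuming Eq. (3) takes the standard form \(\mathbf{x}_{t+1}=\mathbf{x}_{t}-[\mathbf{H}f(\mathbf{x}_{t})]^{-1}\mathbf{\nabla}f(\mathbf{x}_{t})\); the function names and the test problem are hypothetical.

```python
import numpy as np

def newton_step(grad_f, hess_f, x):
    """Second-order update with unit step size:
    x_{t+1} = x_t - H^{-1} grad f(x_t) (assumed form of Eq. (3))."""
    # Solve H d = grad f(x) rather than forming the inverse explicitly.
    d = np.linalg.solve(hess_f(x), grad_f(x))
    return x - d  # unit step along the Newton direction

# Illustrative quadratic f(x) = x^T A x / 2: one Newton step reaches the minimizer.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
x = np.array([1.0, -1.0])
x_next = newton_step(lambda v: A @ v, lambda v: A, x)
```

For a quadratic objective the Hessian is constant and the update converges in a single step, which is why the curvature correction can accelerate convergence relative to the first-order rule above.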