a current tentative solution, and finds a constraint that is not strictly satisfied. The tentative solution is incremented by the normalised coefficients of the unsatisfied constraint, and we repeat until every constraint is strictly satisfied.
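For concreteness, here is a minimal sketch of that update loop in Python with NumPy (not from the source; the function name, the zero starting point, and the iteration cap are illustrative assumptions). It also counts how often each row is picked, a quantity that the next paragraph interprets as a Lagrange multiplier.

\begin{verbatim}
import numpy as np

def perceptron(A, max_iters=10_000):
    # Sketch of the loop described above, assuming every row of A is non-zero.
    m, n = A.shape
    x = np.zeros(n)   # current tentative solution
    y = np.zeros(m)   # how often each row was picked (multipliers, see below)
    for _ in range(max_iters):
        violated = np.flatnonzero(A @ x <= 0)
        if violated.size == 0:                 # all constraints strictly satisfied
            return x, y
        i = violated[0]                        # any unsatisfied constraint will do
        x += A[i] / np.linalg.norm(A[i])       # add the normalised coefficients
        y[i] += 1                              # record the pick for row i
    return x, y                                # may be infeasible if the cap is hit
\end{verbatim}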
In order to view this procedure as a Lagrangian relaxation method, we must impose an additional constraint on tentative solutions and introduce intermediate variables. The additional constraint is that we are only interested in feasible solutions of unit norm: \(\|x\|_2 = 1\). Since this only restricts the feasible set, any unit-norm solution is also a solution to the Perceptron instance; conversely, any solution to \(Ax > 0\) is non-zero, so it can always be rescaled to have unit norm. The intermediate variables are simply the current Lagrange multipliers for the strict inequality constraints: the multiplier for each row of \(A\) is incremented by \(1\) whenever that row is picked as the unsatisfied constraint. Let \(y \in \mathbb{R}^m\) be that non-zero vector of non-negative multipliers; the Lagrangian subproblem (over the \(n\)-dimensional unit ball \(\mathbb{B}^n\)) is