Strangely, HPIPM's success rate is lowest at low accuracy, and it increases at medium and even more at high accuracy.
This behavior doesn't appear for the other solvers.
The reason seems to be the low percentage of problems where "solved" return codes are correct.
Looking into the issue more carefully, it seems to be related to the duality gap: the interface sets the solver's 'max_res_comp' (i.e. the complementarity-slackness tolerance) to a specified value, but then it accepts the solution as correct if the duality gap (instead of the complementarity slackness) is within that specified value.
Complementarity slackness and duality gap are equivalent only at a point that is exactly primal and dual feasible; numerically this holds only approximately, and the approximation can be especially crude in the low-accuracy case, where the primal and dual feasibility tolerances are also loose.
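To make the distinction concrete, here is a small numerical sketch (toy QP data and the approximate primal-dual pair are hypothetical, chosen for illustration, and not taken from the benchmark). At a point where the stationarity residual `r = Px + q + G'z` is nonzero, the complementarity slackness `z'(h - Gx)` and the duality gap `f(x) - d(z)` differ, so a tolerance imposed on one does not bound the other:

```python
import numpy as np

# Toy QP: minimize 0.5 x'Px + q'x  subject to  Gx <= h
P = np.array([[2.0, 0.0], [0.0, 2.0]])
q = np.array([-2.0, -2.0])
G = np.array([[1.0, 1.0]])
h = np.array([1.0])

# A hypothetical approximate primal-dual pair, as an interior-point
# method might return at loose tolerances: primal feasible, z >= 0,
# but the stationarity residual r = Px + q + G'z is nonzero.
x = np.array([0.45, 0.45])
z = np.array([0.9])

f = 0.5 * x @ P @ x + q @ x            # primal objective
comp = float(z @ (h - G @ x))          # complementarity slackness

# Dual objective d(z) = min_x L(x, z), computed exactly since P > 0
x_star = np.linalg.solve(P, -(q + G.T @ z))
d = 0.5 * x_star @ P @ x_star + q @ x_star + float(z @ (G @ x_star - h))
gap = f - d                            # duality gap

print(f"complementarity = {comp:.4f}")  # 0.0900
print(f"duality gap     = {gap:.4f}")   # 0.1100
```

At this point the complementarity slackness is 0.09 while the duality gap is 0.11; the 0.02 difference is exactly `0.5 * r' P^{-1} r`, the penalty for the stationarity residual. The two quantities coincide only when that residual vanishes, i.e. at exact dual feasibility.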
Thank you @giaf for raising this fine point on the difference between complementarity-slackness and duality-gap termination at low precision. The reason qpbenchmark went with the duality gap as a termination criterion is its availability in most solvers (it appeared to be a de facto standard).
I don't have other ideas to suggest at present, so I'm following up on your offer to add an option for duality-gap termination in HPIPM: giaf/hpipm#171 😉
Issue reported by @giaf in qpsolvers/mpc_qpbenchmark#7 (comment):