Evaluating constraints in IpoptSolver_NLP is always done in series, even when the constraints could be evaluated concurrently
#21724
Comments
I would propose not to change the solver code itself, but instead to add a class that groups constraints for parallel evaluation:

```cpp
class ParallelConstraint {
 public:
  explicit ParallelConstraint(
      const std::vector<Binding<Constraint>>& constraints);

 private:
  template <typename T, typename S>
  void DoEvalGeneric(const Eigen::Ref<const VectorX<T>>& x,
                     VectorX<S>* y) const {
    // Evaluate the grouped constraints in parallel and stack their
    // results into y.
  }
};
```

The advantage of doing it this way is that it gives the user more control over which set of constraints should be evaluated in parallel together, and this same parallel evaluation can be reused across all solvers (SNOPT/IPOPT/NLOPT). cc @jwnimmer-tri in case you want to chime in.
I don't think we should make the user group constraints together. They don't actually know very well how to group things in an efficient and correct way; we should just do it for them. I would say ideally, all choices can be selected independently:
(1) The user builds their MathematicalProgram, adding costs and constraints with no special knowledge of parallelism. For example, nothing in any of the GCSTrajOpt NLP program-building would need to change.
(2) When they call
(3) When the solver supports multicore, it automatically selects the NL constraints that support multicore and evaluates them using a thread pool. (I assume the convex constraints are handled in implicit form by the solver. Anyway, the back-end could do whatever makes the most sense.) Note that any multicore in
FYI #19119 is related. That issue is about using multicore for multiple mathematical programs, broadcasting programs across cores. This one is about using multicore for just one program, broadcasting the constraints (and costs) to multiple cores.
Thanks Jeremy! My concern is that our constraint evaluation is not thread-safe; constraints might share some information. For example, we can construct two kinematics constraints that share the same context if we want to impose both a position and an orientation constraint in an IK problem. Then I don't think we can evaluate these two constraints in parallel, as they both would modify the same context.
Right. That's why I said #10320 is a prerequisite. Any constraint that takes a context pointer without knowing whether it is shared needs to conservatively report that it is not thread-safe.
Is your feature request related to a problem? Please describe.
The following generic constraint evaluation in the IpoptSolver_NLP class (drake/solvers/ipopt_solver_internal.cc, lines 733 to 736 at commit 84f4fc5) evaluates each constraint in series, and each of these generic constraints is often expensive to compute.
Moreover, in the case of multiple-shooting algorithms, where these are the Direct Collocation / Direct Transcription constraints, there are many such uncoupled constraints that could be run concurrently, if the implementation permits.
Describe the solution you'd like
I'd like to re-implement the quoted block of code to evaluate some constraints concurrently, if permitted by the constraint, e.g.:
https://github.com/samzapo/drake/blob/45fe92d77e87525da86a1f51b319f1973b6612a2/solvers/ipopt_solver_internal.cc#L735-L795