diff --git a/doc/OnlineDocs/explanation/solvers/pyros.rst b/doc/OnlineDocs/explanation/solvers/pyros.rst index 7a3da450031..576b2e60f5b 100644 --- a/doc/OnlineDocs/explanation/solvers/pyros.rst +++ b/doc/OnlineDocs/explanation/solvers/pyros.rst @@ -14,19 +14,18 @@ The developers gratefully acknowledge support from the U.S. Department of Energy Methodology Overview ----------------------------- -Below is an overview of the type of optimization models PyROS can accommodate. +PyROS can accommodate optimization models with: +* **Continuous variables** only +* **Nonlinearities** (including **nonconvexities**) in both the + variables and uncertain parameters +* **First-stage degrees of freedom** and **second-stage degrees of freedom** +* **Equality constraints** defining state variables, + including implicitly defined state variables that cannot be + eliminated from the model via reformulation +* **Inequality constraints** in the degree-of-freedom and/or state variables -* PyROS is suitable for optimization models of **continuous variables** - that may feature non-linearities (including **non-convexities**) in - both the variables and uncertain parameters. -* PyROS can handle **equality constraints** defining state variables, - including implicit state variables that cannot be eliminated via - reformulation. -* PyROS allows for **two-stage** optimization problems that may - feature both first-stage and second-stage degrees of freedom. - -PyROS is designed to operate on deterministic models of the general form +Supported deterministic models can be written in the general form .. _deterministic-model: @@ -39,20 +38,21 @@ PyROS is designed to operate on deterministic models of the general form where: -* :math:`x \in \mathcal{X}` are the "design" variables - (i.e., first-stage degrees of freedom), - where :math:`\mathcal{X} \subseteq \mathbb{R}^{n_x}` is the feasible space defined by the model constraints - (including variable bounds specifications) referencing :math:`x` only. -* :math:`z \in \mathbb{R}^{n_z}` are the "control" variables - (i.e., second-stage degrees of freedom) +* :math:`x \in \mathcal{X}` are the first-stage degrees of freedom, + (or "design" variables,) + of which the feasible space :math:`\mathcal{X} \subseteq \mathbb{R}^{n_x}` + is defined by the model constraints + (including variable bounds specifications) referencing :math:`x` only +* :math:`z \in \mathbb{R}^{n_z}` are the second-stage degrees of freedom + (or "control" variables) * :math:`y \in \mathbb{R}^{n_y}` are the "state" variables * :math:`q \in \mathbb{R}^{n_q}` is the vector of model parameters considered uncertain, and :math:`q^{\text{nom}}` is the vector of nominal values - associated with those. -* :math:`f_1\left(x\right)` are the terms of the objective function that depend + associated with those +* :math:`f_1\left(x\right)` is the summand of the objective function that depends only on design variables -* :math:`f_2\left(x, z, y; q\right)` are the terms of the objective function - that depend on all variables and the uncertain parameters +* :math:`f_2\left(x, z, y; q\right)` is the summand of the objective function + that depends on all variables and the uncertain parameters * :math:`g_i\left(x, z, y; q\right)` is the :math:`i^\text{th}` inequality constraint function in set :math:`\mathcal{I}` (see :ref:`Note `) @@ -63,23 +63,13 @@ where: .. _var-bounds-to-ineqs: .. 
note:: - PyROS accepts models in which bounds are directly imposed on - ``Var`` objects representing components of the variables :math:`z` - and :math:`y`. These models are cast to - :ref:`the form above ` - by reformulating the bounds as inequality constraints. - -.. _unique-mapping: + PyROS accepts models in which there are: -.. note:: - A key requirement of PyROS is that each value of :math:`\left(x, z, q \right)` - maps to a unique value of :math:`y`, a property that is assumed to - be properly enforced by the system of equality constraints - :math:`\mathcal{J}`. - If the mapping is not unique, then the selection of 'state' - (i.e., not degree of freedom) variables :math:`y` is incorrect, - and one or more of the :math:`y` variables should be appropriately - redesignated to be part of either :math:`x` or :math:`z`. + 1. Bounds declared on the ``Var`` objects representing + components of the variable vectors :math:`z` and :math:`y`. + These bounds are reformulated to inequality constraints. + 2. Ranged inequality constraints. These are easily reformulated to + single inequality constraints. In order to cast the robust optimization counterpart of the :ref:`deterministic model `, @@ -89,7 +79,8 @@ any realization in a compact uncertainty set the nominal value :math:`q^{\text{nom}}`. The set :math:`\mathcal{Q}` may be **either continuous or discrete**. -Based on the above notation, the form of the robust counterpart addressed by PyROS is +Based on the above notation, +the form of the robust counterpart addressed by PyROS is .. math:: \begin{array}{ccclll} @@ -100,10 +91,66 @@ Based on the above notation, the form of the robust counterpart addressed by PyR & & & \displaystyle ~~ h_j\left(x, z, y, q\right) = 0 & & \forall\,j \in \mathcal{J} \end{array} -PyROS solves problems of this form using the -Generalized Robust Cutting-Set algorithm developed in [IAE+21]_. +PyROS accepts a deterministic model and accompanying uncertainty set +and then, using the Generalized Robust Cutting-Set algorithm developed +in [IAE+21]_, seeks a solution to the robust counterpart. +When using PyROS, please consider citing [IAE+21]_. + +.. _unique-mapping: + +.. note:: + A key assumption of PyROS is that + for every + :math:`x \in \mathcal{X}`, + :math:`z \in \mathbb{R}^{n_z}`, + :math:`q \in \mathcal{Q}`, + there exists a unique :math:`y \in \mathbb{R}^{n_y}` + for which :math:`(x, z, y, q)` + satisfies the equality constraints + :math:`h_j(x, z, y, q) = 0\,\,\forall\, j \in \mathcal{J}`. + If this assumption is not met, + then the selection of 'state' + (i.e., not degree of freedom) variables :math:`y` is incorrect, + and one or more of the :math:`y` variables should be appropriately + redesignated to be part of either :math:`x` or :math:`z`. + +PyROS Installation +----------------------------- +PyROS can be installed as follows: + +1. :doc:`Install Pyomo <../../installation>`. + PyROS is included in the Pyomo software package, at pyomo/contrib/pyros. +2. Install NumPy and SciPy with your preferred package manager; + both NumPy and SciPy are required dependencies of PyROS. + You may install NumPy and SciPy with, for example, ``conda``: + + :: + + conda install numpy scipy + + or ``pip``: + + :: + + pip install numpy scipy +3. 
(*Optional*) Test your installation: + install ``pytest`` and ``parameterized`` + with your preferred package manager (as in the previous step): + + :: + + pip install pytest parameterized + + You may then run the PyROS tests as follows: + + :: + + python -c 'import os, pytest, pyomo.contrib.pyros as p; pytest.main([os.path.dirname(p.__file__)])' + + Some tests involving solvers may fail or be skipped, + depending on the solver distributions (e.g., Ipopt, BARON, SCIP) + that you have pre-installed and licensed on your system. -When using PyROS, please consider citing the above paper. PyROS Required Inputs ----------------------------- @@ -128,7 +175,8 @@ These are more elaborately presented in the PyROS Solver Interface ----------------------------- -The PyROS solver is invoked through the :py:meth:`PyROS.solve` method. +The PyROS solver is invoked through the +:py:meth:`~pyomo.contrib.pyros.pyros.PyROS.solve` method. .. autoclass:: pyomo.contrib.pyros.PyROS :members: solve @@ -442,7 +490,7 @@ global NLP solver: .. note:: Additional NLP optimizers can be automatically used in the event the primary subordinate local or global optimizer passed - to the PyROS :meth:`~pyomo.contrib.pyros.PyROS.solve` method + to the PyROS :meth:`~pyomo.contrib.pyros.pyros.PyROS.solve` method does not successfully solve a subproblem to an appropriate termination condition. These alternative solvers are provided through the optional keyword arguments ``backup_local_solvers`` and ``backup_global_solvers``. @@ -551,7 +599,7 @@ The :ref:`preceding code snippet ` demonstrates how to retrieve this information. If we pass ``load_solution=True`` (the default setting) -to the :meth:`~pyomo.contrib.pyros.PyROS.solve` method, +to the :meth:`~pyomo.contrib.pyros.pyros.PyROS.solve` method, then the solution at which PyROS terminates will be loaded to the variables of the original deterministic model. Note that in the :ref:`preceding code snippet `, @@ -581,7 +629,7 @@ freedom are in fact second-stage degrees of freedom. PyROS handles second-stage degrees of freedom via the use of polynomial decision rules, of which the degree is controlled through the optional keyword argument ``decision_rule_order`` to the PyROS -:meth:`~pyomo.contrib.pyros.PyROS.solve` method. +:meth:`~pyomo.contrib.pyros.pyros.PyROS.solve` method. In this example, we select affine decision rules by setting ``decision_rule_order=1``: @@ -638,10 +686,10 @@ to an affine decision rule. Specifying Arguments Indirectly Through ``options`` """"""""""""""""""""""""""""""""""""""""""""""""""" Like other Pyomo solver interface methods, -:meth:`~pyomo.contrib.pyros.PyROS.solve` +:meth:`~pyomo.contrib.pyros.pyros.PyROS.solve` provides support for specifying options indirectly by passing a keyword argument ``options``, whose value must be a :class:`dict` -mapping names of arguments to :meth:`~pyomo.contrib.pyros.PyROS.solve` +mapping names of arguments to :meth:`~pyomo.contrib.pyros.pyros.PyROS.solve` to their desired values. For example, the ``solve()`` statement in the :ref:`two-stage problem snippet ` @@ -775,7 +823,7 @@ PyROS Solver Log Output The PyROS solver log output is controlled through the optional ``progress_logger`` argument, itself cast to a standard Python logger (:py:class:`logging.Logger`) object -at the outset of a :meth:`~pyomo.contrib.pyros.PyROS.solve` call. +at the outset of a :meth:`~pyomo.contrib.pyros.pyros.PyROS.solve` call. 
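+For example, a logger with its level set to ``logging.DEBUG``
+may be passed to ``solve()`` as follows.
+This is a minimal sketch: the logger name is arbitrary, and the
+model and argument objects referenced in the call
+(``m``, ``first_stage_variables``, ``local_solver``, and the like)
+are placeholders standing in for the objects constructed in the
+earlier examples.
+
+::
+
+  import logging
+
+  # hypothetical logger; configuring a logger of our own
+  # avoids mutating the default PyROS logger
+  custom_logger = logging.getLogger("pyros_example_logger")
+  custom_logger.setLevel(logging.DEBUG)
+
+  # placeholder arguments: reuse the model, variable lists,
+  # uncertainty set, and subordinate solvers set up previously
+  results = pyros_solver.solve(
+      model=m,
+      first_stage_variables=first_stage_variables,
+      second_stage_variables=second_stage_variables,
+      uncertain_params=uncertain_params,
+      uncertainty_set=box_uncertainty_set,
+      local_solver=local_solver,
+      global_solver=global_solver,
+      progress_logger=custom_logger,
+  )
+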
The level of detail of the solver log output can be adjusted by adjusting the level of the logger object; see :ref:`the following table `. @@ -816,7 +864,7 @@ for a basic tutorial, see the :doc:`logging HOWTO `. every master feasility, master, and DR polishing problem * Progress updates for the separation procedure * Separation subproblem initial point infeasibilities - * Summary of separation loop outcomes: performance constraints + * Summary of separation loop outcomes: second-stage inequality constraints violated, uncertain parameter scenario added to the master problem * Uncertain parameter scenarios added to the master problem @@ -837,12 +885,20 @@ Observe that the log contains the following information: * **Preprocessing information** (lines 39--41). Wall time required for preprocessing the deterministic model and associated components, - i.e. standardizing model components and adding the decision rule + i.e., standardizing model components and adding the decision rule variables and equations. * **Model component statistics** (lines 42--58). Breakdown of model component statistics. Includes components added by PyROS, such as the decision rule variables and equations. + The preprocessor may find that some second-stage variables + and state variables are mathematically + not adjustable to the uncertain parameters. + To this end, in the logs, the numbers of + adjustable second-stage variables and state variables + are included in parentheses, next to the total numbers + of second-stage variables and state variables, respectively; + note that "adjustable" has been abbreviated as "adj." * **Iteration log table** (lines 59--69). Summary information on the problem iterates and subproblem outcomes. The constituent columns are defined in detail in @@ -879,21 +935,21 @@ Observe that the log contains the following information: :linenos: ============================================================================== - PyROS: The Pyomo Robust Optimization Solver, v1.2.11. - Pyomo version: 6.7.2 + PyROS: The Pyomo Robust Optimization Solver, v1.3.0. + Pyomo version: 6.8.1 Commit hash: unknown - Invoked at UTC 2024-03-28T00:00:00.000000 - + Invoked at UTC 2024-11-01T00:00:00.000000 + Developed by: Natalie M. Isenberg (1), Jason A. F. Sherman (1), John D. Siirola (2), Chrysanthos E. Gounaris (1) (1) Carnegie Mellon University, Department of Chemical Engineering (2) Sandia National Laboratories, Center for Computing Research - + The developers gratefully acknowledge support from the U.S. Department of Energy's Institute for the Design of Advanced Energy Systems (IDAES). ============================================================================== ================================= DISCLAIMER ================================= - PyROS is still under development. + PyROS is still under development. Please provide feedback and/or report any issues by creating a ticket at https://github.com/Pyomo/pyomo/issues/new/choose ============================================================================== @@ -919,55 +975,56 @@ Observe that the log contains the following information: p_robustness={} ------------------------------------------------------------------------------ Preprocessing... - Done preprocessing; required wall time of 0.175s. + Done preprocessing; required wall time of 0.018s. 
------------------------------------------------------------------------------ - Model statistics: + Model Statistics: Number of variables : 62 Epigraph variable : 1 First-stage variables : 7 - Second-stage variables : 6 - State variables : 18 + Second-stage variables : 6 (6 adj.) + State variables : 18 (7 adj.) Decision rule variables : 30 Number of uncertain parameters : 4 - Number of constraints : 81 + Number of constraints : 52 Equality constraints : 24 Coefficient matching constraints : 0 + Other first-stage equations : 10 + Second-stage equations : 8 Decision rule equations : 6 - All other equality constraints : 18 - Inequality constraints : 57 - First-stage inequalities (incl. certain var bounds) : 10 - Performance constraints (incl. var bounds) : 47 + Inequality constraints : 28 + First-stage inequalities : 1 + Second-stage inequalities : 27 ------------------------------------------------------------------------------ Itn Objective 1-Stg Shift 2-Stg Shift #CViol Max Viol Wall Time (s) ------------------------------------------------------------------------------ - 0 3.5838e+07 - - 5 1.8832e+04 1.741 - 1 3.5838e+07 3.5184e-15 3.9404e-15 10 4.2516e+06 3.766 - 2 3.5993e+07 1.8105e-01 7.1406e-01 13 5.2004e+06 6.288 - 3 3.6285e+07 5.1968e-01 7.7753e-01 4 1.7892e+04 8.247 - 4 3.6285e+07 9.1166e-13 1.9702e-15 0 7.1157e-10g 11.456 + 0 3.5838e+07 - - 1 2.7000e+02 0.657 + 1 3.6087e+07 8.0199e-01 1.2807e-01 5 4.1852e+04 1.460 + 2 3.6125e+07 8.7068e-01 2.7098e-01 8 2.7711e+01 3.041 + 3 3.6174e+07 7.6526e-01 2.2357e-01 4 1.3893e+02 4.186 + 4 3.6285e+07 2.8923e-01 3.4064e-01 0 1.2670e-09g 7.162 ------------------------------------------------------------------------------ Robust optimal solution identified. ------------------------------------------------------------------------------ Timing breakdown: - + Identifier ncalls cumtime percall % ----------------------------------------------------------- - main 1 11.457 11.457 100.0 + main 1 7.163 7.163 100.0 ------------------------------------------------------ - dr_polishing 4 0.682 0.171 6.0 - global_separation 47 1.109 0.024 9.7 - local_separation 235 5.810 0.025 50.7 - master 5 1.353 0.271 11.8 - master_feasibility 4 0.247 0.062 2.2 - preprocessing 1 0.429 0.429 3.7 - other n/a 1.828 n/a 16.0 + dr_polishing 4 0.293 0.073 4.1 + global_separation 27 1.106 0.041 15.4 + local_separation 135 3.385 0.025 47.3 + master 5 1.396 0.279 19.5 + master_feasibility 4 0.155 0.039 2.2 + preprocessing 1 0.018 0.018 0.2 + other n/a 0.811 n/a 11.3 ====================================================== =========================================================== - + ------------------------------------------------------------------------------ Termination stats: Iterations : 5 - Solve time (wall s) : 11.457 + Solve time (wall s) : 7.163 Final objective value : 3.6285e+07 Termination condition : pyrosTerminationCondition.robust_optimal ------------------------------------------------------------------------------ @@ -1025,10 +1082,10 @@ The constituent columns are defined in the there are no second-stage variables, or the master problem of the current iteration is not solved successfully. * - #CViol - - Number of performance constraints found to be violated during + - Number of second-stage inequality constraints found to be violated during the separation step of the current iteration. 
- Unless a custom prioritization of the model's performance constraints - is specified (through the ``separation_priority_order`` argument), + Unless a custom prioritization of the model's second-stage inequality + constraints is specified (through the ``separation_priority_order`` argument), expect this number to trend downward as the iteration number increases. A "+" is appended if not all of the separation problems were solved successfully, either due to custom prioritization, a time out, @@ -1036,13 +1093,13 @@ The constituent columns are defined in the A dash ("-") is produced in lieu of a value if the separation routine is not invoked during the current iteration. * - Max Viol - - Maximum scaled performance constraint violation. + - Maximum scaled second-stage inequality constraint violation. Expect this value to trend downward as the iteration number increases. A 'g' is appended to the value if the separation problems were solved globally during the current iteration. A dash ("-") is produced in lieu of a value if the separation routine is not invoked during the current iteration, or if there are - no performance constraints. + no second-stage inequality constraints. * - Wall time (s) - Total time elapsed by the solver, in seconds, up to the end of the current iteration. diff --git a/pyomo/contrib/pyros/CHANGELOG.txt b/pyomo/contrib/pyros/CHANGELOG.txt index 52cd7a6db47..afae4b3db71 100644 --- a/pyomo/contrib/pyros/CHANGELOG.txt +++ b/pyomo/contrib/pyros/CHANGELOG.txt @@ -2,6 +2,19 @@ PyROS CHANGELOG =============== +------------------------------------------------------------------------------- +PyROS 1.3.0 12 Aug 2024 +------------------------------------------------------------------------------- +- Fix interactions between PyROS and NL writer-based solvers +- Overhaul the preprocessor +- Update subproblem formulations and modeling objects +- Update `UncertaintySet` class and pre-implemented subclasses to + facilitate new changes to the subproblems +- Update documentation and logging system in light of new preprocessor + and subproblem changes +- Make all tests more rigorous and extensive + + ------------------------------------------------------------------------------- PyROS 1.2.11 17 Mar 2024 ------------------------------------------------------------------------------- diff --git a/pyomo/contrib/pyros/__init__.py b/pyomo/contrib/pyros/__init__.py index 4e134ef1166..54f3d1623c6 100644 --- a/pyomo/contrib/pyros/__init__.py +++ b/pyomo/contrib/pyros/__init__.py @@ -10,7 +10,7 @@ # ___________________________________________________________________________ from pyomo.contrib.pyros.pyros import PyROS -from pyomo.contrib.pyros.pyros import ObjectiveType, pyrosTerminationCondition +from pyomo.contrib.pyros.util import ObjectiveType, pyrosTerminationCondition from pyomo.contrib.pyros.uncertainty_sets import ( UncertaintySet, EllipsoidalSet, diff --git a/pyomo/contrib/pyros/config.py b/pyomo/contrib/pyros/config.py index 217172ce012..5abc61536cb 100644 --- a/pyomo/contrib/pyros/config.py +++ b/pyomo/contrib/pyros/config.py @@ -20,7 +20,11 @@ from pyomo.core.base import Var, VarData from pyomo.core.base.param import Param, ParamData from pyomo.opt import SolverFactory -from pyomo.contrib.pyros.util import ObjectiveType, setup_pyros_logger +from pyomo.contrib.pyros.util import ( + ObjectiveType, + setup_pyros_logger, + standardize_component_data, +) from pyomo.contrib.pyros.uncertainty_sets import UncertaintySet @@ -132,24 +136,6 @@ def __init__( self.cdatatype_validator = 
cdatatype_validator self.allow_repeats = allow_repeats - def standardize_ctype_obj(self, obj): - """ - Standardize object of type ``self.ctype`` to list - of objects of type ``self.cdatatype``. - """ - if self.ctype_validator is not None: - self.ctype_validator(obj) - return list(obj.values()) - - def standardize_cdatatype_obj(self, obj): - """ - Standardize object of type ``self.cdatatype`` to - ``[obj]``. - """ - if self.cdatatype_validator is not None: - self.cdatatype_validator(obj) - return [obj] - def __call__(self, obj, from_iterable=None, allow_repeats=None): """ Cast object to a flat list of Pyomo component data type @@ -173,40 +159,15 @@ def __call__(self, obj, from_iterable=None, allow_repeats=None): ValueError If the resulting list contains duplicate entries. """ - if allow_repeats is None: - allow_repeats = self.allow_repeats - - if isinstance(obj, self.ctype): - ans = self.standardize_ctype_obj(obj) - elif isinstance(obj, self.cdatatype): - ans = self.standardize_cdatatype_obj(obj) - elif isinstance(obj, Iterable) and not isinstance(obj, str): - ans = [] - for item in obj: - ans.extend(self.__call__(item, from_iterable=obj)) - else: - from_iterable_qual = ( - f" (entry of iterable {from_iterable})" - if from_iterable is not None - else "" - ) - raise TypeError( - f"Input object {obj!r}{from_iterable_qual} " - "is not of valid component type " - f"{self.ctype.__name__} or component data type " - f"{self.cdatatype.__name__}." - ) - - # check for duplicates if desired - if not allow_repeats and len(ans) != len(ComponentSet(ans)): - comp_name_list = [comp.name for comp in ans] - raise ValueError( - f"Standardized component list {comp_name_list} " - f"derived from input {obj} " - "contains duplicate entries." - ) - - return ans + return standardize_component_data( + obj=obj, + valid_ctype=self.ctype, + valid_cdatatype=self.cdatatype, + ctype_validator=self.ctype_validator, + cdatatype_validator=self.cdatatype_validator, + allow_repeats=allow_repeats, + from_iterable=from_iterable, + ) def domain_name(self): """Return str briefly describing domain encompassed by self.""" diff --git a/pyomo/contrib/pyros/master_problem_methods.py b/pyomo/contrib/pyros/master_problem_methods.py index 2af38c1d582..f05fd21e388 100644 --- a/pyomo/contrib/pyros/master_problem_methods.py +++ b/pyomo/contrib/pyros/master_problem_methods.py @@ -10,165 +10,206 @@ # ___________________________________________________________________________ """ -Functions for handling the construction and solving of the GRCS master problem via ROSolver +Functions for construction and solution of the PyROS master problem. 
+Functions for construction and solution of the PyROS master problem.
""" -from pyomo.core.base import ( - ConcreteModel, - Block, - Var, - Objective, - Constraint, - ConstraintList, - SortComponents, -) -from pyomo.opt import TerminationCondition as tc -from pyomo.opt import SolverResults -from pyomo.core.expr import value +import os + +from pyomo.common.collections import ComponentMap, ComponentSet +from pyomo.common.modeling import unique_component_name +from pyomo.core import TransformationFactory +from pyomo.core.base import ConcreteModel, Block, Var, Objective, Constraint from pyomo.core.base.set_types import NonNegativeIntegers, NonNegativeReals +from pyomo.core.expr import identify_variables, value +from pyomo.core.util import prod +from pyomo.opt import TerminationCondition as tc +from pyomo.repn.standard_repn import generate_standard_repn + +from pyomo.contrib.pyros.solve_data import MasterResults from pyomo.contrib.pyros.util import ( call_solver, - selective_clone, + DR_POLISHING_PARAM_PRODUCT_ZERO_TOL, + enforce_dr_degree, + get_dr_expression, + check_time_limit_reached, + generate_all_decision_rule_var_data_objects, ObjectiveType, pyrosTerminationCondition, - process_termination_condition_master_problem, - adjust_solver_time_settings, - revert_solver_max_time_adjustment, - get_main_elapsed_time, + TIC_TOC_SOLVE_TIME_ATTR, ) -from pyomo.contrib.pyros.solve_data import MasterProblemData, MasterResult -from pyomo.opt.results import check_optimal_termination -from pyomo.core.expr.visitor import replace_expressions, identify_variables -from pyomo.common.collections import ComponentMap, ComponentSet -from pyomo.repn.standard_repn import generate_standard_repn -from pyomo.core import TransformationFactory -import itertools as it -import os -from copy import deepcopy -from pyomo.common.errors import ApplicationError -from pyomo.common.modeling import unique_component_name - -from pyomo.common.timing import TicTocTimer -from pyomo.contrib.pyros.util import TIC_TOC_SOLVE_TIME_ATTR, enforce_dr_degree - - -def initial_construct_master(model_data): - """ - Constructs the iteration 0 master problem - return: a MasterProblemData object containing the master_model object - """ - m = ConcreteModel() - m.scenarios = Block(NonNegativeIntegers, NonNegativeIntegers) - - master_data = MasterProblemData() - master_data.original = model_data.working_model.clone() - master_data.master_model = m - master_data.timing = model_data.timing - - return master_data -def get_state_vars(model, iterations): +def construct_initial_master_problem(model_data): """ - Obtain the state variables of a two-stage model - for a given (sequence of) iterations corresponding - to model blocks. + Construct the initial master problem model object + from the preprocessed working model. Parameters ---------- - model : ConcreteModel - PyROS model. - iterations : iterable - Iterations to consider. + model_data : model data object + Main model data object, + containing the preprocessed working model. Returns ------- - iter_state_var_map : dict - Mapping from iterations to list(s) of state vars. + master_model : ConcreteModel + Initial master problem model object. + Contains a single scenario block fully cloned from + the working model. 
""" - iter_state_var_map = dict() - for itn in iterations: - state_vars = [ - var for blk in model.scenarios[itn, :] for var in blk.util.state_vars - ] - iter_state_var_map[itn] = state_vars + master_model = ConcreteModel() + master_model.scenarios = Block(NonNegativeIntegers, NonNegativeIntegers) + add_scenario_block_to_master_problem( + master_model=master_model, + scenario_idx=(0, 0), + param_realization=model_data.config.nominal_uncertain_param_vals, + from_block=model_data.working_model, + clone_first_stage_components=True, + ) + + # epigraph Objective was not added during preprocessing, + # as we wanted to add it to the root block of the master + # model rather than to the model to prevent + # duplication across scenario sub-blocks + master_model.epigraph_obj = Objective( + expr=master_model.scenarios[0, 0].first_stage.epigraph_var + ) - return iter_state_var_map + return master_model -def construct_master_feasibility_problem(model_data, config): +def add_scenario_block_to_master_problem( + master_model, + scenario_idx, + param_realization, + from_block, + clone_first_stage_components, +): """ - Construct a slack-variable based master feasibility model. - Initialize all model variables appropriately, and scale slack variables - as well. + Add new scenario block to the master model. Parameters ---------- - model_data : MasterProblemData + master_model : ConcreteModel + Master model. + scenario_idx : tuple + Index of ``master_model.scenarios`` for the new block. + param_realization : Iterable of numeric type + Uncertain parameter realization for new block. + from_block : BlockData + Block from which to transfer attributes. + This can be an existing scenario block, or a block + with the same hierarchical structure as the + preprocessed working model. + clone_first_stage_components : bool + True to clone first-stage variables + when transferring attributes to the new block + to the new block (as opposed to using the objects as + they are in `from_block`), False otherwise. + """ + # Note for any of the Vars not copied: + # - if Var is not a member of an indexed var, then + # the 'name' attribute changes from + # '{from_block.name}.{var.name}' + # to 'scenarios[{scenario_idx}].{var.name}' + # - otherwise, the name stays the same + memo = dict() + if not clone_first_stage_components: + nonadjustable_comps = from_block.all_nonadjustable_variables + memo = {id(comp): comp for comp in nonadjustable_comps} + + # we will clone the first-stage constraints + # (mostly to prevent symbol map name clashes). + # the duplicate constraints are redundant. + # consider deactivating these constraints in the + # off-nominal blocks? + + new_block = from_block.clone(memo=memo) + master_model.scenarios[scenario_idx].transfer_attributes_from(new_block) + + # update uncertain parameter values in new block + new_uncertain_params = master_model.scenarios[scenario_idx].uncertain_params + for param, val in zip(new_uncertain_params, param_realization): + param.set_value(val) + + # deactivate the first-stage constraints: they are duplicate + if scenario_idx != (0, 0): + new_blk = master_model.scenarios[scenario_idx] + for con in new_blk.first_stage.inequality_cons.values(): + con.deactivate() + for con in new_blk.first_stage.equality_cons.values(): + con.deactivate() + + +def construct_master_feasibility_problem(master_data): + """ + Construct slack variable minimization problem from the master + model. 
+
+    Slack variables are added only to the second-stage
+    inequality constraints of the blocks added for the
+    current PyROS iteration.
+
+    Parameters
+    ----------
+    master_data : MasterProblemData
         Master problem data.
-    config : ConfigDict
-        PyROS solver config.
 
     Returns
     -------
-    model : ConcreteModel
+    slack_model : ConcreteModel
         Slack variable model.
     """
-
-    # clone master model. current state:
-    # - variables for all but newest block are set to values from
-    #   master solution from previous iteration
-    # - variables for newest block are set to values from separation
-    #   solution chosen in previous iteration
-    model = model_data.master_model.clone()
-
-    # obtain mapping from master problem to master feasibility
-    # problem variables
-    varmap_name = unique_component_name(model_data.master_model, 'pyros_var_map')
+    # to prevent use of find_component when copying variable values
+    # from the slack model to the master problem later, we will
+    # map corresponding variables before/during slack model construction
+    varmap_name = unique_component_name(master_data.master_model, 'pyros_var_map')
     setattr(
-        model_data.master_model,
+        master_data.master_model,
         varmap_name,
-        list(model_data.master_model.component_data_objects(Var)),
+        list(master_data.master_model.component_data_objects(Var)),
     )
-    model = model_data.master_model.clone()
-    model_data.feasibility_problem_varmap = list(
-        zip(getattr(model_data.master_model, varmap_name), getattr(model, varmap_name))
+
+    slack_model = master_data.master_model.clone()
+
+    master_data.feasibility_problem_varmap = list(
+        zip(
+            getattr(master_data.master_model, varmap_name),
+            getattr(slack_model, varmap_name),
+        )
     )
-    delattr(model_data.master_model, varmap_name)
-    delattr(model, varmap_name)
+    delattr(master_data.master_model, varmap_name)
+    delattr(slack_model, varmap_name)
 
-    for obj in model.component_data_objects(Objective):
+    for obj in slack_model.component_data_objects(Objective):
         obj.deactivate()
 
-    iteration = model_data.iteration
+    iteration = master_data.iteration
 
-    # add slacks only to inequality constraints for the newest
-    # master block. these should be the only constraints which
+    # add slacks only to second-stage inequality constraints for the
+    # newest master block(s). 
+ # these should be the only constraints that # may have been violated by the previous master and separation # solution(s) targets = [] - for blk in model.scenarios[iteration, :]: - targets.extend( - [ - con - for con in blk.component_data_objects( - Constraint, active=True, descend_into=True - ) - if not con.equality - ] - ) + for blk in slack_model.scenarios[iteration, :]: + targets.extend(blk.second_stage.inequality_cons.values()) - # retain original constraint expressions - # (for slack initialization and scaling) + # retain original constraint expressions before adding slacks + # (to facilitate slack initialization and scaling) pre_slack_con_exprs = ComponentMap((con, con.body - con.upper) for con in targets) # add slack variables and objective # inequalities g(v) <= b become g(v) - s^- <= b - TransformationFactory("core.add_slack_variables").apply_to(model, targets=targets) + TransformationFactory("core.add_slack_variables").apply_to( + slack_model, targets=targets + ) slack_vars = ComponentSet( - model._core_add_slack_variables.component_data_objects(Var, descend_into=True) + slack_model._core_add_slack_variables.component_data_objects( + Var, descend_into=True + ) ) - # initialize and scale slack variables + # initialize slack variables for con in pre_slack_con_exprs: # get mapping from slack variables to their (linear) # coefficients (+/-1) in the updated constraint expressions @@ -179,7 +220,6 @@ def construct_master_feasibility_problem(model_data, config): if var in slack_vars: slack_var_coef_map[var] = repn.linear_coefs[idx] - slack_substitution_map = dict() for slack_var in slack_var_coef_map: # coefficient determines whether the slack # is a +ve or -ve slack @@ -188,26 +228,12 @@ def construct_master_feasibility_problem(model_data, config): else: con_slack = max(0, -value(pre_slack_con_exprs[con])) - # initialize slack variable, evaluate scaling coefficient slack_var.set_value(con_slack) - scaling_coeff = 1 - - # update expression replacement map for slack scaling - slack_substitution_map[id(slack_var)] = scaling_coeff * slack_var - - # finally, scale slack(s) - con.set_value( - ( - replace_expressions(con.lower, slack_substitution_map), - replace_expressions(con.body, slack_substitution_map), - replace_expressions(con.upper, slack_substitution_map), - ) - ) - return model + return slack_model -def solve_master_feasibility_problem(model_data, config): +def solve_master_feasibility_problem(master_data): """ Solve a slack variable-based feasibility model derived from the master problem. Initialize the master problem @@ -216,20 +242,19 @@ def solve_master_feasibility_problem(model_data, config): Parameters ---------- - model_data : MasterProblemData + master_data : MasterProblemData Master problem data. - config : ConfigDict - PyROS solver settings. Returns ------- results : SolverResults Solver results. 
""" - model = construct_master_feasibility_problem(model_data, config) + model = construct_master_feasibility_problem(master_data) active_obj = next(model.component_data_objects(Objective, active=True)) + config = master_data.config config.progress_logger.debug("Solving master feasibility problem") config.progress_logger.debug( f" Initial objective (total slack): {value(active_obj)}" @@ -244,12 +269,12 @@ def solve_master_feasibility_problem(model_data, config): model=model, solver=solver, config=config, - timing_obj=model_data.timing, + timing_obj=master_data.timing, timer_name="main.master_feasibility", err_msg=( f"Optimizer {repr(solver)} encountered exception " "attempting to solve master feasibility problem in iteration " - f"{model_data.iteration}." + f"{master_data.iteration}." ), ) @@ -273,7 +298,7 @@ def solve_master_feasibility_problem(model_data, config): else: config.progress_logger.warning( "Could not successfully solve master feasibility problem " - f"of iteration {model_data.iteration} with primary subordinate " + f"of iteration {master_data.iteration} with primary subordinate " f"{'global' if config.solve_master_globally else 'local'} solver " "to acceptable level. " f"Termination stats:\n{results.solver}\n" @@ -281,23 +306,20 @@ def solve_master_feasibility_problem(model_data, config): ) # load master feasibility point to master model - for master_var, feas_var in model_data.feasibility_problem_varmap: + for master_var, feas_var in master_data.feasibility_problem_varmap: master_var.set_value(feas_var.value, skip_validation=True) return results -def construct_dr_polishing_problem(model_data, config): +def construct_dr_polishing_problem(master_data): """ - Construct DR polishing problem from most recently added - master problem. + Construct DR polishing problem from the master problem. Parameters ---------- - model_data : MasterProblemData + master_data : MasterProblemData Master problem data. - config : ConfigDict - PyROS solver settings. Returns ------- @@ -312,119 +334,150 @@ def construct_dr_polishing_problem(model_data, config): (including epigraph) fixed. Optimality of the polished DR with respect to the master objective is also enforced. 
""" - # clone master problem - master_model = model_data.master_model + master_model = master_data.master_model polishing_model = master_model.clone() nominal_polishing_block = polishing_model.scenarios[0, 0] - # fix first-stage variables (including epigraph, where applicable) - decision_rule_var_set = ComponentSet( - var - for indexed_dr_var in nominal_polishing_block.util.decision_rule_vars - for var in indexed_dr_var.values() - ) - first_stage_vars = nominal_polishing_block.util.first_stage_variables - for var in first_stage_vars: - if var not in decision_rule_var_set: - var.fix() - - # ensure master optimality constraint enforced - if config.objective_focus == ObjectiveType.worst_case: - polishing_model.zeta.fix() - else: - optimal_master_obj_value = value(polishing_model.obj) - polishing_model.nominal_optimality_con = Constraint( - expr=( - nominal_polishing_block.first_stage_objective - + nominal_polishing_block.second_stage_objective - <= optimal_master_obj_value - ) - ) - - # deactivate master problem objective - polishing_model.obj.deactivate() + nominal_eff_var_partitioning = nominal_polishing_block.effective_var_partitioning - decision_rule_vars = nominal_polishing_block.util.decision_rule_vars - nominal_polishing_block.util.polishing_vars = polishing_vars = [] - for idx, indexed_dr_var in enumerate(decision_rule_vars): - # declare auxiliary 'polishing' variables. + nondr_nonadjustable_vars = ( + nominal_eff_var_partitioning.first_stage_variables + # fixing epigraph variable constrains the problem + # to the optimal master problem solution set + + [nominal_polishing_block.first_stage.epigraph_var] + ) + for var in nondr_nonadjustable_vars: + var.fix() + + # deactivate original constraints that involved + # only vars that have been fixed. + # we do this mostly to ensure that the active equality constraints + # do not grossly outnumber the unfixed Vars + fixed_dr_vars = [ + var + for var in generate_all_decision_rule_var_data_objects(nominal_polishing_block) + if var.fixed + ] + fixed_nonadjustable_vars = ComponentSet(nondr_nonadjustable_vars + fixed_dr_vars) + for blk in polishing_model.scenarios.values(): + for con in blk.component_data_objects(Constraint, active=True): + vars_in_con = ComponentSet(identify_variables(con.body)) + if not (vars_in_con - fixed_nonadjustable_vars): + con.deactivate() + + # we will add the polishing objective later + polishing_model.epigraph_obj.deactivate() + + polishing_model.polishing_vars = polishing_vars = [] + indexed_dr_var_list = nominal_polishing_block.first_stage.decision_rule_vars + for idx, indexed_dr_var in enumerate(indexed_dr_var_list): + # auxiliary 'polishing' variables. 
# these are meant to represent the absolute values - # of the terms of DR polynomial + # of the terms of DR polynomial; + # we need these for the L1-norm indexed_polishing_var = Var( list(indexed_dr_var.keys()), domain=NonNegativeReals ) - nominal_polishing_block.add_component( - unique_component_name(nominal_polishing_block, f"dr_polishing_var_{idx}"), - indexed_polishing_var, - ) + polishing_model.add_component(f"dr_polishing_var_{idx}", indexed_polishing_var) polishing_vars.append(indexed_polishing_var) - dr_eq_var_zip = zip( - nominal_polishing_block.util.decision_rule_eqns, - polishing_vars, - nominal_polishing_block.util.second_stage_variables, - ) - nominal_polishing_block.util.polishing_abs_val_lb_cons = all_lb_cons = [] - nominal_polishing_block.util.polishing_abs_val_ub_cons = all_ub_cons = [] - for idx, (dr_eq, indexed_polishing_var, ss_var) in enumerate(dr_eq_var_zip): + # we need the DR expressions to set up the + # absolute value constraints and initialize the + # auxiliary polishing variables + eff_ss_var_to_dr_expr_pairs = [ + (ss_var, get_dr_expression(nominal_polishing_block, ss_var)) + for ss_var in nominal_eff_var_partitioning.second_stage_variables + ] + + dr_eq_var_zip = zip(polishing_vars, eff_ss_var_to_dr_expr_pairs) + polishing_model.polishing_abs_val_lb_cons = all_lb_cons = [] + polishing_model.polishing_abs_val_ub_cons = all_ub_cons = [] + for idx, (indexed_polishing_var, (ss_var, dr_expr)) in enumerate(dr_eq_var_zip): # set up absolute value constraint components polishing_absolute_value_lb_cons = Constraint(indexed_polishing_var.index_set()) polishing_absolute_value_ub_cons = Constraint(indexed_polishing_var.index_set()) - # add constraints to polishing model - nominal_polishing_block.add_component( - unique_component_name(polishing_model, f"polishing_abs_val_lb_con_{idx}"), - polishing_absolute_value_lb_cons, + # add indexed constraints to polishing model + polishing_model.add_component( + f"polishing_abs_val_lb_con_{idx}", polishing_absolute_value_lb_cons ) - nominal_polishing_block.add_component( - unique_component_name(polishing_model, f"polishing_abs_val_ub_con_{idx}"), - polishing_absolute_value_ub_cons, + polishing_model.add_component( + f"polishing_abs_val_ub_con_{idx}", polishing_absolute_value_ub_cons ) - # update list of absolute value cons + # update list of absolute value (i.e., polishing) cons all_lb_cons.append(polishing_absolute_value_lb_cons) all_ub_cons.append(polishing_absolute_value_ub_cons) - # get monomials; ensure second-stage variable term excluded - # - # the dr_eq is a linear sum where the first term is the - # second-stage variable: the remainder of the terms will be - # either MonomialTermExpressions or bare VarData - dr_expr_terms = dr_eq.body.args[:-1] - - for dr_eq_term in dr_expr_terms: - if dr_eq_term.is_expression_type(): - dr_var_in_term = dr_eq_term.args[-1] + for dr_monomial in dr_expr.args: + is_a_nonstatic_dr_term = dr_monomial.is_expression_type() + if is_a_nonstatic_dr_term: + # degree >= 1 monomial expression of form + # (product of uncertain params) * dr variable + dr_var_in_term = dr_monomial.args[-1] else: - dr_var_in_term = dr_eq_term - dr_var_in_term_idx = dr_var_in_term.index() + # the static term (intercept) + dr_var_in_term = dr_monomial - # get corresponding polishing variable + # we want the DR variable and corresponding polishing + # constraints to have the same index in the indexed + # components + dr_var_in_term_idx = dr_var_in_term.index() polishing_var = indexed_polishing_var[dr_var_in_term_idx] + # Fix 
DR variable if: + # (1) it has already been fixed from master due to + # DR efficiencies (already done) + # (2) coefficient of term + # (i.e. product of uncertain parameter values) + # in DR expression is 0 + # across all master blocks + dr_term_copies = [ + ( + scenario_blk.second_stage.decision_rule_eqns[idx].body.args[ + dr_var_in_term_idx + ] + ) + for scenario_blk in master_model.scenarios.values() + ] + all_copy_coeffs_zero = is_a_nonstatic_dr_term and all( + abs(value(prod(term.args[:-1]))) <= DR_POLISHING_PARAM_PRODUCT_ZERO_TOL + for term in dr_term_copies + ) + if all_copy_coeffs_zero: + # increment static DR variable value + # to maintain feasibility of the initial point + # as much as possible + static_dr_var_in_expr = dr_expr.args[0] + static_dr_var_in_expr.set_value( + value(static_dr_var_in_expr) + value(dr_monomial) + ) + dr_var_in_term.fix(0) + # add polishing constraints polishing_absolute_value_lb_cons[dr_var_in_term_idx] = ( - -polishing_var - dr_eq_term <= 0 + -polishing_var - dr_monomial <= 0 ) polishing_absolute_value_ub_cons[dr_var_in_term_idx] = ( - dr_eq_term - polishing_var <= 0 + dr_monomial - polishing_var <= 0 ) - # if DR var is fixed, then fix corresponding polishing - # variable, and deactivate the absolute value constraints - if dr_var_in_term.fixed: + # some DR variables may be fixed, + # due to the PyROS DR order efficiency instituted + # in the first few iterations. + # these need not be polished + if dr_var_in_term.fixed or not is_a_nonstatic_dr_term: polishing_var.fix() polishing_absolute_value_lb_cons[dr_var_in_term_idx].deactivate() polishing_absolute_value_ub_cons[dr_var_in_term_idx].deactivate() - # initialize polishing variable to absolute value of - # the DR term. polishing constraints should now be - # satisfied (to equality) at the initial point - polishing_var.set_value(abs(value(dr_eq_term))) + # ensure polishing var properly initialized + polishing_var.set_value(abs(value(dr_monomial))) - # polishing problem objective is taken to be 1-norm - # of DR monomials, or equivalently, sum of the polishing - # variables. + # L1-norm objective + # TODO: if dropping nonstatic terms, ensure the + # corresponding polishing variables are excluded + # from this expression polishing_model.polishing_obj = Objective( expr=sum(sum(polishing_var.values()) for polishing_var in polishing_vars) ) @@ -432,16 +485,14 @@ def construct_dr_polishing_problem(model_data, config): return polishing_model -def minimize_dr_vars(model_data, config): +def minimize_dr_vars(master_data): """ Polish decision rule of most recent master problem solution. Parameters ---------- - model_data : MasterProblemData + master_data : MasterProblemData Master problem data. - config : ConfigDict - PyROS solver settings. Returns ------- @@ -451,10 +502,10 @@ def minimize_dr_vars(model_data, config): True if polishing model was solved to acceptable level, False otherwise. 
""" + config = master_data.config + # create polishing NLP - polishing_model = construct_dr_polishing_problem( - model_data=model_data, config=config - ) + polishing_model = construct_dr_polishing_problem(master_data) if config.solve_master_globally: solver = config.global_solver @@ -474,12 +525,12 @@ def minimize_dr_vars(model_data, config): model=polishing_model, solver=solver, config=config, - timing_obj=model_data.timing, + timing_obj=master_data.timing, timer_name="main.dr_polishing", err_msg=( f"Optimizer {repr(solver)} encountered an exception " "attempting to solve decision rule polishing problem " - f"in iteration {model_data.iteration}" + f"in iteration {master_data.iteration}" ), ) @@ -499,7 +550,7 @@ def minimize_dr_vars(model_data, config): # continue with "unpolished" master model solution config.progress_logger.warning( "Could not successfully solve DR polishing problem " - f"of iteration {model_data.iteration} with primary subordinate " + f"of iteration {master_data.iteration} with primary subordinate " f"{'global' if config.solve_master_globally else 'local'} solver " "to acceptable level. " f"Termination stats:\n{results.solver}\n" @@ -511,102 +562,30 @@ def minimize_dr_vars(model_data, config): # variables to polishing model solution polishing_model.solutions.load_from(results) - for idx, blk in model_data.master_model.scenarios.items(): - ssv_zip = zip( - blk.util.second_stage_variables, - polishing_model.scenarios[idx].util.second_stage_variables, - ) - sv_zip = zip( - blk.util.state_vars, polishing_model.scenarios[idx].util.state_vars - ) - for master_ssv, polish_ssv in ssv_zip: - master_ssv.set_value(value(polish_ssv)) - for master_sv, polish_sv in sv_zip: - master_sv.set_value(value(polish_sv)) - - # update master problem decision rule variables + # update master problem variable values + for idx, blk in master_data.master_model.scenarios.items(): + master_adjustable_vars = blk.all_adjustable_variables + polishing_adjustable_vars = polishing_model.scenarios[ + idx + ].all_adjustable_variables + adjustable_vars_zip = zip(master_adjustable_vars, polishing_adjustable_vars) + for master_var, polish_var in adjustable_vars_zip: + master_var.set_value(value(polish_var)) dr_var_zip = zip( - blk.util.decision_rule_vars, - polishing_model.scenarios[idx].util.decision_rule_vars, + blk.first_stage.decision_rule_vars, + polishing_model.scenarios[idx].first_stage.decision_rule_vars, ) for master_dr, polish_dr in dr_var_zip: for mvar, pvar in zip(master_dr.values(), polish_dr.values()): mvar.set_value(value(pvar), skip_validation=True) config.progress_logger.debug(f" Optimized DR norm: {value(polishing_obj)}") - config.progress_logger.debug(" Polished master objective:") - - # print breakdown of objective value of polished master solution - if config.objective_focus == ObjectiveType.worst_case: - eval_obj_blk_idx = max( - model_data.master_model.scenarios.keys(), - key=lambda idx: value( - model_data.master_model.scenarios[idx].second_stage_objective - ), - ) - else: - eval_obj_blk_idx = (0, 0) - - # debugging: summarize objective breakdown - eval_obj_blk = model_data.master_model.scenarios[eval_obj_blk_idx] - config.progress_logger.debug( - " First-stage objective: " f"{value(eval_obj_blk.first_stage_objective)}" - ) - config.progress_logger.debug( - " Second-stage objective: " f"{value(eval_obj_blk.second_stage_objective)}" - ) - polished_master_obj = value( - eval_obj_blk.first_stage_objective + eval_obj_blk.second_stage_objective - ) - config.progress_logger.debug(f" 
Objective: {polished_master_obj}") + log_master_solve_results(polishing_model, config, results, desc="polished") return results, True -def add_p_robust_constraint(model_data, config): - """ - p-robustness--adds constraints to the master problem ensuring that the - optimal k-th iteration solution is within (1+rho) of the nominal - objective. The parameter rho is specified by the user and should be between. - """ - rho = config.p_robustness['rho'] - model = model_data.master_model - block_0 = model.scenarios[0, 0] - frac_nom_cost = (1 + rho) * ( - block_0.first_stage_objective + block_0.second_stage_objective - ) - - for block_k in model.scenarios[model_data.iteration, :]: - model.p_robust_constraints.add( - block_k.first_stage_objective + block_k.second_stage_objective - <= frac_nom_cost - ) - return - - -def add_scenario_to_master(model_data, violations): - """ - Add block to master, without cloning the master_model.first_stage_variables - """ - - m = model_data.master_model - i = max(m.scenarios.keys())[0] + 1 - - # === Add a block to master for each violation - idx = 0 # Only supporting adding single violation back to master in v1 - new_block = selective_clone( - m.scenarios[0, 0], m.scenarios[0, 0].util.first_stage_variables - ) - m.scenarios[i, idx].transfer_attributes_from(new_block) - - # === Set uncertain params in new block(s) to correct value(s) - for j, p in enumerate(m.scenarios[i, idx].util.uncertain_params): - p.set_value(violations[j]) - - return - - -def get_master_dr_degree(model_data, config): +def get_master_dr_degree(master_data): """ Determine DR polynomial degree to enforce based on the iteration number. @@ -620,35 +599,31 @@ def get_master_dr_degree(model_data, config): Parameters ---------- - model_data : MasterProblemData + master_data : MasterProblemData Master problem data. - config : ConfigDict - PyROS solver options. Returns ------- int DR order, or polynomial degree, to enforce. """ - if model_data.iteration == 0: + if master_data.iteration == 0: return 0 - elif model_data.iteration <= len(config.uncertain_params): - return min(1, config.decision_rule_order) + elif master_data.iteration <= len(master_data.config.uncertain_params): + return min(1, master_data.config.decision_rule_order) else: - return min(2, config.decision_rule_order) + return min(2, master_data.config.decision_rule_order) -def higher_order_decision_rule_efficiency(model_data, config): +def higher_order_decision_rule_efficiency(master_data): """ Enforce DR coefficient variable efficiencies for master problem-like formulation. Parameters ---------- - model_data : MasterProblemData + master_data : MasterProblemData Master problem data. - config : ConfigDict - PyROS solver options. Note ---- @@ -658,168 +633,184 @@ def higher_order_decision_rule_efficiency(model_data, config): to be set depends on the iteration number; see ``get_master_dr_degree``. """ - order_to_enforce = get_master_dr_degree(model_data, config) + order_to_enforce = get_master_dr_degree(master_data) enforce_dr_degree( - blk=model_data.master_model.scenarios[0, 0], - config=config, + working_blk=master_data.master_model.scenarios[0, 0], + config=master_data.config, degree=order_to_enforce, ) -def solver_call_master(model_data, config, solver, solve_data): +def log_master_solve_results(master_model, config, results, desc="Optimized"): + """ + Log master problem solve results. 
+ """ + if config.objective_focus == ObjectiveType.worst_case: + eval_obj_blk_idx = max( + master_model.scenarios.keys(), + key=lambda idx: value(master_model.scenarios[idx].second_stage_objective), + ) + else: + eval_obj_blk_idx = (0, 0) + + eval_obj_blk = master_model.scenarios[eval_obj_blk_idx] + config.progress_logger.debug(f" {desc.capitalize()} master objective breakdown:") + config.progress_logger.debug( + f" First-stage objective: {value(eval_obj_blk.first_stage_objective)}" + ) + config.progress_logger.debug( + f" Second-stage objective: {value(eval_obj_blk.second_stage_objective)}" + ) + master_obj = eval_obj_blk.full_objective + config.progress_logger.debug(f" Overall Objective: {value(master_obj)}") + config.progress_logger.debug( + f" Termination condition: {results.solver.termination_condition}" + ) + config.progress_logger.debug( + f" Solve time: {getattr(results.solver, TIC_TOC_SOLVE_TIME_ATTR)}s" + ) + + +def process_termination_condition_master_problem(config, results): """ - Invoke subsolver(s) on PyROS master problem. + Process master problem solve termination condition. Parameters ---------- - model_data : MasterProblemData - Container for current master problem and related data. config : ConfigDict - PyROS solver settings. - solver : solver type - Primary subordinate optimizer with which to solve - the master problem. This may be a local or global - NLP solver. - solve_data : MasterResult - Master problem results object. May be empty or contain - master feasibility problem results. + PyROS solver options. + results : SolverResults + Solver results. Returns ------- - master_soln : MasterResult - Master problem results object, containing master - model and subsolver results. + optimality_acceptable : bool + True if problem was solved to an acceptable optimality target, + False otherwise. + infeasible : bool + True if problem was found to be infeasible, False otherwise. + + Raises + ------ + NotImplementedError + If a particular solver termination is not supported by + PyROS. """ - nlp_model = model_data.master_model - master_soln = solve_data - solver_term_cond_dict = {} + locally_acceptable = [tc.optimal, tc.locallyOptimal, tc.globallyOptimal] + globally_acceptable = [tc.optimal, tc.globallyOptimal] + robust_infeasible = [tc.infeasible] + try_backups = [ + tc.feasible, + tc.maxTimeLimit, + tc.maxIterations, + tc.maxEvaluations, + tc.minStepLength, + tc.minFunctionValue, + tc.other, + tc.solverFailure, + tc.internalSolverError, + tc.error, + tc.unbounded, + tc.infeasibleOrUnbounded, + tc.invalidProblem, + tc.intermediateNonInteger, + tc.noSolution, + tc.unknown, + ] + + termination_condition = results.solver.termination_condition + optimality_acceptable = ( + (termination_condition in globally_acceptable) + if config.solve_master_globally + else (termination_condition in locally_acceptable) + ) + infeasible = termination_condition in robust_infeasible + try_backup_solver = termination_condition in try_backups + + unsupported_termination = not ( + optimality_acceptable or try_backup_solver or infeasible + ) + if unsupported_termination: + solve_type = "global" if config.solve_master_globally else "local" + raise NotImplementedError( + f"Processing of termination condition {termination_condition} " + f"for attempt at {solve_type} solution of master problem " + "is currently not supported by PyROS. " + "Please report this issue to the PyROS developers." 
+ ) + + return optimality_acceptable, infeasible + + +def solver_call_master(master_data): + """ + Invoke subsolver(s) on PyROS master problem, + and update the MasterResults object accordingly. + + Parameters + ---------- + master_data : MasterProblemData + Container for current master problem and related data. + + Returns + ------- + master_soln : MasterResults + Master solution results object. + """ + config = master_data.config + master_model = master_data.master_model + master_soln = MasterResults( + master_model=master_model, pyros_termination_condition=None + ) if config.solve_master_globally: - solvers = [solver] + config.backup_global_solvers + solvers = [config.global_solver] + config.backup_global_solvers else: - solvers = [solver] + config.backup_local_solvers - - higher_order_decision_rule_efficiency(model_data=model_data, config=config) + solvers = [config.local_solver] + config.backup_local_solvers solve_mode = "global" if config.solve_master_globally else "local" config.progress_logger.debug("Solving master problem") + higher_order_decision_rule_efficiency(master_data) + for idx, opt in enumerate(solvers): if idx > 0: config.progress_logger.warning( f"Invoking backup solver {opt!r} " f"(solver {idx + 1} of {len(solvers)}) for " - f"master problem of iteration {model_data.iteration}." + f"master problem of iteration {master_data.iteration}." ) results = call_solver( - model=nlp_model, + model=master_model, solver=opt, config=config, - timing_obj=model_data.timing, + timing_obj=master_data.timing, timer_name="main.master", err_msg=( f"Optimizer {repr(opt)} ({idx + 1} of {len(solvers)}) " "encountered exception attempting to " - f"solve master problem in iteration {model_data.iteration}" + f"solve master problem in iteration {master_data.iteration}" ), ) - optimal_termination = check_optimal_termination(results) - infeasible = results.solver.termination_condition == tc.infeasible - - if optimal_termination: - nlp_model.solutions.load_from(results) - - # record master problem termination conditions - # for this particular subsolver - # pyros termination condition is determined later in the - # algorithm - solver_term_cond_dict[str(opt)] = str(results.solver.termination_condition) - master_soln.termination_condition = results.solver.termination_condition - master_soln.pyros_termination_condition = None - (try_backup, _) = master_soln.master_subsolver_results = ( + master_soln.master_results_list.append(results) + optimality_acceptable, infeasible = ( process_termination_condition_master_problem(config=config, results=results) ) - - master_soln.nominal_block = nlp_model.scenarios[0, 0] - master_soln.results = results - master_soln.master_model = nlp_model - - # if model was solved successfully, update/record the results - # (nominal block DOF variable and objective values) - if not try_backup and not infeasible: - master_soln.fsv_vals = list( - v.value for v in nlp_model.scenarios[0, 0].util.first_stage_variables - ) - if config.objective_focus is ObjectiveType.nominal: - master_soln.ssv_vals = list( - v.value - for v in nlp_model.scenarios[0, 0].util.second_stage_variables - ) - master_soln.second_stage_objective = value( - nlp_model.scenarios[0, 0].second_stage_objective - ) - else: - idx = max(nlp_model.scenarios.keys())[0] - master_soln.ssv_vals = list( - v.value - for v in nlp_model.scenarios[idx, 0].util.second_stage_variables - ) - master_soln.second_stage_objective = value( - nlp_model.scenarios[idx, 0].second_stage_objective - ) - master_soln.first_stage_objective = 
value( - nlp_model.scenarios[0, 0].first_stage_objective + time_out = check_time_limit_reached(master_data.timing, config) + + if optimality_acceptable: + master_model.solutions.load_from(results) + log_master_solve_results(master_model, config, results) + if time_out: + master_soln.pyros_termination_condition = pyrosTerminationCondition.time_out + if infeasible: + master_soln.pyros_termination_condition = ( + pyrosTerminationCondition.robust_infeasible ) - # debugging: log breakdown of master objective - if config.objective_focus == ObjectiveType.worst_case: - eval_obj_blk_idx = max( - nlp_model.scenarios.keys(), - key=lambda idx: value( - nlp_model.scenarios[idx].second_stage_objective - ), - ) - else: - eval_obj_blk_idx = (0, 0) - - eval_obj_blk = nlp_model.scenarios[eval_obj_blk_idx] - config.progress_logger.debug(" Optimized master objective breakdown:") - config.progress_logger.debug( - f" First-stage objective: {value(eval_obj_blk.first_stage_objective)}" - ) - config.progress_logger.debug( - f" Second-stage objective: {value(eval_obj_blk.second_stage_objective)}" - ) - master_obj = ( - eval_obj_blk.first_stage_objective + eval_obj_blk.second_stage_objective - ) - config.progress_logger.debug(f" Objective: {value(master_obj)}") - config.progress_logger.debug( - f" Termination condition: {results.solver.termination_condition}" - ) - config.progress_logger.debug( - f" Solve time: {getattr(results.solver, TIC_TOC_SOLVE_TIME_ATTR)}s" - ) - - master_soln.nominal_block = nlp_model.scenarios[0, 0] - master_soln.results = results - master_soln.master_model = nlp_model - - # if PyROS time limit exceeded, exit loop and return solution - elapsed = get_main_elapsed_time(model_data.timing) - if config.time_limit: - if elapsed >= config.time_limit: - try_backup = False - master_soln.master_subsolver_results = ( - None, - pyrosTerminationCondition.time_out, - ) - master_soln.pyros_termination_condition = ( - pyrosTerminationCondition.time_out - ) - - if not try_backup: + final_result_established = optimality_acceptable or time_out or infeasible + if final_result_established: return master_soln # all solvers have failed to return an acceptable status. @@ -835,13 +826,13 @@ def solver_call_master(model_data, config, solver, solve_data): ( config.uncertainty_set.type + "_" - + model_data.original.name + + master_data.original_model_name + "_master_" - + str(model_data.iteration) + + str(master_data.iteration) + ".bar" ), ) - nlp_model.write( + master_model.write( output_problem_path, io_options={'symbolic_solver_labels': True} ) serialization_msg = ( @@ -850,7 +841,7 @@ def solver_call_master(model_data, config, solver, solve_data): ) deterministic_model_qual = ( - " (i.e., the deterministic model)" if model_data.iteration == 0 else "" + " (i.e., the deterministic model)" if master_data.iteration == 0 else "" ) deterministic_msg = ( ( @@ -858,16 +849,20 @@ def solver_call_master(model_data, config, solver, solve_data): f"is solvable by at least one of the subordinate {solve_mode} " "optimizers provided." 
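+            # (for reference, the file written above would be named,
+            # e.g., "box_mymodel_master_0.bar" for a hypothetical
+            # deterministic model named "mymodel" under a box
+            # uncertainty set)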
) - if model_data.iteration == 0 + if master_data.iteration == 0 else "" ) + master_soln.pyros_termination_condition = pyrosTerminationCondition.subsolver_error + subsolver_termination_conditions = [ + res.solver.termination_condition for res in master_soln.master_results_list + ] config.progress_logger.warning( f"Could not successfully solve master problem of iteration " - f"{model_data.iteration}{deterministic_model_qual} with any of the " + f"{master_data.iteration}{deterministic_model_qual} with any of the " f"provided subordinate {solve_mode} optimizers. " f"(Termination statuses: " - f"{[term_cond for term_cond in solver_term_cond_dict.values()]}.)" + f"{[term_cond for term_cond in subsolver_termination_conditions]}.)" f"{deterministic_msg}" f"{serialization_msg}" ) @@ -875,44 +870,78 @@ def solver_call_master(model_data, config, solver, solve_data): return master_soln -def solve_master(model_data, config): +def solve_master(master_data): """ - Solve the master problem + Solve the master problem. + + Returns + ------- + master_soln : MasterResults + Master problem solve results. """ - master_soln = MasterResult() - - # no master feas problem for iteration 0 - if model_data.iteration > 0: - results = solve_master_feasibility_problem(model_data, config) - master_soln.feasibility_problem_results = results - - # if pyros time limit reached, load time out status - # to master results and return to caller - elapsed = get_main_elapsed_time(model_data.timing) - if config.time_limit: - if elapsed >= config.time_limit: - # load master model - master_soln.master_model = model_data.master_model - master_soln.nominal_block = model_data.master_model.scenarios[0, 0] - - # empty results object, with master solve time of zero - master_soln.results = SolverResults() - setattr(master_soln.results.solver, TIC_TOC_SOLVE_TIME_ATTR, 0) - - # PyROS time out status - master_soln.pyros_termination_condition = ( - pyrosTerminationCondition.time_out - ) - master_soln.master_subsolver_results = ( - None, - pyrosTerminationCondition.time_out, - ) - return master_soln + feasibility_problem_results = None + time_out_after_feasibility = False + if master_data.iteration > 0: + feasibility_problem_results = solve_master_feasibility_problem(master_data) + time_out_after_feasibility = check_time_limit_reached( + master_data.timing, master_data.config + ) - solver = ( - config.global_solver if config.solve_master_globally else config.local_solver - ) + if time_out_after_feasibility: + master_soln = MasterResults( + master_model=master_data.master_model, + feasibility_problem_results=feasibility_problem_results, + master_results_list=None, + pyros_termination_condition=pyrosTerminationCondition.time_out, + ) + else: + master_soln = solver_call_master(master_data) + master_soln.feasibility_problem_results = feasibility_problem_results - return solver_call_master( - model_data=model_data, config=config, solver=solver, solve_data=master_soln - ) + return master_soln + + +class MasterProblemData: + """ + Container for objects pertaining to the PyROS master problem. + + Parameters + ---------- + model_data : ModelData + PyROS model data object, equipped with the + fully preprocessed working model. + + Attributes + ---------- + master_model : BlockData + Master problem model object. + original_model_name : str + Name of the user-provided deterministic model object. + iteration : int + Index of the current PyROS cutting set iteration. + timing : TimingData + Main timer for the current problem being solved. 
+ config : ConfigDict + PyROS solver options. + """ + + def __init__(self, model_data): + """Initialize self (see docstring).""" + self.master_model = construct_initial_master_problem(model_data) + # we track the original model name for serialization purposes + self.original_model_name = model_data.original_model.name + self.iteration = 0 + self.timing = model_data.timing + self.config = model_data.config + + def solve_master(self): + """ + Solve the master problem. + """ + return solve_master(self) + + def solve_dr_polishing(self): + """ + Solve the DR polishing problem. + """ + return minimize_dr_vars(self) diff --git a/pyomo/contrib/pyros/pyros.py b/pyomo/contrib/pyros/pyros.py index 582233c4a56..2ffef5054aa 100644 --- a/pyomo/contrib/pyros/pyros.py +++ b/pyomo/contrib/pyros/pyros.py @@ -10,40 +10,30 @@ # ___________________________________________________________________________ # pyros.py: Generalized Robust Cutting-Set Algorithm for Pyomo +from datetime import datetime import logging + from pyomo.common.config import document_kwargs_from_configdict -from pyomo.core.base.block import Block from pyomo.core.expr import value -from pyomo.core.base.var import Var -from pyomo.core.base.objective import Objective -from pyomo.contrib.pyros.util import time_code -from pyomo.common.modeling import unique_component_name from pyomo.opt import SolverFactory + from pyomo.contrib.pyros.config import pyros_config, logger_domain +from pyomo.contrib.pyros.pyros_algorithm_methods import ROSolver_iterative_solve +from pyomo.contrib.pyros.solve_data import ROSolveResults from pyomo.contrib.pyros.util import ( - recast_to_min_obj, - add_decision_rule_constraints, - add_decision_rule_variables, load_final_solution, pyrosTerminationCondition, - ObjectiveType, - identify_objective_functions, validate_pyros_inputs, - transform_to_standard_form, - turn_bounds_to_constraints, - replace_uncertain_bounds_with_constraints, + log_model_statistics, IterationLogRecord, setup_pyros_logger, + time_code, TimingData, + ModelData, ) -from pyomo.contrib.pyros.solve_data import ROSolveResults -from pyomo.contrib.pyros.pyros_algorithm_methods import ROSolver_iterative_solve -from pyomo.core.base import Constraint -from datetime import datetime - -__version__ = "1.2.11" +__version__ = "1.3.0" default_pyros_solver_logger = setup_pyros_logger() @@ -261,6 +251,8 @@ def _resolve_and_validate_pyros_args(self, model, **kwds): ------- config : ConfigDict Standardized arguments. + user_var_partitioning : util.VarPartitioning + User-based partitioning of the in-scope model variables. Note ---- @@ -275,9 +267,9 @@ def _resolve_and_validate_pyros_args(self, model, **kwds): """ config = self.CONFIG(kwds.pop("options", {})) config = config(kwds) - state_vars = validate_pyros_inputs(model, config) + user_var_partitioning = validate_pyros_inputs(model, config) - return config, state_vars + return config, user_var_partitioning @document_kwargs_from_configdict( config=CONFIG, @@ -329,8 +321,7 @@ def solve( Summary of PyROS termination outcome. 
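+
+        Example
+        -------
+        An illustrative invocation (hedged sketch: assumes a suitable
+        ConcreteModel ``m`` and availability of the named subordinate
+        solvers; argument names are as documented above)::
+
+            from pyomo.environ import SolverFactory
+            from pyomo.contrib.pyros import BoxSet
+
+            pyros_solver = SolverFactory("pyros")
+            results = pyros_solver.solve(
+                model=m,
+                first_stage_variables=[m.x],
+                second_stage_variables=[m.z],
+                uncertain_params=[m.q],
+                uncertainty_set=BoxSet(bounds=[(0.8, 1.2)]),
+                local_solver=SolverFactory("ipopt"),
+                global_solver=SolverFactory("baron"),
+            )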
""" - model_data = ROSolveResults() - model_data.timing = TimingData() + model_data = ModelData(original_model=model, timing=TimingData(), config=None) with time_code( timing_data_obj=model_data.timing, code_block_name="main", @@ -363,85 +354,20 @@ def solve( self._log_intro(logger=progress_logger, level=logging.INFO) self._log_disclaimer(logger=progress_logger, level=logging.INFO) - config, state_vars = self._resolve_and_validate_pyros_args(model, **kwds) + config, user_var_partitioning = self._resolve_and_validate_pyros_args( + model, **kwds + ) self._log_config( logger=config.progress_logger, config=config, exclude_options=None, level=logging.INFO, ) + model_data.config = config - # begin preprocessing config.progress_logger.info("Preprocessing...") model_data.timing.start_timer("main.preprocessing") - - # === A block to hold list-type data to make cloning easy - util = Block(concrete=True) - util.first_stage_variables = config.first_stage_variables - util.second_stage_variables = config.second_stage_variables - util.state_vars = state_vars - util.uncertain_params = config.uncertain_params - - model_data.util_block = unique_component_name(model, 'util') - model.add_component(model_data.util_block, util) - # Note: model.component(model_data.util_block) is util - - # === Leads to a logger warning here for inactive obj when cloning - model_data.original_model = model - # === For keeping track of variables after cloning - cname = unique_component_name(model_data.original_model, 'tmp_var_list') - src_vars = list(model_data.original_model.component_data_objects(Var)) - setattr(model_data.original_model, cname, src_vars) - model_data.working_model = model_data.original_model.clone() - - # identify active objective function - # (there should only be one at this point) - # recast to minimization if necessary - active_objs = list( - model_data.working_model.component_data_objects( - Objective, active=True, descend_into=True - ) - ) - assert len(active_objs) == 1 - active_obj = active_objs[0] - active_obj_original_sense = active_obj.sense - recast_to_min_obj(model_data.working_model, active_obj) - - # === Determine first and second-stage objectives - identify_objective_functions(model_data.working_model, active_obj) - active_obj.deactivate() - - # === Put model in standard form - transform_to_standard_form(model_data.working_model) - - # === Replace variable bounds depending on uncertain params with - # explicit inequality constraints - replace_uncertain_bounds_with_constraints( - model_data.working_model, model_data.working_model.util.uncertain_params - ) - - # === Add decision rule information - add_decision_rule_variables(model_data, config) - add_decision_rule_constraints(model_data, config) - - # === Move bounds on control variables to explicit ineq constraints - wm_util = model_data.working_model - - # cast bounds on second-stage and state variables to - # explicit constraints for separation objectives - for c in model_data.working_model.util.second_stage_variables: - turn_bounds_to_constraints(c, wm_util, config) - for c in model_data.working_model.util.state_vars: - turn_bounds_to_constraints(c, wm_util, config) - - # === Make control_variable_bounds array - wm_util.ssv_bounds = [] - for c in model_data.working_model.component_data_objects( - Constraint, descend_into=True - ): - if "bound_con" in c.name: - wm_util.ssv_bounds.append(c) - + robust_infeasible = model_data.preprocess(user_var_partitioning) model_data.timing.stop_timer("main.preprocessing") preprocessing_time = 
model_data.timing.get_total_time("main.preprocessing") config.progress_logger.info( @@ -449,46 +375,43 @@ def solve( f"{preprocessing_time:.3f}s." ) - # === Solve and load solution into model - pyros_soln, final_iter_separation_solns = ROSolver_iterative_solve( - model_data, config - ) - IterationLogRecord.log_header_rule(config.progress_logger.info) + log_model_statistics(model_data) + # === Solve and load solution into model return_soln = ROSolveResults() - if pyros_soln is not None and final_iter_separation_solns is not None: - if config.load_solution and ( - pyros_soln.pyros_termination_condition - is pyrosTerminationCondition.robust_optimal - or pyros_soln.pyros_termination_condition - is pyrosTerminationCondition.robust_feasible - ): - load_final_solution(model_data, pyros_soln.master_soln, config) - - # account for sense of the original model objective - # when reporting the final PyROS (master) objective, - # since maximization objective is changed to - # minimization objective during preprocessing - if config.objective_focus == ObjectiveType.nominal: - return_soln.final_objective_value = ( - active_obj_original_sense - * value(pyros_soln.master_soln.master_model.obj) + if not robust_infeasible: + pyros_soln = ROSolver_iterative_solve(model_data) + IterationLogRecord.log_header_rule(config.progress_logger.info) + + termination_acceptable = pyros_soln.pyros_termination_condition in { + pyrosTerminationCondition.robust_optimal, + pyrosTerminationCondition.robust_feasible, + } + if termination_acceptable: + load_final_solution( + model_data=model_data, + master_soln=pyros_soln.master_results, + original_user_var_partitioning=user_var_partitioning, ) - elif config.objective_focus == ObjectiveType.worst_case: + + # get the most recent master objective, if available + return_soln.final_objective_value = None + master_epigraph_obj_value = value( + pyros_soln.master_results.master_model.epigraph_obj, exception=False + ) + if master_epigraph_obj_value is not None: + # account for sense of the original model objective + # when reporting the final PyROS (master) objective, + # since maximization objective is changed to + # minimization objective during preprocessing return_soln.final_objective_value = ( - active_obj_original_sense - * value(pyros_soln.master_soln.master_model.zeta) + model_data.active_obj_original_sense * master_epigraph_obj_value ) + return_soln.pyros_termination_condition = ( pyros_soln.pyros_termination_condition ) - return_soln.iterations = pyros_soln.total_iters + 1 - - # === Remove util block - model.del_component(model_data.util_block) - - del pyros_soln.util_block - del pyros_soln.working_model + return_soln.iterations = pyros_soln.iterations else: return_soln.final_objective_value = None return_soln.pyros_termination_condition = ( diff --git a/pyomo/contrib/pyros/pyros_algorithm_methods.py b/pyomo/contrib/pyros/pyros_algorithm_methods.py index cfb57b08c7f..86e5d52935b 100644 --- a/pyomo/contrib/pyros/pyros_algorithm_methods.py +++ b/pyomo/contrib/pyros/pyros_algorithm_methods.py @@ -9,603 +9,166 @@ # This software is distributed under the 3-clause BSD License. 
# ___________________________________________________________________________ -''' -Methods for the execution of the grcs algorithm -''' - -from pyomo.core.base import Objective, ConstraintList, Var, Constraint, Block -from pyomo.opt.results import TerminationCondition -from pyomo.contrib.pyros import master_problem_methods, separation_problem_methods -from pyomo.contrib.pyros.solve_data import SeparationProblemData, MasterResult -from pyomo.contrib.pyros.uncertainty_sets import Geometry +""" +Methods for execution of the main PyROS cutting set algorithm. +""" + +from collections import namedtuple + +from pyomo.common.dependencies import numpy as np +from pyomo.common.collections import ComponentMap +from pyomo.core.base import value + +import pyomo.contrib.pyros.master_problem_methods as mp_methods +import pyomo.contrib.pyros.separation_problem_methods as sp_methods from pyomo.contrib.pyros.util import ( + check_time_limit_reached, ObjectiveType, - get_time_from_solver, pyrosTerminationCondition, IterationLogRecord, + get_main_elapsed_time, + get_dr_var_to_monomial_map, ) -from pyomo.contrib.pyros.util import get_main_elapsed_time, coefficient_matching -from pyomo.core.base import value -from pyomo.core.expr import MonomialTermExpression -from pyomo.common.collections import ComponentSet, ComponentMap -from pyomo.core.base.var import VarData as VarData -from itertools import chain -from pyomo.common.dependencies import numpy as np -def update_grcs_solve_data( - pyros_soln, term_cond, nominal_data, timing_data, separation_data, master_soln, k -): - ''' - This function updates the results data container object to return to the user so that they have all pertinent - information from the PyROS run. - :param grcs_soln: PyROS solution data container object - :param term_cond: PyROS termination condition - :param nominal_data: Contains information on all nominal data (var values, objective) - :param timing_data: Contains timing information on subsolver calls in PyROS - :param separation_data: Separation model data container - :param master_problem_subsolver_statuses: All master problem sub-solver termination conditions from the PyROS run - :param separation_problem_subsolver_statuses: All separation problem sub-solver termination conditions from the PyROS run - :param k: Iteration counter - :return: None - ''' - pyros_soln.pyros_termination_condition = term_cond - pyros_soln.total_iters = k - pyros_soln.nominal_data = nominal_data - pyros_soln.timing_data = timing_data - pyros_soln.separation_data = separation_data - pyros_soln.master_soln = master_soln - - return - - -def get_dr_var_to_scaled_expr_map( - decision_rule_eqns, second_stage_vars, uncertain_params, decision_rule_vars -): +class GRCSResults: """ - Generate mapping from decision rule variables - to their terms in a model's DR expression. + Cutting set RO algorithm solve results. + + Attributes + ---------- + master_results : MasterResults + Solve results for most recent master problem. + separation_results : SeparationResults or None + Solve results for separation problem(s) of last iteration. + If the separation subroutine was not invoked in the last + iteration, then None. + pyros_termination_condition : pyrosTerminationCondition + PyROS termination condition. + iterations : int + Number of iterations required. 
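+
+    Example
+    -------
+    An illustrative (hypothetical) consumption of a results object,
+    assuming ``model_data`` is a fully preprocessed model data
+    object::
+
+        res = ROSolver_iterative_solve(model_data)
+        robust_optimal = (
+            res.pyros_termination_condition
+            is pyrosTerminationCondition.robust_optimal
+        )
+        if robust_optimal:
+            print(f"robust optimum found in {res.iterations} iterations")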
""" - var_to_scaled_expr_map = ComponentMap() - ssv_dr_eq_zip = zip(second_stage_vars, decision_rule_eqns) - for ssv_idx, (ssv, dr_eq) in enumerate(ssv_dr_eq_zip): - for term in dr_eq.body.args: - if isinstance(term, MonomialTermExpression): - is_ssv_term = ( - isinstance(term.args[0], int) - and term.args[0] == -1 - and isinstance(term.args[1], VarData) - ) - if not is_ssv_term: - dr_var = term.args[1] - var_to_scaled_expr_map[dr_var] = term - elif isinstance(term, VarData): - var_to_scaled_expr_map[term] = MonomialTermExpression((1, term)) - return var_to_scaled_expr_map + def __init__( + self, + master_results, + separation_results, + pyros_termination_condition, + iterations, + ): + self.master_results = master_results + self.separation_results = separation_results + self.pyros_termination_condition = pyros_termination_condition + self.iterations = iterations + + +def _evaluate_shift(current, prev, initial, norm=None): + if current.size == 0: + return None + else: + normalizers = np.max( + np.vstack((np.ones(initial.size), np.abs(initial))), axis=0 + ) + return np.max(np.abs(current - prev) / normalizers) + +VariableValueData = namedtuple( + "VariableValueData", + ("first_stage_variables", "second_stage_variables", "decision_rule_monomials"), +) -def evaluate_and_log_component_stats(model_data, separation_model, config): + +def get_variable_value_data(working_blk, dr_var_to_monomial_map): """ - Evaluate and log model component statistics. + Get variable value data. """ - IterationLogRecord.log_header_rule(config.progress_logger.info) - config.progress_logger.info("Model statistics:") - # print model statistics - dr_var_set = ComponentSet( - chain( - *tuple( - indexed_dr_var.values() - for indexed_dr_var in model_data.working_model.util.decision_rule_vars - ) - ) - ) - first_stage_vars = [ - var - for var in model_data.working_model.util.first_stage_variables - if var not in dr_var_set - ] - - # account for epigraph constraint - sep_model_epigraph_con = getattr(separation_model, "epigraph_constr", None) - has_epigraph_con = sep_model_epigraph_con is not None - - num_fsv = len(first_stage_vars) - num_ssv = len(model_data.working_model.util.second_stage_variables) - num_sv = len(model_data.working_model.util.state_vars) - num_dr_vars = len(dr_var_set) - num_vars = int(has_epigraph_con) + num_fsv + num_ssv + num_sv + num_dr_vars - - num_uncertain_params = len(model_data.working_model.util.uncertain_params) - - eq_cons = [ - con - for con in model_data.working_model.component_data_objects( - Constraint, active=True - ) - if con.equality - ] - dr_eq_set = ComponentSet( - chain( - *tuple( - indexed_dr_eq.values() - for indexed_dr_eq in model_data.working_model.util.decision_rule_eqns - ) - ) - ) - num_eq_cons = len(eq_cons) - num_dr_cons = len(dr_eq_set) - num_coefficient_matching_cons = len( - getattr(model_data.working_model, "coefficient_matching_constraints", []) - ) - num_other_eq_cons = num_eq_cons - num_dr_cons - num_coefficient_matching_cons - - # get performance constraints as referenced in the separation - # model object - new_sep_con_map = separation_model.util.map_new_constraint_list_to_original_con - perf_con_set = ComponentSet( - new_sep_con_map.get(con, con) - for con in separation_model.util.performance_constraints - ) - is_epigraph_con_first_stage = ( - has_epigraph_con and sep_model_epigraph_con not in perf_con_set - ) - working_model_perf_con_set = ComponentSet( - model_data.working_model.find_component(new_sep_con_map.get(con, con)) - for con in 
separation_model.util.performance_constraints - if con is not None - ) + ep = working_blk.effective_var_partitioning - num_perf_cons = len(separation_model.util.performance_constraints) - num_fsv_bounds = sum( - int(var.lower is not None) + int(var.upper is not None) - for var in first_stage_vars + first_stage_data = ComponentMap( + (var, var.value) for var in ep.first_stage_variables ) - ineq_con_set = [ - con - for con in model_data.working_model.component_data_objects( - Constraint, active=True - ) - if not con.equality - ] - num_fsv_ineqs = ( - num_fsv_bounds - + len([con for con in ineq_con_set if con not in working_model_perf_con_set]) - + is_epigraph_con_first_stage - ) - num_ineq_cons = len(ineq_con_set) + has_epigraph_con + num_fsv_bounds - - config.progress_logger.info(f"{' Number of variables'} : {num_vars}") - config.progress_logger.info(f"{' Epigraph variable'} : {int(has_epigraph_con)}") - config.progress_logger.info(f"{' First-stage variables'} : {num_fsv}") - config.progress_logger.info(f"{' Second-stage variables'} : {num_ssv}") - config.progress_logger.info(f"{' State variables'} : {num_sv}") - config.progress_logger.info(f"{' Decision rule variables'} : {num_dr_vars}") - config.progress_logger.info( - f"{' Number of uncertain parameters'} : {num_uncertain_params}" - ) - config.progress_logger.info( - f"{' Number of constraints'} : " f"{num_ineq_cons + num_eq_cons}" - ) - config.progress_logger.info(f"{' Equality constraints'} : {num_eq_cons}") - config.progress_logger.info( - f"{' Coefficient matching constraints'} : " - f"{num_coefficient_matching_cons}" - ) - config.progress_logger.info(f"{' Decision rule equations'} : {num_dr_cons}") - config.progress_logger.info( - f"{' All other equality constraints'} : " f"{num_other_eq_cons}" + second_stage_data = ComponentMap( + (var, var.value) for var in ep.second_stage_variables ) - config.progress_logger.info(f"{' Inequality constraints'} : {num_ineq_cons}") - config.progress_logger.info( - f"{' First-stage inequalities (incl. certain var bounds)'} : " - f"{num_fsv_ineqs}" + dr_term_data = ComponentMap( + (dr_var, value(monomial)) + for dr_var, monomial in get_dr_var_to_monomial_map(working_blk).items() ) - config.progress_logger.info( - f"{' Performance constraints (incl. var bounds)'} : {num_perf_cons}" + + return VariableValueData( + first_stage_variables=first_stage_data, + second_stage_variables=second_stage_data, + decision_rule_monomials=dr_term_data, ) -def evaluate_first_stage_var_shift( - current_master_fsv_vals, previous_master_fsv_vals, first_iter_master_fsv_vals -): +def evaluate_variable_shifts(current_var_data, previous_var_data, initial_var_data): """ - Evaluate first-stage variable "shift": the maximum relative - difference between first-stage variable values from the current - and previous master iterations. - - Parameters - ---------- - current_master_fsv_vals : ComponentMap - First-stage variable values from the current master - iteration. - previous_master_fsv_vals : ComponentMap - First-stage variable values from the previous master - iteration. - first_iter_master_fsv_vals : ComponentMap - First-stage variable values from the first master - iteration. - - Returns - ------- - None - Returned only if `current_master_fsv_vals` is empty, - which should occur only if the problem has no first-stage - variables. - float - The maximum relative difference - Returned only if `current_master_fsv_vals` is not empty. 
+ Evaluate relative changes in the variable values + across solutions to a working model block, such as the + nominal master block. """ - if not current_master_fsv_vals: - # there are no first-stage variables - return None + if previous_var_data is None: + return None, None, None else: - return max( - abs(current_master_fsv_vals[var] - previous_master_fsv_vals[var]) - / max((abs(first_iter_master_fsv_vals[var]), 1)) - for var in previous_master_fsv_vals - ) - - -def evaluate_second_stage_var_shift( - current_master_nom_ssv_vals, - previous_master_nom_ssv_vals, - first_iter_master_nom_ssv_vals, -): - """ - Evaluate second-stage variable "shift": the maximum relative - difference between second-stage variable values from the current - and previous master iterations as evaluated subject to the - nominal uncertain parameter realization. - - Parameters - ---------- - current_master_nom_ssv_vals : ComponentMap - Second-stage variable values from the current master - iteration, evaluated subject to the nominal uncertain - parameter realization. - previous_master_nom_ssv_vals : ComponentMap - Second-stage variable values from the previous master - iteration, evaluated subject to the nominal uncertain - parameter realization. - first_iter_master_nom_ssv_vals : ComponentMap - Second-stage variable values from the first master - iteration, evaluated subject to the nominal uncertain - parameter realization. + var_shifts = [] + for attr in current_var_data._fields: + var_shifts.append( + _evaluate_shift( + current=np.array(list(getattr(current_var_data, attr).values())), + prev=np.array(list(getattr(previous_var_data, attr).values())), + initial=np.array(list(getattr(initial_var_data, attr).values())), + ) + ) - Returns - ------- - None - Returned only if `current_master_nom_ssv_vals` is empty, - which should occur only if the problem has no second-stage - variables. - float - The maximum relative difference. - Returned only if `current_master_nom_ssv_vals` is not empty. - """ - if not current_master_nom_ssv_vals: - return None - else: - return max( - abs(current_master_nom_ssv_vals[ssv] - previous_master_nom_ssv_vals[ssv]) - / max((abs(first_iter_master_nom_ssv_vals[ssv]), 1)) - for ssv in previous_master_nom_ssv_vals - ) + return tuple(var_shifts) -def evaluate_dr_var_shift( - current_master_dr_var_vals, - previous_master_dr_var_vals, - first_iter_master_nom_ssv_vals, - dr_var_to_ssv_map, -): +def ROSolver_iterative_solve(model_data): """ - Evaluate decision rule variable "shift": the maximum relative - difference between scaled decision rule (DR) variable expressions - (terms in the DR equations) from the current - and previous master iterations. + Solve an RO problem with the iterative GRCS algorithm. Parameters ---------- - current_master_dr_var_vals : ComponentMap - DR variable values from the current master - iteration. - previous_master_dr_var_vals : ComponentMap - DR variable values from the previous master - iteration. - first_iter_master_nom_ssv_vals : ComponentMap - Second-stage variable values (evaluated subject to the - nominal uncertain parameter realization) - from the first master iteration. - dr_var_to_ssv_map : ComponentMap - Mapping from each DR variable to the - second-stage variable whose value is a function of the - DR variable. + model_data : model data object + Model data object, equipped with the + fully preprocessed working model. 
Returns ------- - None - Returned only if `current_master_dr_var_vals` is empty, - which should occur only if the problem has no decision rule - (or equivalently, second-stage) variables. - float - The maximum relative difference. - Returned only if `current_master_dr_var_vals` is not empty. + GRCSResults + Iterative solve results. """ - if not current_master_dr_var_vals: - return None - else: - return max( - abs(current_master_dr_var_vals[drvar] - previous_master_dr_var_vals[drvar]) - / max((1, abs(first_iter_master_nom_ssv_vals[dr_var_to_ssv_map[drvar]]))) - for drvar in previous_master_dr_var_vals - ) - - -def ROSolver_iterative_solve(model_data, config): - ''' - GRCS algorithm implementation - :model_data: ROSolveData object with deterministic model information - :config: ConfigBlock for the instance being solved - ''' - - # === The "violation" e.g. uncertain parameter values added to the master problem are nominal in iteration 0 - # User can supply a nominal_uncertain_param_vals if they want to set nominal to a certain point, - # Otherwise, the default init value for the params is used as nominal_uncertain_param_vals - violation = list(p for p in config.nominal_uncertain_param_vals) - - # === Do coefficient matching - constraints = [ - c - for c in model_data.working_model.component_data_objects(Constraint) - if c.equality - and c not in ComponentSet(model_data.working_model.util.decision_rule_eqns) - ] - model_data.working_model.util.h_x_q_constraints = ComponentSet() - for c in constraints: - coeff_matching_success, robust_infeasible = coefficient_matching( - model=model_data.working_model, - constraint=c, - uncertain_params=model_data.working_model.util.uncertain_params, - config=config, - ) - if not coeff_matching_success and not robust_infeasible: - config.progress_logger.error( - f"Equality constraint {c.name!r} cannot be guaranteed to " - "be robustly feasible, given the current partitioning " - "among first-stage, second-stage, and state variables. " - "Consider editing this constraint to reference some " - "second-stage and/or state variable(s)." - ) - raise ValueError("Coefficient matching unsuccessful. See the solver logs.") - elif not coeff_matching_success and robust_infeasible: - config.progress_logger.info( - "PyROS has determined that the model is robust infeasible. " - f"One reason for this is that the equality constraint {c.name} " - "cannot be satisfied against all realizations of uncertainty, " - "given the current partitioning between " - "first-stage, second-stage, and state variables. " - "Consider editing this constraint to reference some (additional) " - "second-stage and/or state variable(s)." 
- ) - return None, None - else: - pass - - # h(x,q) == 0 becomes h'(x) == 0 - for c in model_data.working_model.util.h_x_q_constraints: - c.deactivate() - - # === Build the master problem and master problem data container object - master_data = master_problem_methods.initial_construct_master(model_data) - - # === If using p_robustness, add ConstraintList for additional constraints - if config.p_robustness: - master_data.master_model.p_robust_constraints = ConstraintList() - - # === Add scenario_0 - master_data.master_model.scenarios[0, 0].transfer_attributes_from( - master_data.original.clone() - ) - if len(master_data.master_model.scenarios[0, 0].util.uncertain_params) != len( - violation - ): - raise ValueError - - # === Set the nominal uncertain parameters to the violation values - for i, v in enumerate(violation): - master_data.master_model.scenarios[0, 0].util.uncertain_params[i].value = v - - # === Add objective function (assuming minimization of costs) with nominal second-stage costs - if config.objective_focus is ObjectiveType.nominal: - master_data.master_model.obj = Objective( - expr=master_data.master_model.scenarios[0, 0].first_stage_objective - + master_data.master_model.scenarios[0, 0].second_stage_objective - ) - elif config.objective_focus is ObjectiveType.worst_case: - # === Worst-case cost objective - master_data.master_model.zeta = Var( - initialize=value( - master_data.master_model.scenarios[0, 0].first_stage_objective - + master_data.master_model.scenarios[0, 0].second_stage_objective, - exception=False, - ) - ) - master_data.master_model.obj = Objective(expr=master_data.master_model.zeta) - master_data.master_model.scenarios[0, 0].epigraph_constr = Constraint( - expr=master_data.master_model.scenarios[0, 0].first_stage_objective - + master_data.master_model.scenarios[0, 0].second_stage_objective - <= master_data.master_model.zeta - ) - master_data.master_model.scenarios[0, 0].util.first_stage_variables.append( - master_data.master_model.zeta - ) - - # === Add deterministic constraints to ComponentSet on original so that these become part of separation model - master_data.original.util.deterministic_constraints = ComponentSet( - c - for c in master_data.original.component_data_objects( - Constraint, descend_into=True - ) - ) - - # === Make separation problem model once before entering the solve loop - separation_model = separation_problem_methods.make_separation_problem( - model_data=master_data, config=config - ) - - evaluate_and_log_component_stats( - model_data=model_data, separation_model=separation_model, config=config - ) - - # === Create separation problem data container object and add information to catalog during solve - separation_data = SeparationProblemData() - separation_data.separation_model = separation_model - separation_data.points_separated = ( - [] - ) # contains last point separated in the separation problem - separation_data.points_added_to_master = [ - config.nominal_uncertain_param_vals - ] # explicitly robust against in master - separation_data.constraint_violations = ( - [] - ) # list of constraint violations for each iteration - separation_data.total_global_separation_solves = ( - 0 # number of times global solve is used - ) - separation_data.timing = master_data.timing # timing object - - # === Keep track of subsolver termination statuses from each iteration - separation_data.separation_problem_subsolver_statuses = [] - - # for discrete set types, keep track of scenarios added to master - if config.uncertainty_set.geometry == 
Geometry.DISCRETE_SCENARIOS: - separation_data.idxs_of_master_scenarios = [ - config.uncertainty_set.scenarios.index( - tuple(config.nominal_uncertain_param_vals) - ) - ] - else: - separation_data.idxs_of_master_scenarios = None - - # === Nominal information - nominal_data = Block() - nominal_data.nom_fsv_vals = [] - nominal_data.nom_ssv_vals = [] - nominal_data.nom_first_stage_cost = 0 - nominal_data.nom_second_stage_cost = 0 - nominal_data.nom_obj = 0 - - # === Time information - timing_data = Block() - timing_data.total_master_solve_time = 0 - timing_data.total_separation_local_time = 0 - timing_data.total_separation_global_time = 0 - timing_data.total_dr_polish_time = 0 - - dr_var_lists_original = [] - dr_var_lists_polished = [] + config = model_data.config + master_data = mp_methods.MasterProblemData(model_data) + separation_data = sp_methods.SeparationProblemData(model_data) # set up first-stage variable and DR variable sets - master_dr_var_set = ComponentSet( - chain( - *tuple( - indexed_var.values() - for indexed_var in master_data.master_model.scenarios[ - 0, 0 - ].util.decision_rule_vars - ) - ) - ) - master_fsv_set = ComponentSet( - var - for var in master_data.master_model.scenarios[0, 0].util.first_stage_variables - if var not in master_dr_var_set - ) - master_nom_ssv_set = ComponentSet( - master_data.master_model.scenarios[0, 0].util.second_stage_variables - ) - previous_master_fsv_vals = ComponentMap((var, None) for var in master_fsv_set) - previous_master_dr_var_vals = ComponentMap((var, None) for var in master_dr_var_set) - previous_master_nom_ssv_vals = ComponentMap( - (var, None) for var in master_nom_ssv_set - ) + nominal_master_blk = master_data.master_model.scenarios[0, 0] + dr_var_monomial_map = get_dr_var_to_monomial_map(nominal_master_blk) - first_iter_master_fsv_vals = ComponentMap((var, None) for var in master_fsv_set) - first_iter_master_nom_ssv_vals = ComponentMap( - (var, None) for var in master_nom_ssv_set - ) - first_iter_dr_var_vals = ComponentMap((var, None) for var in master_dr_var_set) - nom_master_util_blk = master_data.master_model.scenarios[0, 0].util - dr_var_scaled_expr_map = get_dr_var_to_scaled_expr_map( - decision_rule_vars=nom_master_util_blk.decision_rule_vars, - decision_rule_eqns=nom_master_util_blk.decision_rule_eqns, - second_stage_vars=nom_master_util_blk.second_stage_variables, - uncertain_params=nom_master_util_blk.uncertain_params, - ) - dr_var_to_ssv_map = ComponentMap() - dr_ssv_zip = zip( - nom_master_util_blk.decision_rule_vars, - nom_master_util_blk.second_stage_variables, - ) - for indexed_dr_var, ssv in dr_ssv_zip: - for drvar in indexed_dr_var.values(): - dr_var_to_ssv_map[drvar] = ssv + # keep track of variable values for iteration logging + first_iter_var_data = None + previous_iter_var_data = None + current_iter_var_data = None + num_second_stage_ineq_cons = len( + separation_data.separation_model.second_stage.inequality_cons + ) IterationLogRecord.log_header(config.progress_logger.info) k = 0 - master_statuses = [] while config.max_iter == -1 or k < config.max_iter: master_data.iteration = k - - # === Add p-robust constraint if iteration > 0 - if k > 0 and config.p_robustness: - master_problem_methods.add_p_robust_constraint( - model_data=master_data, config=config - ) - - # === Solve Master Problem config.progress_logger.debug(f"PyROS working on iteration {k}...") - master_soln = master_problem_methods.solve_master( - model_data=master_data, config=config - ) - # config.progress_logger.info("Done solving Master Problem!") - 
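+        # annotation (not executable logic): each GRCS iteration
+        # (1) solves the master problem, (2) polishes the decision
+        # rule variables where applicable, (3) solves the separation
+        # problem(s), and (4) upon finding a violated uncertain
+        # parameter realization, adds it to the master as scenario
+        # block (k + 1, 0)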
- # === Keep track of total time and subsolver termination conditions - timing_data.total_master_solve_time += get_time_from_solver(master_soln.results) - - if k > 0: # master feas problem not solved for iteration 0 - timing_data.total_master_solve_time += get_time_from_solver( - master_soln.feasibility_problem_results - ) - master_statuses.append(master_soln.results.solver.termination_condition) - master_soln.master_problem_subsolver_statuses = master_statuses - - # === Check for robust infeasibility or error or time-out in master problem solve - if ( - master_soln.master_subsolver_results[1] - is pyrosTerminationCondition.robust_infeasible - ): - term_cond = pyrosTerminationCondition.robust_infeasible - elif ( - master_soln.pyros_termination_condition - is pyrosTerminationCondition.subsolver_error - ): - term_cond = pyrosTerminationCondition.subsolver_error - elif ( - master_soln.pyros_termination_condition - is pyrosTerminationCondition.time_out - ): - term_cond = pyrosTerminationCondition.time_out - else: - term_cond = None - if term_cond in { - pyrosTerminationCondition.subsolver_error, - pyrosTerminationCondition.time_out, + master_soln = master_data.solve_master() + master_termination_not_acceptable = master_soln.pyros_termination_condition in { pyrosTerminationCondition.robust_infeasible, - }: - log_record = IterationLogRecord( + pyrosTerminationCondition.time_out, + pyrosTerminationCondition.subsolver_error, + } + if master_termination_not_acceptable: + iter_log_record = IterationLogRecord( iteration=k, objective=None, first_stage_var_shift=None, @@ -618,181 +181,64 @@ def ROSolver_iterative_solve(model_data, config): global_separation=None, elapsed_time=get_main_elapsed_time(model_data.timing), ) - log_record.log(config.progress_logger.info) - update_grcs_solve_data( - pyros_soln=model_data, - k=k, - term_cond=term_cond, - nominal_data=nominal_data, - timing_data=timing_data, - separation_data=separation_data, - master_soln=master_soln, + iter_log_record.log(config.progress_logger.info) + return GRCSResults( + master_results=master_soln, + separation_results=None, + pyros_termination_condition=master_soln.pyros_termination_condition, + iterations=k + 1, ) - return model_data, [] - - # === Save nominal information - if k == 0: - for val in master_soln.fsv_vals: - nominal_data.nom_fsv_vals.append(val) - - for val in master_soln.ssv_vals: - nominal_data.nom_ssv_vals.append(val) - - nominal_data.nom_first_stage_cost = master_soln.first_stage_objective - nominal_data.nom_second_stage_cost = master_soln.second_stage_objective - nominal_data.nom_obj = value(master_data.master_model.obj) polishing_successful = True - if ( + polish_master_solution = ( config.decision_rule_order != 0 - and len(config.second_stage_variables) > 0 + and nominal_master_blk.first_stage.decision_rule_vars and k != 0 - ): - # === Save initial values of DR vars to file - for varslist in master_data.master_model.scenarios[ - 0, 0 - ].util.decision_rule_vars: - vals = [] - for dvar in varslist.values(): - vals.append(dvar.value) - dr_var_lists_original.append(vals) - - (polishing_results, polishing_successful) = ( - master_problem_methods.minimize_dr_vars( - model_data=master_data, config=config - ) - ) - timing_data.total_dr_polish_time += get_time_from_solver(polishing_results) - - # === Save after polish - for varslist in master_data.master_model.scenarios[ - 0, 0 - ].util.decision_rule_vars: - vals = [] - for dvar in varslist.values(): - vals.append(dvar.value) - dr_var_lists_polished.append(vals) - - # get 
current first-stage and DR variable values - # and compare with previous first-stage and DR variable - # values - current_master_fsv_vals = ComponentMap( - (var, value(var)) for var in master_fsv_set ) - current_master_nom_ssv_vals = ComponentMap( - (var, value(var)) for var in master_nom_ssv_set + if polish_master_solution: + _, polishing_successful = master_data.solve_dr_polishing() + + # track variable values + current_iter_var_data = get_variable_value_data( + nominal_master_blk, dr_var_monomial_map ) - current_master_dr_var_vals = ComponentMap( - (var, value(expr)) for var, expr in dr_var_scaled_expr_map.items() + if k == 0: + first_iter_var_data = current_iter_var_data + previous_iter_var_data = None + + fsv_shift, ssv_shift, dr_var_shift = evaluate_variable_shifts( + current_var_data=current_iter_var_data, + previous_var_data=previous_iter_var_data, + initial_var_data=first_iter_var_data, ) - if k > 0: - first_stage_var_shift = evaluate_first_stage_var_shift( - current_master_fsv_vals=current_master_fsv_vals, - previous_master_fsv_vals=previous_master_fsv_vals, - first_iter_master_fsv_vals=first_iter_master_fsv_vals, - ) - second_stage_var_shift = evaluate_second_stage_var_shift( - current_master_nom_ssv_vals=current_master_nom_ssv_vals, - previous_master_nom_ssv_vals=previous_master_nom_ssv_vals, - first_iter_master_nom_ssv_vals=first_iter_master_nom_ssv_vals, - ) - dr_var_shift = evaluate_dr_var_shift( - current_master_dr_var_vals=current_master_dr_var_vals, - previous_master_dr_var_vals=previous_master_dr_var_vals, - first_iter_master_nom_ssv_vals=first_iter_master_nom_ssv_vals, - dr_var_to_ssv_map=dr_var_to_ssv_map, - ) - else: - for fsv in first_iter_master_fsv_vals: - first_iter_master_fsv_vals[fsv] = value(fsv) - for ssv in first_iter_master_nom_ssv_vals: - first_iter_master_nom_ssv_vals[ssv] = value(ssv) - for drvar in first_iter_dr_var_vals: - first_iter_dr_var_vals[drvar] = value(dr_var_scaled_expr_map[drvar]) - first_stage_var_shift = None - second_stage_var_shift = None - dr_var_shift = None # === Check if time limit reached after polishing - if config.time_limit: - elapsed = get_main_elapsed_time(model_data.timing) - if elapsed >= config.time_limit: - iter_log_record = IterationLogRecord( - iteration=k, - objective=value(master_data.master_model.obj), - first_stage_var_shift=first_stage_var_shift, - second_stage_var_shift=second_stage_var_shift, - dr_var_shift=dr_var_shift, - num_violated_cons=None, - max_violation=None, - dr_polishing_success=polishing_successful, - all_sep_problems_solved=None, - global_separation=None, - elapsed_time=elapsed, - ) - update_grcs_solve_data( - pyros_soln=model_data, - k=k, - term_cond=pyrosTerminationCondition.time_out, - nominal_data=nominal_data, - timing_data=timing_data, - separation_data=separation_data, - master_soln=master_soln, - ) - iter_log_record.log(config.progress_logger.info) - return model_data, [] - - # === Set up for the separation problem - separation_data.opt_fsv_vals = [ - v.value - for v in master_soln.master_model.scenarios[0, 0].util.first_stage_variables - ] - separation_data.opt_ssv_vals = master_soln.ssv_vals - - # === Provide master model scenarios to separation problem for initialization options - separation_data.master_scenarios = master_data.master_model.scenarios - - if config.objective_focus is ObjectiveType.worst_case: - separation_model.util.zeta = value(master_soln.master_model.obj) + if check_time_limit_reached(model_data.timing, config): + iter_log_record = IterationLogRecord( + iteration=k, + 
objective=value(master_data.master_model.epigraph_obj), + first_stage_var_shift=fsv_shift, + second_stage_var_shift=ssv_shift, + dr_var_shift=dr_var_shift, + num_violated_cons=None, + max_violation=None, + dr_polishing_success=polishing_successful, + all_sep_problems_solved=None, + global_separation=None, + elapsed_time=model_data.timing.get_main_elapsed_time(), + ) + iter_log_record.log(config.progress_logger.info) + return GRCSResults( + master_results=master_soln, + separation_results=None, + pyros_termination_condition=pyrosTerminationCondition.time_out, + iterations=k + 1, + ) # === Solve Separation Problem separation_data.iteration = k - separation_data.master_nominal_scenario = master_data.master_model.scenarios[ - 0, 0 - ] - separation_data.master_model = master_data.master_model - - separation_results = separation_problem_methods.solve_separation_problem( - model_data=separation_data, config=config - ) - - separation_data.separation_problem_subsolver_statuses.extend( - [ - res.solver.termination_condition - for res in separation_results.generate_subsolver_results() - ] - ) - - if separation_results.solved_globally: - separation_data.total_global_separation_solves += 1 - - # make updates based on separation results - timing_data.total_separation_local_time += ( - separation_results.evaluate_local_solve_time(get_time_from_solver) - ) - timing_data.total_separation_global_time += ( - separation_results.evaluate_global_solve_time(get_time_from_solver) - ) - if separation_results.found_violation: - scaled_violations = separation_results.scaled_violations - if scaled_violations is not None: - # can be None if time out or subsolver error - # reported in separation - separation_data.constraint_violations.append(scaled_violations.values()) - separation_data.points_separated = ( - separation_results.violating_param_realization - ) + separation_results = separation_data.solve_separation(master_data) scaled_violations = [ solve_call_res.scaled_violations[con] @@ -803,19 +249,19 @@ def ROSolver_iterative_solve(model_data, config): max_sep_con_violation = max(scaled_violations) else: max_sep_con_violation = None - num_violated_cons = len(separation_results.violated_performance_constraints) + num_violated_cons = len(separation_results.violated_second_stage_ineq_cons) all_sep_problems_solved = ( - len(scaled_violations) == len(separation_model.util.performance_constraints) + len(scaled_violations) == num_second_stage_ineq_cons and not separation_results.subsolver_error and not separation_results.time_out ) or separation_results.all_discrete_scenarios_exhausted iter_log_record = IterationLogRecord( iteration=k, - objective=value(master_data.master_model.obj), - first_stage_var_shift=first_stage_var_shift, - second_stage_var_shift=second_stage_var_shift, + objective=value(master_data.master_model.epigraph_obj), + first_stage_var_shift=fsv_shift, + second_stage_var_shift=ssv_shift, dr_var_shift=dr_var_shift, num_violated_cons=num_violated_cons, max_violation=max_sep_con_violation, @@ -826,35 +272,19 @@ def ROSolver_iterative_solve(model_data, config): ) # terminate on time limit - elapsed = get_main_elapsed_time(model_data.timing) - if separation_results.time_out: - termination_condition = pyrosTerminationCondition.time_out - update_grcs_solve_data( - pyros_soln=model_data, - k=k, - term_cond=termination_condition, - nominal_data=nominal_data, - timing_data=timing_data, - separation_data=separation_data, - master_soln=master_soln, + if separation_results.time_out or 
separation_results.subsolver_error: + pyros_term_cond = ( + pyrosTerminationCondition.time_out + if separation_results.time_out + else pyrosTerminationCondition.subsolver_error ) iter_log_record.log(config.progress_logger.info) - return model_data, separation_results - - # terminate on separation subsolver error - if separation_results.subsolver_error: - termination_condition = pyrosTerminationCondition.subsolver_error - update_grcs_solve_data( - pyros_soln=model_data, - k=k, - term_cond=termination_condition, - nominal_data=nominal_data, - timing_data=timing_data, - separation_data=separation_data, - master_soln=master_soln, + return GRCSResults( + master_results=master_soln, + separation_results=separation_results, + pyros_termination_condition=pyros_term_cond, + iterations=k + 1, ) - iter_log_record.log(config.progress_logger.info) - return model_data, separation_results # === Check if we terminate due to robust optimality or feasibility, # or in the event of bypassing global separation, no violations @@ -874,30 +304,32 @@ def ROSolver_iterative_solve(model_data, config): termination_condition = pyrosTerminationCondition.robust_optimal else: termination_condition = pyrosTerminationCondition.robust_feasible - update_grcs_solve_data( - pyros_soln=model_data, - k=k, - term_cond=termination_condition, - nominal_data=nominal_data, - timing_data=timing_data, - separation_data=separation_data, - master_soln=master_soln, - ) iter_log_record.log(config.progress_logger.info) - return model_data, separation_results + return GRCSResults( + master_results=master_soln, + separation_results=separation_results, + pyros_termination_condition=termination_condition, + iterations=k + 1, + ) # === Add block to master at violation - master_problem_methods.add_scenario_to_master( - model_data=master_data, - violations=separation_results.violating_param_realization, + mp_methods.add_scenario_block_to_master_problem( + master_model=master_data.master_model, + scenario_idx=(k + 1, 0), + param_realization=separation_results.violating_param_realization, + from_block=nominal_master_blk, + clone_first_stage_components=False, ) - separation_data.points_added_to_master.append( + separation_data.points_added_to_master[(k + 1, 0)] = ( separation_results.violating_param_realization ) + separation_data.auxiliary_values_for_master_points[(k + 1, 0)] = ( + separation_results.auxiliary_param_values + ) config.progress_logger.debug("Points added to master:") config.progress_logger.debug( - np.array([pt for pt in separation_data.points_added_to_master]) + np.array([pt for pt in separation_data.points_added_to_master.values()]) ) # initialize second-stage and state variables @@ -913,18 +345,12 @@ def ROSolver_iterative_solve(model_data, config): k += 1 iter_log_record.log(config.progress_logger.info) - previous_master_fsv_vals = current_master_fsv_vals - previous_master_nom_ssv_vals = current_master_nom_ssv_vals - previous_master_dr_var_vals = current_master_dr_var_vals + previous_iter_var_data = current_iter_var_data # Iteration limit reached - update_grcs_solve_data( - pyros_soln=model_data, - k=k - 1, # remove last increment to fix iteration count - term_cond=pyrosTerminationCondition.max_iter, - nominal_data=nominal_data, - timing_data=timing_data, - separation_data=separation_data, - master_soln=master_soln, + return GRCSResults( + master_results=master_soln, + separation_results=separation_results, + pyros_termination_condition=pyrosTerminationCondition.max_iter, + iterations=k, # iteration count was already incremented ) 
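+
+
+# worked illustration of the shift metric computed by _evaluate_shift
+# above (hypothetical values): for initial = [2.0, 0.5],
+# prev = [2.0, 0.5], and current = [3.0, 0.6], the normalizers are
+# max(1, |initial|) = [2.0, 1.0], so the reported shift is
+# max(1.0 / 2.0, 0.1 / 1.0) = 0.5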
- return model_data, separation_results diff --git a/pyomo/contrib/pyros/separation_problem_methods.py b/pyomo/contrib/pyros/separation_problem_methods.py index 18d0925bab0..4bb9eb7b32b 100644 --- a/pyomo/contrib/pyros/separation_problem_methods.py +++ b/pyomo/contrib/pyros/separation_problem_methods.py @@ -10,254 +10,196 @@ # ___________________________________________________________________________ """ -Functions for the construction and solving of the GRCS separation problem via ROsolver +Methods for constructing and solving PyROS separation problems +and related objects. """ -from pyomo.core.base.constraint import Constraint, ConstraintList -from pyomo.core.base.objective import Objective, maximize, value -from pyomo.core.base import Var, Param +from itertools import product +import math +import os + from pyomo.common.collections import ComponentSet, ComponentMap from pyomo.common.dependencies import numpy as np +from pyomo.core.base import Block, Constraint, maximize, Objective, value, Var +from pyomo.opt import TerminationCondition as tc +from pyomo.core.expr import replace_expressions, identify_mutable_parameters + from pyomo.contrib.pyros.solve_data import ( DiscreteSeparationSolveCallResults, SeparationSolveCallResults, SeparationLoopResults, SeparationResults, ) -from pyomo.opt import TerminationCondition as tc -from pyomo.core.expr import ( - replace_expressions, - identify_mutable_parameters, - identify_variables, -) -from pyomo.contrib.pyros.util import get_main_elapsed_time, is_certain_parameter from pyomo.contrib.pyros.uncertainty_sets import Geometry -from pyomo.common.errors import ApplicationError -from pyomo.contrib.pyros.util import ABS_CON_CHECK_FEAS_TOL -from pyomo.common.timing import TicTocTimer from pyomo.contrib.pyros.util import ( - adjust_solver_time_settings, + ABS_CON_CHECK_FEAS_TOL, call_solver, - ObjectiveType, - revert_solver_max_time_adjustment, - TIC_TOC_SOLVE_TIME_ATTR, + check_time_limit_reached, + PARAM_IS_CERTAIN_ABS_TOL, + PARAM_IS_CERTAIN_REL_TOL, ) -import os -from copy import deepcopy -from itertools import product -def add_uncertainty_set_constraints(model, config): +def add_uncertainty_set_constraints(separation_model, config): """ - Add inequality constraint(s) representing the uncertainty set. + Add to the separation model constraints restricting + the uncertain parameter proxy variables to the user-provided + uncertainty set. Note that inferred interval enclosures + on the uncertain parameters are also imposed as bounds + specified on the proxy variables. 
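+    For example, under a hypothetical two-dimensional ``BoxSet`` with
+    bounds ``[(1, 2), (3, 4)]``, the interval enclosure is the box
+    itself, so the two proxy variables would be declared with bounds
+    ``(1, 2)`` and ``(3, 4)``, respectively.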
""" - - model.util.uncertainty_set_constraint = config.uncertainty_set.set_as_constraint( - uncertain_params=model.util.uncertain_param_vars, model=model, config=config + separation_model.uncertainty = Block() + separation_model.uncertainty.uncertain_param_indexed_var = Var( + range(config.uncertainty_set.dim), + initialize={ + idx: nom_val + for idx, nom_val in enumerate(config.nominal_uncertain_param_vals) + }, ) - - config.uncertainty_set.add_bounds_on_uncertain_parameters( - model=model, config=config + indexed_param_var = separation_model.uncertainty.uncertain_param_indexed_var + uncertainty_quantification = config.uncertainty_set.set_as_constraint( + uncertain_params=indexed_param_var, block=separation_model.uncertainty ) - # === Pre-process out any uncertain parameters which have q_LB = q_ub via (q_ub - q_lb)/max(1,|q_UB|) <= TOL - # before building the uncertainty set constraint(s) - uncertain_params = config.uncertain_params - for i in range(len(uncertain_params)): - if is_certain_parameter(uncertain_param_index=i, config=config): - # This parameter is effectively certain for this set, can remove it from the uncertainty set - # We do this by fixing it in separation to its nominal value - model.util.uncertain_param_vars[i].fix( - config.nominal_uncertain_param_vals[i] - ) - - return - + # facilitate retrieval later + _, uncertainty_cons, param_var_list, aux_vars = uncertainty_quantification + separation_model.uncertainty.uncertain_param_var_list = param_var_list + separation_model.uncertainty.auxiliary_var_list = aux_vars + separation_model.uncertainty.uncertainty_cons_list = uncertainty_cons -def make_separation_objective_functions(model, config): - """ - Inequality constraints referencing control variables, state variables, or uncertain parameters - must be separated against in separation problem. 
- """ - performance_constraints = [] - for c in model.component_data_objects(Constraint, active=True, descend_into=True): - _vars = ComponentSet(identify_variables(expr=c.expr)) - uncertain_params_in_expr = list( - v for v in model.util.uncertain_param_vars.values() if v in _vars + config.uncertainty_set._add_bounds_on_uncertain_parameters( + uncertain_param_vars=param_var_list, global_solver=config.global_solver + ) + if aux_vars: + aux_var_vals = config.uncertainty_set.compute_auxiliary_uncertain_param_vals( + point=config.nominal_uncertain_param_vals, solver=config.global_solver ) - state_vars_in_expr = list(v for v in model.util.state_vars if v in _vars) - second_stage_variables_in_expr = list( - v for v in model.util.second_stage_variables if v in _vars + for auxvar, auxval in zip(aux_vars, aux_var_vals): + auxvar.set_value(auxval) + + # preprocess uncertain parameters which have been fixed by bounds + # in order to simplify the separation problems + for param_var, nomval in zip(param_var_list, config.nominal_uncertain_param_vals): + bounds_close = math.isclose( + a=param_var.lb, + b=param_var.ub, + rel_tol=PARAM_IS_CERTAIN_REL_TOL, + abs_tol=PARAM_IS_CERTAIN_ABS_TOL, ) - if not c.equality and ( - uncertain_params_in_expr - or state_vars_in_expr - or second_stage_variables_in_expr - ): - # This inequality constraint depends on uncertain parameters therefore it must be separated against - performance_constraints.append(c) - elif not c.equality and not ( - uncertain_params_in_expr - or state_vars_in_expr - or second_stage_variables_in_expr - ): - c.deactivate() # These are x \in X constraints, not active in separation because x is fixed to x* from previous master - model.util.performance_constraints = performance_constraints - model.util.separation_objectives = [] - map_obj_to_constr = ComponentMap() - - for idx, c in enumerate(performance_constraints): - # Separation objective constraints standardized to be MAXIMIZATION of <= constraints - c.deactivate() - if c.upper is not None: - # This is an <= constraint, maximized in separation - obj = Objective(expr=c.body - c.upper, sense=maximize) - map_obj_to_constr[c] = obj - model.add_component("separation_obj_" + str(idx), obj) - model.util.separation_objectives.append(obj) - elif c.lower is not None: - # This is an >= constraint, not supported - raise ValueError( - "All inequality constraints in model must be in standard form (<= RHS)" - ) + if bounds_close: + param_var.fix(nomval) - model.util.map_obj_to_constr = map_obj_to_constr - for obj in model.util.separation_objectives: - obj.deactivate() - return +def construct_separation_problem(model_data): + """ + Construct the separation problem model from the fully preprocessed + working model. + Parameters + ---------- + model_data : model data object + Main model data object. -def make_separation_problem(model_data, config): - """ - Swap out uncertain param Param objects for Vars - Add uncertainty set constraints and separation objectives + Returns + ------- + separation_model : ConcreteModel + Separation problem model. 
""" - separation_model = model_data.original.clone() - separation_model.del_component("coefficient_matching_constraints") - separation_model.del_component("coefficient_matching_constraints_index") + config = model_data.config + separation_model = model_data.working_model.clone() + + # fix/deactivate all nonadjustable components + for var in separation_model.all_nonadjustable_variables: + var.fix() + for fs_eqcon in separation_model.first_stage.equality_cons.values(): + fs_eqcon.deactivate() + for fs_ineqcon in separation_model.first_stage.inequality_cons.values(): + fs_ineqcon.deactivate() + + # add block for the uncertainty set quantification + add_uncertainty_set_constraints(separation_model, config) - uncertain_params = separation_model.util.uncertain_params - separation_model.util.uncertain_param_vars = param_vars = Var( - range(len(uncertain_params)) + # the uncertain params function as decision variables + # in the separation problems. + # note: expression replacement is performed only for + # the active constraints + uncertain_params = separation_model.uncertain_params + uncertain_param_vars = separation_model.uncertainty.uncertain_param_var_list + param_id_to_var_map = { + id(param): var for param, var in zip(uncertain_params, uncertain_param_vars) + } + uncertain_params_set = ComponentSet(uncertain_params) + adjustable_cons = ( + list(separation_model.second_stage.inequality_cons.values()) + + list(separation_model.second_stage.equality_cons.values()) + + list(separation_model.second_stage.decision_rule_eqns.values()) ) - map_new_constraint_list_to_original_con = ComponentMap() - - if config.objective_focus is ObjectiveType.worst_case: - separation_model.util.zeta = Param(initialize=0, mutable=True) - constr = Constraint( - expr=separation_model.first_stage_objective - + separation_model.second_stage_objective - - separation_model.util.zeta - <= 0 + for adjcon in adjustable_cons: + uncertain_params_in_con = ( + ComponentSet(identify_mutable_parameters(adjcon.expr)) + & uncertain_params_set ) - separation_model.add_component("epigraph_constr", constr) - - substitution_map = {} - # Separation problem initialized to nominal uncertain parameter values - for idx, var in enumerate(list(param_vars.values())): - param = uncertain_params[idx] - var.set_value(param.value, skip_validation=True) - substitution_map[id(param)] = var - - separation_model.util.new_constraints = constraints = ConstraintList() - - uncertain_param_set = ComponentSet(uncertain_params) - for c in separation_model.component_data_objects(Constraint): - if any(v in uncertain_param_set for v in identify_mutable_parameters(c.expr)): - if c.equality: - if c in separation_model.util.h_x_q_constraints: - # ensure that constraints subject to - # coefficient matching are not involved in - # separation problem. 
- # keeping them may induce numerical sensitivity - # issues, possibly leading to incorrect result - c.deactivate() - else: - constraints.add( - replace_expressions( - expr=c.lower, substitution_map=substitution_map - ) - == replace_expressions( - expr=c.body, substitution_map=substitution_map - ) - ) - elif c.lower is not None: - constraints.add( - replace_expressions(expr=c.lower, substitution_map=substitution_map) - <= replace_expressions( - expr=c.body, substitution_map=substitution_map - ) - ) - elif c.upper is not None: - constraints.add( - replace_expressions(expr=c.upper, substitution_map=substitution_map) - >= replace_expressions( - expr=c.body, substitution_map=substitution_map - ) - ) - else: - raise ValueError( - "Unable to parse constraint for building the separation problem." - ) - c.deactivate() - map_new_constraint_list_to_original_con[ - constraints[constraints.index_set().last()] - ] = c - - separation_model.util.map_new_constraint_list_to_original_con = ( - map_new_constraint_list_to_original_con - ) - - # === Add objectives first so that the uncertainty set - # Constraints do not get picked up into the set - # of performance constraints which become objectives - make_separation_objective_functions(separation_model, config) - add_uncertainty_set_constraints(separation_model, config) + if uncertain_params_in_con: + adjcon.set_value( + replace_expressions(adjcon.expr, substitution_map=param_id_to_var_map) + ) - # === Deactivate h(x,q) == 0 constraints - for c in separation_model.util.h_x_q_constraints: - c.deactivate() + # second-stage inequality constraint expressions + # become maximization objectives in the separation problems + separation_model.second_stage_ineq_con_to_obj_map = ComponentMap() + ss_ineq_cons = separation_model.second_stage.inequality_cons.values() + for idx, ss_ineq_con in enumerate(ss_ineq_cons): + ss_ineq_con.deactivate() + separation_obj = Objective( + expr=ss_ineq_con.body - ss_ineq_con.upper, sense=maximize + ) + separation_model.add_component(f"separation_obj_{idx}", separation_obj) + separation_model.second_stage_ineq_con_to_obj_map[ss_ineq_con] = separation_obj + separation_obj.deactivate() return separation_model -def get_sep_objective_values(model_data, config, perf_cons): +def get_sep_objective_values(separation_data, ss_ineq_cons): """ - Evaluate performance constraint functions at current + Evaluate second-stage inequality constraint functions at current separation solution. Parameters ---------- - model_data : SeparationProblemData + separation_data : SeparationProblemData Separation problem data. - config : ConfigDict - PyROS solver settings. - perf_cons : list of Constraint - Performance constraints to be evaluated. + ss_ineq_cons : list of Constraint + Second-stage inequality constraints to be evaluated. Returns ------- violations : ComponentMap - Mapping from performance constraints to violation values. + Mapping from second-stage inequality constraints + to violation values. 
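+
+    Raises
+    ------
+    ArithmeticError
+        If evaluation of a constraint expression fails due to
+        a math domain error (e.g. evaluation of log(x) or 1/x
+        outside the function domain).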
""" - con_to_obj_map = model_data.separation_model.util.map_obj_to_constr + config = separation_data.config + con_to_obj_map = separation_data.separation_model.second_stage_ineq_con_to_obj_map violations = ComponentMap() - for perf_con in perf_cons: - obj = con_to_obj_map[perf_con] + user_var_partitioning = separation_data.separation_model.user_var_partitioning + first_stage_variables = user_var_partitioning.first_stage_variables + second_stage_variables = user_var_partitioning.second_stage_variables + + for ss_ineq_con in ss_ineq_cons: + obj = con_to_obj_map[ss_ineq_con] try: - violations[perf_con] = value(obj.expr) + violations[ss_ineq_con] = value(obj.expr) except ValueError: - for v in model_data.separation_model.util.first_stage_variables: + for v in first_stage_variables: config.progress_logger.info(v.name + " " + str(v.value)) - for v in model_data.separation_model.util.second_stage_variables: + for v in second_stage_variables: config.progress_logger.info(v.name + " " + str(v.value)) raise ArithmeticError( - f"Evaluation of performance constraint {perf_con.name} " + f"Evaluation of second-stage inequality constraint {ss_ineq_con.name} " f"(separation objective {obj.name}) " "led to a math domain error. " - "Does the performance constraint expression " + "Does the constraint expression " "contain log(x) or 1/x functions " "or others with tricky domains?" ) @@ -265,41 +207,43 @@ def get_sep_objective_values(model_data, config, perf_cons): return violations -def get_argmax_sum_violations(solver_call_results_map, perf_cons_to_evaluate): +def get_argmax_sum_violations(solver_call_results_map, ss_ineq_cons_to_evaluate): """ Get key of entry of `solver_call_results_map` which contains - separation problem solution with maximal sum of performance - constraint violations over a specified sequence of performance - constraints. + separation problem solution with maximal sum of second-stage + inequality constraint violations over a specified sequence of + second-stage inequality constraints. Parameters ---------- solver_call_results : ComponentMap - Mapping from performance constraints to corresponding + Mapping from second-stage inequality constraints to corresponding separation solver call results. - perf_cons_to_evaluate : list of Constraints - Performance constraints to consider for evaluating + ss_ineq_cons_to_evaluate : list of Constraints + Second-stage inequality constraints to consider for evaluating maximal sum. Returns ------- - worst_perf_con : None or Constraint - Performance constraint corresponding to solver call + worst_ss_ineq_con : None or Constraint + Second-stage inequality constraint corresponding to solver call results object containing solution with maximal sum - of violations across all performance constraints. + of violations across all second-stage inequality constraints. If ``found_violation`` attribute of all value entries of `solver_call_results_map` is False, then `None` is - returned, as this means none of the performance constraints + returned, as this means + none of the second-stage inequality constraints were found to be violated. 
""" - # get indices of performance constraints for which violation found - idx_to_perf_con_map = { - idx: perf_con for idx, perf_con in enumerate(solver_call_results_map) + # get indices of second-stage ineq constraints + # for which violation found + idx_to_ss_ineq_con_map = { + idx: ss_ineq_con for idx, ss_ineq_con in enumerate(solver_call_results_map) } idxs_of_violated_cons = [ idx - for idx, perf_con in idx_to_perf_con_map.items() - if solver_call_results_map[perf_con].found_violation + for idx, ss_ineq_con in idx_to_ss_ineq_con_map.items() + if solver_call_results_map[ss_ineq_con].found_violation ] num_violated_cons = len(idxs_of_violated_cons) @@ -309,7 +253,7 @@ def get_argmax_sum_violations(solver_call_results_map, perf_cons_to_evaluate): # assemble square matrix (2D array) of constraint violations. # matrix size: number of constraints for which violation was found - # each row corresponds to a performance constraint + # each row corresponds to a second-stage inequality constraint # each column corresponds to a separation problem solution violations_arr = np.zeros(shape=(num_violated_cons, num_violated_cons)) idxs_product = product( @@ -319,37 +263,38 @@ def get_argmax_sum_violations(solver_call_results_map, perf_cons_to_evaluate): violations_arr[row_idx, col_idx] = max( 0, ( - # violation of this row's performance constraint + # violation of this row's second-stage inequality con # by this column's separation solution # if separation problems were solved globally, # then diagonal entries should be the largest in each row solver_call_results_map[ - idx_to_perf_con_map[viol_param_idx] - ].scaled_violations[idx_to_perf_con_map[viol_con_idx]] + idx_to_ss_ineq_con_map[viol_param_idx] + ].scaled_violations[idx_to_ss_ineq_con_map[viol_con_idx]] ), ) worst_col_idx = np.argmax(np.sum(violations_arr, axis=0)) - return idx_to_perf_con_map[idxs_of_violated_cons[worst_col_idx]] + return idx_to_ss_ineq_con_map[idxs_of_violated_cons[worst_col_idx]] -def solve_separation_problem(model_data, config): +def solve_separation_problem(separation_data, master_data): """ Solve PyROS separation problems. Parameters ---------- - model_data : SeparationProblemData + separation_data : SeparationProblemData Separation problem data. - config : ConfigDict - PyROS solver settings. + master_data : MasterProblemData + Master problem data. Returns ------- pyros.solve_data.SeparationResults Separation problem solve results. 
""" + config = separation_data.config run_local = not config.bypass_local_separation run_global = config.bypass_local_separation @@ -359,7 +304,9 @@ def solve_separation_problem(model_data, config): if run_local: local_separation_loop_results = perform_separation_loop( - model_data=model_data, config=config, solve_globally=False + separation_data=separation_data, + master_data=master_data, + solve_globally=False, ) run_global = not ( local_separation_loop_results.found_violation @@ -373,7 +320,9 @@ def solve_separation_problem(model_data, config): if run_global: global_separation_loop_results = perform_separation_loop( - model_data=model_data, config=config, solve_globally=True + separation_data=separation_data, + master_data=master_data, + solve_globally=True, ) else: global_separation_loop_results = None @@ -384,106 +333,83 @@ def solve_separation_problem(model_data, config): ) -def evaluate_violations_by_nominal_master(model_data, performance_cons): +def evaluate_violations_by_nominal_master(separation_data, master_data, ss_ineq_cons): """ - Evaluate violation of performance constraints by + Evaluate violation of second-stage inequality constraints by variables in nominal block of most recent master problem. Returns ------- - nom_perf_con_violations : dict - Mapping from performance constraint names + nom_ss_ineq_con_violations : dict + Mapping from second-stage inequality constraint names to floats equal to violations by nominal master problem variables. """ - constraint_map_to_master = ( - model_data.separation_model.util.map_new_constraint_list_to_original_con - ) - - # get deterministic model constraints (include epigraph) - set_of_deterministic_constraints = ( - model_data.separation_model.util.deterministic_constraints - ) - if hasattr(model_data.separation_model, "epigraph_constr"): - set_of_deterministic_constraints.add( - model_data.separation_model.epigraph_constr - ) - nom_perf_con_violations = {} - - for perf_con in performance_cons: - if perf_con in set_of_deterministic_constraints: - nom_constraint = perf_con - else: - nom_constraint = constraint_map_to_master[perf_con] + nom_ss_ineq_con_violations = ComponentMap() + for ss_ineq_con in ss_ineq_cons: nom_violation = value( - model_data.master_nominal_scenario.find_component(nom_constraint) + master_data.master_model.scenarios[0, 0].find_component(ss_ineq_con) ) - nom_perf_con_violations[perf_con] = nom_violation + nom_ss_ineq_con_violations[ss_ineq_con] = nom_violation - return nom_perf_con_violations + return nom_ss_ineq_con_violations -def group_performance_constraints_by_priority(model_data, config): +def group_ss_ineq_constraints_by_priority(separation_data): """ - Group model performance constraints by separation priority. + Group model second-stage inequality constraints + by separation priority. Parameters ---------- - model_data : SeparationProblemData + separation_data : SeparationProblemData Separation problem data. - config : ConfigDict - User-specified PyROS solve options. Returns ------- dict - Mapping from an int to a list of performance constraints + Mapping from an int to a list of second-stage + inequality constraints (Constraint objects), for which the int is equal to the specified priority. Keys are sorted in descending order (i.e. highest priority first). 
""" + ss_ineq_cons = separation_data.separation_model.second_stage.inequality_cons separation_priority_groups = dict() - config_sep_priority_dict = config.separation_priority_order - for perf_con in model_data.separation_model.util.performance_constraints: + for name, ss_ineq_con in ss_ineq_cons.items(): # by default, priority set to 0 - priority = config_sep_priority_dict.get(perf_con.name, 0) + priority = separation_data.separation_priority_order[name] cons_with_same_priority = separation_priority_groups.setdefault(priority, []) - cons_with_same_priority.append(perf_con) + cons_with_same_priority.append(ss_ineq_con) # sort separation priority groups return { - priority: perf_cons - for priority, perf_cons in sorted( + priority: ss_ineq_cons + for priority, ss_ineq_cons in sorted( separation_priority_groups.items(), reverse=True ) } def get_worst_discrete_separation_solution( - performance_constraint, - model_data, - config, - perf_cons_to_evaluate, - discrete_solve_results, + ss_ineq_con, config, ss_ineq_cons_to_evaluate, discrete_solve_results ): """ Determine separation solution (and therefore worst-case uncertain parameter realization) with maximum violation - of specified performance constraint. + of specified second-stage inequality constraint. Parameters ---------- - performance_constraint : Constraint - Performance constraint of interest. - model_data : SeparationProblemData - Separation problem data. + ss_ineq_con : Constraint + Second-stage inequality constraint of interest. config : ConfigDict User-specified PyROS solver settings. - perf_cons_to_evaluate : list of Constraint - Performance constraints for which to report violations - by separation solution. + ss_ineq_cons_to_evaluate : list of Constraint + Second-stage inequality constraints for which to report + violations by separation solution. discrete_solve_results : DiscreteSeparationSolveCallResults Separation problem solutions corresponding to the uncertain parameter scenarios listed in @@ -492,42 +418,43 @@ def get_worst_discrete_separation_solution( Returns ------- SeparationSolveCallResult - Solver call result for performance constraint of interest. + Solver call result for second-stage inequality constraint of interest. 
""" - # violation of specified performance constraint by separation + # violation of specified second-stage inequality + # constraint by separation # problem solutions for all scenarios - violations_of_perf_con = [ - solve_call_res.scaled_violations[performance_constraint] + violations_of_ss_ineq_con = [ + solve_call_res.scaled_violations[ss_ineq_con] for solve_call_res in discrete_solve_results.solver_call_results.values() ] list_of_scenario_idxs = list(discrete_solve_results.solver_call_results.keys()) # determine separation solution for which scaled violation of this - # performance constraint is the worst + # second-stage inequality constraint is the worst worst_case_res = discrete_solve_results.solver_call_results[ - list_of_scenario_idxs[np.argmax(violations_of_perf_con)] + list_of_scenario_idxs[np.argmax(violations_of_ss_ineq_con)] ] - worst_case_violation = np.max(violations_of_perf_con) + worst_case_violation = np.max(violations_of_ss_ineq_con) assert worst_case_violation in worst_case_res.scaled_violations.values() - # evaluate violations for specified performance constraints - eval_perf_con_scaled_violations = ComponentMap( - (perf_con, worst_case_res.scaled_violations[perf_con]) - for perf_con in perf_cons_to_evaluate + # evaluate violations for specified second-stage inequality constraints + eval_ss_ineq_con_scaled_violations = ComponentMap( + (ss_ineq_con, worst_case_res.scaled_violations[ss_ineq_con]) + for ss_ineq_con in ss_ineq_cons_to_evaluate ) # discrete separation solutions were obtained by optimizing - # just one performance constraint, as an efficiency. + # just one second-stage inequality constraint, as an efficiency. # if the constraint passed to this routine is the same as the # constraint used to obtain the solutions, then we bundle # the separation solve call results into a single list. # otherwise, we return an empty list, as we did not need to call - # subsolvers for the other performance constraints - is_optimized_performance_con = ( - performance_constraint is discrete_solve_results.performance_constraint + # subsolvers for the other second-stage inequality constraints + is_optimized_ss_ineq_con = ( + ss_ineq_con is discrete_solve_results.second_stage_ineq_con ) - if is_optimized_performance_con: + if is_optimized_ss_ineq_con: results_list = [ res for solve_call_results in discrete_solve_results.solver_call_results.values() @@ -539,7 +466,7 @@ def get_worst_discrete_separation_solution( return SeparationSolveCallResults( solved_globally=worst_case_res.solved_globally, results_list=results_list, - scaled_violations=eval_perf_con_scaled_violations, + scaled_violations=eval_ss_ineq_con_scaled_violations, violating_param_realization=worst_case_res.violating_param_realization, variable_values=worst_case_res.variable_values, found_violation=(worst_case_violation > config.robust_feasibility_tolerance), @@ -549,11 +476,10 @@ def get_worst_discrete_separation_solution( ) -def get_con_name_repr(separation_model, con, with_orig_name=True, with_obj_name=True): +def get_con_name_repr(separation_model, con, with_obj_name=True): """ - Get string representation of performance constraint - and any other modeling components to which it has - been mapped. + Get string representation of second-stage inequality constraint + and the objective to which it has been mapped. Parameters ---------- @@ -561,15 +487,9 @@ def get_con_name_repr(separation_model, con, with_orig_name=True, with_obj_name= Separation model. 
con : ScalarConstraint or ConstraintData Constraint for which to get the representation. - with_orig_name : bool, optional - If constraint was added during construction of the - separation problem (i.e. if the constraint is a member of - in `separation_model.util.new_constraints`), - include the name of the original constraint from which - `perf_con` was created. with_obj_name : bool, optional Include name of separation model objective to which - constraint is mapped. Applicable only to performance + constraint is mapped. Applicable only to second-stage inequality constraints of the separation problem. Returns @@ -577,37 +497,26 @@ def get_con_name_repr(separation_model, con, with_orig_name=True, with_obj_name= str Constraint name representation. """ - - qual_strs = [] - if with_orig_name: - # check performance constraint was not added - # at construction of separation problem - orig_con = separation_model.util.map_new_constraint_list_to_original_con.get( - con, con - ) - if orig_con is not con: - qual_strs.append(f"originally {orig_con.name!r}") + qual_str = "" if with_obj_name: - objectives_map = separation_model.util.map_obj_to_constr + objectives_map = separation_model.second_stage_ineq_con_to_obj_map separation_obj = objectives_map[con] - qual_strs.append(f"mapped to objective {separation_obj.name!r}") - - final_qual_str = f" ({', '.join(qual_strs)})" if qual_strs else "" + qual_str = f" (mapped to objective {separation_obj.name!r})" - return f"{con.name!r}{final_qual_str}" + return f"{con.index()!r}{qual_str}" -def perform_separation_loop(model_data, config, solve_globally): +def perform_separation_loop(separation_data, master_data, solve_globally): """ Loop through, and solve, PyROS separation problems to desired optimality condition. Parameters ---------- - model_data : SeparationProblemData + separation_data : SeparationProblemData Separation problem data. - config : ConfigDict - PyROS solver settings. + master_data : MasterProblemData + Master problem data. solve_globally : bool True to solve separation problems globally, False to solve separation problems locally. @@ -617,30 +526,31 @@ def perform_separation_loop(model_data, config, solve_globally): pyros.solve_data.SeparationLoopResults Separation problem solve results. 
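+
+    Note
+    ----
+    The constraints are separated in descending order of
+    priority; once a violation is found within a priority
+    group, the remaining (lower-priority) groups are skipped.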
""" - all_performance_constraints = ( - model_data.separation_model.util.performance_constraints + config = separation_data.config + all_ss_ineq_constraints = list( + separation_data.separation_model.second_stage.inequality_cons.values() ) - if not all_performance_constraints: + if not all_ss_ineq_constraints: # robustness certified: no separation problems to solve return SeparationLoopResults( solver_call_results=ComponentMap(), solved_globally=solve_globally, - worst_case_perf_con=None, + worst_case_ss_ineq_con=None, ) # needed for normalizing separation solution constraint violations - model_data.nom_perf_con_violations = evaluate_violations_by_nominal_master( - model_data=model_data, performance_cons=all_performance_constraints - ) - sorted_priority_groups = group_performance_constraints_by_priority( - model_data, config + separation_data.nom_ss_ineq_con_violations = evaluate_violations_by_nominal_master( + separation_data=separation_data, + master_data=master_data, + ss_ineq_cons=all_ss_ineq_constraints, ) + sorted_priority_groups = group_ss_ineq_constraints_by_priority(separation_data) uncertainty_set_is_discrete = ( config.uncertainty_set.geometry == Geometry.DISCRETE_SCENARIOS ) if uncertainty_set_is_discrete: - all_scenarios_exhausted = len(model_data.idxs_of_master_scenarios) == len( + all_scenarios_exhausted = len(separation_data.idxs_of_master_scenarios) == len( config.uncertainty_set.scenarios ) if all_scenarios_exhausted: @@ -649,22 +559,22 @@ def perform_separation_loop(model_data, config, solve_globally): return SeparationLoopResults( solver_call_results=ComponentMap(), solved_globally=solve_globally, - worst_case_perf_con=None, + worst_case_ss_ineq_con=None, all_discrete_scenarios_exhausted=True, ) - perf_con_to_maximize = sorted_priority_groups[ + ss_ineq_con_to_maximize = sorted_priority_groups[ max(sorted_priority_groups.keys()) ][0] # efficiency: evaluate all separation problem solutions in # advance of entering loop discrete_sep_results = discrete_solve( - model_data=model_data, - config=config, + separation_data=separation_data, + master_data=master_data, solve_globally=solve_globally, - perf_con_to_maximize=perf_con_to_maximize, - perf_cons_to_evaluate=all_performance_constraints, + ss_ineq_con_to_maximize=ss_ineq_con_to_maximize, + ss_ineq_cons_to_evaluate=all_ss_ineq_constraints, ) termination_not_ok = ( @@ -677,7 +587,7 @@ def perform_separation_loop(model_data, config, solve_globally): for solve_call_results in discrete_sep_results.solver_call_results.values() for res in solve_call_results.results_list ] - single_solver_call_res[perf_con_to_maximize] = ( + single_solver_call_res[ss_ineq_con_to_maximize] = ( # not the neatest assembly, # but should maintain accuracy of total solve times # and overall outcome @@ -691,46 +601,46 @@ def perform_separation_loop(model_data, config, solve_globally): return SeparationLoopResults( solver_call_results=single_solver_call_res, solved_globally=solve_globally, - worst_case_perf_con=None, + worst_case_ss_ineq_con=None, ) all_solve_call_results = ComponentMap() priority_groups_enum = enumerate(sorted_priority_groups.items()) - for group_idx, (priority, perf_constraints) in priority_groups_enum: + for group_idx, (priority, ss_ineq_constraints) in priority_groups_enum: priority_group_solve_call_results = ComponentMap() - for idx, perf_con in enumerate(perf_constraints): + for idx, ss_ineq_con in enumerate(ss_ineq_constraints): # log progress of separation loop solve_adverb = "Globally" if solve_globally else "Locally" 
config.progress_logger.debug( - f"{solve_adverb} separating performance constraint " - f"{get_con_name_repr(model_data.separation_model, perf_con)} " + f"{solve_adverb} separating second-stage inequality constraint " + f"{get_con_name_repr(separation_data.separation_model, ss_ineq_con)} " f"(priority {priority}, priority group {group_idx + 1} of " f"{len(sorted_priority_groups)}, " - f"constraint {idx + 1} of {len(perf_constraints)} " + f"constraint {idx + 1} of {len(ss_ineq_constraints)} " "in priority group, " f"{len(all_solve_call_results) + idx + 1} of " - f"{len(all_performance_constraints)} total)" + f"{len(all_ss_ineq_constraints)} total)" ) - # solve separation problem for this performance constraint + # solve separation problem for + # this second-stage inequality constraint if uncertainty_set_is_discrete: solve_call_results = get_worst_discrete_separation_solution( - performance_constraint=perf_con, - model_data=model_data, + ss_ineq_con=ss_ineq_con, config=config, - perf_cons_to_evaluate=all_performance_constraints, + ss_ineq_cons_to_evaluate=all_ss_ineq_constraints, discrete_solve_results=discrete_sep_results, ) else: solve_call_results = solver_call_separation( - model_data=model_data, - config=config, + separation_data=separation_data, + master_data=master_data, solve_globally=solve_globally, - perf_con_to_maximize=perf_con, - perf_cons_to_evaluate=all_performance_constraints, + ss_ineq_con_to_maximize=ss_ineq_con, + ss_ineq_cons_to_evaluate=all_ss_ineq_constraints, ) - priority_group_solve_call_results[perf_con] = solve_call_results + priority_group_solve_call_results[ss_ineq_con] = solve_call_results termination_not_ok = ( solve_call_results.time_out or solve_call_results.subsolver_error @@ -740,29 +650,29 @@ def perform_separation_loop(model_data, config, solve_globally): return SeparationLoopResults( solver_call_results=all_solve_call_results, solved_globally=solve_globally, - worst_case_perf_con=None, + worst_case_ss_ineq_con=None, ) all_solve_call_results.update(priority_group_solve_call_results) # there may be multiple separation problem solutions - # found to have violated a performance constraint. + # found to have violated a second-stage inequality constraint. 
# we choose just one for master problem of next iteration
-        worst_case_perf_con = get_argmax_sum_violations(
+        worst_case_ss_ineq_con = get_argmax_sum_violations(
            solver_call_results_map=all_solve_call_results,
-            perf_cons_to_evaluate=perf_constraints,
+            ss_ineq_cons_to_evaluate=ss_ineq_constraints,
        )
-        if worst_case_perf_con is not None:
+        if worst_case_ss_ineq_con is not None:
            # take note of chosen separation solution
-            worst_case_res = all_solve_call_results[worst_case_perf_con]
+            worst_case_res = all_solve_call_results[worst_case_ss_ineq_con]
            if uncertainty_set_is_discrete:
-                model_data.idxs_of_master_scenarios.append(
+                separation_data.idxs_of_master_scenarios.append(
                    worst_case_res.discrete_set_scenario_index
                )

            # auxiliary log messages
            violated_con_names = "\n ".join(
-                get_con_name_repr(model_data.separation_model, con)
+                get_con_name_repr(separation_data.separation_model, con)
                for con, res in all_solve_call_results.items()
                if res.found_violation
            )
@@ -771,13 +681,13 @@
            )
            config.progress_logger.debug(
                "Worst-case constraint: "
-                f"{get_con_name_repr(model_data.separation_model, worst_case_perf_con)} "
+                f"{get_con_name_repr(separation_data.separation_model, worst_case_ss_ineq_con)} "
                "under realization "
                f"{worst_case_res.violating_param_realization}."
            )
            config.progress_logger.debug(
                f"Maximal scaled violation "
-                f"{worst_case_res.scaled_violations[worst_case_perf_con]} "
+                f"{worst_case_res.scaled_violations[worst_case_ss_ineq_con]} "
                "from this constraint "
                "exceeds the robust feasibility tolerance "
                f"{config.robust_feasibility_tolerance}"
@@ -787,17 +697,19 @@
            # exit loop
            break
        else:
-            config.progress_logger.debug("No violated performance constraints found.")
+            config.progress_logger.debug(
+                "No violated second-stage inequality constraints found."
+            )

    return SeparationLoopResults(
        solver_call_results=all_solve_call_results,
        solved_globally=solve_globally,
-        worst_case_perf_con=worst_case_perf_con,
+        worst_case_ss_ineq_con=worst_case_ss_ineq_con,
    )


-def evaluate_performance_constraint_violations(
-    model_data, config, perf_con_to_maximize, perf_cons_to_evaluate
+def evaluate_ss_ineq_con_violations(
+    separation_data, ss_ineq_con_to_maximize, ss_ineq_cons_to_evaluate
):
    """
    Evaluate the inequality constraint function violations
@@ -809,12 +721,13 @@

    Parameters
    ----------
-    model_data : SeparationProblemData
+    separation_data : SeparationProblemData
        Object containing the separation model.
-    config : ConfigDict
-        PyROS solver settings.
-    perf_cons_to_evaluate : list of Constraint
-        Performance constraints whose expressions are to
+    ss_ineq_con_to_maximize : ConstraintData
+        Second-stage inequality constraint
+        to which the current solution is mapped.
+    ss_ineq_cons_to_evaluate : list of Constraint
+        Second-stage inequality constraints whose expressions are to
        be evaluated at the current separation
        problem solution.
        Exactly one of these constraints should be mapped
        to an active separation model Objective.

    Returns
    -------
    violating_param_realization : list of float
        Uncertain parameter realization corresponding to maximum
        constraint violation.
    scaled_violations : ComponentMap
-        Mapping from performance constraints to be evaluated
+        Mapping from second-stage inequality constraints to be evaluated
        to their violations by the separation problem solution.
    constraint_violated : bool
-        True if performance constraint mapped to active
+        True if second-stage inequality constraint mapped to active
        separation model Objective is violated (beyond tolerance),
        False otherwise.

    Raises
    ------
    ValueError
-        If `perf_cons_to_evaluate` does not contain exactly
+        If `ss_ineq_cons_to_evaluate` does not contain exactly
        1 entry which can be mapped to an active
        Objective of ``separation_data.separation_model``.
    """
+    config = separation_data.config
+
    # parameter realization for current separation problem solution
+    uncertain_param_vars = (
+        separation_data.separation_model.uncertainty.uncertain_param_var_list
+    )
    violating_param_realization = list(
-        param.value
-        for param in model_data.separation_model.util.uncertain_param_vars.values()
+        param_var.value for param_var in uncertain_param_vars
    )

-    # evaluate violations for all performance constraints provided
+    # evaluate violations for all second-stage inequality
+    # constraints provided
    violations_by_sep_solution = get_sep_objective_values(
-        model_data=model_data, config=config, perf_cons=perf_cons_to_evaluate
+        separation_data=separation_data, ss_ineq_cons=ss_ineq_cons_to_evaluate
    )

    # normalize constraint violation: i.e. divide by
    # absolute value of constraint expression evaluated at
    # nominal master solution (if expression value is large enough)
    scaled_violations = ComponentMap()
-    for perf_con, sep_sol_violation in violations_by_sep_solution.items():
+    for ss_ineq_con, sep_sol_violation in violations_by_sep_solution.items():
        scaled_violation = sep_sol_violation / max(
-            1, abs(model_data.nom_perf_con_violations[perf_con])
+            1, abs(separation_data.nom_ss_ineq_con_violations[ss_ineq_con])
        )
-        scaled_violations[perf_con] = scaled_violation
-        if perf_con is perf_con_to_maximize:
+        scaled_violations[ss_ineq_con] = scaled_violation
+        if ss_ineq_con is ss_ineq_con_to_maximize:
            scaled_active_obj_violation = scaled_violation

    constraint_violated = (
@@ -870,127 +788,83 @@
    return (violating_param_realization, scaled_violations, constraint_violated)


-def initialize_separation(perf_con_to_maximize, model_data, config):
+def initialize_separation(ss_ineq_con_to_maximize, separation_data, master_data):
    """
-    Initialize separation problem variables, and fix all first-stage
-    variables to their corresponding values from most recent
-    master problem solution.
+    Initialize separation problem variables using the solution
+    to the most recent master problem.

    Parameters
    ----------
-    perf_con_to_maximize : ConstraintData
-        Performance constraint whose violation is to be maximized
+    ss_ineq_con_to_maximize : ConstraintData
+        Second-stage inequality constraint
+        whose violation is to be maximized
        for the separation problem of interest.
-    model_data : SeparationProblemData
+    separation_data : SeparationProblemData
        Separation problem data.
-    config : ConfigDict
-        PyROS solver settings.
+    master_data : MasterProblemData
+        Master problem data.

    Note
    ----
-    If a static DR policy is used, then all second-stage variables
-    are fixed and the decision rule equations are deactivated.
-
    The point to which the separation model is initialized should,
    in general, be feasible, provided the set does not have
    a discrete geometry (as there is no master model block
    corresponding to any of the remaining discrete scenarios
    against which we
-    separate).
-
-    This method assumes that the master model has only one block
-    per iteration.
+    separate).
If the uncertainty set constraints involve + auxiliary variables, then some uncertainty set constraints + may be violated. """ + config = separation_data.config + master_model = master_data.master_model + sep_model = separation_data.separation_model - def eval_master_violation(block_idx): + def eval_master_violation(scenario_idx): """ - Evaluate violation of `perf_con` by variables of + Evaluate violation of `ss_ineq_con` by variables of specified master block. """ - new_con_map = ( - model_data.separation_model.util.map_new_constraint_list_to_original_con - ) - in_new_cons = perf_con_to_maximize in new_con_map - if in_new_cons: - sep_con = new_con_map[perf_con_to_maximize] - else: - sep_con = perf_con_to_maximize - master_con = model_data.master_model.scenarios[block_idx, 0].find_component( - sep_con + master_con = master_model.scenarios[scenario_idx].find_component( + ss_ineq_con_to_maximize ) return value(master_con) # initialize from master block with max violation of the - # performance constraint of interest. This gives the best known + # second-stage ineq constraint of interest. Gives the best known # feasible solution (for case of non-discrete uncertainty sets). - block_num = max(range(model_data.iteration + 1), key=eval_master_violation) - - master_blk = model_data.master_model.scenarios[block_num, 0] - master_blks = list(model_data.master_model.scenarios.values()) - fsv_set = ComponentSet(master_blk.util.first_stage_variables) - sep_model = model_data.separation_model - - def get_parent_master_blk(var): - """ - Determine the master model scenario block of which - a given variable is a child component (or descendant). - """ - parent = var.parent_block() - while parent not in master_blks: - parent = parent.parent_block() - return parent - - for master_var in master_blk.component_data_objects(Var, active=True): - # parent block of the variable need not be `master_blk` - # (e.g. 
for first stage and decision rule variables, it
-        #     may be the nominal block)
-        parent_master_blk = get_parent_master_blk(master_var)
-        sep_var_name = master_var.getname(
-            relative_to=parent_master_blk, fully_qualified=True
-        )
-
-        # initialize separation problem var to value from master block
-        sep_var = sep_model.find_component(sep_var_name)
+    worst_master_block_idx = max(
+        master_model.scenarios.keys(), key=eval_master_violation
+    )
+    worst_case_master_blk = master_model.scenarios[worst_master_block_idx]
+    for sep_var in sep_model.all_variables:
+        master_var = worst_case_master_blk.find_component(sep_var)
        sep_var.set_value(value(master_var, exception=False))

-        # fix first-stage variables (including decision rule vars)
-        if master_var in fsv_set:
-            sep_var.fix()
-
-    # initialize uncertain parameter variables to most recent
-    # point added to master
+    # for discrete uncertainty sets, the uncertain parameters
+    # have already been addressed
    if config.uncertainty_set.geometry != Geometry.DISCRETE_SCENARIOS:
-        param_vars = sep_model.util.uncertain_param_vars
-        latest_param_values = model_data.points_added_to_master[block_num]
-        for param_var, val in zip(param_vars.values(), latest_param_values):
+        param_vars = sep_model.uncertainty.uncertain_param_var_list
+        param_values = separation_data.points_added_to_master[worst_master_block_idx]
+        for param_var, val in zip(param_vars, param_values):
            param_var.set_value(val)

-    # if static approximation, fix second-stage variables
-    # and deactivate the decision rule equations
-    for c in model_data.separation_model.util.second_stage_variables:
-        if config.decision_rule_order != 0:
-            c.unfix()
-        else:
-            c.fix()
-    if config.decision_rule_order == 0:
-        for v in model_data.separation_model.util.decision_rule_eqns:
-            v.deactivate()
-        for v in model_data.separation_model.util.decision_rule_vars:
-            v.fix()
-
-    if any(c.active for c in model_data.separation_model.util.h_x_q_constraints):
-        raise AttributeError(
-            "All h(x,q) type constraints must be deactivated in separation."
-        )
+        aux_param_vars = sep_model.uncertainty.auxiliary_var_list
+        aux_param_values = separation_data.auxiliary_values_for_master_points[
+            worst_master_block_idx
+        ]
+        for aux_param_var, aux_val in zip(aux_param_vars, aux_param_values):
+            aux_param_var.set_value(aux_val)

    # confirm the initial point is feasible for cases where
    # we expect it to be (i.e. non-discrete uncertainty sets).
    # otherwise, log the violated constraints
+    # NOTE: some uncertainty set constraints may be violated
+    # at the initial point if there are auxiliary variables
+    # (e.g. factor model, cardinality sets).
+ # revisit initialization of auxiliary uncertainty set + # variables later tol = ABS_CON_CHECK_FEAS_TOL - perf_con_name_repr = get_con_name_repr( - separation_model=model_data.separation_model, - con=perf_con_to_maximize, - with_orig_name=True, - with_obj_name=True, + ss_ineq_con_name_repr = get_con_name_repr( + separation_model=sep_model, con=ss_ineq_con_to_maximize, with_obj_name=True ) uncertainty_set_is_discrete = ( config.uncertainty_set.geometry is Geometry.DISCRETE_SCENARIOS @@ -999,14 +873,11 @@ def get_parent_master_blk(var): lslack, uslack = con.lslack(), con.uslack() if (lslack < -tol or uslack < -tol) and not uncertainty_set_is_discrete: con_name_repr = get_con_name_repr( - separation_model=model_data.separation_model, - con=con, - with_orig_name=True, - with_obj_name=False, + separation_model=sep_model, con=con, with_obj_name=False ) config.progress_logger.debug( - f"Initial point for separation of performance constraint " - f"{perf_con_name_repr} violates the model constraint " + f"Initial point for separation of second-stage ineq constraint " + f"{ss_ineq_con_name_repr} violates the model constraint " f"{con_name_repr} by more than {tol}. " f"(lslack={con.lslack()}, uslack={con.uslack()})" ) @@ -1017,25 +888,30 @@ def get_parent_master_blk(var): def solver_call_separation( - model_data, config, solve_globally, perf_con_to_maximize, perf_cons_to_evaluate + separation_data, + master_data, + solve_globally, + ss_ineq_con_to_maximize, + ss_ineq_cons_to_evaluate, ): """ Invoke subordinate solver(s) on separation problem. Parameters ---------- - model_data : SeparationProblemData + separation_data : SeparationProblemData Separation problem data. - config : ConfigDict - PyROS solver settings. + master_data : MasterProblemData + Master problem data. solve_globally : bool True to solve separation problems globally, False to solve locally. - perf_con_to_maximize : Constraint - Performance constraint for which to solve separation problem. + ss_ineq_con_to_maximize : Constraint + Second-stage inequality constraint + for which to solve separation problem. Informs the objective (constraint violation) to maximize. - perf_cons_to_evaluate : list of Constraint - Performance constraints whose expressions are to be + ss_ineq_cons_to_evaluate : list of Constraint + Second-stage inequality constraints whose expressions are to be evaluated at the separation problem solution obtained. @@ -1044,34 +920,30 @@ def solver_call_separation( solve_call_results : pyros.solve_data.SeparationSolveCallResults Solve results for separation problem of interest. 
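+
+    Note
+    ----
+    The subordinate solvers (the primary solver, followed by
+    any backups) are tried in sequence until one terminates
+    with an acceptable condition; the results of all attempted
+    solves are accumulated in the `results_list` attribute of
+    the returned object.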
""" - # objective corresponding to specified performance constraint - objectives_map = model_data.separation_model.util.map_obj_to_constr - separation_obj = objectives_map[perf_con_to_maximize] - - if solve_globally: - solvers = [config.global_solver] + config.backup_global_solvers - else: - solvers = [config.local_solver] + config.backup_local_solvers - - # keep track of solver statuses for output logging - solver_status_dict = {} - nlp_model = model_data.separation_model + config = separation_data.config + # prepare the problem + separation_model = separation_data.separation_model + objectives_map = separation_data.separation_model.second_stage_ineq_con_to_obj_map + separation_obj = objectives_map[ss_ineq_con_to_maximize] + initialize_separation(ss_ineq_con_to_maximize, separation_data, master_data) + separation_obj.activate() - # get name of constraint for loggers + # get name (index) of constraint for loggers con_name_repr = get_con_name_repr( - separation_model=nlp_model, - con=perf_con_to_maximize, - with_orig_name=True, + separation_model=separation_model, + con=ss_ineq_con_to_maximize, with_obj_name=True, ) - solve_mode = "global" if solve_globally else "local" - - # === Initialize separation problem; fix first-stage variables - initialize_separation(perf_con_to_maximize, model_data, config) - - separation_obj.activate() + # keep track of solver statuses for output logging + solve_mode = "global" if solve_globally else "local" + solver_status_dict = {} + if solve_globally: + solvers = [config.global_solver] + config.backup_global_solvers + else: + solvers = [config.local_solver] + config.backup_local_solvers solve_mode_adverb = "globally" if solve_globally else "locally" + solve_call_results = SeparationSolveCallResults( solved_globally=solve_globally, time_out=False, @@ -1084,20 +956,20 @@ def solver_call_separation( config.progress_logger.warning( f"Invoking backup solver {opt!r} " f"(solver {idx + 1} of {len(solvers)}) for {solve_mode} " - f"separation of performance constraint {con_name_repr} " - f"in iteration {model_data.iteration}." + f"separation of second-stage inequality constraint {con_name_repr} " + f"in iteration {separation_data.iteration}." ) results = call_solver( - model=nlp_model, + model=separation_model, solver=opt, config=config, - timing_obj=model_data.timing, + timing_obj=separation_data.timing, timer_name=f"main.{solve_mode}_separation", err_msg=( f"Optimizer {repr(opt)} ({idx + 1} of {len(solvers)}) " f"encountered exception attempting " f"to {solve_mode_adverb} solve separation problem for constraint " - f"{con_name_repr} in iteration {model_data.iteration}." + f"{con_name_repr} in iteration {separation_data.iteration}." ), ) @@ -1106,12 +978,10 @@ def solver_call_separation( solve_call_results.results_list.append(results) # has PyROS time limit been reached? 
- elapsed = get_main_elapsed_time(model_data.timing) - if config.time_limit: - if elapsed >= config.time_limit: - solve_call_results.time_out = True - separation_obj.deactivate() - return solve_call_results + if check_time_limit_reached(separation_data.timing, config): + solve_call_results.time_out = True + separation_obj.deactivate() + return solve_call_results # if separation problem solved to optimality, record results # and exit @@ -1122,13 +992,11 @@ def solver_call_separation( acceptable_conditions ) if optimal_termination: - nlp_model.solutions.load_from(results) + separation_model.solutions.load_from(results) # record second-stage and state variable values solve_call_results.variable_values = ComponentMap() - for var in nlp_model.util.second_stage_variables: - solve_call_results.variable_values[var] = value(var) - for var in nlp_model.util.state_vars: + for var in separation_model.all_adjustable_variables: solve_call_results.variable_values[var] = value(var) # record uncertain parameter realization @@ -1137,9 +1005,15 @@ def solver_call_separation( solve_call_results.violating_param_realization, solve_call_results.scaled_violations, solve_call_results.found_violation, - ) = evaluate_performance_constraint_violations( - model_data, config, perf_con_to_maximize, perf_cons_to_evaluate + ) = evaluate_ss_ineq_con_violations( + separation_data=separation_data, + ss_ineq_con_to_maximize=ss_ineq_con_to_maximize, + ss_ineq_cons_to_evaluate=ss_ineq_cons_to_evaluate, ) + solve_call_results.auxiliary_param_values = [ + auxvar.value + for auxvar in separation_model.uncertainty.auxiliary_var_list + ] separation_obj.deactivate() @@ -1147,9 +1021,9 @@ def solver_call_separation( else: config.progress_logger.debug( f"Solver {opt} ({idx + 1} of {len(solvers)}) " - f"failed for {solve_mode} separation of performance " + f"failed for {solve_mode} separation of second-stage inequality " f"constraint {con_name_repr} in iteration " - f"{model_data.iteration}. Termination condition: " + f"{separation_data.iteration}. Termination condition: " f"{results.solver.termination_condition!r}." ) config.progress_logger.debug(f"Results:\n{results.solver}") @@ -1167,15 +1041,15 @@ def solver_call_separation( ( config.uncertainty_set.type + "_" - + nlp_model.name + + separation_model.name + "_separation_" - + str(model_data.iteration) + + str(separation_data.iteration) + "_obj_" + objective + ".bar" ), ) - nlp_model.write( + separation_model.write( output_problem_path, io_options={'symbolic_solver_labels': True} ) serialization_msg = ( @@ -1184,8 +1058,8 @@ def solver_call_separation( ) solve_call_results.message = ( "Could not successfully solve separation problem of iteration " - f"{model_data.iteration} " - f"for performance constraint {con_name_repr} with any of the " + f"{separation_data.iteration} " + f"for second-stage inequality constraint {con_name_repr} with any of the " f"provided subordinate {solve_mode} optimizers. " f"(Termination statuses: " f"{[str(term_cond) for term_cond in solver_status_dict.values()]}.)" @@ -1199,7 +1073,11 @@ def solver_call_separation( def discrete_solve( - model_data, config, solve_globally, perf_con_to_maximize, perf_cons_to_evaluate + separation_data, + master_data, + solve_globally, + ss_ineq_con_to_maximize, + ss_ineq_cons_to_evaluate, ): """ Obtain separation problem solution for each scenario @@ -1208,27 +1086,27 @@ def discrete_solve( Parameters ---------- - model_data : SeparationProblemData + separation_data : SeparationProblemData Separation problem data. 
-    config : ConfigDict
-        PyROS solver settings.
+    master_data : MasterProblemData
+        Master problem data.
-    solver : solver type
-        Primary subordinate optimizer with which to solve
-        the model.
    solve_globally : bool
        True if the separation problem is to be solved globally.
-    perf_con_to_maximize : Constraint
-        Performance constraint for which to solve separation
+    ss_ineq_con_to_maximize : Constraint
+        Second-stage inequality constraint for which to solve separation
        problem.
-    perf_cons_to_evaluate : list of Constraint
-        Performance constraints whose expressions are to be
+    ss_ineq_cons_to_evaluate : list of Constraint
+        Second-stage inequality constraints whose expressions are to be
        evaluated at each of the separation problem
        solutions obtained.

    Returns
    -------
    discrete_separation_results : DiscreteSeparationSolveCallResults
-        Separation solver call results on performance constraint
+        Separation solver call results on second-stage inequality constraint
        of interest for every scenario considered.

    Notes
    -----
    Since we assume that models passed to PyROS are such that the DOF
    variables and uncertain parameter values uniquely define the state
    variables, this method need only be invoked once per separation
    loop. Subject to our assumption, the choice of objective
-    (``perf_con_to_maximize``) should not affect the solutions returned
-    beyond subsolver tolerances. For other performance constraints, the
+    (``ss_ineq_con_to_maximize``) should not affect the solutions returned
+    beyond subsolver tolerances.
+    For other second-stage inequality constraints, the
    optimal separation problem solution can then be evaluated
    by simple enumeration of the solutions returned by this function,
    since for discrete uncertainty sets, the number of feasible
    separation solutions is, under our assumption, merely equal to the
    number of scenarios in the uncertainty set.
    """
+    config = separation_data.config

-    # Ensure uncertainty set constraints deactivated
-    model_data.separation_model.util.uncertainty_set_constraint.deactivate()
    uncertain_param_vars = list(
-        model_data.separation_model.util.uncertain_param_vars.values()
+        separation_data.separation_model.uncertainty.uncertain_param_var_list
    )

    # skip scenarios already added to most recent master problem
-    master_scenario_idxs = model_data.idxs_of_master_scenarios
+    master_scenario_idxs = separation_data.idxs_of_master_scenarios
    scenario_idxs_to_separate = [
        idx
        for idx, _ in enumerate(config.uncertainty_set.scenarios)
@@ -1270,11 +1148,11 @@
        # obtain separation problem solution
        solve_call_results = solver_call_separation(
-            model_data=model_data,
-            config=config,
+            separation_data=separation_data,
+            master_data=master_data,
            solve_globally=solve_globally,
-            perf_con_to_maximize=perf_con_to_maximize,
-            perf_cons_to_evaluate=perf_cons_to_evaluate,
+            ss_ineq_con_to_maximize=ss_ineq_con_to_maximize,
+            ss_ineq_cons_to_evaluate=ss_ineq_cons_to_evaluate,
        )
        solve_call_results.discrete_set_scenario_index = scenario_idx
        solve_call_results_dict[scenario_idx] = solve_call_results
@@ -1289,5 +1167,82 @@
    return DiscreteSeparationSolveCallResults(
        solved_globally=solve_globally,
        solver_call_results=solve_call_results_dict,
-        performance_constraint=perf_con_to_maximize,
+        second_stage_ineq_con=ss_ineq_con_to_maximize,
    )
+
+
+class SeparationProblemData:
+    """
+    Container for objects related to the PyROS separation problem.
+
+    Parameters
+    ----------
+    model_data : ModelData
+        PyROS model data object, equipped with the
+        fully preprocessed working model.
+ + Attributes + ---------- + separation_model : BlockData + Separation problem model object. + timing : TimingData + Main timer for the current problem being solved. + config : ConfigDict + PyROS solver options. + separation_priority_order : dict + Standardized/preprocessed mapping from names of the + second-stage inequality constraint objects to integers + specifying their priorities. + iteration : int + Index of the current PyROS cutting set iteration. + points_added_to_master : dict + Maps each scenario index (2-tuple of ints) of the + master problem model object to the corresponding + uncertain parameter realization. + auxiliary_values_for_master_points : dict + Maps each scenario index (2-tuple of ints) of the + master problem model object to the auxiliary parameter + values corresponding to the associated uncertain parameter + realization. + idxs_of_master_scenarios : None or list of int + If ``config.uncertainty_set`` is of type + :class:`~pyomo.contrib.pyros.uncertainty_sets.DiscreteScenarioSet`, + then this attribute is a list + of ints, each entry of which is a list index for + an entry in the ``scenarios`` attribute of the + uncertainty set. Otherwise, this attribute is set to None. + """ + + def __init__(self, model_data): + """Initialize self (see class docstring).""" + self.separation_model = construct_separation_problem(model_data) + self.timing = model_data.timing + self.separation_priority_order = model_data.separation_priority_order.copy() + self.iteration = 0 + + config = model_data.config + self.config = config + self.points_added_to_master = {(0, 0): config.nominal_uncertain_param_vals} + self.auxiliary_values_for_master_points = { + (0, 0): [ + # auxiliary variable values for nominal point have already + # been computed and loaded into separation model + aux_var.value + for aux_var in self.separation_model.uncertainty.auxiliary_var_list + ] + } + + if config.uncertainty_set.geometry == Geometry.DISCRETE_SCENARIOS: + self.idxs_of_master_scenarios = [ + config.uncertainty_set.scenarios.index( + tuple(config.nominal_uncertain_param_vals) + ) + ] + else: + self.idxs_of_master_scenarios = None + + def solve_separation(self, master_data): + """ + Solve the separation problem. + """ + return solve_separation_problem(self, master_data) diff --git a/pyomo/contrib/pyros/solve_data.py b/pyomo/contrib/pyros/solve_data.py index a1667d88781..db403212eb4 100644 --- a/pyomo/contrib/pyros/solve_data.py +++ b/pyomo/contrib/pyros/solve_data.py @@ -10,7 +10,7 @@ # ___________________________________________________________________________ """ -Objects to contain all model data and solve results for the ROSolver +Containers for PyROS subproblem solve results. """ @@ -33,15 +33,19 @@ class ROSolveResults(object): Attributes ---------- - config : ConfigDict, optional + config : ConfigDict User-specified solver settings. - iterations : int, optional + iterations : int Number of iterations required by PyROS. - time : float, optional + time : float Total elapsed time (or wall time), in seconds. - final_objective_value : float, optional + final_objective_value : float Final objective function value to report. - pyros_termination_condition : pyros.util.pyrosTerminationStatus + If a nominal objective focus was elected, then the + value of the nominal objective function is reported. + If a worst-case objective focus was elected, then + the value of the worst-case objective function is reported. + pyros_termination_condition : pyrosTerminationCondition Indicator of the manner of termination. 
""" @@ -83,78 +87,39 @@ def __str__(self): return "\n".join(lines) -class MasterProblemData(object): +class MasterResults: """ - Container for the grcs master problem + Result of solving the master problem in a single PyROS iteration. Attributes ---------- - master_model : BlockData - master problem model object - - base_model : BlockData - block representing the original model object - - iteration : int - current iteration of the algorithm - - """ - - -class SeparationProblemData(object): - """Container for the grcs separation problem - - Attributes - ---------- - separation_model : BlockData - separation problem model object - - points_added_to_master : List[] - list of parameter violations added to the master problem over - the course of the algorithm - - separation_problem_subsolver_statuses : List[] - list of subordinate sub-solver statuses throughout separations - - total_global_separation_solvers : int - Counter for number of times global solvers were employed in separation - - constraint_violations : List[] - List of constraint violations identified in separation - + master_model : ConcreteModel + Master model. + feasibility_problem_results : SolverResults + Feasibility problem subsolver results. + master_results_list : list of SolverResults + List of subsolver results for the master problem. + pyros_termination_condition : None or pyrosTerminationCondition + PyROS termination status established via solution of + the master problem. + If `None`, then no termination status has been established. """ - pass - - -class MasterResult(object): - """Data class for master problem results data. - - Attributes - ---------- - termination_condition : - Solver termination condition - - fsv_values : List[] - list of design variable values - - ssv_values : List[] - list of control variable values - - first_stage_objective : float - objective contribution due to first-stage degrees of freedom - - second_stage_objective : float - objective contribution due to second-stage degrees of freedom - - grcs_termination_condition : - the conditions under which the grcs terminated (max_iter, - robust_optimal, error) - - pyomo_results : - results object from solve() statement - - """ + def __init__( + self, + master_model=None, + feasibility_problem_results=None, + master_results_list=None, + pyros_termination_condition=None, + ): + """Initialize self (see class docstring).""" + self.master_model = master_model + self.feasibility_problem_results = feasibility_problem_results + if master_results_list is None: + self.master_results_list = [] + else: + self.master_results_list = list(master_results_list) + self.pyros_termination_condition = pyros_termination_condition class SeparationSolveCallResults: @@ -179,19 +144,22 @@ class SeparationSolveCallResults: subordinate local/global solvers provided (including backup) and the number of scenarios in the uncertainty set. scaled_violations : ComponentMap, optional - Mapping from performance constraints to floats equal + Mapping from second-stage inequality constraints to floats equal to their scaled violations by separation problem solution stored in this result. violating_param_realization : list of float, optional Uncertain parameter realization for reported separation problem solution. + auxiliary_param_values : list of float, optional + Auxiliary parameter values corresponding to the + uncertain parameter realization `violating_param_realization`. 
variable_values : ComponentMap, optional Second-stage DOF and state variable values for reported separation problem solution. found_violation : bool, optional - True if violation of performance constraint (i.e. constraint - expression value) by reported separation solution was found to - exceed tolerance, False otherwise. + True if violation of second-stage inequality constraint + (i.e. constraint expression value) by reported separation + solution was found to exceed tolerance, False otherwise. time_out : bool, optional True if PyROS time limit reached attempting to solve the separation problem, False otherwise. @@ -210,6 +178,7 @@ class SeparationSolveCallResults: results_list scaled_violations violating_param_realizations + auxiliary_param_values variable_values found_violation time_out @@ -223,6 +192,7 @@ def __init__( results_list=None, scaled_violations=None, violating_param_realization=None, + auxiliary_param_values=None, variable_values=None, found_violation=None, time_out=None, @@ -234,6 +204,7 @@ def __init__( self.solved_globally = solved_globally self.scaled_violations = scaled_violations self.violating_param_realization = violating_param_realization + self.auxiliary_param_values = auxiliary_param_values self.variable_values = variable_values self.found_violation = found_violation self.time_out = time_out @@ -260,31 +231,6 @@ def termination_acceptable(self, acceptable_terminations): for res in self.results_list ) - def evaluate_total_solve_time(self, evaluator_func, **evaluator_func_kwargs): - """ - Evaluate total time required by subordinate solvers - for separation problem of interest, according to Pyomo - ``SolverResults`` objects stored in ``self.results_list``. - - Parameters - ---------- - evaluator_func : callable - Solve time evaluator function. - This callable should accept an object of type - ``pyomo.opt.results.SolverResults``, and - return a float equal to the time required. - **evaluator_func_kwargs : dict, optional - Keyword arguments to evaluator function. - - Returns - ------- - float - Total time spent by solvers. - """ - return sum( - evaluator_func(res, **evaluator_func_kwargs) for res in self.results_list - ) - class DiscreteSeparationSolveCallResults: """ @@ -300,25 +246,24 @@ class DiscreteSeparationSolveCallResults: Mapping from discrete uncertainty set scenario list indexes to solver call results for separation problems subject to the scenarios. - performance_constraint : Constraint - Separation problem performance constraint for which + second_stage_ineq_con : Constraint + Separation problem second-stage inequality constraint for which `self` was generated. Attributes ---------- solved_globally - scenario_indexes solver_call_results - performance_constraint + second_stage_ineq_con """ def __init__( - self, solved_globally, solver_call_results=None, performance_constraint=None + self, solved_globally, solver_call_results=None, second_stage_ineq_con=None ): """Initialize self (see class docstring).""" self.solved_globally = solved_globally self.solver_call_results = solver_call_results - self.performance_constraint = performance_constraint + self.second_stage_ineq_con = second_stage_ineq_con @property def time_out(self): @@ -338,31 +283,6 @@ def subsolver_error(self): """ return any(res.subsolver_error for res in self.solver_call_results.values()) - def evaluate_total_solve_time(self, evaluator_func, **evaluator_func_kwargs): - """ - Evaluate total time required by subordinate solvers - for separation problem of interest. 
-
-        Parameters
-        ----------
-        evaluator_func : callable
-            Solve time evaluator function.
-            This callable should accept an object of type
-            ``pyomo.opt.results.SolveResults``, and
-            return a float equal to the time required.
-        **evaluator_func_kwargs : dict, optional
-            Keyword arguments to evaluator function.
-
-        Returns
-        -------
-        float
-            Total time spent by solvers.
-        """
-        return sum(
-            solver_call_res.evaluate_total_solve_time(evaluator_func)
-            for solver_call_res in self.solver_call_results.values()
-        )
-

 class SeparationLoopResults:
     """
@@ -375,10 +295,11 @@ class SeparationLoopResults:
         True if separation problems were solved to global
         optimality, False otherwise.
     solver_call_results : ComponentMap
-        Mapping from performance constraints to corresponding
+        Mapping from second-stage inequality constraints to corresponding
         ``SeparationSolveCallResults`` objects.
-    worst_case_perf_con : None or Constraint
-        Performance constraint mapped to ``SeparationSolveCallResults``
+    worst_case_ss_ineq_con : None or Constraint
+        Second-stage inequality constraint mapped to
+        ``SeparationSolveCallResults``
         object in `self` corresponding to maximally violating
         separation problem solution.
     all_discrete_scenarios_exhausted : bool, optional
@@ -390,23 +311,30 @@ class SeparationLoopResults:

     Attributes
     ----------
-    solver_call_results
-    solved_globally
-    worst_case_perf_con
-    all_discrete_scenarios_exhausted
+    solved_globally : bool
+        True if global solver was used, False otherwise.
+    solver_call_results : ComponentMap
+        Mapping from second-stage inequality constraints to corresponding
+        ``SeparationSolveCallResults`` objects.
+    worst_case_ss_ineq_con : None or ConstraintData
+        Worst-case second-stage inequality constraint.
+    all_discrete_scenarios_exhausted : bool
+        True if all scenarios of the discrete set were already
+        explicitly accounted for in the master problems,
+        False otherwise.
     """

     def __init__(
         self,
         solved_globally,
         solver_call_results,
-        worst_case_perf_con,
+        worst_case_ss_ineq_con,
         all_discrete_scenarios_exhausted=False,
     ):
         """Initialize self (see class docstring)."""
         self.solver_call_results = solver_call_results
         self.solved_globally = solved_globally
-        self.worst_case_perf_con = worst_case_perf_con
+        self.worst_case_ss_ineq_con = worst_case_ss_ineq_con
         self.all_discrete_scenarios_exhausted = all_discrete_scenarios_exhausted

     @property
@@ -414,8 +342,8 @@ def found_violation(self):
         """
         bool : True if separation solution for at least one
         ``SeparationSolveCallResults`` object listed in self
-        was reported to violate its corresponding performance
-        constraint, False otherwise.
+        was reported to violate its corresponding second-stage
+        inequality constraint, False otherwise.
         """
         return any(
             solver_call_res.found_violation
@@ -428,29 +356,45 @@ def violating_param_realization(self):
         None or list of float : Uncertain parameter values for
         the maximally violating separation problem solution,
         specified according to solver call results object
-        listed in self at index ``self.worst_case_perf_con``.
-        If ``self.worst_case_perf_con`` is not specified,
+        listed in self at index ``self.worst_case_ss_ineq_con``.
+        If ``self.worst_case_ss_ineq_con`` is not specified,
         then None is returned.
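+        For example, for a two-dimensional uncertainty set, a
+        non-None return value is a list of two floats (one per
+        uncertain parameter), such as ``[1.2, 0.9]``; these
+        values are purely illustrative.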
""" - if self.worst_case_perf_con is not None: + if self.worst_case_ss_ineq_con is not None: return self.solver_call_results[ - self.worst_case_perf_con + self.worst_case_ss_ineq_con ].violating_param_realization else: return None + @property + def auxiliary_param_values(self): + """ + None or list of float : Auxiliary parameter values for the + maximially violating separation problem solution. + """ + if self.worst_case_ss_ineq_con is not None: + return self.solver_call_results[ + self.worst_case_ss_ineq_con + ].auxiliary_param_values + else: + return None + @property def scaled_violations(self): """ - None or ComponentMap : Scaled performance constraint violations + None or ComponentMap : Scaled second-stage inequality + constraint violations for maximally violating separation problem solution, specified according to solver call results object - listed in self at index ``self.worst_case_perf_con``. - If ``self.worst_case_perf_con`` is not specified, + listed in self at index ``self.worst_case_ss_ineq_con``. + If ``self.worst_case_ss_ineq_con`` is not specified, then None is returned. """ - if self.worst_case_perf_con is not None: - return self.solver_call_results[self.worst_case_perf_con].scaled_violations + if self.worst_case_ss_ineq_con is not None: + return self.solver_call_results[ + self.worst_case_ss_ineq_con + ].scaled_violations else: return None @@ -460,20 +404,20 @@ def violating_separation_variable_values(self): None or ComponentMap : Second-stage and state variable values for maximally violating separation problem solution, specified according to solver call results object - listed in self at index ``self.worst_case_perf_con``. - If ``self.worst_case_perf_con`` is not specified, + listed in self at index ``self.worst_case_ss_ineq_con``. + If ``self.worst_case_ss_ineq_con`` is not specified, then None is returned. """ - if self.worst_case_perf_con is not None: - return self.solver_call_results[self.worst_case_perf_con].variable_values + if self.worst_case_ss_ineq_con is not None: + return self.solver_call_results[self.worst_case_ss_ineq_con].variable_values else: return None @property - def violated_performance_constraints(self): + def violated_second_stage_ineq_cons(self): """ - list of Constraint : Performance constraints for which violation - found. + list of Constraint : Second-stage inequality constraints + for which violation found. """ return [ con @@ -505,31 +449,6 @@ def time_out(self): for solver_call_res in self.solver_call_results.values() ) - def evaluate_total_solve_time(self, evaluator_func, **evaluator_func_kwargs): - """ - Evaluate total time required by subordinate solvers - for separation problem of interest. - - Parameters - ---------- - evaluator_func : callable - Solve time evaluator function. - This callable should accept an object of type - ``pyomo.opt.results.SolveResults``, and - return a float equal to the time required. - **evaluator_func_kwargs : dict, optional - Keyword arguments to evaluator function. - - Returns - ------- - float - Total time spent by solvers. - """ - return sum( - res.evaluate_total_solve_time(evaluator_func) - for res in self.solver_call_results.values() - ) - class SeparationResults: """ @@ -544,8 +463,14 @@ class SeparationResults: Attributes ---------- - local_separation_loop_results - global_separation_loop_results + local_separation_loop_results : None or SeparationLoopResults + Local separation results. If separation problems + were not solved locally, then this attribute is set + to None. 
+    global_separation_loop_results : None or SeparationLoopResults
+        Global separation results. If separation problems
+        were not solved globally, then this attribute is set
+        to None.
     """

     def __init__(self, local_separation_loop_results, global_separation_loop_results):
@@ -639,12 +564,13 @@ def all_discrete_scenarios_exhausted(self):
         return self.get_violating_attr("all_discrete_scenarios_exhausted")

     @property
-    def worst_case_perf_con(self):
+    def worst_case_ss_ineq_con(self):
         """
-        ConstraintData : Performance constraint corresponding to the
+        ConstraintData : Second-stage inequality constraint
+        corresponding to the
         separation solution chosen for the next master problem.
         """
-        return self.get_violating_attr("worst_case_perf_con")
+        return self.get_violating_attr("worst_case_ss_ineq_con")

     @property
     def main_loop_results(self):
@@ -675,19 +601,28 @@ def violating_param_realization(self):
         None or list of float : Uncertain parameter values for
         maximally violating separation problem solution
         reported in local or global separation loop results.
-        If no such solution found, (i.e. ``worst_case_perf_con``
+        If no such solution is found (i.e. ``worst_case_ss_ineq_con``
         set to None for both local and global loop results),
         then None is returned.
         """
         return self.get_violating_attr("violating_param_realization")

+    @property
+    def auxiliary_param_values(self):
+        """
+        None or list of float: Auxiliary parameter values accompanying
+        `self.violating_param_realization`.
+        """
+        return self.get_violating_attr("auxiliary_param_values")
+
     @property
     def scaled_violations(self):
         """
-        None or ComponentMap : Scaled performance constraint violations
+        None or ComponentMap :
+        Scaled second-stage inequality constraint violations
         for maximally violating separation problem solution
         reported in local or global separation loop results.
-        If no such solution found, (i.e. ``worst_case_perf_con``
+        If no such solution is found (i.e. ``worst_case_ss_ineq_con``
         set to None for both local and global loop results),
         then None is returned.
         """
@@ -699,72 +634,18 @@ def violating_separation_variable_values(self):
         None or ComponentMap : Second-stage and state variable values
         for maximally violating separation problem solution
         reported in local or global separation loop results.
-        If no such solution found, (i.e. ``worst_case_perf_con``
+        If no such solution is found (i.e. ``worst_case_ss_ineq_con``
         set to None for both local and global loop results),
         then None is returned.
         """
         return self.get_violating_attr("violating_separation_variable_values")

     @property
-    def violated_performance_constraints(self):
-        """
-        Return list of violated performance constraints.
+    def violated_second_stage_ineq_cons(self):
         """
-        return self.get_violating_attr("violated_performance_constraints")
-
-    def evaluate_local_solve_time(self, evaluator_func, **evaluator_func_kwargs):
-        """
-        Evaluate total time required by local subordinate solvers
-        for separation problem of interest.
-
-        Parameters
-        ----------
-        evaluator_func : callable
-            Solve time evaluator function.
-            This callable should accept an object of type
-            ``pyomo.opt.results.SolverResults``, and
-            return a float equal to the time required.
-        **evaluator_func_kwargs : dict, optional
-            Keyword arguments to evaluator function.
-
-        Returns
-        -------
-        float
-            Total time spent by local solvers.
- """ - if self.solved_locally: - return self.local_separation_loop_results.evaluate_total_solve_time( - evaluator_func, **evaluator_func_kwargs - ) - else: - return 0 - - def evaluate_global_solve_time(self, evaluator_func, **evaluator_func_kwargs): - """ - Evaluate total time required by global subordinate solvers - for separation problem of interest. - - Parameters - ---------- - evaluator_func : callable - Solve time evaluator function. - This callable should accept an object of type - ``pyomo.opt.results.SolverResults``, and - return a float equal to the time required. - **evaluator_func_kwargs : dict, optional - Keyword arguments to evaluator function. - - Returns - ------- - float - Total time spent by global solvers. + Return list of violated second-stage inequality constraints. """ - if self.solved_globally: - return self.global_separation_loop_results.evaluate_total_solve_time( - evaluator_func, **evaluator_func_kwargs - ) - else: - return 0 + return self.get_violating_attr("violated_second_stage_ineq_cons") @property def robustness_certified(self): @@ -793,30 +674,3 @@ def robustness_certified(self): is_robust = heuristically_robust return is_robust - - def generate_subsolver_results(self, include_local=True, include_global=True): - """ - Generate flattened sequence all Pyomo SolverResults objects - for all ``SeparationSolveCallResults`` objects listed in - the local and global ``SeparationLoopResults`` - attributes of `self`. - - Yields - ------ - pyomo.opt.SolverResults - """ - if include_local and self.local_separation_loop_results is not None: - all_local_call_results = ( - self.local_separation_loop_results.solver_call_results.values() - ) - for solve_call_res in all_local_call_results: - for res in solve_call_res.results_list: - yield res - - if include_global and self.global_separation_loop_results is not None: - all_global_call_results = ( - self.global_separation_loop_results.solver_call_results.values() - ) - for solve_call_res in all_global_call_results: - for res in solve_call_res.results_list: - yield res diff --git a/pyomo/contrib/pyros/tests/test_config.py b/pyomo/contrib/pyros/tests/test_config.py index 166fbada4ff..e4588953eca 100644 --- a/pyomo/contrib/pyros/tests/test_config.py +++ b/pyomo/contrib/pyros/tests/test_config.py @@ -1,9 +1,20 @@ +# ___________________________________________________________________________ +# +# Pyomo: Python Optimization Modeling Objects +# Copyright (c) 2008-2024 +# National Technology and Engineering Solutions of Sandia, LLC +# Under the terms of Contract DE-NA0003525 with National Technology and +# Engineering Solutions of Sandia, LLC, the U.S. Government retains certain +# rights in this software. +# This software is distributed under the 3-clause BSD License. +# ___________________________________________________________________________ + """ Test objects for construction of PyROS ConfigDict. """ import logging -import unittest +import pyomo.common.unittest as unittest from pyomo.core.base import ConcreteModel, Var, VarData from pyomo.common.log import LoggingIntercept diff --git a/pyomo/contrib/pyros/tests/test_grcs.py b/pyomo/contrib/pyros/tests/test_grcs.py index f2954750a16..ebb2c8b7a37 100644 --- a/pyomo/contrib/pyros/tests/test_grcs.py +++ b/pyomo/contrib/pyros/tests/test_grcs.py @@ -9,67 +9,25 @@ # This software is distributed under the 3-clause BSD License. 
# ___________________________________________________________________________ -''' -Unit tests for the grcs API -One class per function being tested, minimum one test per class -''' +""" +Tests for the PyROS solver. +""" + +import logging +import math +import time import pyomo.common.unittest as unittest from pyomo.common.log import LoggingIntercept -from pyomo.common.collections import ComponentSet, ComponentMap -from pyomo.common.config import ConfigBlock, ConfigValue -from pyomo.core.base.set_types import NonNegativeIntegers -from pyomo.core.base.var import VarData -from pyomo.core.expr import ( - identify_variables, - identify_mutable_parameters, - MonomialTermExpression, - SumExpression, -) -from pyomo.contrib.pyros.util import ( - selective_clone, - add_decision_rule_variables, - add_decision_rule_constraints, - turn_bounds_to_constraints, - transform_to_standard_form, - ObjectiveType, - pyrosTerminationCondition, - coefficient_matching, - TimingData, - IterationLogRecord, -) -from pyomo.contrib.pyros.util import replace_uncertain_bounds_with_constraints -from pyomo.contrib.pyros.util import get_vars_from_component -from pyomo.contrib.pyros.util import identify_objective_functions from pyomo.common.collections import Bunch +from pyomo.common.errors import InvalidValueError +from pyomo.core.base.set_types import NonNegativeIntegers from pyomo.repn.plugins import nl_writer as pyomo_nl_writer -import time -import math -from pyomo.contrib.pyros.util import time_code -from pyomo.contrib.pyros.uncertainty_sets import ( - UncertaintySet, - BoxSet, - CardinalitySet, - BudgetSet, - FactorModelSet, - PolyhedralSet, - EllipsoidalSet, - AxisAlignedEllipsoidalSet, - IntersectionSet, - DiscreteScenarioSet, - Geometry, -) -from pyomo.contrib.pyros.master_problem_methods import ( - add_scenario_to_master, - initial_construct_master, - solve_master, - minimize_dr_vars, -) -from pyomo.contrib.pyros.solve_data import MasterProblemData, ROSolveResults +import pyomo.repn.ampl as pyomo_ampl_repn from pyomo.common.dependencies import numpy as np, numpy_available -from pyomo.common.dependencies import scipy as sp, scipy_available -from pyomo.environ import maximize as pyo_max +from pyomo.common.dependencies import scipy_available from pyomo.common.errors import ApplicationError, InfeasibleConstraintException +from pyomo.environ import maximize as pyo_max, units as u from pyomo.opt import ( SolverResults, SolverStatus, @@ -81,32 +39,39 @@ Reals, Set, Block, - ConstraintList, ConcreteModel, Constraint, - Expression, Objective, Param, SolverFactory, Var, - cos, exp, log, - sin, sqrt, value, maximize, minimize, ) -import logging -from itertools import chain +from pyomo.contrib.pyros.solve_data import ROSolveResults +from pyomo.contrib.pyros.uncertainty_sets import ( + BoxSet, + AxisAlignedEllipsoidalSet, + FactorModelSet, + IntersectionSet, + DiscreteScenarioSet, +) +from pyomo.contrib.pyros.util import ( + IterationLogRecord, + ObjectiveType, + pyrosTerminationCondition, +) logger = logging.getLogger(__name__) if not (numpy_available and scipy_available): - raise unittest.SkipTest('PyROS unit tests require numpy and scipy') + raise unittest.SkipTest('PyROS unit tests require parameterized, numpy, and scipy') # === Config args for testing nlp_solver = 'ipopt' @@ -215,2791 +180,194 @@ def solve(self, model, **kwargs): return results -# === util.py -class testSelectiveClone(unittest.TestCase): - ''' - Testing for the selective_clone function. 
This function takes as input a Pyomo model object - and a list of variables objects "first_stage_vars" in that Pyomo model which should *not* be cloned. - It returns a clone of the original Pyomo model object wherein the "first_stage_vars" members are unchanged, - i.e. all cloned model expressions still reference the "first_stage_vars" of the original model object. - ''' - - def test_cloning_negative_case(self): - ''' - Testing correct behavior if incorrect first_stage_vars list object is passed to selective_clone - ''' - m = ConcreteModel() - m.x = Var(initialize=2) - m.y = Var(initialize=2) - m.p = Param(initialize=1) - m.con = Constraint(expr=m.x * m.p + m.y <= 0) - - n = ConcreteModel() - n.x = Var() - m.first_stage_vars = [n.x] - - cloned_model = selective_clone(block=m, first_stage_vars=m.first_stage_vars) +def build_leyffer(): + """ + Build original Leyffer two-variable test problem. + """ + m = ConcreteModel() - self.assertNotEqual( - id(m.first_stage_vars), - id(cloned_model.first_stage_vars), - msg="First stage variables should not be equal.", - ) + m.u = Param(initialize=1.125, mutable=True) - def test_cloning_positive_case(self): - ''' - Testing if selective_clone works correctly for correct first_stage_var object definition. - ''' - m = ConcreteModel() - m.x = Var(initialize=2) - m.y = Var(initialize=2) - m.p = Param(initialize=1) - m.con = Constraint(expr=m.x * m.p + m.y <= 0) - m.first_stage_vars = [m.x] + m.x1 = Var(initialize=0, bounds=(0, None)) + m.x2 = Var(initialize=0, bounds=(0, None)) - cloned_model = selective_clone(block=m, first_stage_vars=m.first_stage_vars) + m.con = Constraint(expr=m.x1 * sqrt(m.u) - m.u * m.x2 <= 2) + m.obj = Objective(expr=(m.x1 - 4) ** 2 + (m.x2 - 1) ** 2) - self.assertEqual( - id(m.x), id(cloned_model.x), msg="First stage variables should be equal." - ) - self.assertNotEqual( - id(m.y), - id(cloned_model.y), - msg="Non-first-stage variables should not be equal.", - ) - self.assertNotEqual( - id(m.p), id(cloned_model.p), msg="Params should not be equal." - ) - self.assertNotEqual( - id(m.con), - id(cloned_model.con), - msg="Constraint objects should not be equal.", - ) + return m -class testAddDecisionRuleVars(unittest.TestCase): +def build_leyffer_two_cons(): """ - Test method for adding decision rule variables to working model. - The number of decision rule variables per control variable - should depend on: - - - the number of uncertain parameters in the model - - the decision rule order specified by the user. + Build extended Leyffer problem with single uncertain parameter. """ + m = ConcreteModel() - def make_simple_test_model(self): - """ - Make simple test model for DR variable - declaration testing. 
- """ - m = ConcreteModel() - - # uncertain parameters - m.p = Param(range(3), initialize=0, mutable=True) + m.u = Param(initialize=1.125, mutable=True) - # second-stage variables - m.z = Var([0, 1], initialize=0) + m.x1 = Var(initialize=0, bounds=(0, None)) + m.x2 = Var(initialize=0, bounds=(0, None)) + m.x3 = Var(initialize=0, bounds=(None, None)) - # util block - m.util = Block() - m.util.first_stage_variables = [] - m.util.second_stage_variables = list(m.z.values()) - m.util.uncertain_params = list(m.p.values()) + m.con1 = Constraint(expr=m.x1 * sqrt(m.u) - m.x2 * m.u <= 2) + m.con2 = Constraint(expr=m.x1**2 - m.x2**2 * m.u == m.x3) - return m + m.obj = Objective(expr=(m.x1 - 4) ** 2 + (m.x2 - 1) ** 2) - @unittest.skipIf(not scipy_available, 'Scipy is not available.') - def test_correct_num_dr_vars_static(self): - """ - Test DR variable setup routines declare the correct - number of DR coefficient variables, static DR case. - """ - model_data = ROSolveResults() - model_data.working_model = m = self.make_simple_test_model() + return m - config = Bunch() - config.decision_rule_order = 0 - add_decision_rule_variables(model_data=model_data, config=config) +def build_leyffer_two_cons_two_params(): + """ + Build extended Leyffer problem with two uncertain parameters. + """ + m = ConcreteModel() - for indexed_dr_var in m.util.decision_rule_vars: - self.assertEqual( - len(indexed_dr_var), - 1, - msg=( - "Number of decision rule coefficient variables " - f"in indexed Var object {indexed_dr_var.name!r}" - "does not match correct value." - ), - ) + m.u1 = Param(initialize=1.125, mutable=True) + m.u2 = Param(initialize=1, mutable=True) - self.assertEqual( - len(ComponentSet(m.util.decision_rule_vars)), - len(m.util.second_stage_variables), - msg=( - "Number of unique indexed DR variable components should equal " - "number of second-stage variables." - ), - ) + m.x1 = Var(initialize=0, bounds=(0, None)) + m.x2 = Var(initialize=0, bounds=(0, None)) + m.x3 = Var(initialize=0, bounds=(None, None)) - @unittest.skipIf(not scipy_available, 'Scipy is not available.') - def test_correct_num_dr_vars_affine(self): - """ - Test DR variable setup routines declare the correct - number of DR coefficient variables, affine DR case. - """ - model_data = ROSolveResults() - model_data.working_model = m = self.make_simple_test_model() + m.con1 = Constraint(expr=m.x1 * sqrt(m.u1) - m.x2 * m.u1 <= 2) + m.con2 = Constraint(expr=m.x1**2 - m.x2**2 * m.u1 == m.x3) - config = Bunch() - config.decision_rule_order = 1 + m.obj = Objective(expr=(m.x1 - 4) ** 2 + (m.x2 - m.u2) ** 2) - add_decision_rule_variables(model_data=model_data, config=config) + return m - for indexed_dr_var in m.util.decision_rule_vars: - self.assertEqual( - len(indexed_dr_var), - 1 + len(m.util.uncertain_params), - msg=( - "Number of decision rule coefficient variables " - f"in indexed Var object {indexed_dr_var.name!r}" - "does not match correct value." - ), - ) - self.assertEqual( - len(ComponentSet(m.util.decision_rule_vars)), - len(m.util.second_stage_variables), - msg=( - "Number of unique indexed DR variable components should equal " - "number of second-stage variables." - ), - ) +class TestPyROSSolveFactorModelSet(unittest.TestCase): + """ + Test PyROS successfully solves model with factor model uncertainty. + """ - @unittest.skipIf(not scipy_available, 'Scipy is not available.') - def test_correct_num_dr_vars_quadratic(self): + @unittest.skipUnless( + baron_license_is_valid, "Global NLP solver is not available and licensed." 
+    )
+    def test_two_stg_mod_with_factor_model_set(self):
+        """
+        Test two-stage model with `FactorModelSet`
+        as the uncertainty set.
+        """
+        m = build_leyffer_two_cons_two_params()
+
+        # Define the uncertainty set
+        # a single factor drives correlated deviations in `u1` and `u2`
+        fset = FactorModelSet(
+            origin=[1.125, 1], beta=1, number_of_factors=1, psi_mat=[[0.5], [0.5]]
+        )
+
+        # Instantiate the PyROS solver
+        pyros_solver = SolverFactory("pyros")
+
+        # Define subsolvers utilized in the algorithm
+        local_subsolver = SolverFactory('baron')
+        global_subsolver = SolverFactory("baron")
+
+        # Call the PyROS solver
+        results = pyros_solver.solve(
+            model=m,
+            first_stage_variables=[m.x1, m.x2],
+            second_stage_variables=[],
+            uncertain_params=[m.u1, m.u2],
+            uncertainty_set=fset,
+            local_solver=local_subsolver,
+            global_solver=global_subsolver,
+            options={
+                "objective_focus": ObjectiveType.worst_case,
+                "solve_master_globally": True,
+            },
+        )
+
+        # check successful termination
+        self.assertEqual(
+            results.pyros_termination_condition,
+            pyrosTerminationCondition.robust_optimal,
+            msg="Did not identify robust optimal solution to problem instance.",
+        )
+
+
+class TestPyROSSolveAxisAlignedEllipsoidalSet(unittest.TestCase):
+    """
+    Test PyROS successfully solves models with
+    axis-aligned ellipsoidal uncertainty.
+    """
+
+    @unittest.skipUnless(
+        scip_available and scip_license_is_valid, "SCIP is not available and licensed"
+    )
+    def test_two_stg_mod_with_axis_aligned_set(self):
+        """
+        Test two-stage model with `AxisAlignedEllipsoidalSet`
+        as the uncertainty set.
+        """
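+        # NOTE: the uncertainty set constructed below uses
+        # half_lengths=[1, 0]; the zero half-length collapses the
+        # ellipsoid along `u2`, so `u2` is effectively held at its
+        # center value of 1 throughout this test.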
- """ - model_data = ROSolveResults() - model_data.working_model = m = self.make_simple_test_model() - - # === Decision rule vars have been added - m.decision_rule_var_0 = Var([0], initialize=0) - m.decision_rule_var_1 = Var([0], initialize=0) - m.util.decision_rule_vars = [m.decision_rule_var_0, m.decision_rule_var_1] - - # set up simple config-like object - config = Bunch() - config.decision_rule_order = 0 - - add_decision_rule_constraints(model_data=model_data, config=config) - - self.assertEqual( - len(m.util.decision_rule_eqns), - len(m.util.second_stage_variables), - msg="The number of decision rule constraints added to model should equal" - "the number of control variables in the model.", - ) - - @unittest.skipIf(not scipy_available, 'Scipy is not available.') - def test_dr_eqns_form_correct(self): + @unittest.skipUnless( + scip_available and scip_license_is_valid, "SCIP is not available and licensed" + ) + def test_two_stg_mod_with_axis_aligned_set(self): """ - Check that form of decision rule equality constraints - is as expected. - - Decision rule equations should be of the standard form: - (sum of DR monomial terms) - (second-stage variable) == 0 - where each monomial term should be of form: - (product of uncertain parameters) * (decision rule variable) - - This test checks that the equality constraints are of this - standard form. + Test two-stage model with `AxisAlignedEllipsoidalSet` + as the uncertainty set. """ - # set up simple model data like object - model_data = ROSolveResults() - model_data.working_model = m = self.make_simple_test_model() - - # set up simple config-like object - config = Bunch() - config.decision_rule_order = 2 - - # add DR variables and constraints - add_decision_rule_variables(model_data, config) - add_decision_rule_constraints(model_data, config) - - # DR polynomial terms and order in which they should - # appear depends on number of uncertain parameters - # and order in which the parameters are listed. - # so uncertain parameters participating in each term - # of the monomial is known, and listed out here. - dr_monomial_param_combos = [ - (1,), - (m.p[0],), - (m.p[1],), - (m.p[2],), - (m.p[0], m.p[0]), - (m.p[0], m.p[1]), - (m.p[0], m.p[2]), - (m.p[1], m.p[1]), - (m.p[1], m.p[2]), - (m.p[2], m.p[2]), - ] - - dr_zip = zip( - m.util.second_stage_variables, - m.util.decision_rule_vars, - m.util.decision_rule_eqns, - ) - for ss_var, indexed_dr_var, dr_eq in dr_zip: - dr_eq_terms = dr_eq.body.args - - # check constraint body is sum expression - self.assertTrue( - isinstance(dr_eq.body, SumExpression), - msg=( - f"Body of DR constraint {dr_eq.name!r} is not of type " - f"{SumExpression.__name__}." - ), - ) - - # ensure DR equation has correct number of (additive) terms - self.assertEqual( - len(dr_eq_terms), - len(dr_monomial_param_combos) + 1, - msg=( - "Number of additive terms in the DR expression of " - f"DR constraint with name {dr_eq.name!r} does not match " - "expected value." 
- ), - ) - - # check last term is negative of second-stage variable - second_stage_var_term = dr_eq_terms[-1] - last_term_is_neg_ss_var = ( - isinstance(second_stage_var_term, MonomialTermExpression) - and (second_stage_var_term.args[0] == -1) - and (second_stage_var_term.args[1] is ss_var) - and len(second_stage_var_term.args) == 2 - ) - self.assertTrue( - last_term_is_neg_ss_var, - msg=( - "Last argument of last term in second-stage variable" - f"term of DR constraint with name {dr_eq.name!r} " - "is not the negative corresponding second-stage variable " - f"{ss_var.name!r}" - ), - ) - - # now we check the other terms. - # these should comprise the DR polynomial expression - dr_polynomial_terms = dr_eq_terms[:-1] - dr_polynomial_zip = zip( - dr_polynomial_terms, indexed_dr_var.values(), dr_monomial_param_combos - ) - for idx, (term, dr_var, param_combo) in enumerate(dr_polynomial_zip): - # term should be either a monomial expression or scalar variable - if isinstance(term, MonomialTermExpression): - # should be of form (uncertain parameter product) * - # (decision rule variable) so length of expression - # object should be 2 - self.assertEqual( - len(term.args), - 2, - msg=( - f"Length of `args` attribute of term {str(term)} " - f"of DR equation {dr_eq.name!r} is not as expected. " - f"Args: {term.args}" - ), - ) - - # check that uncertain parameters participating in - # the monomial are as expected - param_product_multiplicand = term.args[0] - dr_var_multiplicand = term.args[1] - else: - self.assertIsInstance(term, VarData) - param_product_multiplicand = 1 - dr_var_multiplicand = term - - if idx == 0: - # static DR term - param_combo_found_in_term = (param_product_multiplicand,) - param_names = (str(param) for param in param_combo) - elif len(param_combo) == 1: - # affine DR terms - param_combo_found_in_term = (param_product_multiplicand,) - param_names = (param.name for param in param_combo) - else: - # higher-order DR terms - param_combo_found_in_term = param_product_multiplicand.args - param_names = (param.name for param in param_combo) - - self.assertEqual( - param_combo_found_in_term, - param_combo, - msg=( - f"All but last multiplicand of DR monomial {str(term)} " - f"is not the uncertain parameter tuple " - f"({', '.join(param_names)})." - ), - ) + # define model + m = build_leyffer_two_cons_two_params() - # check that DR variable participating in the monomial - # is as expected - self.assertIs( - dr_var_multiplicand, - dr_var, - msg=( - f"Last multiplicand of DR monomial {str(term)} " - f"is not the DR variable {dr_var.name!r}." 
- ), - ) + # Define the uncertainty set + # we take the parameter `u2` to be 'fixed' + ellipsoid = AxisAlignedEllipsoidalSet(center=[1.125, 1], half_lengths=[1, 0]) + # Instantiate the PyROS solver + pyros_solver = SolverFactory("pyros") -class testTurnBoundsToConstraints(unittest.TestCase): - def test_bounds_to_constraints(self): - m = ConcreteModel() - m.x = Var(initialize=1, bounds=(0, 1)) - m.y = Var(initialize=0, bounds=(None, 1)) - m.w = Var(initialize=0, bounds=(1, None)) - m.z = Var(initialize=0, bounds=(None, None)) - turn_bounds_to_constraints(m.z, m) - self.assertEqual( - len(list(m.component_data_objects(Constraint))), - 0, - msg="Inequality constraints were written for bounds on a variable with no bounds.", - ) - turn_bounds_to_constraints(m.y, m) - self.assertEqual( - len(list(m.component_data_objects(Constraint))), - 1, - msg="Inequality constraints were not " - "written correctly for a variable with an upper bound and no lower bound.", - ) - turn_bounds_to_constraints(m.w, m) - self.assertEqual( - len(list(m.component_data_objects(Constraint))), - 2, - msg="Inequality constraints were not " - "written correctly for a variable with a lower bound and no upper bound.", - ) - turn_bounds_to_constraints(m.x, m) - self.assertEqual( - len(list(m.component_data_objects(Constraint))), - 4, - msg="Inequality constraints were not " - "written correctly for a variable with both lower and upper bound.", - ) + # Define subsolvers utilized in the algorithm + local_subsolver = SolverFactory("scip") + global_subsolver = SolverFactory("scip") - def test_uncertain_bounds_to_constraints(self): - # test model - m = ConcreteModel() - # parameters - m.p = Param(initialize=8, mutable=True) - m.r = Param(initialize=-5, mutable=True) - m.q = Param(initialize=1, mutable=False) - m.s = Param(initialize=1, mutable=True) - m.n = Param(initialize=1, mutable=True) - - # variables, with bounds contingent on params - m.u = Var(initialize=0, bounds=(0, m.p)) - m.v = Var(initialize=1, bounds=(m.r, m.p)) - m.w = Var(initialize=1, bounds=(None, None)) - m.x = Var(initialize=1, bounds=(0, exp(-1 * m.p / 8) * m.q * m.s)) - m.y = Var(initialize=-1, bounds=(m.r * m.p, 0)) - m.z = Var(initialize=1, bounds=(0, m.s)) - m.t = Var(initialize=1, bounds=(0, m.p**2)) - - # objective - m.obj = Objective(sense=maximize, expr=m.x**2 - m.y + m.t**2 + m.v) - - # clone model - mod = m.clone() - uncertain_params = [mod.n, mod.p, mod.r] - - # check variable replacement without any active objective - # or active performance constraints - mod.obj.deactivate() - replace_uncertain_bounds_with_constraints(mod, uncertain_params) - self.assertTrue( - hasattr(mod, 'uncertain_var_bound_cons'), - msg='Uncertain variable bounds erroneously added. 
' - 'Check only variables participating in active ' - 'objective and constraints are added.', - ) - self.assertFalse(mod.uncertain_var_bound_cons) - mod.obj.activate() - - # add performance constraints - constraints_m = ConstraintList() - m.add_component('perf_constraints', constraints_m) - constraints_m.add(m.w == 2 * m.x + m.y) - constraints_m.add(m.v + m.x + m.y >= 0) - constraints_m.add(m.y**2 + m.z >= 0) - constraints_m.add(m.x**2 + m.u <= 1) - constraints_m[4].deactivate() - - # clone model with constraints added - mod_2 = m.clone() - - # manually replace uncertain parameter bounds with explicit constraints - uncertain_cons = ConstraintList() - m.add_component('uncertain_var_bound_cons', uncertain_cons) - uncertain_cons.add(m.x - m.x.upper <= 0) - uncertain_cons.add(m.y.lower - m.y <= 0) - uncertain_cons.add(m.v - m.v._ub <= 0) - uncertain_cons.add(m.v.lower - m.v <= 0) - uncertain_cons.add(m.t - m.t.upper <= 0) - - # remove corresponding variable bounds - m.x.setub(None) - m.y.setlb(None) - m.v.setlb(None) - m.v.setub(None) - m.t.setub(None) - - # check that vars participating in - # active objective and activated constraints correctly determined - svars_con = ComponentSet(get_vars_from_component(mod_2, Constraint)) - svars_obj = ComponentSet(get_vars_from_component(mod_2, Objective)) - vars_in_active_cons = ComponentSet( - [mod_2.z, mod_2.w, mod_2.y, mod_2.x, mod_2.v] - ) - vars_in_active_obj = ComponentSet([mod_2.x, mod_2.y, mod_2.t, mod_2.v]) - self.assertEqual( - svars_con, - vars_in_active_cons, - msg='Mismatch of variables participating in activated constraints.', - ) - self.assertEqual( - svars_obj, - vars_in_active_obj, - msg='Mismatch of variables participating in activated objectives.', + # Call the PyROS solver + results = pyros_solver.solve( + model=m, + first_stage_variables=[m.x1, m.x2], + second_stage_variables=[], + uncertain_params=[m.u1, m.u2], + uncertainty_set=ellipsoid, + local_solver=local_subsolver, + global_solver=global_subsolver, + options={ + "objective_focus": ObjectiveType.worst_case, + "solve_master_globally": True, + }, ) - # replace bounds in model with performance constraints - uncertain_params = [mod_2.p, mod_2.r] - replace_uncertain_bounds_with_constraints(mod_2, uncertain_params) - - # check that same number of constraints added to model - self.assertEqual( - len(list(m.component_data_objects(Constraint))), - len(list(mod_2.component_data_objects(Constraint))), - msg='Mismatch between number of explicit variable ' - 'bound inequality constraints added ' - 'automatically and added manually.', - ) - - # check that explicit constraints contain correct vars and params - vars_in_cons = ComponentSet() - params_in_cons = ComponentSet() - - # get variables, mutable params in the explicit constraints - cons = mod_2.uncertain_var_bound_cons - for idx in cons: - for p in identify_mutable_parameters(cons[idx].expr): - params_in_cons.add(p) - for v in identify_variables(cons[idx].expr): - vars_in_cons.add(v) - # reduce only to uncertain mutable params found - params_in_cons = params_in_cons & uncertain_params - - # expected participating variables - vars_with_bounds_removed = ComponentSet([mod_2.x, mod_2.y, mod_2.v, mod_2.t]) - # complete the check + # check successful termination self.assertEqual( - params_in_cons, - ComponentSet([mod_2.p, mod_2.r]), - msg='Mismatch of parameters added to explicit inequality constraints.', + results.pyros_termination_condition, + pyrosTerminationCondition.robust_optimal, + msg="Did not identify robust optimal solution to 
problem instance.", ) - self.assertEqual( - vars_in_cons, - vars_with_bounds_removed, - msg='Mismatch of variables added to explicit inequality constraints.', - ) - - -class testTransformToStandardForm(unittest.TestCase): - def test_transform_to_std_form(self): - """Check that `pyros.util.transform_to_standard_form` works - correctly for an example model. That is: - - all Constraints with a finite `upper` or `lower` attribute - are either equality constraints, or inequalities - of the standard form `expression(vars) <= upper`; - - every inequality Constraint for which the `upper` and `lower` - attribute are identical is converted to an equality constraint; - - every inequality Constraint with distinct finite `upper` and - `lower` attributes is split into two standard form inequality - Constraints. - """ - - m = ConcreteModel() - - m.p = Param(initialize=1, mutable=True) - - m.x = Var(initialize=0) - m.y = Var(initialize=1) - m.z = Var(initialize=1) - - # example constraints - m.c1 = Constraint(expr=m.x >= 1) - m.c2 = Constraint(expr=-m.y <= 0) - m.c3 = Constraint(rule=(None, m.x + m.y, None)) - m.c4 = Constraint(rule=(1, m.x + m.y, 2)) - m.c5 = Constraint(rule=(m.p, m.x, m.p)) - m.c6 = Constraint(rule=(1.0000, m.z, 1.0)) - - # example ConstraintList - clist = ConstraintList() - m.add_component('clist', clist) - clist.add(m.y <= 0) - clist.add(m.x >= 1) - clist.add((0, m.x, 1)) - - num_orig_cons = len( - [ - con - for con in m.component_data_objects( - Constraint, active=True, descend_into=True - ) - ] - ) - # constraints with finite, distinct lower & upper bounds - num_lbub_cons = len( - [ - con - for con in m.component_data_objects( - Constraint, active=True, descend_into=True - ) - if con.lower is not None - and con.upper is not None - and con.lower is not con.upper - ] - ) - - # count constraints with no bounds - num_nobound_cons = len( - [ - con - for con in m.component_data_objects( - Constraint, active=True, descend_into=True - ) - if con.lower is None and con.upper is None - ] + self.assertGreater( + results.iterations, + 0, + msg="Robust infeasible model terminated in 0 iterations (nominal case).", ) - transform_to_standard_form(m) - cons = [ - con - for con in m.component_data_objects( - Constraint, active=True, descend_into=True - ) - ] - for con in cons: - has_lb_or_ub = not (con.lower is None and con.upper is None) - if has_lb_or_ub and not con.equality: - self.assertTrue( - con.lower is None, - msg="Constraint %s not in standard form" % con.name, - ) - lb_is_ub = con.lower is con.upper - self.assertFalse( - lb_is_ub, - msg="Constraint %s should be converted to equality" % con.name, - ) - if con is not m.c3: - self.assertTrue( - has_lb_or_ub, - msg="Constraint %s should have" - " a lower or upper bound" % con.name, - ) - self.assertEqual( - len( - [ - con - for con in m.component_data_objects( - Constraint, active=True, descend_into=True - ) - ] - ), - num_orig_cons + num_lbub_cons - num_nobound_cons, - msg="Expected number of constraints after\n " - "standardizing constraints not matched. " - "Number of constraints after\n " - "transformation" - " should be (number constraints in original " - "model) \n + (number of constraints with " - "distinct finite lower and upper bounds).", - ) +class TestPyROSSolveDiscreteSet(unittest.TestCase): + """ + Test PyROS solves models with discrete uncertainty sets. + """ - def test_transform_does_not_alter_num_of_constraints(self): + @unittest.skipUnless( + baron_license_is_valid, "Global NLP solver is not available and licensed." 
+ ) + def test_two_stg_model_discrete_set_single_scenario(self): """ - Check that if model does not contain any constraints - for which both the `lower` and `upper` attributes are - distinct and not None, then number of constraints remains the same - after constraint standardization. - Standard form for the purpose of PyROS is all inequality constraints - as `g(.)<=0`. + Test two-stage model under discrete uncertainty with + a single scenario. """ - m = ConcreteModel() - m.x = Var(initialize=1, bounds=(0, 1)) - m.y = Var(initialize=0, bounds=(None, 1)) - m.con1 = Constraint(expr=m.x >= 1 + m.y) - m.con2 = Constraint(expr=m.x**2 + m.y**2 >= 9) - original_num_constraints = len(list(m.component_data_objects(Constraint))) - transform_to_standard_form(m) - final_num_constraints = len(list(m.component_data_objects(Constraint))) - self.assertEqual( - original_num_constraints, - final_num_constraints, - msg="Transform to standard form function led to a " - "different number of constraints than in the original model.", - ) - number_of_non_standard_form_inequalities = len( - list( - c for c in list(m.component_data_objects(Constraint)) if c.lower != None - ) - ) - self.assertEqual( - number_of_non_standard_form_inequalities, - 0, - msg="All inequality constraints were not transformed to standard form.", - ) - + m = build_leyffer_two_cons_two_params() -# === UncertaintySets.py -# Mock abstract class -class myUncertaintySet(UncertaintySet): - ''' - returns single Constraint representing the uncertainty set which is - simply a linear combination of uncertain_params - ''' + # uncertainty set + discrete_set = DiscreteScenarioSet(scenarios=[(1.125, 1)]) - def set_as_constraint(self, uncertain_params, **kwargs): - return Constraint(expr=sum(v for v in uncertain_params) <= 0) + # Instantiate PyROS solver + pyros_solver = SolverFactory("pyros") - def point_in_set(self, uncertain_params, **kwargs): - return True - - def geometry(self): - self.geometry = Geometry.LINEAR - - def dim(self): - self.dim = 1 - - def parameter_bounds(self): - return [(0, 1)] - - -class testAbstractUncertaintySetClass(unittest.TestCase): - ''' - The UncertaintySet class has an abstract base class implementing set_as_constraint method, as well as a couple - basic uncertainty sets (ellipsoidal, polyhedral). The set_as_constraint method must return a Constraint object - which references the Param objects from the uncertain_params list in the original model object. - ''' - - def test_uncertainty_set_with_correct_params(self): - ''' - Case in which the UncertaintySet is constructed using the uncertain_param objects from the model to - which the uncertainty set constraint is being added. 
- ''' - m = ConcreteModel() - # At this stage, the separation problem has uncertain_params which are now Var objects - m.p1 = Var(initialize=0) - m.p2 = Var(initialize=0) - m.uncertain_params = [m.p1, m.p2] - m.uncertain_param_vars = m.uncertain_params - - _set = myUncertaintySet() - m.uncertainty_set_contr = _set.set_as_constraint( - uncertain_params=m.uncertain_param_vars - ) - uncertain_params_in_expr = list( - v - for v in m.uncertain_param_vars - if v in ComponentSet(identify_variables(expr=m.uncertainty_set_contr.expr)) - ) - - self.assertEqual( - [id(u) for u in uncertain_params_in_expr], - [id(u) for u in m.uncertain_param_vars], - msg="Uncertain param Var objects used to construct uncertainty set constraint must" - "be the same uncertain param Var objects in the original model.", - ) - - def test_uncertainty_set_with_incorrect_params(self): - ''' - Case in which the UncertaintySet is constructed using uncertain_param objects which are Params instead of - Vars. Leads to a constraint this is not potentially variable. - ''' - m = ConcreteModel() - m.p1 = Param(initialize=0, mutable=True) - m.p2 = Param(initialize=0, mutable=True) - m.uncertain_params = [m.p1, m.p2] - - _set = myUncertaintySet() - m.uncertainty_set_contr = _set.set_as_constraint( - uncertain_params=m.uncertain_params - ) - variables_in_constr = list( - v - for v in m.uncertain_params - if v in ComponentSet(identify_variables(expr=m.uncertainty_set_contr.expr)) - ) - - self.assertEqual( - len(variables_in_constr), - 0, - msg="Uncertainty set constraint contains no Var objects, consists of a not potentially" - "variable expression.", - ) - - -class testEllipsoidalUncertaintySetClass(unittest.TestCase): - """ - Unit tests for the EllipsoidalSet - """ - - def test_normal_construction_and_update(self): - """ - Test EllipsoidalSet constructor and setter - work normally when arguments are appropriate. - """ - center = [0, 0] - shape_matrix = [[1, 0], [0, 2]] - scale = 2 - eset = EllipsoidalSet(center, shape_matrix, scale) - np.testing.assert_allclose( - center, eset.center, err_msg="EllipsoidalSet center not as expected" - ) - np.testing.assert_allclose( - shape_matrix, - eset.shape_matrix, - err_msg="EllipsoidalSet shape matrix not as expected", - ) - np.testing.assert_allclose( - scale, eset.scale, err_msg="EllipsoidalSet scale not as expected" - ) - - # check attributes update - new_center = [-1, -3] - new_shape_matrix = [[2, 1], [1, 3]] - new_scale = 1 - - eset.center = new_center - eset.shape_matrix = new_shape_matrix - eset.scale = new_scale - - np.testing.assert_allclose( - new_center, - eset.center, - err_msg="EllipsoidalSet center update not as expected", - ) - np.testing.assert_allclose( - new_shape_matrix, - eset.shape_matrix, - err_msg="EllipsoidalSet shape matrix update not as expected", - ) - np.testing.assert_allclose( - new_scale, eset.scale, err_msg="EllipsoidalSet scale update not as expected" - ) - - def test_error_on_ellipsoidal_dim_change(self): - """ - EllipsoidalSet dimension is considered immutable. - Test ValueError raised when center size is not equal - to set dimension. - """ - invalid_center = [0, 0] - shape_matrix = [[1, 0], [0, 1]] - scale = 2 - - eset = EllipsoidalSet([0, 0], shape_matrix, scale) - - exc_str = r"Attempting to set.*dimension 2 to value of dimension 3" - - # assert error on update - with self.assertRaisesRegex(ValueError, exc_str): - eset.center = [0, 0, 0] - - def test_error_on_neg_scale(self): - """ - Test ValueError raised if scale attribute set to negative - value. 
- """ - center = [0, 0] - shape_matrix = [[1, 0], [0, 2]] - neg_scale = -1 - - exc_str = r".*must be a non-negative real \(provided.*-1\)" - - # assert error on construction - with self.assertRaisesRegex(ValueError, exc_str): - EllipsoidalSet(center, shape_matrix, neg_scale) - - # construct a valid EllipsoidalSet - eset = EllipsoidalSet(center, shape_matrix, scale=2) - - # assert error on update - with self.assertRaisesRegex(ValueError, exc_str): - eset.scale = neg_scale - - def test_error_on_shape_matrix_with_wrong_size(self): - """ - Test error in event EllipsoidalSet shape matrix - is not in accordance with set dimension. - """ - center = [0, 0] - invalid_shape_matrix = [[1, 0]] - scale = 1 - - exc_str = r".*must be a square matrix of size 2.*\(provided.*shape \(1, 2\)\)" - - # assert error on construction - with self.assertRaisesRegex(ValueError, exc_str): - EllipsoidalSet(center, invalid_shape_matrix, scale) - - # construct a valid EllipsoidalSet - eset = EllipsoidalSet(center, [[1, 0], [0, 1]], scale) - - # assert error on update - with self.assertRaisesRegex(ValueError, exc_str): - eset.shape_matrix = invalid_shape_matrix - - def test_error_on_invalid_shape_matrix(self): - """ - Test exceptional cases of invalid square shape matrix - arguments - """ - center = [0, 0] - scale = 3 - - # assert error on construction - with self.assertRaisesRegex( - ValueError, - r"Shape matrix must be symmetric", - msg="Asymmetric shape matrix test failed", - ): - EllipsoidalSet(center, [[1, 1], [0, 1]], scale) - with self.assertRaises( - np.linalg.LinAlgError, msg="Singular shape matrix test failed" - ): - EllipsoidalSet(center, [[0, 0], [0, 0]], scale) - with self.assertRaisesRegex( - ValueError, - r"Non positive-definite.*", - msg="Indefinite shape matrix test failed", - ): - EllipsoidalSet(center, [[1, 0], [0, -2]], scale) - - # construct a valid EllipsoidalSet - eset = EllipsoidalSet(center, [[1, 0], [0, 2]], scale) - - # assert error on update - with self.assertRaisesRegex( - ValueError, - r"Shape matrix must be symmetric", - msg="Asymmetric shape matrix test failed", - ): - eset.shape_matrix = [[1, 1], [0, 1]] - with self.assertRaises( - np.linalg.LinAlgError, msg="Singular shape matrix test failed" - ): - eset.shape_matrix = [[0, 0], [0, 0]] - with self.assertRaisesRegex( - ValueError, - r"Non positive-definite.*", - msg="Indefinite shape matrix test failed", - ): - eset.shape_matrix = [[1, 0], [0, -2]] - - def test_uncertainty_set_with_correct_params(self): - ''' - Case in which the UncertaintySet is constructed using the uncertain_param objects from the model to - which the uncertainty set constraint is being added. 
-        '''
-        m = ConcreteModel()
-        # At this stage, the separation problem has uncertain_params which are now Var objects
-        m.p1 = Var(initialize=0)
-        m.p2 = Var(initialize=0)
-        m.uncertain_params = [m.p1, m.p2]
-        m.uncertain_param_vars = Var(range(len(m.uncertain_params)), initialize=0)
-        cov = [[1, 0], [0, 1]]
-        s = 1
-
-        _set = EllipsoidalSet(center=[0, 0], shape_matrix=cov, scale=s)
-        m.uncertainty_set_contr = _set.set_as_constraint(
-            uncertain_params=m.uncertain_param_vars
-        )
-        uncertain_params_in_expr = list(
-            v
-            for v in m.uncertain_param_vars.values()
-            if v
-            in ComponentSet(identify_variables(expr=m.uncertainty_set_contr[1].expr))
-        )
-
-        self.assertEqual(
-            [id(u) for u in uncertain_params_in_expr],
-            [id(u) for u in m.uncertain_param_vars.values()],
-            msg="Uncertain param Var objects used to construct uncertainty set constraint must"
-            " be the same uncertain param Var objects in the original model.",
-        )
-
-    def test_uncertainty_set_with_incorrect_params(self):
-        '''
-        Case in which the EllipsoidalSet is constructed using uncertain_param objects which are Params instead of
-        Vars. Leads to a constraint that is not potentially variable.
-        '''
-        m = ConcreteModel()
-        m.p1 = Param(initialize=0, mutable=True)
-        m.p2 = Param(initialize=0, mutable=True)
-        m.uncertain_params = [m.p1, m.p2]
-        m.uncertain_param_vars = Param(
-            range(len(m.uncertain_params)), initialize=0, mutable=True
-        )
-        cov = [[1, 0], [0, 1]]
-        s = 1
-
-        _set = EllipsoidalSet(center=[0, 0], shape_matrix=cov, scale=s)
-        m.uncertainty_set_contr = _set.set_as_constraint(
-            uncertain_params=m.uncertain_param_vars
-        )
-        variables_in_constr = list(
-            v
-            for v in m.uncertain_params
-            if v
-            in ComponentSet(identify_variables(expr=m.uncertainty_set_contr[1].expr))
-        )
-
-        self.assertEqual(
-            len(variables_in_constr),
-            0,
-            msg="Uncertainty set constraint contains no Var objects, consists of a not potentially"
-            " variable expression.",
-        )
-
-    def test_point_in_set(self):
-        m = ConcreteModel()
-        m.p1 = Param(initialize=0, mutable=True)
-        m.p2 = Param(initialize=0, mutable=True)
-        m.uncertain_params = [m.p1, m.p2]
-        m.uncertain_param_vars = Var(range(len(m.uncertain_params)), initialize=0)
-        cov = [[1, 0], [0, 1]]
-        s = 1
-
-        _set = EllipsoidalSet(center=[0, 0], shape_matrix=cov, scale=s)
-        self.assertTrue(
-            _set.point_in_set([0, 0]), msg="Point is not in the EllipsoidalSet."
-        )
-
-    def test_add_bounds_on_uncertain_parameters(self):
-        m = ConcreteModel()
-        m.util = Block()
-        m.util.uncertain_param_vars = Var([0, 1], initialize=0.5)
-        cov = [[1, 0], [0, 1]]
-        s = 1
-
-        _set = EllipsoidalSet(center=[0, 0], shape_matrix=cov, scale=s)
-        config = Block()
-        config.uncertainty_set = _set
-
-        EllipsoidalSet.add_bounds_on_uncertain_parameters(model=m, config=config)
-
-        self.assertNotEqual(
-            m.util.uncertain_param_vars[0].lb,
-            None,
-            "Bounds not added correctly for EllipsoidalSet",
-        )
-        self.assertNotEqual(
-            m.util.uncertain_param_vars[0].ub,
-            None,
-            "Bounds not added correctly for EllipsoidalSet",
-        )
-        self.assertNotEqual(
-            m.util.uncertain_param_vars[1].lb,
-            None,
-            "Bounds not added correctly for EllipsoidalSet",
-        )
-        self.assertNotEqual(
-            m.util.uncertain_param_vars[1].ub,
-            None,
-            "Bounds not added correctly for EllipsoidalSet",
-        )
-
-    def test_ellipsoidal_set_bounds(self):
-        """Check `EllipsoidalSet` parameter bounds method correct."""
-        cov = [[2, 1], [1, 2]]
-        scales = [0.5, 2]
-        mean = [1, 1]
-
-        for scale in scales:
-            ell = EllipsoidalSet(center=mean, shape_matrix=cov, scale=scale)
-            bounds = ell.parameter_bounds
-            actual_bounds = list()
-            for idx, val in enumerate(mean):
-                diff = (cov[idx][idx] * scale) ** 0.5
-                actual_bounds.append((val - diff, val + diff))
-            self.assertTrue(
-                np.allclose(np.array(bounds), np.array(actual_bounds)),
-                msg=(
-                    f"EllipsoidalSet bounds {bounds} do not match their actual"
-                    f" values {actual_bounds} (for scale {scale}"
-                    f" and shape matrix {cov})."
-                    " Check the `parameter_bounds`"
-                    " method for the EllipsoidalSet."
-                ),
-            )
-
-
-class testAxisAlignedEllipsoidalUncertaintySetClass(unittest.TestCase):
-    """
-    Unit tests for the AxisAlignedEllipsoidalSet.
-    """
-
-    def test_normal_construction_and_update(self):
-        """
-        Test AxisAlignedEllipsoidalSet constructor and setter
-        work normally when bounds are appropriate.
-        """
-        center = [0, 0]
-        half_lengths = [1, 3]
-        aset = AxisAlignedEllipsoidalSet(center, half_lengths)
-        np.testing.assert_allclose(
-            center,
-            aset.center,
-            err_msg="AxisAlignedEllipsoidalSet center not as expected",
-        )
-        np.testing.assert_allclose(
-            half_lengths,
-            aset.half_lengths,
-            err_msg="AxisAlignedEllipsoidalSet half-lengths not as expected",
-        )
-
-        # check attributes update
-        new_center = [-1, -3]
-        new_half_lengths = [0, 1]
-        aset.center = new_center
-        aset.half_lengths = new_half_lengths
-
-        np.testing.assert_allclose(
-            new_center,
-            aset.center,
-            err_msg="AxisAlignedEllipsoidalSet center update not as expected",
-        )
-        np.testing.assert_allclose(
-            new_half_lengths,
-            aset.half_lengths,
-            err_msg=("AxisAlignedEllipsoidalSet half lengths update not as expected"),
-        )
-
-    def test_error_on_axis_aligned_dim_change(self):
-        """
-        AxisAlignedEllipsoidalSet dimension is considered immutable.
-        Test ValueError raised when attempting to alter the
-        set dimension (i.e. number of entries of `center`).
-        """
-        center = [0, 0]
-        half_lengths = [1, 3]
-        aset = AxisAlignedEllipsoidalSet(center, half_lengths)
-
-        exc_str = r"Attempting to set.*dimension 2 to value of dimension 3"
-        with self.assertRaisesRegex(ValueError, exc_str):
-            aset.center = [0, 0, 1]
-
-        with self.assertRaisesRegex(ValueError, exc_str):
-            aset.half_lengths = [0, 0, 1]
-
-    def test_error_on_negative_axis_aligned_half_lengths(self):
-        """
-        Test ValueError if half lengths for AxisAlignedEllipsoidalSet
-        contains a negative value.
- """ - center = [1, 1] - invalid_half_lengths = [1, -1] - exc_str = r"Entry -1 of.*'half_lengths' is negative.*" - - # assert error on construction - with self.assertRaisesRegex(ValueError, exc_str): - AxisAlignedEllipsoidalSet(center, invalid_half_lengths) - - # construct a valid axis-aligned ellipsoidal set - aset = AxisAlignedEllipsoidalSet(center, [1, 0]) - - # assert error on update - with self.assertRaisesRegex(ValueError, exc_str): - aset.half_lengths = invalid_half_lengths - - def test_uncertainty_set_with_correct_params(self): - ''' - Case in which the UncertaintySet is constructed using the uncertain_param objects from the model to - which the uncertainty set constraint is being added. - ''' - m = ConcreteModel() - # At this stage, the separation problem has uncertain_params which are now Var objects - m.p1 = Var(initialize=0) - m.p2 = Var(initialize=0) - m.uncertain_params = [m.p1, m.p2] - m.uncertain_param_vars = Var(range(len(m.uncertain_params)), initialize=0) - _set = AxisAlignedEllipsoidalSet(center=[0, 0], half_lengths=[2, 1]) - m.uncertainty_set_contr = _set.set_as_constraint( - uncertain_params=m.uncertain_param_vars - ) - uncertain_params_in_expr = list( - v - for v in m.uncertain_param_vars.values() - if v - in ComponentSet(identify_variables(expr=m.uncertainty_set_contr[1].expr)) - ) - - self.assertEqual( - [id(u) for u in uncertain_params_in_expr], - [id(u) for u in m.uncertain_param_vars.values()], - msg="Uncertain param Var objects used to construct uncertainty set constraint must" - " be the same uncertain param Var objects in the original model.", - ) - - def test_uncertainty_set_with_incorrect_params(self): - ''' - Case in which the set is constructed using uncertain_param objects which are Params instead of - Vars. Leads to a constraint this is not potentially variable. 
-        '''
-        m = ConcreteModel()
-        m.p1 = Param(initialize=0, mutable=True)
-        m.p2 = Param(initialize=0, mutable=True)
-        m.uncertain_params = [m.p1, m.p2]
-        m.uncertain_param_vars = Param(
-            range(len(m.uncertain_params)), initialize=0, mutable=True
-        )
-        _set = AxisAlignedEllipsoidalSet(center=[0, 0], half_lengths=[2, 1])
-        m.uncertainty_set_contr = _set.set_as_constraint(
-            uncertain_params=m.uncertain_param_vars
-        )
-        variables_in_constr = list(
-            v
-            for v in m.uncertain_params
-            if v
-            in ComponentSet(identify_variables(expr=m.uncertainty_set_contr[1].expr))
-        )
-
-        self.assertEqual(
-            len(variables_in_constr),
-            0,
-            msg="Uncertainty set constraint contains no Var objects, consists of a not potentially"
-            " variable expression.",
-        )
-
-    def test_point_in_set(self):
-        m = ConcreteModel()
-        m.p1 = Param(initialize=0, mutable=True)
-        m.p2 = Param(initialize=0, mutable=True)
-        m.uncertain_params = [m.p1, m.p2]
-        m.uncertain_param_vars = Var(range(len(m.uncertain_params)), initialize=0)
-        _set = AxisAlignedEllipsoidalSet(center=[0, 0], half_lengths=[2, 1])
-        self.assertTrue(
-            _set.point_in_set([0, 0]),
-            msg="Point is not in the AxisAlignedEllipsoidalSet.",
-        )
-
-    def test_add_bounds_on_uncertain_parameters(self):
-        m = ConcreteModel()
-        m.util = Block()
-        m.util.uncertain_param_vars = Var([0, 1], initialize=0.5)
-
-        _set = AxisAlignedEllipsoidalSet(center=[0, 0], half_lengths=[2, 1])
-        config = Block()
-        config.uncertainty_set = _set
-
-        AxisAlignedEllipsoidalSet.add_bounds_on_uncertain_parameters(
-            model=m, config=config
-        )
-
-        self.assertNotEqual(
-            m.util.uncertain_param_vars[0].lb,
-            None,
-            "Bounds not added correctly for AxisAlignedEllipsoidalSet",
-        )
-        self.assertNotEqual(
-            m.util.uncertain_param_vars[0].ub,
-            None,
-            "Bounds not added correctly for AxisAlignedEllipsoidalSet",
-        )
-        self.assertNotEqual(
-            m.util.uncertain_param_vars[1].lb,
-            None,
-            "Bounds not added correctly for AxisAlignedEllipsoidalSet",
-        )
-        self.assertNotEqual(
-            m.util.uncertain_param_vars[1].ub,
-            None,
-            "Bounds not added correctly for AxisAlignedEllipsoidalSet",
-        )
-
-    def test_set_with_zero_half_lengths(self):
-        # construct ellipsoid
-        half_lengths = [1, 0, 2, 0]
-        center = [1, 1, 1, 1]
-        ell = AxisAlignedEllipsoidalSet(center, half_lengths)
-
-        # construct model
-        m = ConcreteModel()
-        m.v1 = Var()
-        m.v2 = Var([1, 2])
-        m.v3 = Var()
-
-        # test constraints
-        conlist = ell.set_as_constraint([m.v1, m.v2, m.v3])
-        eq_cons = [con for con in conlist.values() if con.equality]
-
-        self.assertEqual(
-            len(conlist),
-            3,
-            msg=(
-                "Constraint list for this `AxisAlignedEllipsoidalSet` should"
-                f" be of length 3, but is of length {len(conlist)}"
-            ),
-        )
-        self.assertEqual(
-            len(eq_cons),
-            2,
-            msg=(
-                "Number of equality constraints for this"
-                " `AxisAlignedEllipsoidalSet` should be 2,"
-                f" there are {len(eq_cons)} such constraints"
-            ),
-        )
-
-    @unittest.skipUnless(
-        baron_license_is_valid, "Global NLP solver is not available and licensed."
-    )
-    def test_two_stg_mod_with_axis_aligned_set(self):
-        """
-        Test two-stage model with `AxisAlignedEllipsoidalSet`
-        as the uncertainty set.
- """ - # define model - m = ConcreteModel() - m.x1 = Var(initialize=0, bounds=(0, None)) - m.x2 = Var(initialize=0, bounds=(0, None)) - m.x3 = Var(initialize=0, bounds=(None, None)) - m.u1 = Param(initialize=1.125, mutable=True) - m.u2 = Param(initialize=1, mutable=True) - - m.con1 = Constraint(expr=m.x1 * m.u1 ** (0.5) - m.x2 * m.u1 <= 2) - m.con2 = Constraint(expr=m.x1**2 - m.x2**2 * m.u1 == m.x3) - - m.obj = Objective(expr=(m.x1 - 4) ** 2 + (m.x2 - m.u2) ** 2) - - # Define the uncertainty set - # we take the parameter `u2` to be 'fixed' - ellipsoid = AxisAlignedEllipsoidalSet(center=[1.125, 1], half_lengths=[1, 0]) - - # Instantiate the PyROS solver - pyros_solver = SolverFactory("pyros") - - # Define subsolvers utilized in the algorithm - local_subsolver = SolverFactory('baron') - global_subsolver = SolverFactory("baron") - - # Call the PyROS solver - results = pyros_solver.solve( - model=m, - first_stage_variables=[m.x1, m.x2], - second_stage_variables=[], - uncertain_params=[m.u1, m.u2], - uncertainty_set=ellipsoid, - local_solver=local_subsolver, - global_solver=global_subsolver, - options={ - "objective_focus": ObjectiveType.worst_case, - "solve_master_globally": True, - }, - ) - - # check successful termination - self.assertEqual( - results.pyros_termination_condition, - pyrosTerminationCondition.robust_optimal, - msg="Did not identify robust optimal solution to problem instance.", - ) - self.assertGreater( - results.iterations, - 0, - msg="Robust infeasible model terminated in 0 iterations (nominal case).", - ) - - -class testPolyhedralUncertaintySetClass(unittest.TestCase): - """ - Unit tests for the Polyhedral set. - """ - - def test_normal_construction_and_update(self): - """ - Test PolyhedralSet constructor and attribute setters work - appropriately. - """ - lhs_coefficients_mat = [[1, 2, 3], [4, 5, 6]] - rhs_vec = [1, 3] - - pset = PolyhedralSet(lhs_coefficients_mat, rhs_vec) - - # check attributes are as expected - np.testing.assert_allclose(lhs_coefficients_mat, pset.coefficients_mat) - np.testing.assert_allclose(rhs_vec, pset.rhs_vec) - - # update the set - pset.coefficients_mat = [[1, 0, 1], [1, 1, 1.5]] - pset.rhs_vec = [3, 4] - - # check updates work - np.testing.assert_allclose([[1, 0, 1], [1, 1, 1.5]], pset.coefficients_mat) - np.testing.assert_allclose([3, 4], pset.rhs_vec) - - def test_error_on_polyhedral_set_dim_change(self): - """ - PolyhedralSet dimension (number columns of 'coefficients_mat') - is considered immutable. - Test ValueError raised if attempt made to change dimension. - """ - # construct valid set - pset = PolyhedralSet([[1, 2, 3], [4, 5, 6]], [1, 3]) - - exc_str = ( - r".*must have 3 columns to match set dimension \(provided.*2 columns\)" - ) - - # assert error on update - with self.assertRaisesRegex(ValueError, exc_str): - pset.coefficients_mat = [[1, 2], [3, 4]] - - def test_error_on_inconsistent_rows(self): - """ - Number of rows of budget membership mat is immutable. - Similarly, size of rhs_vec is immutable. - Check ValueError raised in event of attempted change. 
- """ - coeffs_mat_exc_str = ( - r".*must have 2 rows to match shape of attribute 'rhs_vec' " - r"\(provided.*3 rows\)" - ) - rhs_vec_exc_str = ( - r".*must have 2 entries to match shape of attribute " - r"'coefficients_mat' \(provided.*3 entries\)" - ) - # assert error on construction - with self.assertRaisesRegex(ValueError, rhs_vec_exc_str): - PolyhedralSet([[1, 2], [3, 4]], rhs_vec=[1, 3, 3]) - - # construct a valid polyhedral set - # (2 x 2 coefficients, 2-vector for RHS) - pset = PolyhedralSet([[1, 2], [3, 4]], rhs_vec=[1, 3]) - - # assert error on update - with self.assertRaisesRegex(ValueError, coeffs_mat_exc_str): - # 3 x 2 matrix row mismatch - pset.coefficients_mat = [[1, 2], [3, 4], [5, 6]] - with self.assertRaisesRegex(ValueError, rhs_vec_exc_str): - # 3-vector mismatches 2 rows - pset.rhs_vec = [1, 3, 2] - - def test_error_on_empty_set(self): - """ - Check ValueError raised if nonemptiness check performed - at construction returns a negative result. - """ - exc_str = r"PolyhedralSet.*is empty.*" - - # assert error on construction - with self.assertRaisesRegex(ValueError, exc_str): - PolyhedralSet([[1], [-1]], rhs_vec=[1, -3]) - - def test_error_on_polyhedral_mat_all_zero_columns(self): - """ - Test ValueError raised if budget membership mat - has a column with all zeros. - """ - invalid_col_mat = [[0, 0, 1], [0, 0, 1], [0, 0, 1]] - rhs_vec = [1, 1, 2] - - exc_str = r".*all entries zero in columns at indexes: 0, 1.*" - - # assert error on construction - with self.assertRaisesRegex(ValueError, exc_str): - PolyhedralSet(invalid_col_mat, rhs_vec) - - # construct a valid budget set - pset = PolyhedralSet([[1, 0, 1], [1, 1, 0], [1, 1, 1]], rhs_vec) - - # assert error on update - with self.assertRaisesRegex(ValueError, exc_str): - pset.coefficients_mat = invalid_col_mat - - def test_uncertainty_set_with_correct_params(self): - ''' - Case in which the UncertaintySet is constructed using the uncertain_param objects from the model to - which the uncertainty set constraint is being added. - ''' - m = ConcreteModel() - # At this stage, the separation problem has uncertain_params which are now Var objects - m.p1 = Var(initialize=0) - m.p2 = Var(initialize=0) - m.uncertain_params = [m.p1, m.p2] - m.uncertain_param_vars = Var(range(len(m.uncertain_params)), initialize=0) - A = [[0, 1], [1, 0]] - b = [0, 0] - - _set = PolyhedralSet(lhs_coefficients_mat=A, rhs_vec=b) - m.uncertainty_set_contr = _set.set_as_constraint( - uncertain_params=m.uncertain_param_vars - ) - uncertain_params_in_expr = ComponentSet() - for con in m.uncertainty_set_contr.values(): - con_vars = ComponentSet(identify_variables(expr=con.expr)) - for v in m.uncertain_param_vars.values(): - if v in con_vars: - uncertain_params_in_expr.add(v) - - self.assertEqual( - uncertain_params_in_expr, - ComponentSet(m.uncertain_param_vars.values()), - msg="Uncertain param Var objects used to construct uncertainty set constraint must" - " be the same uncertain param Var objects in the original model.", - ) - - def test_uncertainty_set_with_incorrect_params(self): - ''' - Case in which the PolyHedral is constructed using uncertain_param objects which are Params instead of - Vars. Leads to a constraint this is not potentially variable. 
-        '''
-        m = ConcreteModel()
-        # At this stage, the separation problem has uncertain_params which are now Var objects
-        m.p1 = Var(initialize=0)
-        m.p2 = Var(initialize=0)
-        m.uncertain_params = [m.p1, m.p2]
-        m.uncertain_param_vars = Param(
-            range(len(m.uncertain_params)), initialize=0, mutable=True
-        )
-        A = [[0, 1], [1, 0]]
-        b = [0, 0]
-
-        _set = PolyhedralSet(lhs_coefficients_mat=A, rhs_vec=b)
-        m.uncertainty_set_contr = _set.set_as_constraint(
-            uncertain_params=m.uncertain_param_vars
-        )
-        vars_in_expr = []
-        for con in m.uncertainty_set_contr.values():
-            vars_in_expr.extend(
-                v
-                for v in m.uncertain_param_vars
-                if v in ComponentSet(identify_variables(expr=con.expr))
-            )
-
-        self.assertEqual(
-            len(vars_in_expr),
-            0,
-            msg="Uncertainty set constraint contains no Var objects, consists of a not potentially"
-            " variable expression.",
-        )
-
-    def test_polyhedral_set_as_constraint(self):
-        '''
-        The set_as_constraint method must return an indexed uncertainty_set_constr
-        which has as many elements as there are rows in A.
-        '''
-
-        A = [[1, 0], [0, 1]]
-        b = [0, 0]
-
-        m = ConcreteModel()
-        m.p1 = Var(initialize=0)
-        m.p2 = Var(initialize=0)
-
-        polyhedral_set = PolyhedralSet(lhs_coefficients_mat=A, rhs_vec=b)
-        m.uncertainty_set_constr = polyhedral_set.set_as_constraint(
-            uncertain_params=[m.p1, m.p2]
-        )
-
-        self.assertEqual(
-            len(A),
-            len(m.uncertainty_set_constr.index_set()),
-            msg="Polyhedral uncertainty set constraints must be as many as the"
-            " number of rows in the matrix A.",
-        )
-
-    def test_point_in_set(self):
-        A = [[1, 0], [0, 1]]
-        b = [0, 0]
-
-        m = ConcreteModel()
-        m.p1 = Var(initialize=0)
-        m.p2 = Var(initialize=0)
-        m.uncertain_params = [m.p1, m.p2]
-        m.uncertain_param_vars = Var(range(len(m.uncertain_params)), initialize=0)
-        polyhedral_set = PolyhedralSet(lhs_coefficients_mat=A, rhs_vec=b)
-        self.assertTrue(
-            polyhedral_set.point_in_set([0, 0]),
-            msg="Point is not in the PolyhedralSet.",
-        )
-
-    @unittest.skipUnless(baron_available, "Global NLP solver is not available.")
-    def test_add_bounds_on_uncertain_parameters(self):
-        m = ConcreteModel()
-        m.util = Block()
-        m.util.uncertain_param_vars = Var([0, 1], initialize=0.5)
-
-        A = [[1, 0], [0, 1]]
-        b = [0, 0]
-
-        polyhedral_set = PolyhedralSet(lhs_coefficients_mat=A, rhs_vec=b)
-        config = Block()
-        config.uncertainty_set = polyhedral_set
-        config.global_solver = SolverFactory("baron")
-
-        PolyhedralSet.add_bounds_on_uncertain_parameters(model=m, config=config)
-
-        self.assertNotEqual(
-            m.util.uncertain_param_vars[0].lb,
-            None,
-            "Bounds not added correctly for PolyhedralSet",
-        )
-        self.assertNotEqual(
-            m.util.uncertain_param_vars[0].ub,
-            None,
-            "Bounds not added correctly for PolyhedralSet",
-        )
-        self.assertNotEqual(
-            m.util.uncertain_param_vars[1].lb,
-            None,
-            "Bounds not added correctly for PolyhedralSet",
-        )
-        self.assertNotEqual(
-            m.util.uncertain_param_vars[1].ub,
-            None,
-            "Bounds not added correctly for PolyhedralSet",
-        )
-
-
-class testBudgetUncertaintySetClass(unittest.TestCase):
-    '''
-    Budget uncertainty sets.
-    Required inputs are matrix budget_membership_mat, rhs_vec.
-    '''
-
-    def test_normal_budget_construction_and_update(self):
-        """
-        Test BudgetSet constructor and attribute setters work
-        appropriately.
- """ - budget_mat = [[1, 0, 1], [0, 1, 0]] - budget_rhs_vec = [1, 3] - - # check attributes are as expected - buset = BudgetSet(budget_mat, budget_rhs_vec) - - np.testing.assert_allclose(budget_mat, buset.budget_membership_mat) - np.testing.assert_allclose(budget_rhs_vec, buset.budget_rhs_vec) - np.testing.assert_allclose( - [[1, 0, 1], [0, 1, 0], [-1, 0, 0], [0, -1, 0], [0, 0, -1]], - buset.coefficients_mat, - ) - np.testing.assert_allclose([1, 3, 0, 0, 0], buset.rhs_vec) - np.testing.assert_allclose(np.zeros(3), buset.origin) - - # update the set - buset.budget_membership_mat = [[1, 1, 0], [0, 0, 1]] - buset.budget_rhs_vec = [3, 4] - - # check updates work - np.testing.assert_allclose([[1, 1, 0], [0, 0, 1]], buset.budget_membership_mat) - np.testing.assert_allclose([3, 4], buset.budget_rhs_vec) - np.testing.assert_allclose( - [[1, 1, 0], [0, 0, 1], [-1, 0, 0], [0, -1, 0], [0, 0, -1]], - buset.coefficients_mat, - ) - np.testing.assert_allclose([3, 4, 0, 0, 0], buset.rhs_vec) - - # update origin - buset.origin = [1, 0, -1.5] - np.testing.assert_allclose([1, 0, -1.5], buset.origin) - - def test_error_on_budget_set_dim_change(self): - """ - BudgetSet dimension is considered immutable. - Test ValueError raised when attempting to alter the - budget set dimension. - """ - budget_mat = [[1, 0, 1], [0, 1, 0]] - budget_rhs_vec = [1, 3] - bu_set = BudgetSet(budget_mat, budget_rhs_vec) - - # error on budget incidence matrix update - exc_str = ( - r".*must have 3 columns to match set dimension \(provided.*1 columns\)" - ) - with self.assertRaisesRegex(ValueError, exc_str): - bu_set.budget_membership_mat = [[1], [1]] - - # error on origin update - exc_str = ( - r".*must have 3 entries to match set dimension \(provided.*4 entries\)" - ) - with self.assertRaisesRegex(ValueError, exc_str): - bu_set.origin = [1, 2, 1, 0] - - def test_error_on_budget_member_mat_row_change(self): - """ - Number of rows of budget membership mat is immutable. - Hence, size of budget_rhs_vec is also immutable. - """ - budget_mat = [[1, 0, 1], [0, 1, 0]] - budget_rhs_vec = [1, 3] - bu_set = BudgetSet(budget_mat, budget_rhs_vec) - - exc_str = ( - r".*must have 2 rows to match shape of attribute 'budget_rhs_vec' " - r"\(provided.*1 rows\)" - ) - with self.assertRaisesRegex(ValueError, exc_str): - bu_set.budget_membership_mat = [[1, 0, 1]] - - exc_str = ( - r".*must have 2 entries to match shape of attribute " - r"'budget_membership_mat' \(provided.*1 entries\)" - ) - with self.assertRaisesRegex(ValueError, exc_str): - bu_set.budget_rhs_vec = [1] - - def test_error_on_neg_budget_rhs_vec_entry(self): - """ - Test ValueError raised if budget RHS vec has entry - with negative value entry. - """ - budget_mat = [[1, 0, 1], [1, 1, 0]] - neg_val_rhs_vec = [1, -1] - - exc_str = r"Entry -1 of.*'budget_rhs_vec' is negative*" - - # assert error on construction - with self.assertRaisesRegex(ValueError, exc_str): - BudgetSet(budget_mat, neg_val_rhs_vec) - - # construct a valid budget set - buset = BudgetSet(budget_mat, [1, 1]) - - # assert error on update - with self.assertRaisesRegex(ValueError, exc_str): - buset.budget_rhs_vec = neg_val_rhs_vec - - def test_error_on_non_bool_budget_mat_entry(self): - """ - Test ValueError raised if budget membership mat has - entry which is not a 0-1 value. 
- """ - invalid_budget_mat = [[1, 0, 1], [1, 1, 0.1]] - budget_rhs_vec = [1, 1] - - exc_str = r"Attempting.*entries.*not 0-1 values \(example: 0.1\).*" - - # assert error on construction - with self.assertRaisesRegex(ValueError, exc_str): - BudgetSet(invalid_budget_mat, budget_rhs_vec) - - # construct a valid budget set - buset = BudgetSet([[1, 0, 1], [1, 1, 0]], budget_rhs_vec) - - # assert error on update - with self.assertRaisesRegex(ValueError, exc_str): - buset.budget_membership_mat = invalid_budget_mat - - def test_error_on_budget_mat_all_zero_rows(self): - """ - Test ValueError raised if budget membership mat - has a row with all zeros. - """ - invalid_row_mat = [[0, 0, 0], [1, 1, 1], [0, 0, 0]] - budget_rhs_vec = [1, 1, 2] - - exc_str = r".*all entries zero in rows at indexes: 0, 2.*" - - # assert error on construction - with self.assertRaisesRegex(ValueError, exc_str): - BudgetSet(invalid_row_mat, budget_rhs_vec) - - # construct a valid budget set - buset = BudgetSet([[1, 0, 1], [1, 1, 0], [1, 1, 1]], budget_rhs_vec) - - # assert error on update - with self.assertRaisesRegex(ValueError, exc_str): - buset.budget_membership_mat = invalid_row_mat - - def test_error_on_budget_mat_all_zero_columns(self): - """ - Test ValueError raised if budget membership mat - has a column with all zeros. - """ - invalid_col_mat = [[0, 0, 1], [0, 0, 1], [0, 0, 1]] - budget_rhs_vec = [1, 1, 2] - - exc_str = r".*all entries zero in columns at indexes: 0, 1.*" - - # assert error on construction - with self.assertRaisesRegex(ValueError, exc_str): - BudgetSet(invalid_col_mat, budget_rhs_vec) - - # construct a valid budget set - buset = BudgetSet([[1, 0, 1], [1, 1, 0], [1, 1, 1]], budget_rhs_vec) - - # assert error on update - with self.assertRaisesRegex(ValueError, exc_str): - buset.budget_membership_mat = invalid_col_mat - - @unittest.skipUnless( - SolverFactory("cbc").available(exception_flag=False), - "LP solver CBC not available", - ) - def test_budget_set_parameter_bounds_correct(self): - """ - If LP solver is available, test parameter bounds method - for factor model set is correct (check against - results from an LP solver). - """ - solver = SolverFactory("cbc") - - # construct budget set instances - buset1 = BudgetSet( - budget_membership_mat=[[1, 1], [0, 1]], rhs_vec=[2, 3], origin=None - ) - buset2 = BudgetSet( - budget_membership_mat=[[1, 0], [1, 1]], rhs_vec=[3, 2], origin=[1, 1] - ) - - # check parameter bounds matches LP results - # exactly for each case - for buset in [buset1, buset2]: - param_bounds = buset.parameter_bounds - lp_param_bounds = eval_parameter_bounds(buset, solver) - - self.assertTrue( - np.allclose(param_bounds, lp_param_bounds), - msg=( - "Parameter bounds not consistent with LP values for " - "BudgetSet with parameterization:\n" - f"budget_membership_mat={buset.budget_membership_mat},\n" - f"budget_rhs_vec={buset.budget_rhs_vec},\n" - f"origin={buset.origin}.\n" - f"({param_bounds} does not match {lp_param_bounds})" - ), - ) - - def test_uncertainty_set_with_correct_params(self): - ''' - Case in which the UncertaintySet is constructed using the uncertain_param objects from the model to - which the uncertainty set constraint is being added. 
-        '''
-        m = ConcreteModel()
-        # At this stage, the separation problem has uncertain_params which are now Var objects
-        m.p1 = Var(initialize=0)
-        m.p2 = Var(initialize=0)
-        m.uncertain_params = [m.p1, m.p2]
-        m.uncertain_param_vars = Var(range(len(m.uncertain_params)), initialize=0)
-        # Single budget
-        budget_membership_mat = [[1 for i in range(len(m.uncertain_param_vars))]]
-        rhs_vec = [
-            0.1 * len(m.uncertain_param_vars)
-            + sum(p.value for p in m.uncertain_param_vars.values())
-        ]
-
-        _set = BudgetSet(budget_membership_mat=budget_membership_mat, rhs_vec=rhs_vec)
-        m.uncertainty_set_contr = _set.set_as_constraint(
-            uncertain_params=m.uncertain_param_vars
-        )
-        uncertain_params_in_expr = []
-        for con in m.uncertainty_set_contr.values():
-            for v in m.uncertain_param_vars.values():
-                if v in ComponentSet(identify_variables(expr=con.expr)):
-                    if id(v) not in list(id(u) for u in uncertain_params_in_expr):
-                        # Not using ID here leads to it thinking both are in the list already when they aren't
-                        uncertain_params_in_expr.append(v)
-
-        self.assertEqual(
-            [id(u) for u in uncertain_params_in_expr],
-            [id(u) for u in m.uncertain_param_vars.values()],
-            msg="Uncertain param Var objects used to construct uncertainty set constraint must"
-            " be the same uncertain param Var objects in the original model.",
-        )
-
-    def test_uncertainty_set_with_incorrect_params(self):
-        '''
-        Case in which the BudgetSet is constructed using uncertain_param objects which are Params instead of
-        Vars. Leads to a constraint that is not potentially variable.
-        '''
-        m = ConcreteModel()
-        # At this stage, the separation problem has uncertain_params which are now Var objects
-        m.p1 = Var(initialize=0)
-        m.p2 = Var(initialize=0)
-        m.uncertain_params = [m.p1, m.p2]
-        m.uncertain_param_vars = Param(
-            range(len(m.uncertain_params)), initialize=0, mutable=True
-        )
-        # Single budget
-        budget_membership_mat = [[1 for i in range(len(m.uncertain_param_vars))]]
-        rhs_vec = [
-            0.1 * len(m.uncertain_param_vars)
-            + sum(p.value for p in m.uncertain_param_vars.values())
-        ]
-
-        _set = BudgetSet(budget_membership_mat=budget_membership_mat, rhs_vec=rhs_vec)
-        m.uncertainty_set_contr = _set.set_as_constraint(
-            uncertain_params=m.uncertain_param_vars
-        )
-        vars_in_expr = []
-        for con in m.uncertainty_set_contr.values():
-            vars_in_expr.extend(
-                v
-                for v in m.uncertain_param_vars.values()
-                if v in ComponentSet(identify_variables(expr=con.expr))
-            )
-
-        self.assertEqual(
-            len(vars_in_expr),
-            0,
-            msg="Uncertainty set constraint contains no Var objects, consists of a not potentially"
-            " variable expression.",
-        )
-
-    def test_budget_set_as_constraint(self):
-        '''
-        The set_as_constraint method must return an indexed uncertainty_set_constr
-        which has as many elements as there are rows in the coefficients matrix.
-        '''
-
-        m = ConcreteModel()
-        m.p1 = Var(initialize=1)
-        m.p2 = Var(initialize=1)
-        m.uncertain_params = [m.p1, m.p2]
-
-        # Single budget
-        budget_membership_mat = [[1 for i in range(len(m.uncertain_params))]]
-        rhs_vec = [
-            0.1 * len(m.uncertain_params) + sum(p.value for p in m.uncertain_params)
-        ]
-
-        budget_set = BudgetSet(
-            budget_membership_mat=budget_membership_mat, rhs_vec=rhs_vec
-        )
-        m.uncertainty_set_constr = budget_set.set_as_constraint(
-            uncertain_params=m.uncertain_params
-        )
-
-        self.assertEqual(
-            len(budget_set.coefficients_mat),
-            len(m.uncertainty_set_constr.index_set()),
-            msg=(
-                "Number of budget set constraints should be equal to the "
-                "number of rows in the 'coefficients_mat' attribute"
-            ),
-        )
-
-    def test_point_in_set(self):
-        m = ConcreteModel()
-        m.p1 = Var(initialize=0)
-        m.p2 = Var(initialize=0)
-        m.uncertain_params = [m.p1, m.p2]
-        m.uncertain_param_vars = Var(range(len(m.uncertain_params)), initialize=0)
-
-        budget_membership_mat = [[1 for i in range(len(m.uncertain_params))]]
-        rhs_vec = [
-            0.1 * len(m.uncertain_params) + sum(p.value for p in m.uncertain_params)
-        ]
-
-        budget_set = BudgetSet(
-            budget_membership_mat=budget_membership_mat, rhs_vec=rhs_vec
-        )
-        self.assertTrue(
-            budget_set.point_in_set([0, 0]), msg="Point is not in the BudgetSet."
-        )
-
-    def test_add_bounds_on_uncertain_parameters(self):
-        m = ConcreteModel()
-        m.util = Block()
-        m.util.uncertain_param_vars = Var([0, 1], initialize=0.5)
-
-        budget_membership_mat = [[1 for i in range(len(m.util.uncertain_param_vars))]]
-        rhs_vec = [
-            0.1 * len(m.util.uncertain_param_vars)
-            + sum(value(p) for p in m.util.uncertain_param_vars.values())
-        ]
-
-        budget_set = BudgetSet(
-            budget_membership_mat=budget_membership_mat, rhs_vec=rhs_vec
-        )
-        config = Block()
-        config.uncertainty_set = budget_set
-
-        BudgetSet.add_bounds_on_uncertain_parameters(model=m, config=config)
-
-        self.assertNotEqual(
-            m.util.uncertain_param_vars[0].lb,
-            None,
-            "Bounds not added correctly for BudgetSet",
-        )
-        self.assertNotEqual(
-            m.util.uncertain_param_vars[0].ub,
-            None,
-            "Bounds not added correctly for BudgetSet",
-        )
-        self.assertNotEqual(
-            m.util.uncertain_param_vars[1].lb,
-            None,
-            "Bounds not added correctly for BudgetSet",
-        )
-        self.assertNotEqual(
-            m.util.uncertain_param_vars[1].ub,
-            None,
-            "Bounds not added correctly for BudgetSet",
-        )
-
-
-class testCardinalityUncertaintySetClass(unittest.TestCase):
-    '''
-    Cardinality uncertainty sets. Required inputs are origin, positive_deviation, gamma.
-    Because the CardinalitySet adds cassi vars to the model, the model must be passed to set_as_constraint().
-    '''
-
-    def test_normal_cardinality_construction_and_update(self):
-        """
-        Test CardinalitySet constructor and setter work normally
-        when bounds are appropriate.
-        """
-        # valid inputs
-        cset = CardinalitySet(origin=[0, 0], positive_deviation=[1, 3], gamma=2)
-
-        # check attributes are as expected
-        np.testing.assert_allclose(cset.origin, [0, 0])
-        np.testing.assert_allclose(cset.positive_deviation, [1, 3])
-        np.testing.assert_allclose(cset.gamma, 2)
-        self.assertEqual(cset.dim, 2)
-
-        # update the set
-        cset.origin = [1, 2]
-        cset.positive_deviation = [3, 0]
-        cset.gamma = 0.5
-
-        # check updates work
-        np.testing.assert_allclose(cset.origin, [1, 2])
-        np.testing.assert_allclose(cset.positive_deviation, [3, 0])
-        np.testing.assert_allclose(cset.gamma, 0.5)
-
-    def test_error_on_neg_positive_deviation(self):
-        """
-        Cardinality set positive deviation attribute should
-        contain nonnegative numerical entries.
-
-        Check ValueError raised if any negative entries provided.
-        """
-        origin = [0, 0]
-        positive_deviation = [1, -2]  # invalid
-        gamma = 2
-
-        exc_str = r"Entry -2 of attribute 'positive_deviation' is negative value"
-
-        # assert error on construction
-        with self.assertRaisesRegex(ValueError, exc_str):
-            cset = CardinalitySet(origin, positive_deviation, gamma)
-
-        # construct a valid cardinality set
-        cset = CardinalitySet(origin, [1, 1], gamma)
-
-        # assert error on update
-        with self.assertRaisesRegex(ValueError, exc_str):
-            cset.positive_deviation = positive_deviation
-
-    def test_error_on_invalid_gamma(self):
-        """
-        Cardinality set gamma attribute should be a float-like
-        between 0 and the set dimension.
-
-        Check ValueError raised if gamma attribute is set
-        to an invalid value.
-        """
-        origin = [0, 0]
-        positive_deviation = [1, 1]
-        gamma = 3  # should be invalid
-
-        exc_str = (
-            r".*attribute 'gamma' must be a real number "
-            r"between 0 and dimension 2 \(provided value 3\)"
-        )
-
-        # assert error on construction
-        with self.assertRaisesRegex(ValueError, exc_str):
-            CardinalitySet(origin, positive_deviation, gamma)
-
-        # construct a valid cardinality set
-        cset = CardinalitySet(origin, positive_deviation, gamma=2)
-
-        # assert error on update
-        with self.assertRaisesRegex(ValueError, exc_str):
-            cset.gamma = gamma
-
-    def test_error_on_cardinality_set_dim_change(self):
-        """
-        Dimension is considered immutable.
-        Test ValueError raised when attempting to alter the
-        set dimension (i.e. number of entries of `origin`).
-        """
-        # construct a valid cardinality set
-        cset = CardinalitySet(origin=[0, 0], positive_deviation=[1, 1], gamma=2)
-
-        exc_str = r"Attempting to set.*dimension 2 to value of dimension 3"
-
-        # assert error on update
-        with self.assertRaisesRegex(ValueError, exc_str):
-            cset.origin = [0, 0, 0]
-        with self.assertRaisesRegex(ValueError, exc_str):
-            cset.positive_deviation = [1, 1, 1]
-
-    @unittest.skipIf(not numpy_available, 'Numpy is not available.')
-    def test_uncertainty_set_with_correct_params(self):
-        '''
-        Case in which the UncertaintySet is constructed using the uncertain_param objects from the model to
-        which the uncertainty set constraint is being added.
-        '''
-        m = ConcreteModel()
-        m.util = Block()
-        # At this stage, the separation problem has uncertain_params which are now Var objects
-        m.p1 = Var(initialize=0)
-        m.p2 = Var(initialize=0)
-        m.uncertain_params = [m.p1, m.p2]
-        m.uncertain_param_vars = Var(range(len(m.uncertain_params)), initialize=0)
-
-        center = list(p.value for p in m.uncertain_param_vars.values())
-        positive_deviation = list(0.3 for j in range(len(center)))
-        gamma = np.ceil(len(m.uncertain_param_vars) / 2)
-
-        _set = CardinalitySet(
-            origin=center, positive_deviation=positive_deviation, gamma=gamma
-        )
-        m.uncertainty_set_contr = _set.set_as_constraint(
-            uncertain_params=m.uncertain_param_vars, model=m
-        )
-        uncertain_params_in_expr = []
-        for con in m.uncertainty_set_contr.values():
-            for v in m.uncertain_param_vars.values():
-                if v in ComponentSet(identify_variables(expr=con.expr)):
-                    if id(v) not in list(id(u) for u in uncertain_params_in_expr):
-                        # Not using ID here leads to it thinking both are in the list already when they aren't
-                        uncertain_params_in_expr.append(v)
-
-        self.assertEqual(
-            [id(u) for u in uncertain_params_in_expr],
-            [id(u) for u in m.uncertain_param_vars.values()],
-            msg="Uncertain param Var objects used to construct uncertainty set constraint must"
-            " be the same uncertain param Var objects in the original model.",
-        )
-
-    @unittest.skipIf(not numpy_available, 'Numpy is not available.')
-    def test_uncertainty_set_with_incorrect_params(self):
-        '''
-        Case in which the CardinalitySet is constructed using uncertain_param objects which are Params instead of
-        Vars. Leads to a constraint that is not potentially variable.
-        '''
-        m = ConcreteModel()
-        m.util = Block()
-        # At this stage, the separation problem has uncertain_params which are now Var objects
-        m.p1 = Var(initialize=0)
-        m.p2 = Var(initialize=0)
-        m.uncertain_params = [m.p1, m.p2]
-        m.uncertain_param_vars = Param(
-            range(len(m.uncertain_params)), initialize=0, mutable=True
-        )
-
-        center = list(p.value for p in m.uncertain_param_vars.values())
-        positive_deviation = list(0.3 for j in range(len(center)))
-        gamma = np.ceil(len(m.uncertain_param_vars) / 2)
-
-        _set = CardinalitySet(
-            origin=center, positive_deviation=positive_deviation, gamma=gamma
-        )
-        m.uncertainty_set_contr = _set.set_as_constraint(
-            uncertain_params=m.uncertain_param_vars, model=m
-        )
-        vars_in_expr = []
-        for con in m.uncertainty_set_contr.values():
-            for v in m.uncertain_param_vars.values():
-                if id(v) in [id(u) for u in list(identify_variables(expr=con.expr))]:
-                    if id(v) not in list(id(u) for u in vars_in_expr):
-                        # Not using ID here leads to it thinking both are in the list already when they aren't
-                        vars_in_expr.append(v)
-
-        self.assertEqual(
-            len(vars_in_expr),
-            0,
-            msg="Uncertainty set constraint contains no Var objects, consists of a not potentially"
-            " variable expression.",
-        )
-
-    def test_point_in_set(self):
-        m = ConcreteModel()
-        m.p1 = Var(initialize=0)
-        m.p2 = Var(initialize=0)
-        m.uncertain_params = [m.p1, m.p2]
-        m.uncertain_param_vars = Var(range(len(m.uncertain_params)), initialize=0)
-
-        center = list(p.value for p in m.uncertain_param_vars.values())
-        positive_deviation = list(0.3 for j in range(len(center)))
-        gamma = np.ceil(len(m.uncertain_param_vars) / 2)
-
-        _set = CardinalitySet(
-            origin=center, positive_deviation=positive_deviation, gamma=gamma
-        )
-
-        self.assertTrue(
-            _set.point_in_set([0, 0]), msg="Point is not in the CardinalitySet."
-        )
-
-    def test_add_bounds_on_uncertain_parameters(self):
-        m = ConcreteModel()
-        m.util = Block()
-        m.util.uncertain_param_vars = Var([0, 1], initialize=0.5)
-
-        center = list(p.value for p in m.util.uncertain_param_vars.values())
-        positive_deviation = list(0.3 for j in range(len(center)))
-        gamma = np.ceil(len(center) / 2)
-
-        cardinality_set = CardinalitySet(
-            origin=center, positive_deviation=positive_deviation, gamma=gamma
-        )
-        config = Block()
-        config.uncertainty_set = cardinality_set
-
-        CardinalitySet.add_bounds_on_uncertain_parameters(model=m, config=config)
-
-        self.assertNotEqual(
-            m.util.uncertain_param_vars[0].lb,
-            None,
-            "Bounds not added correctly for CardinalitySet",
-        )
-        self.assertNotEqual(
-            m.util.uncertain_param_vars[0].ub,
-            None,
-            "Bounds not added correctly for CardinalitySet",
-        )
-        self.assertNotEqual(
-            m.util.uncertain_param_vars[1].lb,
-            None,
-            "Bounds not added correctly for CardinalitySet",
-        )
-        self.assertNotEqual(
-            m.util.uncertain_param_vars[1].ub,
-            None,
-            "Bounds not added correctly for CardinalitySet",
-        )
-
-
-def eval_parameter_bounds(uncertainty_set, solver):
-    """
-    Evaluate parameter bounds of uncertainty set by solving
-    bounding problems (as opposed to via the `parameter_bounds`
-    method).
-    """
-    bounding_mdl = uncertainty_set.bounding_model()
-
-    param_bounds = []
-    for idx, obj in bounding_mdl.param_var_objectives.items():
-        # activate objective for corresponding dimension
-        obj.activate()
-        bounds = []
-
-        # solve for lower bound, then upper bound
-        # solve should be successful
-        for sense in (minimize, maximize):
-            obj.sense = sense
-            solver.solve(bounding_mdl)
-            bounds.append(value(obj))
-
-        # add parameter bounds for current dimension
-        param_bounds.append(tuple(bounds))
-
-        # ensure sense is minimize when done, deactivate
-        obj.sense = minimize
-        obj.deactivate()
-
-    return param_bounds
-
-
-class testBoxUncertaintySetClass(unittest.TestCase):
-    """
-    Unit tests for the box uncertainty set (BoxSet).
-    """
-
-    def test_normal_construction_and_update(self):
-        """
-        Test BoxSet constructor and setter work normally
-        when bounds are appropriate.
-        """
-        bounds = [[1, 2], [3, 4]]
-        bset = BoxSet(bounds=bounds)
-        np.testing.assert_allclose(
-            bounds, bset.bounds, err_msg="BoxSet bounds not as expected"
-        )
-
-        # check bounds update
-        new_bounds = [[3, 4], [5, 6]]
-        bset.bounds = new_bounds
-        np.testing.assert_allclose(
-            new_bounds, bset.bounds, err_msg="BoxSet bounds not as expected"
-        )
-
-    def test_error_on_box_set_dim_change(self):
-        """
-        BoxSet dimension is considered immutable.
-        Test ValueError raised when attempting to alter the
-        box set dimension (i.e. number of rows of `bounds`).
-        """
-        bounds = [[1, 2], [3, 4]]
-        bset = BoxSet(bounds=bounds)  # 2-dimensional set
-
-        exc_str = r"Attempting to set.*dimension 2 to a value of dimension 3"
-        with self.assertRaisesRegex(ValueError, exc_str):
-            bset.bounds = [[1, 2], [3, 4], [5, 6]]
-
-    def test_error_on_lb_exceeds_ub(self):
-        """
-        Test exception raised when an LB exceeds a UB.
- """ - bad_bounds = [[1, 2], [4, 3]] - - exc_str = r"Lower bound 4 exceeds upper bound 3" - - # assert error on construction - with self.assertRaisesRegex(ValueError, exc_str): - BoxSet(bad_bounds) - - # construct a valid box set - bset = BoxSet([[1, 2], [3, 4]]) - - # assert error on update - with self.assertRaisesRegex(ValueError, exc_str): - bset.bounds = bad_bounds - - def test_error_on_ragged_bounds_array(self): - """ - Test ValueError raised on attempting to set BoxSet bounds - to a ragged array. - - This test also validates `uncertainty_sets.is_ragged` for all - pre-defined array-like attributes of all set-types, as the - `is_ragged` method is used throughout. - """ - # example ragged arrays - ragged_arrays = ( - [[1, 2], 3], # list and int in same sequence - [[1, 2], [3, [4, 5]]], # 2nd row ragged (list and int) - [[1, 2], [3]], # variable row lengths - ) - - # construct valid box set - bset = BoxSet(bounds=[[1, 2], [3, 4]]) - - # exception message should match this regex - exc_str = r"Argument `bounds` should not be a ragged array-like.*" - for ragged_arr in ragged_arrays: - # assert error on construction - with self.assertRaisesRegex(ValueError, exc_str): - BoxSet(bounds=ragged_arr) - - # assert error on update - with self.assertRaisesRegex(ValueError, exc_str): - bset.bounds = ragged_arr - - def test_error_on_invalid_bounds_shape(self): - """ - Test ValueError raised when attempting to set - Box set bounds to array of incorrect shape - (should be a 2-D array with 2 columns). - """ - # 3d array - three_d_arr = [[[1, 2], [3, 4], [5, 6]]] - exc_str = ( - r"Argument `bounds` must be a 2-dimensional.*" - r"\(detected 3 dimensions.*\)" - ) - - # assert error on construction - with self.assertRaisesRegex(ValueError, exc_str): - BoxSet(three_d_arr) - - # construct valid box set - bset = BoxSet([[1, 2], [3, 4], [5, 6]]) - - # assert error on update - with self.assertRaisesRegex(ValueError, exc_str): - bset.bounds = three_d_arr - - def test_error_on_wrong_number_columns(self): - """ - BoxSet bounds should be a 2D array-like with 2 columns. - ValueError raised if number columns wrong - """ - three_col_arr = [[1, 2, 3], [4, 5, 6]] - exc_str = ( - r"Attribute 'bounds' should be of shape \(\.{3},2\), " - r"but detected shape \(\.{3},3\)" - ) - - # assert error on construction - with self.assertRaisesRegex(ValueError, exc_str): - BoxSet(three_col_arr) - - # construct a valid box set - bset = BoxSet([[1, 2], [3, 4]]) - - # assert error on update - with self.assertRaisesRegex(ValueError, exc_str): - bset.bounds = three_col_arr - - def test_error_on_empty_last_dimension(self): - """ - Check ValueError raised when last dimension of BoxSet bounds is - empty. - """ - empty_2d_arr = [[], [], []] - exc_str = ( - r"Last dimension of argument `bounds` must be non-empty " - r"\(detected shape \(3, 0\)\)" - ) - - # assert error on construction - with self.assertRaisesRegex(ValueError, exc_str): - BoxSet(bounds=empty_2d_arr) - - # create a valid box set - bset = BoxSet([[1, 2], [3, 4]]) - - # assert error on update - with self.assertRaisesRegex(ValueError, exc_str): - bset.bounds = empty_2d_arr - - def test_error_on_non_numeric_bounds(self): - """ - Test that ValueError is raised if box set bounds - are set to array-like with entries of a non-numeric - type (such as int, float). 
- """ - # invalid bounds (contains an entry type str) - new_bounds = [[1, "test"], [3, 2]] - - exc_str = ( - r"Entry 'test' of the argument `bounds` " - r"is not a valid numeric type \(provided type 'str'\)" - ) - - # assert error on construction - with self.assertRaisesRegex(TypeError, exc_str): - BoxSet(new_bounds) - - # construct a valid box set - bset = BoxSet(bounds=[[1, 2], [3, 4]]) - - # assert error on update - with self.assertRaisesRegex(TypeError, exc_str): - bset.bounds = new_bounds - - def test_error_on_bounds_with_nan_or_inf(self): - """ - Box set bounds set to array-like with inf or nan. - """ - # construct a valid box set - bset = BoxSet(bounds=[[1, 2], [3, 4]]) - - for val_str in ["inf", "nan"]: - bad_bounds = [[1, float(val_str)], [2, 3]] - exc_str = ( - fr"Entry '{val_str}' of the argument `bounds` " - fr"is not a finite numeric value" - ) - # assert error on construction - with self.assertRaisesRegex(ValueError, exc_str): - BoxSet(bad_bounds) - - # assert error on update - with self.assertRaisesRegex(ValueError, exc_str): - bset.bounds = bad_bounds - - def test_uncertainty_set_with_correct_params(self): - ''' - Case in which the UncertaintySet is constructed using the uncertain_param objects from the model to - which the uncertainty set constraint is being added. - ''' - m = ConcreteModel() - # At this stage, the separation problem has uncertain_params which are now Var objects - m.p1 = Var(initialize=0) - m.p2 = Var(initialize=0) - m.uncertain_params = [m.p1, m.p2] - m.uncertain_param_vars = Var(range(len(m.uncertain_params)), initialize=0) - bounds = [(-1, 1), (-1, 1)] - _set = BoxSet(bounds=bounds) - m.uncertainty_set_contr = _set.set_as_constraint( - uncertain_params=m.uncertain_param_vars - ) - uncertain_params_in_expr = [] - for con in m.uncertainty_set_contr.values(): - for v in m.uncertain_param_vars.values(): - if v in ComponentSet(identify_variables(expr=con.expr)): - if id(v) not in list(id(u) for u in uncertain_params_in_expr): - # Not using ID here leads to it thinking both are in the list already when they aren't - uncertain_params_in_expr.append(v) - - self.assertEqual( - [id(u) for u in uncertain_params_in_expr], - [id(u) for u in m.uncertain_param_vars.values()], - msg="Uncertain param Var objects used to construct uncertainty set constraint must" - " be the same uncertain param Var objects in the original model.", - ) - - def test_uncertainty_set_with_incorrect_params(self): - ''' - Case in which the set is constructed using uncertain_param objects which are Params instead of - Vars. Leads to a constraint this is not potentially variable. 
-        '''
-        m = ConcreteModel()
-        # At this stage, the separation problem has uncertain_params which are now Var objects
-        m.p1 = Var(initialize=0)
-        m.p2 = Var(initialize=0)
-        m.uncertain_params = [m.p1, m.p2]
-        m.uncertain_param_vars = Param(
-            range(len(m.uncertain_params)), initialize=0, mutable=True
-        )
-        bounds = [(-1, 1), (-1, 1)]
-        _set = BoxSet(bounds=bounds)
-        m.uncertainty_set_contr = _set.set_as_constraint(
-            uncertain_params=m.uncertain_param_vars
-        )
-        vars_in_expr = []
-        for con in m.uncertainty_set_contr.values():
-            for v in m.uncertain_param_vars.values():
-                if id(v) in [id(u) for u in list(identify_variables(expr=con.expr))]:
-                    if id(v) not in list(id(u) for u in vars_in_expr):
-                        # Not using ID here leads to it thinking both are in the list already when they aren't
-                        vars_in_expr.append(v)
-
-        self.assertEqual(
-            len(vars_in_expr),
-            0,
-            msg="Uncertainty set constraint contains no Var objects, consists of a not potentially"
-            " variable expression.",
-        )
-
-    def test_point_in_set(self):
-        m = ConcreteModel()
-        m.p1 = Var(initialize=0)
-        m.p2 = Var(initialize=0)
-        m.uncertain_params = [m.p1, m.p2]
-        m.uncertain_param_vars = Var(range(len(m.uncertain_params)), initialize=0)
-
-        bounds = [(-1, 1), (-1, 1)]
-        _set = BoxSet(bounds=bounds)
-        self.assertTrue(_set.point_in_set([0, 0]), msg="Point is not in the BoxSet.")
-
-    def test_add_bounds_on_uncertain_parameters(self):
-        m = ConcreteModel()
-        m.util = Block()
-        m.util.uncertain_param_vars = Var([0, 1], initialize=0)
-
-        bounds = [(-1, 1), (-1, 1)]
-        box_set = BoxSet(bounds=bounds)
-        config = Block()
-        config.uncertainty_set = box_set
-
-        BoxSet.add_bounds_on_uncertain_parameters(model=m, config=config)
-
-        self.assertEqual(
-            m.util.uncertain_param_vars[0].lb,
-            -1,
-            "Bounds not added correctly for BoxSet",
-        )
-        self.assertEqual(
-            m.util.uncertain_param_vars[0].ub,
-            1,
-            "Bounds not added correctly for BoxSet",
-        )
-        self.assertEqual(
-            m.util.uncertain_param_vars[1].lb,
-            -1,
-            "Bounds not added correctly for BoxSet",
-        )
-        self.assertEqual(
-            m.util.uncertain_param_vars[1].ub,
-            1,
-            "Bounds not added correctly for BoxSet",
-        )
-
-
-class testDiscreteUncertaintySetClass(unittest.TestCase):
-    '''
-    Discrete uncertainty sets. Required input is a scenarios list.
-    '''
-
-    def test_normal_discrete_set_construction_and_update(self):
-        """
-        Test DiscreteScenarioSet constructor and setter work normally
-        when scenarios are appropriate.
-        """
-        scenarios = [[0, 0, 0], [1, 2, 3]]
-
-        # normal construction should work
-        dset = DiscreteScenarioSet(scenarios)
-
-        # check scenarios added appropriately
-        np.testing.assert_allclose(
-            scenarios, dset.scenarios, err_msg="DiscreteScenarioSet scenarios not as expected"
-        )
-
-        # check scenarios updated appropriately
-        new_scenarios = [[0, 1, 2], [1, 2, 0], [3, 5, 4]]
-        dset.scenarios = new_scenarios
-        np.testing.assert_allclose(
-            new_scenarios, dset.scenarios, err_msg="DiscreteScenarioSet scenarios not as expected"
-        )
-
-    def test_error_on_discrete_set_dim_change(self):
-        """
-        Test ValueError raised when attempting to update
-        DiscreteScenarioSet dimension.
- """ - scenarios = [[1, 2], [3, 4]] - dset = DiscreteScenarioSet(scenarios) # 2-dimensional set - - exc_str = ( - r".*must have 2 columns.* to match set dimension " - r"\(provided.*with 3 columns\)" - ) - with self.assertRaisesRegex(ValueError, exc_str): - dset.scenarios = [[1, 2, 3], [4, 5, 6]] - - def test_uncertainty_set_with_correct_params(self): - ''' - Case in which the UncertaintySet is constructed using the uncertain_param objects from the model to - which the uncertainty set constraint is being added. - ''' - m = ConcreteModel() - # At this stage, the separation problem has uncertain_params which are now Var objects - m.p1 = Var(initialize=0) - m.p2 = Var(initialize=0) - m.uncertain_params = [m.p1, m.p2] - m.uncertain_param_vars = Var(range(len(m.uncertain_params)), initialize=0) - scenarios = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0)] - _set = DiscreteScenarioSet(scenarios=scenarios) - m.uncertainty_set_contr = _set.set_as_constraint( - uncertain_params=m.uncertain_param_vars - ) - uncertain_params_in_expr = [] - for con in m.uncertainty_set_contr.values(): - for v in m.uncertain_param_vars.values(): - if v in ComponentSet(identify_variables(expr=con.expr)): - if id(v) not in list(id(u) for u in uncertain_params_in_expr): - # Not using ID here leads to it thinking both are in the list already when they aren't - uncertain_params_in_expr.append(v) - - self.assertEqual( - [id(u) for u in uncertain_params_in_expr], - [id(u) for u in m.uncertain_param_vars.values()], - msg="Uncertain param Var objects used to construct uncertainty set constraint must" - " be the same uncertain param Var objects in the original model.", - ) - - def test_uncertainty_set_with_incorrect_params(self): - ''' - Case in which the set is constructed using uncertain_param objects which are Params instead of - Vars. Leads to a constraint this is not potentially variable. - ''' - m = ConcreteModel() - # At this stage, the separation problem has uncertain_params which are now Var objects - m.p1 = Var(initialize=0) - m.p2 = Var(initialize=0) - m.uncertain_params = [m.p1, m.p2] - m.uncertain_param_vars = Param( - range(len(m.uncertain_params)), initialize=0, mutable=True - ) - scenarios = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0)] - _set = DiscreteScenarioSet(scenarios=scenarios) - m.uncertainty_set_contr = _set.set_as_constraint( - uncertain_params=m.uncertain_param_vars - ) - vars_in_expr = [] - for con in m.uncertainty_set_contr.values(): - for v in m.uncertain_param_vars.values(): - if id(v) in [id(u) for u in list(identify_variables(expr=con.expr))]: - if id(v) not in list(id(u) for u in vars_in_expr): - # Not using ID here leads to it thinking both are in the list already when they aren't - vars_in_expr.append(v) - - self.assertEqual( - len(vars_in_expr), - 0, - msg="Uncertainty set constraint contains no Var objects, consists of a not potentially" - " variable expression.", - ) - - def test_point_in_set(self): - m = ConcreteModel() - m.p1 = Var(initialize=0) - m.p2 = Var(initialize=0) - m.uncertain_params = [m.p1, m.p2] - m.uncertain_param_vars = Var(range(len(m.uncertain_params)), initialize=0) - - scenarios = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0)] - _set = DiscreteScenarioSet(scenarios=scenarios) - self.assertTrue( - _set.point_in_set([0, 0]), msg="Point is not in the DiscreteScenarioSet." 
-        )
-
-    def test_add_bounds_on_uncertain_parameters(self):
-        m = ConcreteModel()
-        m.util = Block()
-        m.util.uncertain_param_vars = Var([0, 1], initialize=0)
-
-        scenarios = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0)]
-        _set = DiscreteScenarioSet(scenarios=scenarios)
-        config = Block()
-        config.uncertainty_set = _set
-
-        DiscreteScenarioSet.add_bounds_on_uncertain_parameters(model=m, config=config)
-
-        self.assertNotEqual(
-            m.util.uncertain_param_vars[0].lb,
-            None,
-            "Bounds not added correctly for DiscreteScenarioSet",
-        )
-        self.assertNotEqual(
-            m.util.uncertain_param_vars[0].ub,
-            None,
-            "Bounds not added correctly for DiscreteScenarioSet",
-        )
-        self.assertNotEqual(
-            m.util.uncertain_param_vars[1].lb,
-            None,
-            "Bounds not added correctly for DiscreteScenarioSet",
-        )
-        self.assertNotEqual(
-            m.util.uncertain_param_vars[1].ub,
-            None,
-            "Bounds not added correctly for DiscreteScenarioSet",
-        )
-
-    @unittest.skipUnless(
-        baron_license_is_valid, "Global NLP solver is not available and licensed."
-    )
-    def test_two_stg_model_discrete_set_single_scenario(self):
-        """
-        Test two-stage model under discrete uncertainty with
-        a single scenario.
-        """
-        m = ConcreteModel()
-
-        # model params
-        m.u1 = Param(initialize=1.125, mutable=True)
-        m.u2 = Param(initialize=1, mutable=True)
-
-        # model vars
-        m.x1 = Var(initialize=0, bounds=(0, None))
-        m.x2 = Var(initialize=0, bounds=(0, None))
-        m.x3 = Var(initialize=0, bounds=(None, None))
-
-        # model constraints
-        m.con1 = Constraint(expr=m.x1 * m.u1 ** (0.5) - m.x2 * m.u1 <= 2)
-        m.con2 = Constraint(expr=m.x1**2 - m.x2**2 * m.u1 == m.x3)
-
-        m.obj = Objective(expr=(m.x1 - 4) ** 2 + (m.x2 - m.u2) ** 2)
-
-        # uncertainty set
-        discrete_set = DiscreteScenarioSet(scenarios=[(1.125, 1)])
-
-        # Instantiate PyROS solver
-        pyros_solver = SolverFactory("pyros")
-
-        # Define subsolvers utilized in the algorithm
-        local_subsolver = SolverFactory('baron')
-        global_subsolver = SolverFactory("baron")
+        # Define subsolvers utilized in the algorithm
+        local_subsolver = SolverFactory('baron')
+        global_subsolver = SolverFactory("baron")

         # Call the PyROS solver
         results = pyros_solver.solve(
@@ -3012,948 +380,139 @@ def test_two_stg_model_discrete_set_single_scenario(self):
             global_solver=global_subsolver,
             options={
                 "objective_focus": ObjectiveType.worst_case,
-                "solve_master_globally": True,
-            },
-        )
-
-        # check successful termination
-        self.assertEqual(
-            results.pyros_termination_condition,
-            pyrosTerminationCondition.robust_optimal,
-            msg="Did not identify robust optimal solution to problem instance.",
-        )
-
-        # only one iteration required
-        self.assertEqual(
-            results.iterations,
-            1,
-            msg=(
-                "PyROS was unable to solve a singleton discrete set instance "
-                "successfully within a single iteration."
-            ),
-        )
-
-    @unittest.skipUnless(
-        baron_license_is_valid, "Global NLP solver is not available and licensed."
-    )
-    def test_two_stg_model_discrete_set(self):
-        """
-        Test PyROS successfully solves two-stage model with
-        multiple scenarios.
- """ - m = ConcreteModel() - m.x1 = Var(bounds=(0, 10)) - m.x2 = Var(bounds=(0, 10)) - m.u = Param(mutable=True, initialize=1.125) - m.con = Constraint(expr=sqrt(m.u) * m.x1 - m.u * m.x2 <= 2) - m.obj = Objective(expr=(m.x1 - 4) ** 2 + (m.x2 - m.u) ** 2) - - discrete_set = DiscreteScenarioSet(scenarios=[[0.25], [1.125], [2]]) - - global_solver = SolverFactory("baron") - pyros_solver = SolverFactory("pyros") - - res = pyros_solver.solve( - model=m, - first_stage_variables=[m.x1], - second_stage_variables=[m.x2], - uncertain_params=[m.u], - uncertainty_set=discrete_set, - local_solver=global_solver, - global_solver=global_solver, - decision_rule_order=0, - solve_master_globally=True, - objective_focus=ObjectiveType.worst_case, - ) - - self.assertEqual( - res.pyros_termination_condition, - pyrosTerminationCondition.robust_optimal, - msg=( - "Failed to solve discrete set multiple scenarios instance to " - "robust optimality" - ), - ) - - -class testFactorModelUncertaintySetClass(unittest.TestCase): - ''' - FactorModelSet uncertainty sets. Required inputs are psi_matrix, number_of_factors, origin and beta. - ''' - - def test_normal_factor_model_construction_and_update(self): - """ - Test FactorModelSet constructor and setter work normally - when attribute values are appropriate. - """ - # valid inputs - fset = FactorModelSet( - origin=[0, 0, 1], - number_of_factors=2, - psi_mat=[[1, 2], [0, 1], [1, 0]], - beta=0.1, - ) - - # check attributes are as expected - np.testing.assert_allclose(fset.origin, [0, 0, 1]) - np.testing.assert_allclose(fset.psi_mat, [[1, 2], [0, 1], [1, 0]]) - np.testing.assert_allclose(fset.number_of_factors, 2) - np.testing.assert_allclose(fset.beta, 0.1) - self.assertEqual(fset.dim, 3) - - # update the set - fset.origin = [1, 1, 0] - fset.psi_mat = [[1, 0], [0, 1], [1, 1]] - fset.beta = 0.5 - - # check updates work - np.testing.assert_allclose(fset.origin, [1, 1, 0]) - np.testing.assert_allclose(fset.psi_mat, [[1, 0], [0, 1], [1, 1]]) - np.testing.assert_allclose(fset.beta, 0.5) - - def test_error_on_factor_model_set_dim_change(self): - """ - Test ValueError raised when attempting to change FactorModelSet - dimension (by changing number of entries in origin - or number of rows of psi_mat). - """ - origin = [0, 0, 0] - number_of_factors = 2 - psi_mat = [[1, 0], [0, 1], [1, 1]] - beta = 0.5 - - # construct factor model set - fset = FactorModelSet(origin, number_of_factors, psi_mat, beta) - - # assert error on psi mat update - exc_str = ( - r"should be of shape \(3, 2\) to match.*dimensions " - r"\(provided shape \(2, 2\)\)" - ) - with self.assertRaisesRegex(ValueError, exc_str): - fset.psi_mat = [[1, 0], [1, 2]] - - # assert error on origin update - exc_str = r"Attempting.*factor model set of dimension 3 to value of dimension 2" - with self.assertRaisesRegex(ValueError, exc_str): - fset.origin = [1, 3] - - def test_error_on_invalid_number_of_factors(self): - """ - Test ValueError raised if number of factors - is negative int, or AttributeError - if attempting to update (should be immutable). 
- """ - exc_str = r".*'number_of_factors' must be a positive int \(provided value -1\)" - with self.assertRaisesRegex(ValueError, exc_str): - FactorModelSet(origin=[0], number_of_factors=-1, psi_mat=[[1, 1]], beta=0.1) - - fset = FactorModelSet( - origin=[0], number_of_factors=2, psi_mat=[[1, 1]], beta=0.1 - ) - - exc_str = r".*'number_of_factors' is immutable" - with self.assertRaisesRegex(AttributeError, exc_str): - fset.number_of_factors = 3 - - def test_error_on_invalid_beta(self): - """ - Test ValueError raised if beta is invalid (exceeds 1 or - is negative) - """ - origin = [0, 0, 0] - number_of_factors = 2 - psi_mat = [[1, 0], [0, 1], [1, 1]] - neg_beta = -0.5 - big_beta = 1.5 - - # assert error on construction - neg_exc_str = ( - r".*must be a real number between 0 and 1.*\(provided value -0.5\)" - ) - big_exc_str = r".*must be a real number between 0 and 1.*\(provided value 1.5\)" - with self.assertRaisesRegex(ValueError, neg_exc_str): - FactorModelSet(origin, number_of_factors, psi_mat, neg_beta) - with self.assertRaisesRegex(ValueError, big_exc_str): - FactorModelSet(origin, number_of_factors, psi_mat, big_beta) - - # create a valid factor model set - fset = FactorModelSet(origin, number_of_factors, psi_mat, 1) - - # assert error on update - with self.assertRaisesRegex(ValueError, neg_exc_str): - fset.beta = neg_beta - with self.assertRaisesRegex(ValueError, big_exc_str): - fset.beta = big_beta - - @unittest.skipUnless( - SolverFactory("cbc").available(exception_flag=False), - "LP solver CBC not available", - ) - def test_factor_model_parameter_bounds_correct(self): - """ - If LP solver is available, test parameter bounds method - for factor model set is correct (check against - results from an LP solver). - """ - solver = SolverFactory("cbc") - - # four cases where prior parameter bounds - # approximations were probably too tight - fset1 = FactorModelSet( - origin=[0, 0], - number_of_factors=3, - psi_mat=[[1, -1, 1], [1, 0.1, 1]], - beta=1 / 6, - ) - fset2 = FactorModelSet( - origin=[0], number_of_factors=3, psi_mat=[[1, 6, 8]], beta=1 / 2 - ) - fset3 = FactorModelSet( - origin=[1], number_of_factors=2, psi_mat=[[1, 2]], beta=1 / 4 - ) - fset4 = FactorModelSet( - origin=[1], number_of_factors=3, psi_mat=[[-1, -6, -8]], beta=1 / 2 - ) - - # check parameter bounds matches LP results - # exactly for each case - for fset in [fset1, fset2, fset3, fset4]: - param_bounds = fset.parameter_bounds - lp_param_bounds = eval_parameter_bounds(fset, solver) - - self.assertTrue( - np.allclose(param_bounds, lp_param_bounds), - msg=( - "Parameter bounds not consistent with LP values for " - "FactorModelSet with parameterization:\n" - f"F={fset.number_of_factors},\n" - f"beta={fset.beta},\n" - f"psi_mat={fset.psi_mat},\n" - f"origin={fset.origin}." - ), - ) - - @unittest.skipIf(not numpy_available, 'Numpy is not available.') - def test_uncertainty_set_with_correct_params(self): - ''' - Case in which the UncertaintySet is constructed using the uncertain_param objects from the model to - which the uncertainty set constraint is being added. 
- ''' - m = ConcreteModel() - # At this stage, the separation problem has uncertain_params which are now Var objects - m.p1 = Var(initialize=0) - m.p2 = Var(initialize=0) - m.uncertain_params = [m.p1, m.p2] - m.util = Block() - m.uncertain_param_vars = Var(range(len(m.uncertain_params)), initialize=0) - F = 1 - psi_mat = np.zeros(shape=(len(m.uncertain_params), F)) - for i in range(len(psi_mat)): - random_row_entries = list(np.random.uniform(low=0, high=0.2, size=F)) - for j in range(len(psi_mat[i])): - psi_mat[i][j] = random_row_entries[j] - _set = FactorModelSet( - origin=[0, 0], psi_mat=psi_mat, number_of_factors=F, beta=1 - ) - m.uncertainty_set_contr = _set.set_as_constraint( - uncertain_params=m.uncertain_param_vars, model=m - ) - uncertain_params_in_expr = [] - for con in m.uncertainty_set_contr.values(): - for v in m.uncertain_param_vars.values(): - if v in ComponentSet(identify_variables(expr=con.expr)): - if id(v) not in list(id(u) for u in uncertain_params_in_expr): - # Not using ID here leads to it thinking both are in the list already when they aren't - uncertain_params_in_expr.append(v) - - self.assertEqual( - [id(u) for u in uncertain_params_in_expr], - [id(u) for u in m.uncertain_param_vars.values()], - msg="Uncertain param Var objects used to construct uncertainty set constraint must" - " be the same uncertain param Var objects in the original model.", - ) - - @unittest.skipIf(not numpy_available, 'Numpy is not available.') - def test_uncertainty_set_with_incorrect_params(self): - ''' - Case in which the set is constructed using uncertain_param objects which are Params instead of - Vars. Leads to a constraint this is not potentially variable. - ''' - m = ConcreteModel() - # At this stage, the separation problem has uncertain_params which are now Var objects - m.p1 = Var(initialize=0) - m.p2 = Var(initialize=0) - m.uncertain_params = [m.p1, m.p2] - m.util = Block() - m.uncertain_param_vars = Param( - range(len(m.uncertain_params)), initialize=0, mutable=True - ) - F = 1 - psi_mat = np.zeros(shape=(len(m.uncertain_params), F)) - for i in range(len(psi_mat)): - random_row_entries = list(np.random.uniform(low=0, high=0.2, size=F)) - for j in range(len(psi_mat[i])): - psi_mat[i][j] = random_row_entries[j] - _set = FactorModelSet( - origin=[0, 0], psi_mat=psi_mat, number_of_factors=F, beta=1 - ) - m.uncertainty_set_contr = _set.set_as_constraint( - uncertain_params=m.uncertain_param_vars, model=m - ) - vars_in_expr = [] - vars_in_expr = [] - for con in m.uncertainty_set_contr.values(): - for v in m.uncertain_param_vars.values(): - if id(v) in [id(u) for u in list(identify_variables(expr=con.expr))]: - if id(v) not in list(id(u) for u in vars_in_expr): - # Not using ID here leads to it thinking both are in the list already when they aren't - vars_in_expr.append(v) - - self.assertEqual( - len(vars_in_expr), - 0, - msg="Uncertainty set constraint contains no Var objects, consists of a not potentially" - " variable expression.", - ) - - def test_point_in_set(self): - m = ConcreteModel() - m.p1 = Var(initialize=0) - m.p2 = Var(initialize=0) - m.uncertain_params = [m.p1, m.p2] - m.uncertain_param_vars = Var(range(len(m.uncertain_params)), initialize=0) - - F = 1 - psi_mat = np.zeros(shape=(len(m.uncertain_params), F)) - for i in range(len(psi_mat)): - random_row_entries = list(np.random.uniform(low=0, high=0.2, size=F)) - for j in range(len(psi_mat[i])): - psi_mat[i][j] = random_row_entries[j] - _set = FactorModelSet( - origin=[0, 0], psi_mat=psi_mat, number_of_factors=F, beta=1 - ) - 
self.assertTrue( - _set.point_in_set([0, 0]), msg="Point is not in the FactorModelSet." - ) - - def test_add_bounds_on_uncertain_parameters(self): - m = ConcreteModel() - m.util = Block() - m.util.uncertain_param_vars = Var([0, 1], initialize=0) - - F = 1 - psi_mat = np.zeros(shape=(len(list(m.util.uncertain_param_vars.values())), F)) - for i in range(len(psi_mat)): - random_row_entries = list(np.random.uniform(low=0, high=0.2, size=F)) - for j in range(len(psi_mat[i])): - psi_mat[i][j] = random_row_entries[j] - _set = FactorModelSet( - origin=[0, 0], psi_mat=psi_mat, number_of_factors=F, beta=1 - ) - config = Block() - config.uncertainty_set = _set - - FactorModelSet.add_bounds_on_uncertain_parameters(model=m, config=config) - - self.assertNotEqual( - m.util.uncertain_param_vars[0].lb, - None, - "Bounds not added correctly for FactorModelSet", - ) - self.assertNotEqual( - m.util.uncertain_param_vars[0].ub, - None, - "Bounds not added correctly for FactorModelSet", - ) - self.assertNotEqual( - m.util.uncertain_param_vars[1].lb, - None, - "Bounds not added correctly for FactorModelSet", - ) - self.assertNotEqual( - m.util.uncertain_param_vars[1].ub, - None, - "Bounds not added correctly for FactorModelSet", - ) - - -class testIntersectionSetClass(unittest.TestCase): - """ - Unit tests for the IntersectionSet class. - Required input is set objects to intersect, - and set_as_constraint requires - an NLP solver to confirm the intersection is not empty. - """ - - def test_normal_construction_and_update(self): - """ - Test IntersectionSet constructor and setter - work normally when arguments are appropriate. - """ - bset = BoxSet(bounds=[[-1, 1], [-1, 1], [-1, 1]]) - aset = AxisAlignedEllipsoidalSet([0, 0, 0], [1, 1, 1]) - - iset = IntersectionSet(box_set=bset, axis_aligned_set=aset) - self.assertIn( - bset, - iset.all_sets, - msg=( - "IntersectionSet 'all_sets' attribute does not" - "contain expected BoxSet" - ), - ) - self.assertIn( - aset, - iset.all_sets, - msg=( - "IntersectionSet 'all_sets' attribute does not" - "contain expected AxisAlignedEllipsoidalSet" - ), - ) - - def test_error_on_intersecting_wrong_dims(self): - """ - Test ValueError raised if IntersectionSet sets - are not of same dimension. - """ - bset = BoxSet(bounds=[[-1, 1], [-1, 1]]) - aset = AxisAlignedEllipsoidalSet([0, 0], [2, 2]) - wrong_aset = AxisAlignedEllipsoidalSet([0, 0, 0], [1, 1, 1]) - - exc_str = r".*of dimension 2, but attempting to add set of dimension 3" - - # assert error on construction - with self.assertRaisesRegex(ValueError, exc_str): - IntersectionSet(box_set=bset, axis_set=aset, wrong_set=wrong_aset) - - # construct a valid intersection set - iset = IntersectionSet(box_set=bset, axis_set=aset) - # assert error on construction - with self.assertRaisesRegex(ValueError, exc_str): - iset.all_sets.append(wrong_aset) - - def test_type_error_on_invalid_arg(self): - """ - Test TypeError raised if an argument not of type - UncertaintySet is passed to the IntersectionSet - constructor or appended to 'all_sets'. 
- """ - bset = BoxSet(bounds=[[-1, 1], [-1, 1]]) - aset = AxisAlignedEllipsoidalSet([0, 0], [2, 2]) - - exc_str = ( - r"Entry '1' of the argument `all_sets` is not An `UncertaintySet` " - r"object.*\(provided type 'int'\)" - ) - - # assert error on construction - with self.assertRaisesRegex(TypeError, exc_str): - IntersectionSet(box_set=bset, axis_set=aset, invalid_arg=1) - - # construct a valid intersection set - iset = IntersectionSet(box_set=bset, axis_set=aset) - - # assert error on update - with self.assertRaisesRegex(TypeError, exc_str): - iset.all_sets.append(1) - - def test_error_on_intersection_dim_change(self): - """ - IntersectionSet dimension is considered immutable. - Test ValueError raised when attempting to set the - constituent sets to a different dimension. - """ - bset = BoxSet(bounds=[[-1, 1], [-1, 1]]) - aset = AxisAlignedEllipsoidalSet([0, 0], [2, 2]) - - # construct the set - iset = IntersectionSet(box_set=bset, axis_set=aset) - - exc_str = r"Attempting to set.*dimension 2 to a sequence.* of dimension 1" - - # assert error on update - with self.assertRaisesRegex(ValueError, exc_str): - # attempt to set to 1-dimensional sets - iset.all_sets = [BoxSet([[1, 1]]), AxisAlignedEllipsoidalSet([0], [1])] - - def test_error_on_too_few_sets(self): - """ - Check ValueError raised if too few sets are passed - to the intersection set. - """ - exc_str = r"Attempting.*minimum required length 2.*iterable of length 1" - - # assert error on construction - with self.assertRaisesRegex(ValueError, exc_str): - IntersectionSet(bset=BoxSet([[1, 2]])) - - # construct a valid intersection set - iset = IntersectionSet( - box_set=BoxSet([[1, 2]]), axis_set=AxisAlignedEllipsoidalSet([0], [1]) - ) - - # assert error on update - with self.assertRaisesRegex(ValueError, exc_str): - # attempt to set to 1-dimensional sets - iset.all_sets = [BoxSet([[1, 1]])] - - def test_intersection_uncertainty_set_list_behavior(self): - """ - Test the 'all_sets' attribute of the IntersectionSet - class behaves like a regular Python list. - """ - iset = IntersectionSet( - bset=BoxSet([[0, 2]]), aset=AxisAlignedEllipsoidalSet([0], [1]) - ) - - # an UncertaintySetList of length 2. - # should behave like a list of length 2 - all_sets = iset.all_sets - - # test append - all_sets.append(BoxSet([[1, 2]])) - del all_sets[2:] - - # test extend - all_sets.extend([BoxSet([[1, 2]]), EllipsoidalSet([0], [[1]], 2)]) - del all_sets[2:] - - # index in range. 
Allow slicing as well - # none of these should result in exception - all_sets[0] - all_sets[1] - all_sets[100:] - all_sets[0:2:20] - all_sets[0:2:1] - all_sets[-20:-1:2] - - # index out of range - self.assertRaises(IndexError, lambda: all_sets[2]) - self.assertRaises(IndexError, lambda: all_sets[-3]) - - # assert min length ValueError if attempting to clear - # list to length less than 2 - with self.assertRaisesRegex(ValueError, r"Length.* must be at least 2"): - all_sets[:] = all_sets[0] - with self.assertRaisesRegex(ValueError, r"Length.* must be at least 2"): - del all_sets[1] - with self.assertRaisesRegex(ValueError, r"Length.* must be at least 2"): - del all_sets[1:] - with self.assertRaisesRegex(ValueError, r"Length.* must be at least 2"): - del all_sets[:] - with self.assertRaisesRegex(ValueError, r"Length.* must be at least 2"): - all_sets.clear() - with self.assertRaisesRegex(ValueError, r"Length.* must be at least 2"): - all_sets[0:] = [] - - # assignment out of range - with self.assertRaisesRegex(IndexError, r"assignment index out of range"): - all_sets[-3] = BoxSet([[1, 1.5]]) - with self.assertRaisesRegex(IndexError, r"assignment index out of range"): - all_sets[2] = BoxSet([[1, 1.5]]) - - # assigning to slices should work fine - all_sets[3:] = [BoxSet([[1, 1.5]]), BoxSet([[1, 3]])] - - @unittest.skipUnless(ipopt_available, "IPOPT is not available.") - def test_uncertainty_set_with_correct_params(self): - ''' - Case in which the UncertaintySet is constructed using the uncertain_param objects from the model to - which the uncertainty set constraint is being added. - ''' - m = ConcreteModel() - # At this stage, the separation problem has uncertain_params which are now Var objects - m.p1 = Var(initialize=0) - m.p2 = Var(initialize=0) - m.uncertain_params = [m.p1, m.p2] - m.uncertain_param_vars = Var(range(len(m.uncertain_params)), initialize=0) - bounds = [(-1, 1), (-1, 1)] - Q1 = BoxSet(bounds=bounds) - Q2 = AxisAlignedEllipsoidalSet(center=[0, 0], half_lengths=[2, 1]) - Q = IntersectionSet(Q1=Q1, Q2=Q2) - - config = ConfigBlock() - solver = SolverFactory("ipopt") - config.declare("global_solver", ConfigValue(default=solver)) - - m.uncertainty_set_contr = Q.set_as_constraint( - uncertain_params=m.uncertain_param_vars, config=config - ) - uncertain_params_in_expr = [] - for con in m.uncertainty_set_contr.values(): - for v in m.uncertain_param_vars.values(): - if v in ComponentSet(identify_variables(expr=con.expr)): - if id(v) not in list(id(u) for u in uncertain_params_in_expr): - # Not using ID here leads to it thinking both are in the list already when they aren't - uncertain_params_in_expr.append(v) - - self.assertEqual( - [id(u) for u in uncertain_params_in_expr], - [id(u) for u in m.uncertain_param_vars.values()], - msg="Uncertain param Var objects used to construct uncertainty set constraint must" - " be the same uncertain param Var objects in the original model.", - ) - - @unittest.skipUnless(ipopt_available, "IPOPT is not available.") - def test_uncertainty_set_with_incorrect_params(self): - ''' - Case in which the set is constructed using uncertain_param objects which are Params instead of - Vars. Leads to a constraint this is not potentially variable. 
- ''' - m = ConcreteModel() - # At this stage, the separation problem has uncertain_params which are now Var objects - m.p1 = Var(initialize=0) - m.p2 = Var(initialize=0) - m.uncertain_params = [m.p1, m.p2] - m.uncertain_param_vars = Param( - range(len(m.uncertain_params)), initialize=0, mutable=True - ) - bounds = [(-1, 1), (-1, 1)] - - Q1 = BoxSet(bounds=bounds) - Q2 = AxisAlignedEllipsoidalSet(center=[0, 0], half_lengths=[2, 1]) - Q = IntersectionSet(Q1=Q1, Q2=Q2) - - solver = SolverFactory("ipopt") - config = ConfigBlock() - config.declare("global_solver", ConfigValue(default=solver)) - - m.uncertainty_set_contr = Q.set_as_constraint( - uncertain_params=m.uncertain_param_vars, config=config - ) - vars_in_expr = [] - for con in m.uncertainty_set_contr.values(): - for v in m.uncertain_param_vars.values(): - if id(v) in [id(u) for u in list(identify_variables(expr=con.expr))]: - if id(v) not in list(id(u) for u in vars_in_expr): - # Not using ID here leads to it thinking both are in the list already when they aren't - vars_in_expr.append(v) - - self.assertEqual( - len(vars_in_expr), - 0, - msg="Uncertainty set constraint contains no Var objects, consists of a not potentially" - " variable expression.", - ) - - def test_point_in_set(self): - m = ConcreteModel() - m.p1 = Var(initialize=0) - m.p2 = Var(initialize=0) - m.uncertain_params = [m.p1, m.p2] - m.uncertain_param_vars = Var(range(len(m.uncertain_params)), initialize=0) - - bounds = [(-1, 1), (-1, 1)] - Q1 = BoxSet(bounds=bounds) - Q2 = BoxSet(bounds=[(-2, 1), (-1, 2)]) - Q = IntersectionSet(Q1=Q1, Q2=Q2) - self.assertTrue( - Q.point_in_set([0, 0]), msg="Point is not in the IntersectionSet." - ) - - @unittest.skipUnless(baron_available, "Global NLP solver is not available.") - def test_add_bounds_on_uncertain_parameters(self): - m = ConcreteModel() - m.util = Block() - m.util.uncertain_param_vars = Var([0, 1], initialize=0.5) - - bounds = [(-1, 1), (-1, 1)] - Q1 = BoxSet(bounds=bounds) - Q2 = AxisAlignedEllipsoidalSet(center=[0, 0], half_lengths=[5, 5]) - Q = IntersectionSet(Q1=Q1, Q2=Q2) - config = Block() - config.uncertainty_set = Q - config.global_solver = SolverFactory("baron") - - IntersectionSet.add_bounds_on_uncertain_parameters(m, config) - - self.assertNotEqual( - m.util.uncertain_param_vars[0].lb, - None, - "Bounds not added correctly for IntersectionSet", - ) - self.assertNotEqual( - m.util.uncertain_param_vars[0].ub, - None, - "Bounds not added correctly for IntersectionSet", - ) - self.assertNotEqual( - m.util.uncertain_param_vars[1].lb, - None, - "Bounds not added correctly for IntersectionSet", - ) - self.assertNotEqual( - m.util.uncertain_param_vars[1].ub, - None, - "Bounds not added correctly for IntersectionSet", - ) - - -# === master_problem_methods.py -class testInitialConstructMaster(unittest.TestCase): - def test_initial_construct_master(self): - model_data = MasterProblemData() - model_data.timing = None - model_data.working_model = ConcreteModel() - master_data = initial_construct_master(model_data) - self.assertTrue( - hasattr(master_data, "master_model"), - msg="Initial construction of master problem " - "did not create a master problem ConcreteModel object.", - ) - - -class testAddScenarioToMaster(unittest.TestCase): - def test_add_scenario_to_master(self): - working_model = ConcreteModel() - working_model.p = Param([1, 2], initialize=0, mutable=True) - working_model.x = Var() - model_data = MasterProblemData() - model_data.working_model = working_model - model_data.timing = None - master_data = 
initial_construct_master(model_data) - master_data.master_model.scenarios[0, 0].transfer_attributes_from( - working_model.clone() - ) - master_data.master_model.scenarios[0, 0].util = Block() - master_data.master_model.scenarios[0, 0].util.first_stage_variables = [ - master_data.master_model.scenarios[0, 0].x - ] - master_data.master_model.scenarios[0, 0].util.uncertain_params = [ - master_data.master_model.scenarios[0, 0].p[1], - master_data.master_model.scenarios[0, 0].p[2], - ] - add_scenario_to_master(master_data, violations=[1, 1]) - - self.assertEqual( - len(master_data.master_model.scenarios), - 2, - msg="Scenario not added to master correctly. Expected 2 scenarios.", - ) - - -global_solver = "baron" - - -class testSolveMaster(unittest.TestCase): - @unittest.skipUnless(baron_available, "Global NLP solver is not available.") - def test_solve_master(self): - working_model = m = ConcreteModel() - m.x = Var(initialize=0.5, bounds=(0, 10)) - m.y = Var(initialize=1.0, bounds=(0, 5)) - m.z = Var(initialize=0, bounds=(None, None)) - m.p = Param(initialize=1, mutable=True) - m.obj = Objective(expr=m.x) - m.con = Constraint(expr=m.x + m.y + m.z <= 3) - model_data = MasterProblemData() - model_data.working_model = working_model - model_data.timing = None - model_data.iteration = 0 - master_data = initial_construct_master(model_data) - master_data.master_model.scenarios[0, 0].transfer_attributes_from( - working_model.clone() - ) - master_data.master_model.scenarios[0, 0].util = Block() - master_data.master_model.scenarios[0, 0].util.first_stage_variables = [ - master_data.master_model.scenarios[0, 0].x - ] - master_data.master_model.scenarios[0, 0].util.decision_rule_vars = [] - master_data.master_model.scenarios[0, 0].util.second_stage_variables = [] - master_data.master_model.scenarios[0, 0].util.uncertain_params = [ - master_data.master_model.scenarios[0, 0].p - ] - master_data.master_model.scenarios[0, 0].first_stage_objective = 0 - master_data.master_model.scenarios[0, 0].second_stage_objective = Expression( - expr=master_data.master_model.scenarios[0, 0].x - ) - master_data.master_model.scenarios[0, 0].util.dr_var_to_exponent_map = ( - ComponentMap() - ) - master_data.iteration = 0 - master_data.timing = TimingData() - - box_set = BoxSet(bounds=[(0, 2)]) - solver = SolverFactory(global_solver) - config = ConfigBlock() - config.declare("backup_global_solvers", ConfigValue(default=[])) - config.declare("backup_local_solvers", ConfigValue(default=[])) - config.declare("solve_master_globally", ConfigValue(default=True)) - config.declare("global_solver", ConfigValue(default=solver)) - config.declare("tee", ConfigValue(default=False)) - config.declare("decision_rule_order", ConfigValue(default=1)) - config.declare("objective_focus", ConfigValue(default=ObjectiveType.worst_case)) - config.declare( - "second_stage_variables", - ConfigValue( - default=master_data.master_model.scenarios[ - 0, 0 - ].util.second_stage_variables - ), - ) - config.declare("subproblem_file_directory", ConfigValue(default=None)) - config.declare("time_limit", ConfigValue(default=None)) - config.declare( - "progress_logger", ConfigValue(default=logging.getLogger(__name__)) - ) - config.declare("symbolic_solver_labels", ConfigValue(default=False)) - - with time_code(master_data.timing, "main", is_main_timer=True): - master_soln = solve_master(master_data, config) - self.assertEqual( - master_soln.termination_condition, - TerminationCondition.optimal, - msg=( - "Could not solve simple master problem with solve_master " - 
"function." - ), - ) - - -# === regression test for the solver -class coefficientMatchingTests(unittest.TestCase): - def test_coefficient_matching_correct_num_constraints_added(self): - # Write the deterministic Pyomo model - m = ConcreteModel() - m.x1 = Var(initialize=0, bounds=(0, None)) - m.x2 = Var(initialize=0, bounds=(0, None)) - m.u = Param(initialize=1.125, mutable=True) - - m.con = Constraint(expr=m.u ** (0.5) * m.x1 - m.u * m.x2 <= 2) - m.eq_con = Constraint( - expr=m.u**2 * (m.x2 - 1) - + m.u * (m.x1**3 + 0.5) - - 5 * m.u * m.x1 * m.x2 - + m.u * (m.x1 + 2) - == 0 - ) - m.obj = Objective(expr=(m.x1 - 4) ** 2 + (m.x2 - 1) ** 2) - - config = Block() - config.uncertainty_set = Block() - config.uncertainty_set.parameter_bounds = [(0.25, 2)] - - m.util = Block() - m.util.first_stage_variables = [m.x1, m.x2] - m.util.second_stage_variables = [] - m.util.uncertain_params = [m.u] - - config.decision_rule_order = 0 - - m.util.h_x_q_constraints = ComponentSet() - - coeff_matching_success, robust_infeasible = coefficient_matching( - m, m.eq_con, [m.u], config + "solve_master_globally": True, + }, ) + # check successful termination self.assertEqual( - coeff_matching_success, True, msg="Coefficient matching was unsuccessful." - ) - self.assertEqual( - robust_infeasible, - False, - msg="Coefficient matching detected a robust infeasible constraint (1 == 0).", + results.pyros_termination_condition, + pyrosTerminationCondition.robust_optimal, + msg="Did not identify robust optimal solution to problem instance.", ) + + # only one iteration required self.assertEqual( - len(m.coefficient_matching_constraints), - 2, - msg="Coefficient matching produced incorrect number of h(x,q)=0 constraints.", + results.iterations, + 1, + msg=( + "PyROS was unable to solve a singleton discrete set instance " + " successfully within a single iteration." + ), ) - config.decision_rule_order = 1 - model_data = Block() - model_data.working_model = m + @unittest.skipUnless( + baron_license_is_valid, "Global NLP solver is not available and licensed." + ) + def test_two_stg_model_discrete_set(self): + """ + Test PyROS successfully solves two-stage model with + multiple scenarios. 
+ """ + m = build_leyffer() - m.util.first_stage_variables = [m.x1] - m.util.second_stage_variables = [m.x2] + discrete_set = DiscreteScenarioSet(scenarios=[[0.25], [1.125], [2]]) - add_decision_rule_variables(model_data=model_data, config=config) - add_decision_rule_constraints(model_data=model_data, config=config) + global_solver = SolverFactory("baron") + pyros_solver = SolverFactory("pyros") - coeff_matching_success, robust_infeasible = coefficient_matching( - m, m.eq_con, [m.u], config - ) - self.assertEqual( - coeff_matching_success, - False, - msg="Coefficient matching should have been " - "unsuccessful for higher order polynomial expressions.", - ) - self.assertEqual( - robust_infeasible, - False, - msg="Coefficient matching is not successful, " - "but should not be proven robust infeasible.", + res = pyros_solver.solve( + model=m, + first_stage_variables=[m.x1], + second_stage_variables=[m.x2], + uncertain_params=[m.u], + uncertainty_set=discrete_set, + local_solver=global_solver, + global_solver=global_solver, + decision_rule_order=0, + solve_master_globally=True, + objective_focus=ObjectiveType.worst_case, ) - def test_coefficient_matching_robust_infeasible_proof(self): - # Write the deterministic Pyomo model - m = ConcreteModel() - m.x1 = Var(initialize=0, bounds=(0, None)) - m.x2 = Var(initialize=0, bounds=(0, None)) - m.u = Param(initialize=1.125, mutable=True) - - m.con = Constraint(expr=m.u ** (0.5) * m.x1 - m.u * m.x2 <= 2) - m.eq_con = Constraint( - expr=m.u * (m.x1**3 + 0.5) - - 5 * m.u * m.x1 * m.x2 - + m.u * (m.x1 + 2) - + m.u**2 - == 0 + self.assertEqual( + res.pyros_termination_condition, + pyrosTerminationCondition.robust_optimal, + msg=( + "Failed to solve discrete set multiple scenarios instance to " + "robust optimality" + ), ) - m.obj = Objective(expr=(m.x1 - 4) ** 2 + (m.x2 - 1) ** 2) - config = Block() - config.uncertainty_set = Block() - config.uncertainty_set.parameter_bounds = [(0.25, 2)] - m.util = Block() - m.util.first_stage_variables = [m.x1, m.x2] - m.util.second_stage_variables = [] - m.util.uncertain_params = [m.u] - - config.decision_rule_order = 0 - - m.util.h_x_q_constraints = ComponentSet() - - coeff_matching_success, robust_infeasible = coefficient_matching( - m, m.eq_con, [m.u], config +class TestPyROSRobustInfeasible(unittest.TestCase): + @unittest.skipUnless(baron_available, "BARON is not available and licensed") + def test_pyros_robust_infeasible(self): + """ + Test PyROS behavior when robust infeasibility detected + from a master problem. 
+ """ + m = ConcreteModel() + m.q = Param(initialize=0.5, mutable=True) + m.x = Var(bounds=(m.q, 1)) + # makes model infeasible since 2 is outside bounds + m.con1 = Constraint(expr=m.x == 2) + m.obj = Objective(expr=m.x) + baron = SolverFactory("baron") + pyros = SolverFactory("pyros") + results = pyros.solve( + model=m, + first_stage_variables=[m.x], + second_stage_variables=[], + uncertain_params=m.q, + uncertainty_set=BoxSet([[0, 1]]), + local_solver=baron, + global_solver=baron, + solve_master_globally=True, ) self.assertEqual( - coeff_matching_success, - False, - msg="Coefficient matching should have been unsuccessful.", - ) - self.assertEqual( - robust_infeasible, - True, - msg="Coefficient matching should be proven robust infeasible.", + results.pyros_termination_condition, + pyrosTerminationCondition.robust_infeasible, ) + self.assertEqual(results.iterations, 1) + # since x was not initialized + self.assertEqual(results.final_objective_value, None) + + +global_solver = "baron" # === regression test for the solver @unittest.skipUnless(baron_available, "Global NLP solver is not available.") class RegressionTest(unittest.TestCase): - def regression_test_constant_drs(self): - model = m = ConcreteModel() + """ + Collection of regression tests. + """ + + def build_regression_test_model(self): + """ + Create model used for regression tests. + """ + m = ConcreteModel() m.name = "s381" + m.set_params = Set(initialize=list(range(4))) + m.p = Param(m.set_params, initialize=2, mutable=True) + m.x1 = Var(within=Reals, bounds=(0, None), initialize=0.1) m.x2 = Var(within=Reals, bounds=(0, None), initialize=0.1) m.x3 = Var(within=Reals, bounds=(0, None), initialize=0.1) - # === State Vars = [x13] - # === Decision Vars === + m.con1 = Constraint(expr=m.p[1] * m.x1 + m.x2 + m.x3 <= 2) + + m.obj = Objective(expr=(m.x1 - 1) * 2, sense=minimize) + m.decision_vars = [m.x1, m.x2, m.x3] - # === Uncertain Params === - m.set_params = Set(initialize=list(range(4))) - m.p = Param(m.set_params, initialize=2, mutable=True) m.uncertain_params = [m.p] - m.obj = Objective(expr=(m.x1 - 1) * 2, sense=minimize) - m.con1 = Constraint(expr=m.p[1] * m.x1 + m.x2 + m.x3 <= 2) + return m + + @unittest.skipUnless( + baron_license_is_valid, "Global NLP solver is not available and licensed." + ) + def test_regression_constant_drs(self): + m = self.build_regression_test_model() box_set = BoxSet(bounds=[(1.8, 2.2)]) solver = SolverFactory("baron") @@ -3973,25 +532,11 @@ def regression_test_constant_drs(self): pyrosTerminationCondition.robust_feasible, ) - def regression_test_affine_drs(self): - model = m = ConcreteModel() - m.name = "s381" - - m.x1 = Var(within=Reals, bounds=(0, None), initialize=0.1) - m.x2 = Var(within=Reals, bounds=(0, None), initialize=0.1) - m.x3 = Var(within=Reals, bounds=(0, None), initialize=0.1) - - # === State Vars = [x13] - # === Decision Vars === - m.decision_vars = [m.x1, m.x2, m.x3] - - # === Uncertain Params === - m.set_params = Set(initialize=list(range(4))) - m.p = Param(m.set_params, initialize=2, mutable=True) - m.uncertain_params = [m.p] - - m.obj = Objective(expr=(m.x1 - 1) * 2, sense=minimize) - m.con1 = Constraint(expr=m.p[1] * m.x1 + m.x2 + m.x3 <= 2) + @unittest.skipUnless( + baron_license_is_valid, "Global NLP solver is not available and licensed." 
+ ) + def test_regression_affine_drs(self): + m = self.build_regression_test_model() box_set = BoxSet(bounds=[(1.8, 2.2)]) solver = SolverFactory("baron") @@ -4014,25 +559,11 @@ def regression_test_affine_drs(self): pyrosTerminationCondition.robust_feasible, ) - def regression_test_quad_drs(self): - model = m = ConcreteModel() - m.name = "s381" - - m.x1 = Var(within=Reals, bounds=(0, None), initialize=0.1) - m.x2 = Var(within=Reals, bounds=(0, None), initialize=0.1) - m.x3 = Var(within=Reals, bounds=(0, None), initialize=0.1) - - # === State Vars = [x13] - # === Decision Vars === - m.decision_vars = [m.x1, m.x2, m.x3] - - # === Uncertain Params === - m.set_params = Set(initialize=list(range(4))) - m.p = Param(m.set_params, initialize=2, mutable=True) - m.uncertain_params = [m.p] - - m.obj = Objective(expr=(m.x1 - 1) * 2, sense=minimize) - m.con1 = Constraint(expr=m.p[1] * m.x1 + m.x2 + m.x3 <= 2) + @unittest.skipUnless( + baron_license_is_valid, "Global NLP solver is not available and licensed." + ) + def test_regression_quadratic_drs(self): + m = self.build_regression_test_model() box_set = BoxSet(bounds=[(1.8, 2.2)]) solver = SolverFactory("baron") @@ -4055,80 +586,11 @@ def regression_test_quad_drs(self): pyrosTerminationCondition.robust_feasible, ) - @unittest.skipUnless( - baron_license_is_valid, "Global NLP solver is not available and licensed." - ) - def test_minimize_dr_norm(self): - m = ConcreteModel() - m.p1 = Param(initialize=0, mutable=True) - m.p2 = Param(initialize=0, mutable=True) - m.z1 = Var(initialize=0, bounds=(0, 1)) - m.z2 = Var(initialize=0, bounds=(0, 1)) - - m.working_model = ConcreteModel() - m.working_model.util = Block() - - m.working_model.util.second_stage_variables = [m.z1, m.z2] - m.working_model.util.uncertain_params = [m.p1, m.p2] - m.working_model.util.first_stage_variables = [] - m.working_model.util.state_vars = [] - - m.working_model.util.first_stage_variables = [] - config = Bunch() - config.decision_rule_order = 1 - config.objective_focus = ObjectiveType.nominal - config.global_solver = SolverFactory('baron') - config.uncertain_params = m.working_model.util.uncertain_params - config.tee = False - config.solve_master_globally = True - config.time_limit = None - config.progress_logger = logging.getLogger(__name__) - - add_decision_rule_variables(model_data=m, config=config) - add_decision_rule_constraints(model_data=m, config=config) - - # === Make master_type model - master = ConcreteModel() - master.scenarios = Block(NonNegativeIntegers, NonNegativeIntegers) - master.scenarios[0, 0].transfer_attributes_from(m.working_model.clone()) - master.scenarios[0, 0].first_stage_objective = 0 - master.scenarios[0, 0].second_stage_objective = Expression( - expr=(master.scenarios[0, 0].util.second_stage_variables[0] - 1) ** 2 - + (master.scenarios[0, 0].util.second_stage_variables[1] - 1) ** 2 - ) - master.obj = Objective(expr=master.scenarios[0, 0].second_stage_objective) - master_data = MasterProblemData() - master_data.master_model = master - master_data.master_model.const_efficiency_applied = False - master_data.master_model.linear_efficiency_applied = False - master_data.iteration = 0 - - master_data.timing = TimingData() - with time_code(master_data.timing, "main", is_main_timer=True): - results, success = minimize_dr_vars(model_data=master_data, config=config) - self.assertEqual( - results.solver.termination_condition, - TerminationCondition.optimal, - msg="Minimize dr norm did not solve to optimality.", - ) - self.assertTrue( - success, msg=f"DR polishing 
success {success}, expected True." - ) - @unittest.skipUnless( baron_license_is_valid, "Global NLP solver is not available and licensed." ) def test_identifying_violating_param_realization(self): - m = ConcreteModel() - m.x1 = Var(initialize=0, bounds=(0, None)) - m.x2 = Var(initialize=0, bounds=(0, None)) - m.x3 = Var(initialize=0, bounds=(None, None)) - m.u = Param(initialize=1.125, mutable=True) - - m.con1 = Constraint(expr=m.x1 * m.u ** (0.5) - m.x2 * m.u <= 2) - m.con2 = Constraint(expr=m.x1**2 - m.x2**2 * m.u == m.x3) - - m.obj = Objective(expr=(m.x1 - 4) ** 2 + (m.x2 - 1) ** 2) + m = build_leyffer_two_cons() # Define the uncertainty set interval = BoxSet(bounds=[(0.25, 2)]) @@ -4174,16 +636,7 @@ def test_identifying_violating_param_realization(self): "Test known to fail for BARON 23.1.5 and versions preceding 23.6.23", ) def test_terminate_with_max_iter(self): - m = ConcreteModel() - m.x1 = Var(initialize=0, bounds=(0, None)) - m.x2 = Var(initialize=0, bounds=(0, None)) - m.x3 = Var(initialize=0, bounds=(None, None)) - m.u = Param(initialize=1.125, mutable=True) - - m.con1 = Constraint(expr=m.x1 * m.u ** (0.5) - m.x2 * m.u <= 2) - m.con2 = Constraint(expr=m.x1**2 - m.x2**2 * m.u == m.x3) - - m.obj = Objective(expr=(m.x1 - 4) ** 2 + (m.x2 - 1) ** 2) + m = build_leyffer_two_cons() # Define the uncertainty set interval = BoxSet(bounds=[(0.25, 2)]) @@ -4231,16 +684,7 @@ def test_terminate_with_max_iter(self): baron_license_is_valid, "Global NLP solver is not available and licensed." ) def test_terminate_with_time_limit(self): - m = ConcreteModel() - m.x1 = Var(initialize=0, bounds=(0, None)) - m.x2 = Var(initialize=0, bounds=(0, None)) - m.x3 = Var(initialize=0, bounds=(None, None)) - m.u = Param(initialize=1.125, mutable=True) - - m.con1 = Constraint(expr=m.x1 * m.u ** (0.5) - m.x2 * m.u <= 2) - m.con2 = Constraint(expr=m.x1**2 - m.x2**2 * m.u == m.x3) - - m.obj = Objective(expr=(m.x1 - 4) ** 2 + (m.x2 - 1) ** 2) + m = build_leyffer_two_cons() # Define the uncertainty set interval = BoxSet(bounds=[(0.25, 2)]) @@ -4290,6 +734,69 @@ def test_terminate_with_time_limit(self): ), ) + @unittest.skipUnless( + baron_license_is_valid, "Global NLP solver is not available and licensed." 
+    )
+    def test_pyros_backup_solvers(self):
+        m = ConcreteModel()
+        m.name = "s381"
+
+        class BadSolver:
+            def __init__(self, max_num_calls):
+                self.max_num_calls = max_num_calls
+                self.num_calls = 0
+
+            def available(self, exception_flag=True):
+                return True
+
+            def solve(self, *args, **kwargs):
+                if self.num_calls < self.max_num_calls:
+                    self.num_calls += 1
+                    return SolverFactory("baron").solve(*args, **kwargs)
+                res = SolverResults()
+                res.solver.termination_condition = TerminationCondition.maxIterations
+                res.solver.status = SolverStatus.warning
+                return res
+
+        m.x1 = Var(within=Reals, bounds=(0, None), initialize=0.1)
+        m.x2 = Var(within=Reals, bounds=(0, None), initialize=0.1)
+        m.x3 = Var(within=Reals, bounds=(0, None), initialize=0.1)
+
+        # === State Vars = [x13]
+        # === Decision Vars ===
+        m.decision_vars = [m.x1, m.x2, m.x3]
+
+        # === Uncertain Params ===
+        m.set_params = Set(initialize=list(range(4)))
+        m.p = Param(m.set_params, initialize=2, mutable=True)
+        m.uncertain_params = [m.p]
+
+        m.obj = Objective(expr=(m.x1 - 1) * 2, sense=minimize)
+        m.con1 = Constraint(expr=m.p[1] * m.x1 + m.x2 + m.x3 <= 2)
+
+        box_set = BoxSet(bounds=[(1.8, 2.2)])
+        pyros = SolverFactory("pyros")
+        results = pyros.solve(
+            model=m,
+            first_stage_variables=m.decision_vars,
+            second_stage_variables=[],
+            uncertain_params=[m.p[1]],
+            uncertainty_set=box_set,
+            # note: allow 4 calls to work normally
+            # to permit successful solution of uncertainty
+            # bounding problems
+            local_solver=BadSolver(4),
+            global_solver=BadSolver(4),
+            backup_local_solvers=[SolverFactory("baron")],
+            backup_global_solvers=[SolverFactory("baron")],
+            options={"objective_focus": ObjectiveType.nominal},
+            solve_master_globally=True,
+        )
+        self.assertEqual(
+            results.pyros_termination_condition,
+            pyrosTerminationCondition.robust_feasible,
+        )
+
     @unittest.skipUnless(
         SolverFactory('baron').license_is_valid(),
         "Global NLP solver is not available and licensed.",
@@ -4299,16 +806,7 @@ def test_separation_terminate_time_limit(self):
         """
         Test PyROS time limit status returned in event
         separation problem times out.
         """
-        m = ConcreteModel()
-        m.x1 = Var(initialize=0, bounds=(0, None))
-        m.x2 = Var(initialize=0, bounds=(0, None))
-        m.x3 = Var(initialize=0, bounds=(None, None))
-        m.u = Param(initialize=1.125, mutable=True)
-
-        m.con1 = Constraint(expr=m.x1 * m.u ** (0.5) - m.x2 * m.u <= 2)
-        m.con2 = Constraint(expr=m.x1**2 - m.x2**2 * m.u == m.x3)
-
-        m.obj = Objective(expr=(m.x1 - 4) ** 2 + (m.x2 - 1) ** 2)
+        m = build_leyffer_two_cons()
 
         # Define the uncertainty set
         interval = BoxSet(bounds=[(0.25, 2)])
@@ -4354,16 +852,7 @@ def test_pyros_subsolver_time_limit_adjustment(self):
         """
         Check that PyROS does not ultimately alter state of
         subordinate solver options due to time limit adjustments.
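+        PyROS temporarily adjusts subsolver time limits to respect
+        its overall time limit; the original subsolver options
+        should be restored once the solver returns.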
""" - m = ConcreteModel() - m.x1 = Var(initialize=0, bounds=(0, None)) - m.x2 = Var(initialize=0, bounds=(0, None)) - m.x3 = Var(initialize=0, bounds=(None, None)) - m.u = Param(initialize=1.125, mutable=True) - - m.con1 = Constraint(expr=m.x1 * m.u ** (0.5) - m.x2 * m.u <= 2) - m.con2 = Constraint(expr=m.x1**2 - m.x2**2 * m.u == m.x3) - - m.obj = Objective(expr=(m.x1 - 4) ** 2 + (m.x2 - 1) ** 2) + m = build_leyffer_two_cons() # Define the uncertainty set interval = BoxSet(bounds=[(0.25, 2)]) @@ -4542,7 +1031,6 @@ def test_separation_subsolver_error(self): m.obj = Objective(expr=m.x1 + m.x2) box_set = BoxSet(bounds=[(0, 1)]) - d_set = DiscreteScenarioSet(scenarios=[(1,), (0,)]) local_solver = SolverFactory("ipopt") global_solver = SolverFactory("baron") @@ -4568,24 +1056,76 @@ def test_separation_subsolver_error(self): ), ) - # FIXME: This test is expected to fail now, as writing out invalid - # models generates an exception in the problem writer (and is never - # actually sent to the solver) + @unittest.skipUnless(ipopt_available, "IPOPT is not available.") + @unittest.skipUnless(baron_license_is_valid, "BARON is not available and licensed.") + def test_discrete_separation_subsolver_error(self): + """ + Test PyROS for two-stage problem with discrete type set, + subsolver error status. + """ + + class BadSeparationSolver: + def __init__(self, solver): + self.solver = solver + + def available(self, exception_flag=False): + return self.solver.available(exception_flag=exception_flag) + + def solve(self, model, *args, **kwargs): + is_separation = hasattr(model, "uncertainty") + if is_separation: + res = SolverResults() + res.solver.termination_condition = TerminationCondition.unknown + else: + res = self.solver.solve(model, *args, **kwargs) + return res + + m = ConcreteModel() + + m.q = Param(initialize=1, mutable=True) + m.x1 = Var(initialize=1, bounds=(0, 1)) + m.x2 = Var(initialize=2, bounds=(0, m.q)) + m.obj = Objective(expr=m.x1 + m.x2, sense=maximize) + + discrete_set = DiscreteScenarioSet(scenarios=[(1,), (0,)]) + + local_solver = SolverFactory("ipopt") + global_solver = SolverFactory("baron") + pyros_solver = SolverFactory("pyros") + + with LoggingIntercept(level=logging.WARNING) as LOG: + res = pyros_solver.solve( + model=m, + first_stage_variables=[m.x1], + second_stage_variables=[m.x2], + uncertain_params=[m.q], + uncertainty_set=discrete_set, + local_solver=BadSeparationSolver(local_solver), + global_solver=BadSeparationSolver(global_solver), + decision_rule_order=1, + tee=True, + ) + + self.assertRegex(LOG.getvalue(), "Could not.*separation.*iteration 0.*") + self.assertEqual( + res.pyros_termination_condition, pyrosTerminationCondition.subsolver_error + ) + self.assertEqual(res.iterations, 1) + + @unittest.skipUnless(ipopt_available, "IPOPT is not available.") @unittest.skipUnless( baron_license_is_valid, "Global NLP solver is not available and licensed." ) - @unittest.expectedFailure - def test_discrete_separation_subsolver_error(self): + def test_discrete_separation_invalid_value_error(self): """ - Test PyROS for two-stage problem with discrete type set, - subsolver error status. + Test PyROS properly handles InvalidValueError. 
""" m = ConcreteModel() m.q = Param(initialize=1, mutable=True) m.x1 = Var(initialize=1, bounds=(0, 1)) - # upper bound induces subsolver error: separation + # upper bound induces invalid value error: separation # max(x2 - log(m.q)) will force subsolver to q = 0 m.x2 = Var(initialize=2, bounds=(None, log(m.q))) @@ -4597,31 +1137,30 @@ def test_discrete_separation_subsolver_error(self): global_solver = SolverFactory("baron") pyros_solver = SolverFactory("pyros") - res = pyros_solver.solve( - model=m, - first_stage_variables=[m.x1], - second_stage_variables=[m.x2], - uncertain_params=[m.q], - uncertainty_set=discrete_set, - local_solver=local_solver, - global_solver=global_solver, - decision_rule_order=1, - tee=True, - ) - self.assertEqual( - res.pyros_termination_condition, - pyrosTerminationCondition.subsolver_error, - msg=( - "Returned termination condition for separation error" - f"test is not {pyrosTerminationCondition.subsolver_error}." - ), + with LoggingIntercept(level=logging.ERROR) as LOG: + with self.assertRaises(InvalidValueError): + pyros_solver.solve( + model=m, + first_stage_variables=[m.x1], + second_stage_variables=[m.x2], + uncertain_params=[m.q], + uncertainty_set=discrete_set, + local_solver=local_solver, + global_solver=global_solver, + decision_rule_order=1, + tee=True, + ) + + err_str = LOG.getvalue() + self.assertRegex( + err_str, "Optimizer.*exception.*separation problem.*iteration 0" ) @unittest.skipUnless(ipopt_available, "IPOPT is not available.") - def test_pyros_nl_writer_tol(self): + def test_pyros_nl_and_ampl_writer_tol(self): """ Test PyROS subsolver call routine behavior - with respect to the NL writer tolerance is as + with respect to the NL and AMPL writer tolerances is as expected. """ m = ConcreteModel() @@ -4633,7 +1172,7 @@ def test_pyros_nl_writer_tol(self): # fixed just inside the PyROS-specified NL writer tolerance. m.x1.fix(m.x1.upper + 9.9e-5) - current_nl_writer_tol = pyomo_nl_writer.TOL + current_nl_writer_tol = pyomo_nl_writer.TOL, pyomo_ampl_repn.TOL ipopt_solver = SolverFactory("ipopt") pyros_solver = SolverFactory("pyros") @@ -4651,12 +1190,12 @@ def test_pyros_nl_writer_tol(self): ) self.assertEqual( - pyomo_nl_writer.TOL, + (pyomo_nl_writer.TOL, pyomo_ampl_repn.TOL), current_nl_writer_tol, - msg="Pyomo NL writer tolerance not restored as expected.", + msg="Pyomo writer tolerances not restored as expected.", ) - # fixed just outside the PyROS-specified NL writer tolerance. + # fixed just outside the PyROS-specified writer tolerances. # this should be exceptional. m.x1.fix(m.x1.upper + 1.01e-4) @@ -4679,10 +1218,10 @@ def test_pyros_nl_writer_tol(self): ) self.assertEqual( - pyomo_nl_writer.TOL, + (pyomo_nl_writer.TOL, pyomo_ampl_repn.TOL), current_nl_writer_tol, msg=( - "Pyomo NL writer tolerance not restored as expected " + "Pyomo writer tolerances not restored as expected " "after exceptional test." ), ) @@ -4694,7 +1233,7 @@ def test_pyros_math_domain_error(self): """ Test PyROS on a two-stage problem, discrete set type with a math domain error evaluating - performance constraint expressions in separation. + second-stage inequality constraint expressions in separation. 
""" m = ConcreteModel() m.q = Param(initialize=1, mutable=True) @@ -4711,7 +1250,7 @@ def test_pyros_math_domain_error(self): with self.assertRaisesRegex( expected_exception=ArithmeticError, expected_regex=( - "Evaluation of performance constraint.*math domain error.*" + "Evaluation of second-stage inequality constraint.*math domain error.*" ), msg="ValueError arising from math domain error not raised", ): @@ -4737,8 +1276,8 @@ def test_pyros_math_domain_error(self): def test_pyros_no_perf_cons(self): """ Ensure PyROS properly accommodates models with no - performance constraints (such as effectively deterministic - models). + second-stage inequality constraints + (such as effectively deterministic models). """ m = ConcreteModel() m.x = Var(bounds=(0, 1)) @@ -4774,15 +1313,7 @@ def test_nominal_focus_robust_feasible(self): Test problem under nominal objective focus terminates successfully. """ - m = ConcreteModel() - m.x1 = Var(initialize=0, bounds=(0, None)) - m.x2 = Var(initialize=0, bounds=(0, None)) - m.x3 = Var(initialize=0, bounds=(None, None)) - m.u = Param(initialize=1.125, mutable=True) - - m.con1 = Constraint(expr=m.x1 * m.u ** (0.5) - m.x2 * m.u <= 2) - m.con2 = Constraint(expr=m.x1**2 - m.x2**2 * m.u == m.x3) - m.obj = Objective(expr=(m.x1 - 4) ** 2 + (m.x2 - 1) ** 2) + m = build_leyffer_two_cons() # singleton set, guaranteed robust feasibility discrete_scenarios = DiscreteScenarioSet(scenarios=[[1.125]]) @@ -4821,16 +1352,7 @@ def test_nominal_focus_robust_feasible(self): baron_license_is_valid, "Global NLP solver is not available and licensed." ) def test_discrete_separation(self): - m = ConcreteModel() - m.x1 = Var(initialize=0, bounds=(0, None)) - m.x2 = Var(initialize=0, bounds=(0, None)) - m.x3 = Var(initialize=0, bounds=(None, None)) - m.u = Param(initialize=1.125, mutable=True) - - m.con1 = Constraint(expr=m.x1 * m.u ** (0.5) - m.x2 * m.u <= 2) - m.con2 = Constraint(expr=m.x1**2 - m.x2**2 * m.u == m.x3) - - m.obj = Objective(expr=(m.x1 - 4) ** 2 + (m.x2 - 1) ** 2) + m = build_leyffer_two_cons() # Define the uncertainty set discrete_scenarios = DiscreteScenarioSet(scenarios=[[0.25], [2.0], [1.125]]) @@ -4864,22 +1386,10 @@ def test_discrete_separation(self): ) @unittest.skipUnless( - baron_license_is_valid, "Global NLP solver is not available and licensed." - ) - @unittest.skipUnless( - baron_version == (23, 1, 5), "Test runs >90 minutes with Baron 22.9.30" + scip_available and scip_license_is_valid, "SCIP is not available and licensed." 
     )
     def test_higher_order_decision_rules(self):
-        m = ConcreteModel()
-        m.x1 = Var(initialize=0, bounds=(0, None))
-        m.x2 = Var(initialize=0, bounds=(0, None))
-        m.x3 = Var(initialize=0, bounds=(None, None))
-        m.u = Param(initialize=1.125, mutable=True)
-
-        m.con1 = Constraint(expr=m.x1 * m.u ** (0.5) - m.x2 * m.u <= 2)
-        m.con2 = Constraint(expr=m.x1**2 - m.x2**2 * m.u == m.x3)
-
-        m.obj = Objective(expr=(m.x1 - 4) ** 2 + (m.x2 - 1) ** 2)
+        m = build_leyffer_two_cons()
 
         # Define the uncertainty set
         interval = BoxSet(bounds=[(0.25, 2)])
@@ -4888,8 +1398,8 @@ def test_higher_order_decision_rules(self):
         pyros_solver = SolverFactory("pyros")
 
         # Define subsolvers utilized in the algorithm
-        local_subsolver = SolverFactory('baron')
-        global_subsolver = SolverFactory("baron")
+        local_subsolver = SolverFactory("scip")
+        global_subsolver = SolverFactory("scip")
 
         # Call the PyROS solver
         results = pyros_solver.solve(
@@ -4916,12 +1426,7 @@ def test_higher_order_decision_rules(self):
     @unittest.skipUnless(scip_available, "Global NLP solver is not available.")
     def test_coefficient_matching_solve(self):
         # Write the deterministic Pyomo model
-        m = ConcreteModel()
-        m.x1 = Var(initialize=0, bounds=(0, None))
-        m.x2 = Var(initialize=0, bounds=(0, None))
-        m.u = Param(initialize=1.125, mutable=True)
-
-        m.con = Constraint(expr=m.u ** (0.5) * m.x1 - m.u * m.x2 <= 2)
+        m = build_leyffer()
         m.eq_con = Constraint(
             expr=m.u**2 * (m.x2 - 1)
             + m.u * (m.x1**3 + 0.5)
@@ -4929,7 +1434,6 @@ def test_coefficient_matching_solve(self):
             + m.u * (m.x1 + 2)
             == 0
         )
-        m.obj = Objective(expr=(m.x1 - 4) ** 2 + (m.x2 - 1) ** 2)
 
         interval = BoxSet(bounds=[(0.25, 2)])
 
@@ -4958,7 +1462,10 @@ def test_coefficient_matching_solve(self):
         self.assertEqual(
             results.pyros_termination_condition,
             pyrosTerminationCondition.robust_optimal,
-            msg="Non-optimal termination condition from robust feasible coefficient matching problem.",
+            msg=(
+                "Non-optimal termination condition from robust "
+                "feasible coefficient matching problem."
+            ),
         )
         self.assertAlmostEqual(
             results.final_objective_value,
@@ -4967,7 +1474,7 @@ def test_coefficient_matching_solve(self):
             msg="Incorrect objective function value.",
         )
 
-    def create_mitsos_4_3(self):
+    def build_mitsos_4_3(self):
         """
         Create instance of Problem 4_3 from
         Mitsos (2011)'s Test Set of semi-infinite programs.
@@ -4996,13 +1503,17 @@ def create_mitsos_4_3(self):
         baron_license_is_valid and scip_available and scip_license_is_valid,
         "Global solvers BARON and SCIP not both available and licensed",
     )
+    @unittest.skipIf(
+        (24, 1, 5) <= baron_version <= (24, 5, 8),
+        f"Test expected to fail for BARON version {baron_version}",
+    )
     def test_coeff_matching_solver_insensitive(self):
         """
        Check that result for instance with constraint subject to
        coefficient matching is insensitive to subsolver settings.
        Based on Mitsos (2011) semi-infinite programming instance
        4_3.
        """
""" - m = self.create_mitsos_4_3() + m = self.build_mitsos_4_3() # instantiate BARON subsolver and PyROS solver baron = SolverFactory("baron") @@ -5035,6 +1546,8 @@ def test_coeff_matching_solver_insensitive(self): ) np.testing.assert_allclose( actual=res.final_objective_value, + # this value can be hand-calculated by analyzing the + # initial master problem desired=0.9781633, rtol=0, atol=5e-3, @@ -5044,7 +1557,9 @@ def test_coeff_matching_solver_insensitive(self): ), ) - @unittest.skipUnless(scip_available, "NLP solver is not available.") + @unittest.skipUnless( + scip_available and scip_license_is_valid, "SCIP is not available and licensed." + ) def test_coefficient_matching_partitioning_insensitive(self): """ Check that result for instance with constraint subject to @@ -5052,10 +1567,9 @@ def test_coefficient_matching_partitioning_insensitive(self): is based on Mitsos (2011) semi-infinite programming instance 4_3. """ - m = self.create_mitsos_4_3() + m = self.build_mitsos_4_3() - # instantiate BARON subsolver and PyROS solver - baron = SolverFactory("scip") + global_solver = SolverFactory("scip") pyros_solver = SolverFactory("pyros") # solve with PyROS @@ -5070,8 +1584,8 @@ def test_coefficient_matching_partitioning_insensitive(self): second_stage_variables=partitioning["ssv"], uncertain_params=[m.u], uncertainty_set=BoxSet(bounds=[[0, 1]]), - local_solver=baron, - global_solver=baron, + local_solver=global_solver, + global_solver=global_solver, objective_focus=ObjectiveType.worst_case, solve_master_globally=True, bypass_local_separation=True, @@ -5100,65 +1614,10 @@ def test_coefficient_matching_partitioning_insensitive(self): ), ) - def test_coefficient_matching_raises_error_4_3(self): - """ - Check that result for instance with constraint subject to - coefficient matching results in exception certifying robustness - cannot be certified where expected. Model - is based on Mitsos (2011) semi-infinite programming instance - 4_3. - """ - m = self.create_mitsos_4_3() - - # instantiate BARON subsolver and PyROS solver - baron = SolverFactory("baron") - pyros_solver = SolverFactory("pyros") - - # solve with PyROS - dr_orders = [1, 2] - for dr_order in dr_orders: - regex_assert_mgr = self.assertRaisesRegex( - ValueError, - expected_regex=( - "Coefficient matching unsuccessful. See the solver logs." 
- ), - ) - logging_intercept_mgr = LoggingIntercept(level=logging.ERROR) - - with regex_assert_mgr, logging_intercept_mgr as LOG: - pyros_solver.solve( - model=m, - first_stage_variables=[], - second_stage_variables=[m.x1, m.x2, m.x3], - uncertain_params=[m.u], - uncertainty_set=BoxSet(bounds=[[0, 1]]), - local_solver=baron, - global_solver=baron, - objective_focus=ObjectiveType.worst_case, - decision_rule_order=dr_order, - solve_master_globally=True, - bypass_local_separation=True, - robust_feasibility_tolerance=1e-4, - ) - - detailed_error_msg = LOG.getvalue() - self.assertRegex( - detailed_error_msg[:-1], - ( - r"Equality constraint.*cannot be guaranteed to " - r"be robustly feasible.*" - r"Consider editing this constraint.*" - ), - ) - + @unittest.skipUnless(baron_available, "BARON is not available.") def test_coefficient_matching_robust_infeasible_proof_in_pyros(self): # Write the deterministic Pyomo model - m = ConcreteModel() - m.x1 = Var(initialize=0, bounds=(0, None)) - m.x2 = Var(initialize=0, bounds=(0, None)) - m.u = Param(initialize=1.125, mutable=True) - - m.con = Constraint(expr=m.u ** (0.5) * m.x1 - m.u * m.x2 <= 2) + m = build_leyffer() m.eq_con = Constraint( expr=m.u * (m.x1**3 + 0.5) - 5 * m.u * m.x1 * m.x2 @@ -5166,7 +1625,6 @@ def test_coefficient_matching_robust_infeasible_proof_in_pyros(self): + m.u**2 == 0 ) - m.obj = Objective(expr=(m.x1 - 4) ** 2 + (m.x2 - 1) ** 2) interval = BoxSet(bounds=[(0.25, 2)]) @@ -5174,7 +1632,7 @@ def test_coefficient_matching_robust_infeasible_proof_in_pyros(self): pyros_solver = SolverFactory("pyros") # Define subsolvers utilized in the algorithm - local_subsolver = SolverFactory('baron') + local_subsolver = SolverFactory("baron") global_subsolver = SolverFactory("baron") # Call the PyROS solver @@ -5199,22 +1657,15 @@ def test_coefficient_matching_robust_infeasible_proof_in_pyros(self): msg="Robust infeasible problem not identified via coefficient matching.", ) + @unittest.skipUnless(ipopt_available, "IPOPT not available.") def test_coefficient_matching_nonlinear_expr(self): - # Write the deterministic Pyomo model - m = ConcreteModel() - m.x1 = Var(initialize=0, bounds=(0, None)) - m.x2 = Var(initialize=0, bounds=(0, None)) - m.u = Param(initialize=1.125, mutable=True) - - m.con = Constraint(expr=m.u ** (0.5) * m.x1 - m.u * m.x2 <= 2) - m.eq_con = Constraint( - expr=m.u**2 * (m.x2 - 1) - + m.u * (m.x1**3 + 0.5) - - 5 * m.u * m.x1 * m.x2 - + m.u * (m.x1 + 2) - == 0 - ) - m.obj = Objective(expr=(m.x1 - 4) ** 2 + (m.x2 - 1) ** 2) + """ + Test behavior of PyROS solver for model with + equality constraint that cannot be reformulated via + coefficient matching due to nonlinearity. 
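+        PyROS should log (at DEBUG level) that the equality
+        constraint cannot be reformulated, and should still
+        terminate with a robust feasible status.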
+ """ + m = build_leyffer() + m.eq_con = Constraint(expr=m.u**2 * (m.x2 - 1) == 0) interval = BoxSet(bounds=[(0.25, 2)]) @@ -5222,15 +1673,11 @@ def test_coefficient_matching_nonlinear_expr(self): pyros_solver = SolverFactory("pyros") # Define subsolvers utilized in the algorithm - local_subsolver = SolverFactory('baron') - global_subsolver = SolverFactory("baron") + local_subsolver = SolverFactory("ipopt") + global_subsolver = SolverFactory("ipopt") # Call the PyROS solver - with self.assertRaises( - ValueError, - msg="ValueError should be raised for general " - "nonlinear expressions in h(x,z,q)=0 constraints.", - ): + with LoggingIntercept(module="pyomo.contrib.pyros", level=logging.DEBUG) as LOG: results = pyros_solver.solve( model=m, first_stage_variables=[m.x1], @@ -5241,26 +1688,30 @@ def test_coefficient_matching_nonlinear_expr(self): global_solver=global_subsolver, options={ "objective_focus": ObjectiveType.worst_case, - "solve_master_globally": True, + "solve_master_globally": False, + "bypass_global_separation": True, "decision_rule_order": 1, }, ) + pyros_log = LOG.getvalue() + self.assertRegex( + pyros_log, r".*Equality constraint '.*eq_con.*'.*cannot be written.*" + ) + + self.assertEqual( + results.pyros_termination_condition, + pyrosTerminationCondition.robust_feasible, + ) + @unittest.skipUnless(scip_available, "Global NLP solver is not available.") class testBypassingSeparation(unittest.TestCase): + @unittest.skipUnless(scip_available, "SCIP is not available.") + @unittest.skipUnless(ipopt_available, "IPOPT is not available.") def test_bypass_global_separation(self): """Test bypassing of global separation solve calls.""" - m = ConcreteModel() - m.x1 = Var(initialize=0, bounds=(0, None)) - m.x2 = Var(initialize=0, bounds=(0, None)) - m.x3 = Var(initialize=0, bounds=(None, None)) - m.u = Param(initialize=1.125, mutable=True) - - m.con1 = Constraint(expr=m.x1 * m.u ** (0.5) - m.x2 * m.u <= 2) - m.con2 = Constraint(expr=m.x1**2 - m.x2**2 * m.u == m.x3) - - m.obj = Objective(expr=(m.x1 - 4) ** 2 + (m.x2 - 1) ** 2) + m = build_leyffer_two_cons() # Define the uncertainty set interval = BoxSet(bounds=[(0.25, 2)]) @@ -5398,18 +1849,7 @@ class testModelMultipleObjectives(unittest.TestCase): def test_multiple_objs(self): """Test bypassing of global separation solve calls.""" - m = ConcreteModel() - m.x1 = Var(initialize=0, bounds=(0, None)) - m.x2 = Var(initialize=0, bounds=(0, None)) - m.x3 = Var(initialize=0, bounds=(None, None)) - m.u = Param(initialize=1.125, mutable=True) - - m.con1 = Constraint(expr=m.x1 * m.u ** (0.5) - m.x2 * m.u <= 2) - m.con2 = Constraint(expr=m.x1**2 - m.x2**2 * m.u == m.x3) - - m.obj = Objective(expr=(m.x1 - 4) ** 2 + (m.x2 - 1) ** 2) - - # add another objective + m = build_leyffer_two_cons() m.obj2 = Objective(expr=m.obj.expr / 2) # add block, with another objective @@ -5491,188 +1931,19 @@ def test_multiple_objs(self): ) -class testModelIdentifyObjectives(unittest.TestCase): - """ - This class contains tests for validating routines used to - determine the first-stage and second-stage portions of a - two-stage expression. - """ - - def test_identify_objectives(self): - """ - Test first and second-stage objective identification - for a simple two-stage model. 
- """ - # model - m = ConcreteModel() - - # parameters - m.p = Param(range(4), initialize=1, mutable=True) - m.q = Param(initialize=1) - - # variables - m.x = Var(range(4)) - m.z = Var() - m.y = Var(initialize=2) - - # objective - m.obj = Objective( - expr=( - (m.x[0] + m.y) - * ( - sum(m.x[idx] * m.p[idx] for idx in range(3)) - + m.q * m.z - + m.x[0] * m.q - ) - + sin(m.x[0] + m.q) - + cos(m.x[2] + m.z) - ) - ) - - # util block for specifying DOF and uncertainty - m.util = Block() - m.util.first_stage_variables = list(m.x.values()) - m.util.second_stage_variables = [m.z] - m.util.uncertain_params = [m.p[0], m.p[1]] - - identify_objective_functions(m, m.obj) - - fsv_set = ComponentSet(m.util.first_stage_variables) - uncertain_param_set = ComponentSet(m.util.uncertain_params) - - # determine vars and uncertain params participating in - # objective - fsv_in_obj = ComponentSet( - var for var in identify_variables(m.obj) if var in fsv_set - ) - ssv_in_obj = ComponentSet( - var for var in identify_variables(m.obj) if var not in fsv_set - ) - uncertain_params_in_obj = ComponentSet( - param - for param in identify_mutable_parameters(m.obj) - if param in uncertain_param_set - ) - - # determine vars and uncertain params participating in - # first-stage objective - fsv_in_first_stg_cost = ComponentSet( - var for var in identify_variables(m.first_stage_objective) if var in fsv_set - ) - ssv_in_first_stg_cost = ComponentSet( - var - for var in identify_variables(m.first_stage_objective) - if var not in fsv_set - ) - uncertain_params_in_first_stg_cost = ComponentSet( - param - for param in identify_mutable_parameters(m.first_stage_objective) - if param in uncertain_param_set - ) - - # determine vars and uncertain params participating in - # second-stage objective - fsv_in_second_stg_cost = ComponentSet( - var - for var in identify_variables(m.second_stage_objective) - if var in fsv_set - ) - ssv_in_second_stg_cost = ComponentSet( - var - for var in identify_variables(m.second_stage_objective) - if var not in fsv_set - ) - uncertain_params_in_second_stg_cost = ComponentSet( - param - for param in identify_mutable_parameters(m.second_stage_objective) - if param in uncertain_param_set - ) - - # now perform checks - self.assertTrue( - fsv_in_first_stg_cost | fsv_in_second_stg_cost == fsv_in_obj, - f"{{var.name for var in fsv_in_first_stg_cost | fsv_in_second_stg_cost}} " - f"is not {{var.name for var in fsv_in_obj}}", - ) - self.assertFalse( - ssv_in_first_stg_cost, - f"First-stage expression {str(m.first_stage_objective.expr)}" - f" consists of non first-stage variables " - f"{{var.name for var in fsv_in_second_stg_cost}}", - ) - self.assertTrue( - ssv_in_second_stg_cost == ssv_in_obj, - f"{[var.name for var in ssv_in_second_stg_cost]} is not" - f"{{var.name for var in ssv_in_obj}}", - ) - self.assertFalse( - uncertain_params_in_first_stg_cost, - f"First-stage expression {str(m.first_stage_objective.expr)}" - " consists of uncertain params" - f" {{p.name for p in uncertain_params_in_first_stg_cost}}", - ) - self.assertTrue( - uncertain_params_in_second_stg_cost == uncertain_params_in_obj, - f"{{p.name for p in uncertain_params_in_second_stg_cost}} is not " - f"{{p.name for p in uncertain_params_in_obj}}", - ) - - def test_identify_objectives_var_expr(self): - """ - Test first and second-stage objective identification - for an objective expression consisting only of a Var. 
- """ - # model - m = ConcreteModel() - - # parameters - m.p = Param(range(4), initialize=1, mutable=True) - m.q = Param(initialize=1) - - # variables - m.x = Var(range(4)) - - # objective - m.obj = Objective(expr=m.x[1]) - - # util block for specifying DOF and uncertainty - m.util = Block() - m.util.first_stage_variables = list(m.x.values()) - m.util.second_stage_variables = list() - m.util.uncertain_params = list() - - identify_objective_functions(m, m.obj) - fsv_in_second_stg_obj = list( - v.name for v in identify_variables(m.second_stage_objective) - ) - - # perform checks - self.assertTrue(list(identify_variables(m.first_stage_objective)) == [m.x[1]]) - self.assertFalse( - fsv_in_second_stg_obj, - "Second stage objective contains variable(s) " f"{fsv_in_second_stg_obj}", - ) - - -class testMasterFeasibilityUnitConsistency(unittest.TestCase): +class TestMasterFeasibilityUnitConsistency(unittest.TestCase): """ Test cases for models with unit-laden model components. """ @unittest.skipUnless( - baron_license_is_valid, "Global NLP solver is not available and licensed." - ) - @unittest.skipUnless( - baron_version < (23, 1, 5), "Test known to fail beginning with Baron 23.1.5" + scip_available and scip_license_is_valid, "SCIP is not available and licensed." ) def test_two_stg_mod_with_axis_aligned_set(self): """ Test two-stage model with `AxisAlignedEllipsoidalSet` as the uncertainty set. """ - from pyomo.environ import units as u - - # define model m = ConcreteModel() m.x1 = Var(initialize=0, bounds=(0, None)) m.x2 = Var(initialize=0, bounds=(0, None), units=u.m) @@ -5693,8 +1964,8 @@ def test_two_stg_mod_with_axis_aligned_set(self): pyros_solver = SolverFactory("pyros") # Define subsolvers utilized in the algorithm - local_subsolver = SolverFactory('baron') - global_subsolver = SolverFactory("baron") + local_subsolver = SolverFactory("scip") + global_subsolver = SolverFactory("scip") # Call the PyROS solver # note: second-stage variable and uncertain params have units @@ -5742,20 +2013,7 @@ def simple_nlp_model(self): Create simple NLP for the unit tests defined within this class """ - # define model - m = ConcreteModel() - m.x1 = Var(initialize=0, bounds=(0, None)) - m.x2 = Var(initialize=0, bounds=(0, None)) - m.x3 = Var(initialize=0, bounds=(None, None)) - m.u1 = Param(initialize=1.125, mutable=True) - m.u2 = Param(initialize=1, mutable=True) - - m.con1 = Constraint(expr=m.x1 * m.u1 ** (0.5) - m.x2 * m.u1 <= 2) - m.con2 = Constraint(expr=m.x1**2 - m.x2**2 * m.u1 == m.x3) - - m.obj = Objective(expr=(m.x1 - 4) ** 2 + (m.x2 - m.u2) ** 2) - - return m + return build_leyffer_two_cons_two_params() @unittest.skipUnless( SolverFactory('appsi_ipopt').available(exception_flag=False), @@ -5856,25 +2114,14 @@ def test_pyros_gams_ipopt(self): ) @unittest.skipUnless( - baron_license_is_valid, "Global NLP solver is not available and licensed." + scip_available and scip_license_is_valid, "SCIP is not available and licensed." ) def test_two_stg_mod_with_intersection_set(self): """ Test two-stage model with `AxisAlignedEllipsoidalSet` as the uncertainty set. 
""" - # define model - m = ConcreteModel() - m.x1 = Var(initialize=0, bounds=(0, None)) - m.x2 = Var(initialize=0, bounds=(0, None)) - m.x3 = Var(initialize=0, bounds=(None, None)) - m.u1 = Param(initialize=1.125, mutable=True) - m.u2 = Param(initialize=1, mutable=True) - - m.con1 = Constraint(expr=m.x1 * m.u1 ** (0.5) - m.x2 * m.u1 <= 2) - m.con2 = Constraint(expr=m.x1**2 - m.x2**2 * m.u1 == m.x3) - - m.obj = Objective(expr=(m.x1 - 4) ** 2 + (m.x2 - m.u2) ** 2) + m = self.simple_nlp_model() # construct the IntersectionSet ellipsoid = AxisAlignedEllipsoidalSet(center=[1.125, 1], half_lengths=[1, 0]) @@ -5885,8 +2132,8 @@ def test_two_stg_mod_with_intersection_set(self): pyros_solver = SolverFactory("pyros") # Define subsolvers utilized in the algorithm - local_subsolver = SolverFactory('baron') - global_subsolver = SolverFactory("baron") + local_subsolver = SolverFactory("scip") + global_subsolver = SolverFactory("scip") # Call the PyROS solver results = pyros_solver.solve( @@ -6049,8 +2296,8 @@ def test_log_iter_record_not_all_sep_solved(self): solver time limit was reached, or the user-provides subordinate optimizer(s) were unable to solve a separation subproblem to an acceptable level. - A '+' should be appended to the number of performance constraints - found to be violated. + A '+' should be appended to the number of second-stage + inequality constraints found to be violated. """ # for some fields, we choose floats with more than four # four decimal points to ensure rounding also matches @@ -6441,18 +2688,7 @@ def test_pyros_kwargs_with_overlap(self): keyword arguments passed explicitly and implicitly through `options`. """ - # define model - m = ConcreteModel() - m.x1 = Var(initialize=0, bounds=(0, None)) - m.x2 = Var(initialize=0, bounds=(0, None)) - m.x3 = Var(initialize=0, bounds=(None, None)) - m.u1 = Param(initialize=1.125, mutable=True) - m.u2 = Param(initialize=1, mutable=True) - - m.con1 = Constraint(expr=m.x1 * m.u1 ** (0.5) - m.x2 * m.u1 <= 2) - m.con2 = Constraint(expr=m.x1**2 - m.x2**2 * m.u1 == m.x3) - - m.obj = Objective(expr=(m.x1 - 4) ** 2 + (m.x2 - m.u2) ** 2) + m = build_leyffer_two_cons_two_params() # Define the uncertainty set # we take the parameter `u2` to be 'fixed' @@ -6557,17 +2793,7 @@ def build_simple_test_model(self): """ Build simple valid test model. 
""" - m = ConcreteModel(name="test_model") - - m.x1 = Var(initialize=0, bounds=(0, None)) - m.x2 = Var(initialize=0, bounds=(0, None)) - m.u = Param(initialize=1.125, mutable=True) - - m.con1 = Constraint(expr=m.x1 * m.u ** (0.5) - m.x2 * m.u <= 2) - - m.obj = Objective(expr=(m.x1 - 4) ** 2 + (m.x2 - 1) ** 2) - - return m + return build_leyffer() def test_pyros_invalid_model_type(self): """ @@ -6711,47 +2937,33 @@ def test_pyros_vars_not_in_model(self): global_solver = SimpleTestSolver() pyros = SolverFactory("pyros") - mdl.bad_con = Constraint(expr=mdl2.x1 + mdl2.x2 >= 1) - - desc_dof_map = [ - ("first-stage", [mdl2.x1], [], 2), - ("second-stage", [], [mdl2.x2], 2), - ("state", [mdl.x1], [], 3), - ] + mdl.bad_con = Constraint(expr=mdl.x1 + mdl2.x2 >= 1) + mdl2.x3 = Var(initialize=1) # now perform checks - for vardesc, first_stage_vars, second_stage_vars, numlines in desc_dof_map: - with LoggingIntercept(level=logging.ERROR) as LOG: - exc_str = ( - "Found entries of " - f"{vardesc} variables not descended from.*model.*" + with LoggingIntercept(level=logging.ERROR) as LOG: + exc_str = "Found Vars.*active.*" "not descended from.*model.*" + with self.assertRaisesRegex(ValueError, exc_str): + pyros.solve( + model=mdl, + first_stage_variables=[mdl.x1, mdl.x2], + second_stage_variables=[mdl2.x3], + uncertain_params=[mdl.u], + uncertainty_set=BoxSet([[1 / 4, 2]]), + local_solver=local_solver, + global_solver=global_solver, ) - with self.assertRaisesRegex(ValueError, exc_str): - pyros.solve( - model=mdl, - first_stage_variables=first_stage_vars, - second_stage_variables=second_stage_vars, - uncertain_params=[mdl.u], - uncertainty_set=BoxSet([[1 / 4, 2]]), - local_solver=local_solver, - global_solver=global_solver, - ) - - log_msgs = LOG.getvalue().split("\n")[:-1] - - # check detailed log message is as expected - self.assertEqual( - len(log_msgs), - numlines, - "Error-level log message does not contain expected number of lines.", - ) - self.assertRegex( - text=log_msgs[0], - expected_regex=( - f"The following {vardesc} variables" - ".*not descended from.*model with name 'model1'" - ), - ) + + log_msgs = LOG.getvalue().split("\n") + invalid_vars_strs_list = log_msgs[1:-1] + self.assertEqual( + len(invalid_vars_strs_list), + 1, + msg="Number of lines referencing name of invalid Vars not as expected.", + ) + self.assertRegex( + text=invalid_vars_strs_list[0], expected_regex=f"{mdl2.x2.name!r}" + ) def test_pyros_non_continuous_vars(self): """ @@ -6761,6 +2973,7 @@ def test_pyros_non_continuous_vars(self): # build model; make one variable discrete mdl = self.build_simple_test_model() mdl.x2.domain = NonNegativeIntegers + mdl.name = "test_model" # prepare solvers pyros = SolverFactory("pyros") diff --git a/pyomo/contrib/pyros/tests/test_master.py b/pyomo/contrib/pyros/tests/test_master.py new file mode 100644 index 00000000000..b87495cbcd8 --- /dev/null +++ b/pyomo/contrib/pyros/tests/test_master.py @@ -0,0 +1,718 @@ +# ___________________________________________________________________________ +# +# Pyomo: Python Optimization Modeling Objects +# Copyright (c) 2008-2024 +# National Technology and Engineering Solutions of Sandia, LLC +# Under the terms of Contract DE-NA0003525 with National Technology and +# Engineering Solutions of Sandia, LLC, the U.S. Government retains certain +# rights in this software. +# This software is distributed under the 3-clause BSD License. 
+# ___________________________________________________________________________
+
+"""
+Test methods for construction and solution of master problem
+objects.
+"""
+
+
+import logging
+import time
+import pyomo.common.unittest as unittest
+
+from pyomo.common.collections import Bunch
+from pyomo.common.dependencies import numpy_available, scipy_available
+from pyomo.core.base import ConcreteModel, Constraint, minimize, Objective, Param, Var
+from pyomo.core.expr import exp
+from pyomo.core.expr.compare import assertExpressionsEqual
+from pyomo.environ import SolverFactory
+from pyomo.opt import TerminationCondition
+
+from pyomo.contrib.pyros.master_problem_methods import (
+ add_scenario_block_to_master_problem,
+ construct_initial_master_problem,
+ construct_master_feasibility_problem,
+ construct_dr_polishing_problem,
+ MasterProblemData,
+ higher_order_decision_rule_efficiency,
+)
+from pyomo.contrib.pyros.util import (
+ ModelData,
+ preprocess_model_data,
+ ObjectiveType,
+ time_code,
+ TimingData,
+ VariablePartitioning,
+ pyrosTerminationCondition,
+)
+
+
+if not (numpy_available and scipy_available):
+ raise unittest.SkipTest("Packages numpy and scipy must both be available.")
+
+_baron = SolverFactory("baron")
+baron_available = _baron.available()
+baron_license_is_valid = _baron.license_is_valid()
+
+
+logger = logging.getLogger(__name__)
+
+
+def build_simple_model_data(objective_focus="worst_case", decision_rule_order=1):
+ """
+ Build simple model data object for the master problem tests.
+ """
+ m = ConcreteModel()
+ m.u = Param(initialize=0.5, mutable=True)
+ m.x1 = Var(bounds=[-1000, 1000], initialize=1)
+ m.x2 = Var(bounds=[-1000, 1000], initialize=1)
+ m.x3 = Var(bounds=[-1000, 1000], initialize=-3)
+ m.con = Constraint(expr=exp(m.u - 1) - m.x1 - m.x2 * m.u - m.x3 * m.u**2 <= 0)
+ m.eq_con = Constraint(expr=m.x2 - 1 == 0)
+
+ m.obj = Objective(expr=m.x1 + m.x2 / 2 + m.x3 / 3)
+
+ config = Bunch(
+ uncertain_params=[m.u],
+ objective_focus=ObjectiveType[objective_focus],
+ decision_rule_order=decision_rule_order,
+ progress_logger=logger,
+ nominal_uncertain_param_vals=[0.4],
+ separation_priority_order=dict(),
+ )
+ model_data = ModelData(original_model=m, timing=TimingData(), config=config)
+ user_var_partitioning = VariablePartitioning(
+ first_stage_variables=[m.x1],
+ second_stage_variables=[m.x2, m.x3],
+ state_variables=[],
+ )
+
+ preprocess_model_data(model_data, user_var_partitioning)
+
+ return model_data
+
+
+class TestConstructMasterProblem(unittest.TestCase):
+ """
+ Tests for construction of the master problem and
+ scenario sub-blocks.
+ """
+
+ def test_initial_construct_master(self):
+ """
+ Test initial construction of the master problem
+ from the preprocessed working model.
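+
+ As a rough sketch (component names as asserted below),
+ the initial master problem is expected to resemble:
+
+ master_model.epigraph_obj # active epigraph objective
+ master_model.scenarios[0, 0] # nominal scenario block
+ master_model.scenarios[0, 0].user_model.u # nominal value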
+ """ + model_data = build_simple_model_data() + master_model = construct_initial_master_problem(model_data) + + self.assertTrue(hasattr(master_model, "scenarios")) + self.assertIsNot(master_model.scenarios[0, 0], model_data.working_model) + self.assertTrue(master_model.epigraph_obj.active) + self.assertIs( + master_model.epigraph_obj.expr, + master_model.scenarios[0, 0].first_stage.epigraph_var, + ) + + # check all the variables (including first-stage ones) + # were cloned + nadj_var_zip = zip( + master_model.scenarios[0, 0].all_nonadjustable_variables, + model_data.working_model.all_nonadjustable_variables, + ) + for master_var, wm_var in nadj_var_zip: + self.assertIsNot( + master_var, + wm_var, + f"Variable with name {wm_var.name!r} not cloned as expected.", + ) + + # check parameter value is set to the nominal realization + self.assertEqual( + master_model.scenarios[0, 0].user_model.u.value, + model_data.config.nominal_uncertain_param_vals[0], + ) + + def test_add_scenario_block_to_master(self): + """ + Test method for adding scenario block to an already + constructed master problem, without cloning of the + first-stage variables. + """ + model_data = build_simple_model_data() + master_model = construct_initial_master_problem(model_data) + add_scenario_block_to_master_problem( + master_model=master_model, + scenario_idx=[0, 1], + param_realization=[0.6], + from_block=master_model.scenarios[0, 0], + clone_first_stage_components=False, + ) + + self.assertEqual(master_model.scenarios[0, 1].user_model.u.value, 0.6) + + nadj_var_zip = zip( + master_model.scenarios[0, 0].all_nonadjustable_variables, + master_model.scenarios[0, 1].all_nonadjustable_variables, + ) + for var_00, var_01 in nadj_var_zip: + self.assertIs( + var_00, + var_01, + msg=f"Variable {var_00.name} was cloned across scenario blocks.", + ) + + # the first-stage inequality and equality constraints + # should be cloned. we do this to avoid issues with the solver + # interfaces (such as issues with manipulating symbol maps) + nadj_ineq_con_zip = zip( + master_model.scenarios[0, 0].first_stage.inequality_cons.values(), + master_model.scenarios[0, 1].first_stage.inequality_cons.values(), + ) + for ineq_con_00, ineq_con_01 in nadj_ineq_con_zip: + self.assertIsNot( + ineq_con_00, + ineq_con_01, + msg=( + f"first-stage inequality con {ineq_con_00.name!r} was not " + "cloned across scenario blocks." + ), + ) + self.assertTrue( + ineq_con_00.active, + msg=( + "First-stage inequality constraint " + f"{ineq_con_00.name!r} should be active." + ), + ) + self.assertFalse( + ineq_con_01.active, + msg=( + "Duplicate first-stage inequality constraint " + f"{ineq_con_01.name!r} should be deactivated" + ), + ) + + nadj_eq_con_zip = zip( + master_model.scenarios[0, 0].first_stage.equality_cons.values(), + master_model.scenarios[0, 1].first_stage.equality_cons.values(), + ) + for eq_con_00, eq_con_01 in nadj_eq_con_zip: + self.assertIsNot( + eq_con_00, + eq_con_01, + msg=( + f"first-stage equality con {eq_con_00.name} was not cloned " + "across scenario blocks." + ), + ) + self.assertTrue( + eq_con_00.active, + msg=( + "First-stage equality constraint " + f"{eq_con_00.name!r} should be active." + ), + ) + self.assertFalse( + eq_con_01.active, + msg=( + "Duplicate first-stage equality constraint " + f"{eq_con_01.name!r} should be deactivated" + ), + ) + + +class TestNewConstructMasterFeasibilityProblem(unittest.TestCase): + """ + Test construction of the master feasibility problem. 
+ """ + + def build_simple_master_data(self): + """ + Construct master data-like object for feasibility problem + tests. + """ + model_data = build_simple_model_data() + master_model = construct_initial_master_problem(model_data) + add_scenario_block_to_master_problem( + master_model=master_model, + scenario_idx=[1, 0], + param_realization=[1], + from_block=master_model.scenarios[0, 0], + clone_first_stage_components=False, + ) + master_data = Bunch( + master_model=master_model, iteration=1, config=model_data.config + ) + + return master_data + + def test_construct_master_feasibility_problem_var_map(self): + """ + Test construction of feasibility problem var map. + """ + master_data = self.build_simple_master_data() + slack_model = construct_master_feasibility_problem(master_data) + + self.assertTrue(master_data.feasibility_problem_varmap) + for mvar, feasvar in master_data.feasibility_problem_varmap: + self.assertIs( + mvar, + master_data.master_model.find_component(feasvar), + msg=f"{mvar.name!r} is not same as find_component({feasvar.name!r})", + ) + self.assertIs( + feasvar, + slack_model.find_component(mvar), + msg=f"{feasvar.name!r} is not same as find_component({mvar.name!r})", + ) + + def test_construct_master_feasibility_problem_slack_vars(self): + """ + Check master feasibility slack variables. + """ + master_data = self.build_simple_master_data() + slack_model = construct_master_feasibility_problem(master_data) + + slack_var_blk = slack_model._core_add_slack_variables + scenario_10_blk = slack_model.scenarios[1, 0] + + # test a few of the constraints + slack_user_model_x3_lb_con = scenario_10_blk.second_stage.inequality_cons[ + "var_x3_certain_lower_bound_con" + ] + slack_user_model_x3_lb_con_var = slack_var_blk.find_component( + "'_slack_minus_scenarios[1,0].second_stage.inequality_cons[" + "var_x3_certain_lower_bound_con]'" + ) + assertExpressionsEqual( + self, + slack_user_model_x3_lb_con.body <= slack_user_model_x3_lb_con.upper, + -scenario_10_blk.user_model.x3 - slack_user_model_x3_lb_con_var <= 1000.0, + ) + self.assertEqual(slack_user_model_x3_lb_con_var.value, 0) + + slack_user_model_x3_ub_con = scenario_10_blk.second_stage.inequality_cons[ + "var_x3_certain_upper_bound_con" + ] + slack_user_model_x3_ub_con_var = slack_var_blk.find_component( + "'_slack_minus_scenarios[1,0].second_stage.inequality_cons[" + "var_x3_certain_upper_bound_con]'" + ) + assertExpressionsEqual( + self, + slack_user_model_x3_ub_con.body <= slack_user_model_x3_ub_con.upper, + scenario_10_blk.user_model.x3 - slack_user_model_x3_ub_con_var <= 1000.0, + ) + self.assertEqual(slack_user_model_x3_lb_con_var.value, 0) + + # constraint 'con' is violated when u = 0.8; + # check slack initialization + slack_user_model_con_var = slack_var_blk.find_component( + "'_slack_minus_scenarios[1,0].second_stage.inequality_cons" + "[ineq_con_con_upper_bound_con]'" + ) + self.assertEqual( + slack_user_model_con_var.value, + -master_data.master_model.scenarios[1, 0].user_model.con.uslack(), + ) + + def test_construct_master_feasibility_problem_obj(self): + """ + Check master feasibility slack variables. + """ + master_data = self.build_simple_master_data() + slack_model = construct_master_feasibility_problem(master_data) + + self.assertFalse(slack_model.epigraph_obj.active) + self.assertTrue(slack_model._core_add_slack_variables._slack_objective.active) + + +class TestDRPolishingProblem(unittest.TestCase): + """ + Tests for the PyROS DR polishing problem. 
+ """ + + def build_simple_master_data(self): + """ + Construct master data-like object for feasibility problem + tests. + """ + model_data = build_simple_model_data() + master_model = construct_initial_master_problem(model_data) + add_scenario_block_to_master_problem( + master_model=master_model, + scenario_idx=[1, 0], + param_realization=[0.1], + from_block=master_model.scenarios[0, 0], + clone_first_stage_components=False, + ) + master_data = Bunch( + master_model=master_model, iteration=1, config=model_data.config + ) + + return master_data + + def test_construct_dr_polishing_problem_nonadj_components(self): + """ + Test state of the nonadjustable components + of the DR polishing problem. + """ + master_data = self.build_simple_master_data() + polishing_model = construct_dr_polishing_problem(master_data) + eff_first_stage_vars = polishing_model.scenarios[ + 0, 0 + ].effective_var_partitioning.first_stage_variables + for effective_first_stage_var in eff_first_stage_vars: + self.assertTrue( + effective_first_stage_var.fixed, + msg=( + "Effective first-stage variable " + f"{effective_first_stage_var.name!r} " + "not fixed." + ), + ) + + nom_polishing_block = polishing_model.scenarios[0, 0] + self.assertTrue(nom_polishing_block.first_stage.epigraph_var.fixed) + self.assertFalse(nom_polishing_block.first_stage.decision_rule_vars[0][0].fixed) + self.assertFalse(nom_polishing_block.first_stage.decision_rule_vars[0][1].fixed) + + # ensure constraints in fixed vars were deactivated + self.assertFalse(nom_polishing_block.user_model.eq_con.active) + + # these have either unfixed DR or adjustable variables, + # so they should remain active + # self.assertTrue(nom_polishing_block.user_model.con.active) + self.assertTrue( + nom_polishing_block.second_stage.inequality_cons[ + "ineq_con_con_upper_bound_con" + ].active + ) + self.assertTrue(nom_polishing_block.second_stage.decision_rule_eqns[0].active) + + def test_construct_dr_polishing_problem_polishing_components(self): + """ + Test auxiliary Var/Constraint components of the DR polishing + problem. + """ + master_data = self.build_simple_master_data() + # DR order is 1, and x3 is second-stage. 
+ # to test fixing efficiency, fix the affine DR variable + decision_rule_vars = master_data.master_model.scenarios[ + 0, 0 + ].first_stage.decision_rule_vars + decision_rule_vars[0][1].fix() + polishing_model = construct_dr_polishing_problem(master_data) + nom_polishing_block = polishing_model.scenarios[0, 0] + + self.assertFalse(decision_rule_vars[0][0].fixed) + self.assertTrue(polishing_model.polishing_vars[0][0].fixed) + self.assertFalse(polishing_model.polishing_abs_val_lb_con_0[0].active) + self.assertFalse(polishing_model.polishing_abs_val_ub_con_0[0].active) + + # polishing components for the affine DR term should be + # fixed/deactivated since the DR variable was fixed + self.assertTrue(decision_rule_vars[0][1].fixed) + self.assertTrue(polishing_model.polishing_vars[0][1].fixed) + self.assertFalse(polishing_model.polishing_abs_val_lb_con_0[1].active) + self.assertFalse(polishing_model.polishing_abs_val_ub_con_0[1].active) + + # check initialization of polishing vars + self.assertEqual( + polishing_model.polishing_vars[0][0].value, + abs(nom_polishing_block.first_stage.decision_rule_vars[0][0].value), + ) + self.assertEqual( + polishing_model.polishing_vars[0][1].value, + abs(nom_polishing_block.first_stage.decision_rule_vars[0][1].value), + ) + + assertExpressionsEqual( + self, + polishing_model.polishing_obj.expr, + polishing_model.polishing_vars[0][0] + polishing_model.polishing_vars[0][1], + ) + self.assertEqual(polishing_model.polishing_obj.sense, minimize) + + def test_construct_dr_polishing_problem_objectives(self): + """ + Test states of the Objective components of the DR + polishing model. + """ + master_data = self.build_simple_master_data() + polishing_model = construct_dr_polishing_problem(master_data) + self.assertFalse(polishing_model.epigraph_obj.active) + self.assertTrue(polishing_model.polishing_obj.active) + + def test_construct_dr_polishing_problem_params_zero(self): + """ + Check that DR polishing fixes/deactivates components + for DR expression terms where the product of uncertain + parameters is below tolerance. + """ + master_data = self.build_simple_master_data() + + # trigger fixing of the corresponding polishing vars + master_data.master_model.scenarios[0, 0].user_model.u.set_value(1e-10) + master_data.master_model.scenarios[1, 0].user_model.u.set_value(1e-11) + + polishing_model = construct_dr_polishing_problem(master_data) + + dr_vars = polishing_model.scenarios[0, 0].first_stage.decision_rule_vars + + # since static DR terms should not be polished + self.assertTrue(polishing_model.polishing_vars[0][0].fixed) + self.assertFalse(polishing_model.polishing_abs_val_lb_con_0[0].active) + self.assertFalse(polishing_model.polishing_abs_val_ub_con_0[0].active) + + # affine term should be fixed to 0, + # since the uncertain param values are small enough. + # polishing constraints are deactivated since we don't need them + self.assertTrue(dr_vars[0][1].fixed) + self.assertEqual(dr_vars[0][1].value, 0) + self.assertTrue(polishing_model.polishing_vars[0][1].fixed) + self.assertFalse(polishing_model.polishing_abs_val_lb_con_0[1].active) + self.assertFalse(polishing_model.polishing_abs_val_ub_con_0[1].active) + + +class TestHigherOrderDecisionRuleEfficiency(unittest.TestCase): + """ + Test efficiency for decision rules. + """ + + def test_higher_order_decision_rule_efficiency(self): + """ + Test higher-order decision rule efficiency. 
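+
+ For the order-2 rule used here (DR variables d_0, d_1, d_2
+ in one uncertain parameter), the schedule asserted below is:
+
+ iteration 0: d_1, d_2 fixed (static rule)
+ iterations 1 to n_q: d_2 fixed (affine rule)
+ later iterations: none fixed (full quadratic rule)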
+ """ + model_data = build_simple_model_data(decision_rule_order=2) + master_model = construct_initial_master_problem(model_data) + master_data = Bunch( + master_model=master_model, iteration=0, config=model_data.config + ) + decision_rule_vars = master_data.master_model.scenarios[ + 0, 0 + ].first_stage.decision_rule_vars[0] + + for iter_num in range(4): + master_data.iteration = iter_num + higher_order_decision_rule_efficiency(master_data) + self.assertFalse( + decision_rule_vars[0].fixed, + msg=( + f"DR Var {decision_rule_vars[1].name!r} should not " + f"be fixed by efficiency in iteration {iter_num}" + ), + ) + if iter_num == 0: + self.assertTrue( + decision_rule_vars[1].fixed, + msg=( + f"DR Var {decision_rule_vars[1].name!r} should " + f"be fixed by efficiency in iteration {iter_num}" + ), + ) + self.assertTrue( + decision_rule_vars[2].fixed, + msg=( + f"DR Var {decision_rule_vars[2].name!r} should " + f"be fixed by efficiency in iteration {iter_num}" + ), + ) + elif iter_num <= len(master_data.config.uncertain_params): + self.assertFalse( + decision_rule_vars[1].fixed, + msg=( + f"DR Var {decision_rule_vars[1].name!r} should not " + f"be fixed by efficiency in iteration {iter_num}" + ), + ) + self.assertTrue( + decision_rule_vars[2].fixed, + msg=( + f"DR Var {decision_rule_vars[2].name!r} should " + f"be fixed by efficiency in iteration {iter_num}" + ), + ) + else: + self.assertFalse( + decision_rule_vars[1].fixed, + msg=( + f"DR Var {decision_rule_vars[1].name!r} should not " + f"be fixed by efficiency in iteration {iter_num}" + ), + ) + self.assertFalse( + decision_rule_vars[2].fixed, + msg=( + f"DR Var {decision_rule_vars[2].name!r} should not " + f"be fixed by efficiency in iteration {iter_num}" + ), + ) + + +class TestSolveMaster(unittest.TestCase): + """ + Test method for solving master problem + """ + + @unittest.skipUnless(baron_available, "Global NLP solver is not available.") + def test_solve_master(self): + model_data = build_simple_model_data() + model_data.timing = TimingData() + baron = SolverFactory("baron") + model_data.config.update( + dict( + local_solver=baron, + global_solver=baron, + backup_local_solvers=[], + backup_global_solvers=[], + tee=False, + ) + ) + master_data = MasterProblemData(model_data) + with time_code(master_data.timing, "main", is_main_timer=True): + master_soln = master_data.solve_master() + self.assertEqual(len(master_soln.master_results_list), 1) + self.assertIsNone(master_soln.feasibility_problem_results) + self.assertIsNone(master_soln.pyros_termination_condition) + self.assertIs(master_soln.master_model, master_data.master_model) + self.assertEqual( + master_soln.master_results_list[0].solver.termination_condition, + TerminationCondition.optimal, + msg=( + "Could not solve simple master problem with solve_master " + "function." + ), + ) + + @unittest.skipUnless(baron_available, "Global NLP solver is not available") + def test_solve_master_timeout_on_master(self): + """ + Test method for solution of master problems times out + on feasibility problem. 
+ """ + model_data = build_simple_model_data() + model_data.timing = TimingData() + baron = SolverFactory("baron") + model_data.config.update( + dict( + local_solver=baron, + global_solver=baron, + backup_local_solvers=[], + backup_global_solvers=[], + tee=False, + time_limit=1, + ) + ) + master_data = MasterProblemData(model_data) + with time_code(master_data.timing, "main", is_main_timer=True): + time.sleep(1) + master_soln = master_data.solve_master() + self.assertIsNone(master_soln.feasibility_problem_results) + self.assertEqual(master_soln.master_model, master_data.master_model) + self.assertEqual(len(master_soln.master_results_list), 1) + self.assertEqual( + master_soln.master_results_list[0].solver.termination_condition, + TerminationCondition.optimal, + msg=( + "Could not solve simple master problem with solve_master " + "function." + ), + ) + self.assertEqual( + master_soln.pyros_termination_condition, + pyrosTerminationCondition.time_out, + ) + + @unittest.skipUnless(baron_available, "Global NLP solver is not available") + def test_solve_master_timeout_on_master_feasibility(self): + """ + Test method for solution of master problems times out + on feasibility problem. + """ + model_data = build_simple_model_data() + model_data.timing = TimingData() + baron = SolverFactory("baron") + model_data.config.update( + dict( + local_solver=baron, + global_solver=baron, + backup_local_solvers=[], + backup_global_solvers=[], + tee=False, + time_limit=1, + ) + ) + master_data = MasterProblemData(model_data) + add_scenario_block_to_master_problem( + master_data.master_model, + scenario_idx=[1, 0], + param_realization=[0.6], + from_block=master_data.master_model.scenarios[0, 0], + clone_first_stage_components=False, + ) + master_data.iteration = 1 + with time_code(master_data.timing, "main", is_main_timer=True): + time.sleep(1) + master_soln = master_data.solve_master() + self.assertIsNotNone(master_soln.feasibility_problem_results) + self.assertFalse(master_soln.master_results_list) + self.assertIs(master_soln.master_model, master_data.master_model) + self.assertEqual( + master_soln.pyros_termination_condition, + pyrosTerminationCondition.time_out, + ) + + +class TestPolishDRVars(unittest.TestCase): + """ + Test DR polishing subroutine. + """ + + @unittest.skipUnless( + baron_license_is_valid, "Global NLP solver is not available and licensed." + ) + def test_polish_dr_vars(self): + model_data = build_simple_model_data() + model_data.timing = TimingData() + baron = SolverFactory("baron") + model_data.config.update( + dict( + local_solver=baron, + global_solver=baron, + backup_local_solvers=[], + backup_global_solvers=[], + tee=False, + ) + ) + master_data = MasterProblemData(model_data) + add_scenario_block_to_master_problem( + master_data.master_model, + scenario_idx=[1, 0], + param_realization=[0.6], + from_block=master_data.master_model.scenarios[0, 0], + clone_first_stage_components=False, + ) + master_data.iteration = 1 + + master_data.timing = TimingData() + with time_code(master_data.timing, "main", is_main_timer=True): + master_soln = master_data.solve_master() + self.assertEqual( + master_soln.master_results_list[0].solver.termination_condition, + TerminationCondition.optimal, + ) + + results, success = master_data.solve_dr_polishing() + self.assertEqual( + results.solver.termination_condition, + TerminationCondition.optimal, + msg="Minimize dr norm did not solve to optimality.", + ) + self.assertTrue( + success, msg=f"DR polishing success {success}, expected True." 
+ ) + + +if __name__ == "__main__": + unittest.main() diff --git a/pyomo/contrib/pyros/tests/test_preprocessor.py b/pyomo/contrib/pyros/tests/test_preprocessor.py new file mode 100644 index 00000000000..0f5f131a4a5 --- /dev/null +++ b/pyomo/contrib/pyros/tests/test_preprocessor.py @@ -0,0 +1,2785 @@ +# ___________________________________________________________________________ +# +# Pyomo: Python Optimization Modeling Objects +# Copyright (c) 2008-2024 +# National Technology and Engineering Solutions of Sandia, LLC +# Under the terms of Contract DE-NA0003525 with National Technology and +# Engineering Solutions of Sandia, LLC, the U.S. Government retains certain +# rights in this software. +# This software is distributed under the 3-clause BSD License. +# ___________________________________________________________________________ + +""" +Tests for the PyROS preprocessor. +""" + + +import logging +import textwrap +import pyomo.common.unittest as unittest + +from pyomo.common.collections import Bunch, ComponentSet, ComponentMap +from pyomo.common.dependencies import numpy_available +from pyomo.common.dependencies import scipy as sp, scipy_available +from pyomo.common.dependencies import attempt_import +from pyomo.common.log import LoggingIntercept +from pyomo.core.base import ( + Any, + Var, + Constraint, + Objective, + ConcreteModel, + Param, + RangeSet, + maximize, + Block, +) +from pyomo.core.base.set_types import NonNegativeReals, NonPositiveReals, Reals +from pyomo.core.expr import log, sin, exp, RangedExpression +from pyomo.core.expr.compare import assertExpressionsEqual + +from pyomo.contrib.pyros.util import ( + ModelData, + ObjectiveType, + get_effective_var_partitioning, + get_var_certain_uncertain_bounds, + get_var_bound_pairs, + turn_nonadjustable_var_bounds_to_constraints, + turn_adjustable_var_bounds_to_constraints, + standardize_inequality_constraints, + standardize_equality_constraints, + standardize_active_objective, + declare_objective_expressions, + add_decision_rule_constraints, + add_decision_rule_variables, + reformulate_state_var_independent_eq_cons, + setup_working_model, + VariablePartitioning, + preprocess_model_data, + log_model_statistics, +) + +parameterized, param_available = attempt_import('parameterized') + +if not (numpy_available and scipy_available and param_available): + raise unittest.SkipTest( + 'PyROS preprocessor unit tests require parameterized, numpy, and scipy' + ) +parameterized = parameterized.parameterized + + +logger = logging.getLogger(__name__) + + +class TestEffectiveVarPartitioning(unittest.TestCase): + """ + Test method(s) for identification of nonadjustable variables + which are not necessarily in the user-provided sequence of + first-stage variables. + """ + + def build_simple_test_model_data(self): + """ + Build simple model for effective variable partitioning tests. 
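+
+ The constraints are arranged so that z, y[1], and y[2]
+ should be uniquely determined by the nonadjustable
+ variables (c1 gives z = x1, c2 then gives y[1], and c3
+ then gives y[2]), while c4 and c5 leave y[3] and y[4]
+ adjustable.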
+ """ + m = ConcreteModel() + m.x1 = Var(bounds=(2, 2)) + m.x2 = Var() + m.z = Var() + m.y = Var(range(1, 5)) + m.q = Param(mutable=True, initialize=1) + + m.c0 = Constraint(expr=m.q + m.x1 + m.z == 0) + m.c1 = Constraint(expr=(0, m.x1 - m.z, 0)) + m.c2 = Constraint(expr=m.x1**2 - m.z + m.y[1] == 0) + m.c2_dupl = Constraint(expr=m.x1**2 - m.z + m.y[1] == 0) + m.c3 = Constraint(expr=m.x1**3 + m.y[1] + 2 * m.y[2] == 0) + m.c4 = Constraint(expr=m.x2**2 + m.y[1] + m.y[2] + m.y[3] + m.y[4] == 0) + m.c5 = Constraint(expr=m.x2 + 2 * m.y[2] + m.y[3] + 2 * m.y[4] == 0) + + model_data = Bunch() + model_data.config = Bunch() + model_data.working_model = ConcreteModel() + model_data.working_model.user_model = mdl = m.clone() + model_data.working_model.uncertain_params = [mdl.q] + + user_var_partitioning = model_data.working_model.user_var_partitioning = Bunch() + user_var_partitioning.first_stage_variables = [mdl.x1, mdl.x2] + user_var_partitioning.second_stage_variables = [mdl.z] + user_var_partitioning.state_variables = list(mdl.y.values()) + + return model_data + + def test_effective_partitioning_system(self): + """ + Test effective partitioning on an example system of + constraints. + """ + model_data = self.build_simple_test_model_data() + m = model_data.working_model.user_model + + config = model_data.config + config.decision_rule_order = 0 + config.progress_logger = logger + + expected_partitioning = { + "first_stage_variables": [m.x1, m.x2, m.z, m.y[1], m.y[2]], + "second_stage_variables": [], + "state_variables": [m.y[3], m.y[4]], + } + for dr_order in [0, 1, 2]: + config.decision_rule_order = dr_order + actual_partitioning = get_effective_var_partitioning(model_data=model_data) + for vartype, expected_vars in expected_partitioning.items(): + actual_vars = getattr(actual_partitioning, vartype) + self.assertEqual( + ComponentSet(expected_vars), + ComponentSet(actual_vars), + msg=( + f"Effective {vartype!r} are not as expected " + f"for decision rule order {config.decision_rule_order}. " + "\n" + f"Expected: {[var.name for var in expected_vars]}" + "\n" + f"Actual: {[var.name for var in actual_vars]}" + ), + ) + + # linear coefficient below tolerance; + # that should prevent pretriangularization + m.c2.set_value(m.x1**2 + m.z + 1e-10 * m.y[1] == 0) + m.c2_dupl.set_value(m.x1**2 + m.z + 1e-10 * m.y[1] == 0) + expected_partitioning = { + "first_stage_variables": [m.x1, m.x2, m.z], + "second_stage_variables": [], + "state_variables": list(m.y.values()), + } + for dr_order in [0, 1, 2]: + config.decision_rule_order = dr_order + actual_partitioning = get_effective_var_partitioning(model_data) + for vartype, expected_vars in expected_partitioning.items(): + actual_vars = getattr(actual_partitioning, vartype) + self.assertEqual( + ComponentSet(expected_vars), + ComponentSet(actual_vars), + msg=( + f"Effective {vartype!r} are not as expected " + f"for decision rule order {config.decision_rule_order}. 
" + "\n" + f"Expected: {[var.name for var in expected_vars]}" + "\n" + f"Actual: {[var.name for var in actual_vars]}" + ), + ) + + # put linear coefs above tolerance again: + # original behavior expected + m.c2.set_value(1e-6 * m.y[1] + m.x1**2 + m.z + 1e-10 * m.y[1] == 0) + m.c2_dupl.set_value(1e-6 * m.y[1] + m.x1**2 + m.z + 1e-10 * m.y[1] == 0) + expected_partitioning = { + "first_stage_variables": [m.x1, m.x2, m.z, m.y[1], m.y[2]], + "second_stage_variables": [], + "state_variables": [m.y[3], m.y[4]], + } + for dr_order in [0, 1, 2]: + config.decision_rule_order = dr_order + actual_partitioning = get_effective_var_partitioning(model_data) + for vartype, expected_vars in expected_partitioning.items(): + actual_vars = getattr(actual_partitioning, vartype) + self.assertEqual( + ComponentSet(expected_vars), + ComponentSet(actual_vars), + msg=( + f"Effective {vartype!r} are not as expected " + f"for decision rule order {config.decision_rule_order}. " + "\n" + f"Expected: {[var.name for var in expected_vars]}" + "\n" + f"Actual: {[var.name for var in actual_vars]}" + ), + ) + + # introducing this simple nonlinearity prevents + # y[2] from being identified as pretriangular + expected_partitioning = { + "first_stage_variables": [m.x1, m.x2, m.z, m.y[1]], + "second_stage_variables": [], + "state_variables": [m.y[2], m.y[3], m.y[4]], + } + m.c3.set_value(m.x1**3 + m.y[1] + 2 * m.y[1] * m.y[2] == 0) + for dr_order in [0, 1, 2]: + config.decision_rule_order = dr_order + actual_partitioning = get_effective_var_partitioning(model_data) + for vartype, expected_vars in expected_partitioning.items(): + actual_vars = getattr(actual_partitioning, vartype) + self.assertEqual( + ComponentSet(expected_vars), + ComponentSet(actual_vars), + msg=( + f"Effective {vartype!r} are not as expected " + f"for decision rule order {config.decision_rule_order}. " + "\n" + f"Expected: {[var.name for var in expected_vars]}" + "\n" + f"Actual: {[var.name for var in actual_vars]}" + ), + ) + + # fixing y[2] should make y[2] nonadjustable regardless + m.y[2].fix(10) + expected_partitioning = { + "first_stage_variables": [m.x1, m.x2, m.z, m.y[1], m.y[2]], + "second_stage_variables": [], + "state_variables": [m.y[3], m.y[4]], + } + for dr_order in [0, 1, 2]: + config.decision_rule_order = dr_order + actual_partitioning = get_effective_var_partitioning(model_data) + for vartype, expected_vars in expected_partitioning.items(): + actual_vars = getattr(actual_partitioning, vartype) + self.assertEqual( + ComponentSet(expected_vars), + ComponentSet(actual_vars), + msg=( + f"Effective {vartype!r} are not as expected " + f"for decision rule order {config.decision_rule_order}. " + "\n" + f"Expected: {[var.name for var in expected_vars]}" + "\n" + f"Actual: {[var.name for var in actual_vars]}" + ), + ) + + def test_effective_partitioning_modified_linear_system(self): + """ + Test effective partitioning on modified system of equations. 
+ """ + model_data = self.build_simple_test_model_data() + m = model_data.working_model.user_model + + # now the second-stage variable can't be determined uniquely; + # can't pretriangularize this unless z already known to be + # nonadjustable + m.c1.set_value((0, m.x1 + m.z**2, 0)) + + config = model_data.config + config.decision_rule_order = 0 + config.progress_logger = logger + + expected_partitioning_static_dr = { + "first_stage_variables": [m.x1, m.x2, m.z, m.y[1], m.y[2]], + "second_stage_variables": [], + "state_variables": [m.y[3], m.y[4]], + } + actual_partitioning_static_dr = get_effective_var_partitioning(model_data) + for vartype, expected_vars in expected_partitioning_static_dr.items(): + actual_vars = getattr(actual_partitioning_static_dr, vartype) + self.assertEqual( + ComponentSet(expected_vars), + ComponentSet(actual_vars), + msg=( + f"Effective {vartype!r} are not as expected " + f"for decision rule order {config.decision_rule_order}. " + "\n" + f"Expected: {[var.name for var in expected_vars]}" + "\n" + f"Actual: {[var.name for var in actual_vars]}" + ), + ) + + config.decision_rule_order = 1 + expected_partitioning_nonstatic_dr = { + "first_stage_variables": [m.x1, m.x2], + "second_stage_variables": [m.z], + "state_variables": list(m.y.values()), + } + for dr_order in [1, 2]: + actual_partitioning_nonstatic_dr = get_effective_var_partitioning( + model_data + ) + for vartype, expected_vars in expected_partitioning_nonstatic_dr.items(): + actual_vars = getattr(actual_partitioning_nonstatic_dr, vartype) + self.assertEqual( + ComponentSet(expected_vars), + ComponentSet(actual_vars), + msg=( + f"Effective {vartype!r} are not as expected " + f"for decision rule order {config.decision_rule_order}. " + "\n" + f"Expected: {[var.name for var in expected_vars]}" + "\n" + f"Actual: {[var.name for var in actual_vars]}" + ), + ) + + +class TestSetupModelData(unittest.TestCase): + """ + Test method for setting up the working model works as expected. + """ + + def build_test_model_data(self): + """ + Build model data object for the preprocessor. 
+ """ + model_data = Bunch() + model_data.config = Bunch() + model_data.original_model = m = ConcreteModel() + + # PARAMS: one uncertain, one certain + m.p = Param(initialize=2, mutable=True) + m.q = Param(initialize=4.5, mutable=True) + + # first-stage variables + m.x1 = Var(bounds=(0, m.q), initialize=1) + m.x2 = Var(domain=NonNegativeReals, bounds=[m.p, m.p], initialize=m.p) + + # second-stage variables + m.z1 = Var(domain=RangeSet(2, 4, 0), bounds=[-m.p, m.q], initialize=2) + m.z2 = Var(bounds=(-2 * m.q**2, None), initialize=1) + m.z3 = Var(bounds=(-m.q, 0), initialize=0) + m.z4 = Var(initialize=5) + m.z5 = Var(domain=NonNegativeReals, bounds=(m.q, m.q)) + + # state variables + m.y1 = Var(domain=NonNegativeReals, initialize=0) + m.y2 = Var(initialize=10) + # note: y3 out-of-scope, as it will not appear in the active + # Objective and Constraint objects + m.y3 = Var(domain=RangeSet(0, 1, 0), bounds=(0.2, 0.5)) + + # fix some variables + m.z4.fix() + m.y2.fix() + + # EQUALITY CONSTRAINTS + m.eq1 = Constraint(expr=m.q * (m.z3 + m.x2) == 0) + m.eq2 = Constraint(expr=m.x1 - m.z1 == 0) + m.eq3 = Constraint(expr=m.x1**2 + m.x2 + m.p * m.z2 == m.p) + m.eq4 = Constraint(expr=m.z3 + m.y1 == m.q) + + # INEQUALITY CONSTRAINTS + m.ineq1 = Constraint(expr=(-m.p, m.x1 + m.z1, exp(m.q))) + m.ineq2 = Constraint(expr=(0, m.x1 + m.x2, 10)) + m.ineq3 = Constraint(expr=(2 * m.q, 2 * (m.z3 + m.y1), 2 * m.q)) + m.ineq4 = Constraint(expr=-m.q <= m.y2**2 + log(m.y2)) + + # out of scope: deactivated + m.ineq5 = Constraint(expr=m.y3 <= m.q) + m.ineq5.deactivate() + + # OBJECTIVE + # contains a rich combination of first-stage and second-stage terms + m.obj = Objective( + expr=( + m.p**2 + + 2 * m.p * m.q + + log(m.x1) + + 2 * m.p * m.x1 + + m.q**2 * m.x1 + + m.p**3 * (m.z1 + m.z2 + m.y1) + + m.z4 + + m.z5 + ) + ) + + # set up the var partitioning + user_var_partitioning = VariablePartitioning( + first_stage_variables=[m.x1, m.x2], + second_stage_variables=[m.z1, m.z2, m.z3, m.z4, m.z5], + # note: y3 out of scope, so excluded + state_variables=[m.y1, m.y2], + ) + + return model_data, user_var_partitioning + + def test_setup_working_model(self): + """ + Test method for setting up the working model is as expected. 
+ """ + model_data, user_var_partitioning = self.build_test_model_data() + om = model_data.original_model + config = model_data.config + config.uncertain_params = [om.q] + + setup_working_model(model_data, user_var_partitioning) + working_model = model_data.working_model + + # active constraints + m = model_data.working_model.user_model + self.assertEqual( + ComponentSet(working_model.original_active_equality_cons), + ComponentSet([m.eq1, m.eq2, m.eq3, m.eq4]), + ) + self.assertEqual( + ComponentSet(working_model.original_active_inequality_cons), + ComponentSet([m.ineq1, m.ineq2, m.ineq3, m.ineq4]), + ) + + # active objective + self.assertTrue(m.obj.active) + + # user var partitioning + up = working_model.user_var_partitioning + self.assertEqual( + ComponentSet(up.first_stage_variables), ComponentSet([m.x1, m.x2]) + ) + self.assertEqual( + ComponentSet(up.second_stage_variables), + ComponentSet([m.z1, m.z2, m.z3, m.z4, m.z5]), + ) + self.assertEqual(ComponentSet(up.state_variables), ComponentSet([m.y1, m.y2])) + + # uncertain params + self.assertEqual( + ComponentSet(working_model.uncertain_params), ComponentSet([m.q]) + ) + + # ensure original model unchanged + self.assertFalse( + hasattr(om, "util"), msg="Original model still has temporary util block" + ) + + # constraint partitioning initialization + self.assertFalse(working_model.first_stage.inequality_cons) + self.assertFalse(working_model.first_stage.equality_cons) + self.assertFalse(working_model.second_stage.inequality_cons) + self.assertFalse(working_model.second_stage.equality_cons) + + +class TestResolveVarBounds(unittest.TestCase): + """ + Tests for resolution of variable bounds. + """ + + def test_resolve_var_bounds(self): + """ + Test resolve variable bounds. + """ + m = ConcreteModel() + m.q1 = Param(initialize=1, mutable=True) + m.q2 = Param(initialize=1, mutable=True) + m.p1 = Param(initialize=5, mutable=True) + m.p2 = Param(initialize=0, mutable=True) + m.z1 = Var(bounds=(0, 1)) + m.z2 = Var(bounds=(1, 1)) + m.z3 = Var(domain=NonNegativeReals, bounds=(2, 4)) + m.z4 = Var(domain=NonNegativeReals, bounds=(m.q1, 0)) + m.z5 = Var(domain=RangeSet(2, 4, 0), bounds=(4, 6)) + m.z6 = Var(domain=NonNegativeReals, bounds=(m.q1, m.q1)) + m.z7 = Var(domain=NonNegativeReals, bounds=(m.q1, 1 * m.q1)) + m.z8 = Var(domain=RangeSet(0, 5, 0), bounds=[m.q1, m.q2]) + m.z9 = Var(domain=RangeSet(0, 5, 0), bounds=[m.q1, m.p1]) + m.z10 = Var(domain=RangeSet(0, 5, 0), bounds=[m.q1, m.p2]) + + # useful for checking domains later + original_var_domains = ComponentMap( + ( + (var, var.domain) + for var in (m.z1, m.z2, m.z3, m.z4, m.z5, m.z6, m.z7, m.z8, m.z9, m.z10) + ) + ) + + expected_bounds = ( + (m.z1, (0, None, 1), (None, None, None)), + (m.z2, (None, 1, None), (None, None, None)), + (m.z3, (2, None, 4), (None, None, None)), + (m.z4, (None, 0, None), (m.q1, None, None)), + (m.z5, (None, 4, None), (None, None, None)), + (m.z6, (0, None, None), (None, m.q1, None)), + # the 1 * q expression is simplified to just q + # when variable bounds are specified + (m.z7, (0, None, None), (None, m.q1, None)), + (m.z8, (0, None, 5), (m.q1, None, m.q2)), + (m.z9, (0, None, m.p1), (m.q1, None, None)), + (m.z10, (0, None, m.p2), (m.q1, None, None)), + ) + for var, exp_cert_bounds, exp_uncert_bounds in expected_bounds: + actual_cert_bounds, actual_uncert_bounds = get_var_certain_uncertain_bounds( + var, [m.q1, m.q2] + ) + for btype, exp_bound in zip(("lower", "eq", "upper"), exp_cert_bounds): + actual_bound = getattr(actual_cert_bounds, btype) + self.assertIs( + 
exp_bound, + actual_bound, + msg=( + f"Resolved certain {btype} bound for variable " + f"{var.name!r} is not as expected. " + "\n Expected certain bounds: " + f"lower={str(exp_cert_bounds[0])}, " + f"eq={str(exp_cert_bounds[1])}, " + f"upper={str(exp_cert_bounds[2])} " + "\n Actual certain bounds: " + f"lower={str(actual_cert_bounds.lower)}, " + f"eq={str(actual_cert_bounds.eq)}, " + f"upper={str(actual_cert_bounds.upper)} " + ), + ) + + for btype, exp_bound in zip(("lower", "eq", "upper"), exp_uncert_bounds): + actual_bound = getattr(actual_uncert_bounds, btype) + self.assertIs( + exp_bound, + actual_bound, + msg=( + f"Resolved uncertain {btype} bound for variable " + f"{var.name!r} is not as expected. " + "\n Expected uncertain bounds: " + f"lower={str(exp_uncert_bounds[0])}, " + f"eq={str(exp_uncert_bounds[1])}, " + f"upper={str(exp_uncert_bounds[2])} " + "\n Actual uncertain bounds: " + f"lower={str(actual_uncert_bounds.lower)}, " + f"eq={str(actual_uncert_bounds.eq)}, " + f"upper={str(actual_uncert_bounds.upper)} " + ), + ) + + # the bounds resolution method should leave domains unaltered + for var, orig_domain in original_var_domains.items(): + self.assertIs( + var.domain, + orig_domain, + msg=( + f"Domain for var {var.name!r} appears to have been changed " + f"from {orig_domain} to {var.domain} " + "by the bounds resolution method " + f"{get_var_certain_uncertain_bounds.__name__!r}." + ), + ) + + +class TestTurnVarBoundsToConstraints(unittest.TestCase): + """ + Tests for reformulating variable bounds to explicit + inequality/equality constraints. + """ + + def build_simple_test_model_data(self): + """ + Build simple model data object for turning bounds + to constraints. + """ + model_data = Bunch() + model_data.config = Bunch() + + model_data.working_model = ConcreteModel() + model_data.working_model.user_model = m = ConcreteModel() + + m.q1 = Param(initialize=1, mutable=True) + m.q2 = Param(initialize=1, mutable=True) + m.p1 = Param(initialize=5, mutable=True) + m.p2 = Param(initialize=0, mutable=True) + + m.z1 = Var(bounds=(None, None)) + m.z2 = Var(bounds=(1, 1)) + m.z3 = Var(domain=NonNegativeReals, bounds=(2, m.p1)) + m.z4 = Var(domain=NonNegativeReals, bounds=(m.q1, 0)) + m.z5 = Var(domain=RangeSet(2, 4, 0), bounds=(4, m.q2)) + m.z6 = Var(domain=NonNegativeReals, bounds=(m.q1, m.q1)) + m.z7 = Var(domain=NonPositiveReals, bounds=(m.q1, 1 * m.q1)) + m.z8 = Var(domain=RangeSet(0, 5, 0), bounds=[m.q1, m.q2]) + m.z9 = Var(domain=RangeSet(0, 5, 0), bounds=[m.q1, m.p1]) + m.z10 = Var(domain=RangeSet(0, 5, 0), bounds=[m.q1, m.p2]) + + model_data.working_model.uncertain_params = [m.q1, m.q2] + + model_data.working_model.second_stage = Block() + model_data.working_model.second_stage.inequality_cons = Constraint(Any) + model_data.working_model.second_stage.equality_cons = Constraint(Any) + model_data.separation_priority_order = dict() + + return model_data + + def test_turn_nonadjustable_bounds_to_constraints(self): + """ + Test subroutine for reformulating bounds on nonadjustable + variables to constraints. + + This subroutine should reformulate only the uncertain + declared bounds for the nonadjustable variables. + All other variable bounds should be left unchanged. + All variable domains should remain unchanged. 
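+
+ For example, z4 is declared with bounds (q1, 0), where q1
+ is uncertain: the uncertain lower bound should be recast
+ as the second-stage inequality -z4 <= -q1, while the
+ certain upper bound of 0 remains declared on the variable.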
+ """ + model_data = self.build_simple_test_model_data() + + working_model = model_data.working_model + m = model_data.working_model.user_model + + # mock effective partitioning for testing + ep = model_data.working_model.effective_var_partitioning = Bunch() + ep.first_stage_variables = [m.z1, m.z2, m.z3, m.z4, m.z5, m.z6, m.z7, m.z8] + ep.second_stage_variables = [m.z9] + ep.state_variables = [m.z10] + effective_first_stage_var_set = ComponentSet(ep.first_stage_variables) + + original_var_domains_and_bounds = ComponentMap( + (var, (var.domain, get_var_bound_pairs(var)[1])) + for var in model_data.working_model.user_model.component_data_objects(Var) + ) + + # expected final bounds and bound constraint types + expected_final_nonadj_var_bounds = ComponentMap( + ( + (m.z1, (get_var_bound_pairs(m.z1)[1], [])), + (m.z2, (get_var_bound_pairs(m.z2)[1], [])), + (m.z3, (get_var_bound_pairs(m.z3)[1], [])), + (m.z4, ((None, 0), ["lower"])), + (m.z5, ((4, None), ["upper"])), + (m.z6, ((None, None), ["eq"])), + (m.z7, ((None, None), ["eq"])), + (m.z8, ((None, None), ["lower", "upper"])), + ) + ) + + turn_nonadjustable_var_bounds_to_constraints(model_data) + + for var, (orig_domain, orig_bounds) in original_var_domains_and_bounds.items(): + # all var domains should remain unchanged + self.assertIs( + var.domain, + orig_domain, + msg=( + f"Domain of variable {var.name!r} was changed from " + f"{orig_domain} to {var.domain} by " + f"{turn_nonadjustable_var_bounds_to_constraints.__name__!r}. " + ), + ) + _, (final_lb, final_ub) = get_var_bound_pairs(var) + + if var not in effective_first_stage_var_set: + # these are the adjustable variables. + # bounds should not have been changed + self.assertIs( + orig_bounds[0], + final_lb, + msg=( + f"Lower bound for adjustable variable {var.name!r} appears to " + f"have been changed from {orig_bounds[0]} to {final_lb}." + ), + ) + self.assertIs( + orig_bounds[1], + final_ub, + msg=( + f"Upper bound for adjustable variable {var.name!r} appears to " + f"have been changed from {orig_bounds[1]} to {final_ub}." + ), + ) + else: + # these are the nonadjustable variables. + # only the uncertain bounds should have been + # changed, and accompanying constraints added + + expected_bounds, con_bound_types = expected_final_nonadj_var_bounds[var] + expected_lb, expected_ub = expected_bounds + + self.assertIs( + expected_lb, + final_lb, + msg=( + f"Lower bound for nonadjustable variable {var.name!r} " + f"should be {expected_lb}, but was " + f"found to be {final_lb}." + ), + ) + self.assertIs( + expected_ub, + final_ub, + msg=( + f"Upper bound for nonadjustable variable {var.name!r} " + f"should be {expected_ub}, but was " + f"found to be {final_ub}." 
+ ),
+ )
+
+ second_stage = working_model.second_stage
+
+ # verify bound constraint expressions
+ assertExpressionsEqual(
+ self,
+ second_stage.inequality_cons["var_z4_uncertain_lower_bound_con"].expr,
+ -m.z4 <= -m.q1,
+ )
+ assertExpressionsEqual(
+ self,
+ second_stage.inequality_cons["var_z5_uncertain_upper_bound_con"].expr,
+ m.z5 <= m.q2,
+ )
+ assertExpressionsEqual(
+ self,
+ second_stage.equality_cons["var_z6_uncertain_eq_bound_con"].expr,
+ m.z6 == m.q1,
+ )
+ assertExpressionsEqual(
+ self,
+ second_stage.equality_cons["var_z7_uncertain_eq_bound_con"].expr,
+ m.z7 == m.q1,
+ )
+ assertExpressionsEqual(
+ self,
+ second_stage.inequality_cons["var_z8_uncertain_lower_bound_con"].expr,
+ -m.z8 <= -m.q1,
+ )
+ assertExpressionsEqual(
+ self,
+ second_stage.inequality_cons["var_z8_uncertain_upper_bound_con"].expr,
+ m.z8 <= m.q2,
+ )
+
+ # check constraint partitioning
+ self.assertEqual(
+ len(working_model.second_stage.inequality_cons),
+ 4,
+ msg="Number of second-stage inequalities not as expected.",
+ )
+ self.assertEqual(
+ len(working_model.second_stage.equality_cons),
+ 2,
+ msg="Number of second-stage equalities not as expected.",
+ )
+
+ # check separation priorities
+ for con_name in second_stage.inequality_cons:
+ self.assertEqual(
+ model_data.separation_priority_order[con_name],
+ 0,
+ msg=(
+ f"Separation priority for entry {con_name!r} of second-stage "
+ "inequalities not as expected."
+ ),
+ )
+
+ def test_turn_adjustable_bounds_to_constraints(self):
+ """
+ Test subroutine for reformulating domains and bounds
+ on adjustable variables to constraints.
+
+ This subroutine should reformulate the domain and
+ declared bounds for every adjustable
+ (i.e. effective second-stage and effective state)
+ variable.
+ The domains and bounds for all other variables
+ should be left unchanged.
+ """
+ model_data = self.build_simple_test_model_data()
+
+ m = model_data.working_model.user_model
+
+ # simple mock partitioning for the test
+ ep = model_data.working_model.effective_var_partitioning = Bunch()
+ ep.first_stage_variables = [m.z9, m.z10]
+ ep.second_stage_variables = [m.z1, m.z2, m.z3, m.z4, m.z5, m.z6]
+ ep.state_variables = [m.z7, m.z8]
+ effective_first_stage_var_set = ComponentSet(ep.first_stage_variables)
+
+ original_var_domains_and_bounds = ComponentMap(
+ (var, (var.domain, get_var_bound_pairs(var)[1]))
+ for var in model_data.working_model.user_model.component_data_objects(Var)
+ )
+
+ turn_adjustable_var_bounds_to_constraints(model_data)
+
+ for var, (orig_domain, orig_bounds) in original_var_domains_and_bounds.items():
+ _, (final_lb, final_ub) = get_var_bound_pairs(var)
+ if var not in effective_first_stage_var_set:
+ # these are the adjustable variables.
+ # domains should have been removed,
+ # i.e. changed to reals.
+ # bounds should also have been removed
+ self.assertIs(
+ var.domain,
+ Reals,
+ msg=(
+ f"Domain of adjustable variable {var.name!r} "
+ "should now be Reals, but was instead found to be "
+ f"{var.domain}"
+ ),
+ )
+ self.assertIsNone(
+ final_lb,
+ msg=(
+ f"Declared lower bound for adjustable variable {var.name!r} "
+ "should now be None, as all adjustable variable bounds "
+ "should have been removed, but was instead found to be "
+ f"{final_lb}."
+ ),
+ )
+ self.assertIsNone(
+ final_ub,
+ msg=(
+ f"Declared upper bound for adjustable variable {var.name!r} "
+ "should now be None, as all adjustable variable bounds "
+ "should have been removed, but was instead found to be "
+ f"{final_ub}."
+ ),
+ )
+ else:
+ # these are the nonadjustable variables.
+ # domains and bounds should be left unchanged
+ self.assertIs(
+ var.domain,
+ orig_domain,
+ msg=(
+ f"Domain of nonadjustable variable {var.name!r} "
+ f"appears to have been changed from {orig_domain} "
+ f"to {var.domain}."
+ ),
+ )
+ self.assertIs(
+ orig_bounds[0],
+ final_lb,
+ msg=(
+ f"Lower bound for nonadjustable variable {var.name!r} "
+ "appears to "
+ f"have been changed from {orig_bounds[0]} to {final_lb}."
+ ),
+ )
+ self.assertIs(
+ orig_bounds[1],
+ final_ub,
+ msg=(
+ f"Upper bound for nonadjustable variable {var.name!r} "
+ "appears to "
+ f"have been changed from {orig_bounds[1]} to {final_ub}."
+ ),
+ )
+
+ second_stage = model_data.working_model.second_stage
+
+ self.assertEqual(len(second_stage.inequality_cons), 10)
+ self.assertEqual(len(second_stage.equality_cons), 5)
+
+ # verify bound constraint expressions
+ assertExpressionsEqual(
+ self,
+ second_stage.equality_cons["var_z2_certain_eq_bound_con"].expr,
+ m.z2 == 1,
+ )
+ assertExpressionsEqual(
+ self,
+ second_stage.inequality_cons["var_z3_certain_lower_bound_con"].expr,
+ -m.z3 <= -2,
+ )
+ assertExpressionsEqual(
+ self,
+ second_stage.inequality_cons["var_z3_certain_upper_bound_con"].expr,
+ m.z3 <= m.p1,
+ )
+ assertExpressionsEqual(
+ self,
+ second_stage.equality_cons["var_z4_certain_eq_bound_con"].expr,
+ m.z4 == 0,
+ )
+ assertExpressionsEqual(
+ self,
+ second_stage.inequality_cons["var_z4_uncertain_lower_bound_con"].expr,
+ -m.z4 <= -m.q1,
+ )
+ assertExpressionsEqual(
+ self,
+ second_stage.equality_cons["var_z5_certain_eq_bound_con"].expr,
+ m.z5 == 4,
+ )
+ assertExpressionsEqual(
+ self,
+ second_stage.inequality_cons["var_z5_uncertain_upper_bound_con"].expr,
+ m.z5 <= m.q2,
+ )
+ assertExpressionsEqual(
+ self,
+ second_stage.inequality_cons["var_z6_certain_lower_bound_con"].expr,
+ -m.z6 <= 0,
+ )
+ assertExpressionsEqual(
+ self,
+ second_stage.equality_cons["var_z6_uncertain_eq_bound_con"].expr,
+ m.z6 == m.q1,
+ )
+ assertExpressionsEqual(
+ self,
+ second_stage.inequality_cons["var_z7_certain_upper_bound_con"].expr,
+ m.z7 <= 0,
+ )
+ assertExpressionsEqual(
+ self,
+ second_stage.equality_cons["var_z7_uncertain_eq_bound_con"].expr,
+ m.z7 == m.q1,
+ )
+ assertExpressionsEqual(
+ self,
+ second_stage.inequality_cons["var_z8_certain_lower_bound_con"].expr,
+ -m.z8 <= 0,
+ )
+ assertExpressionsEqual(
+ self,
+ second_stage.inequality_cons["var_z8_certain_upper_bound_con"].expr,
+ m.z8 <= 5,
+ )
+ assertExpressionsEqual(
+ self,
+ second_stage.inequality_cons["var_z8_uncertain_lower_bound_con"].expr,
+ -m.z8 <= -m.q1,
+ )
+ assertExpressionsEqual(
+ self,
+ second_stage.inequality_cons["var_z8_uncertain_upper_bound_con"].expr,
+ m.z8 <= m.q2,
+ )
+
+ # check separation priorities
+ for con_name in second_stage.inequality_cons:
+ self.assertEqual(
+ model_data.separation_priority_order[con_name],
+ 0,
+ msg=(
+ f"Separation priority for entry {con_name!r} of second-stage "
+ "inequalities not as expected."
+ ),
+ )
+
+
+class TestStandardizeInequalityConstraints(unittest.TestCase):
+ """
+ Test standardization of inequality constraints.
+ """
+
+ def build_simple_test_model_data(self):
+ """
+ Build model data object for testing constraint standardization
+ routines.
+ """ + model_data = Bunch() + model_data.config = Bunch() + model_data.working_model = ConcreteModel() + model_data.working_model.user_model = m = Block() + + m.x1 = Var() + m.x2 = Var() + m.z1 = Var() + m.z2 = Var() + m.y1 = Var() + + m.p = Param(initialize=2, mutable=True) + m.q = Param(mutable=True, initialize=1) + + m.c1 = Constraint(expr=m.x1 <= 1) + m.c2 = Constraint(expr=(1, m.x1, 2)) + m.c3 = Constraint(expr=m.q <= m.x1) + m.c3_up = Constraint(expr=m.x1 - 2 * m.q <= 0) + m.c4 = Constraint(expr=(log(m.p), m.x2, m.q)) + m.c5 = Constraint(expr=(m.q, m.x2, 2 * m.q)) + m.c6 = Constraint(expr=m.z1 <= 1) + m.c7 = Constraint(expr=(0, m.z2, 1)) + m.c8 = Constraint(expr=(m.p**0.5, m.y1, m.p)) + m.c9 = Constraint(expr=m.y1 - m.q <= 0) + m.c10 = Constraint(expr=m.y1 <= m.q**2) + m.c11 = Constraint(expr=m.z2 <= m.q) + m.c12 = Constraint(expr=(m.q**2, m.x1, sin(m.p))) + + m.c11.deactivate() + + model_data.working_model.uncertain_params = [m.q] + + model_data.working_model.first_stage = Block() + model_data.working_model.first_stage.inequality_cons = Constraint(Any) + model_data.working_model.second_stage = Block() + model_data.working_model.second_stage.inequality_cons = Constraint(Any) + + model_data.working_model.original_active_inequality_cons = [ + m.c1, + m.c2, + m.c3, + m.c3_up, + m.c4, + m.c5, + m.c6, + m.c7, + m.c8, + m.c9, + m.c10, + m.c12, + ] + + ep = model_data.working_model.effective_var_partitioning = Bunch() + ep.first_stage_variables = [m.x1, m.x2] + ep.second_stage_variables = [m.z1, m.z2] + ep.state_variables = [m.y1] + + model_data.separation_priority_order = dict() + + return model_data + + def test_standardize_inequality_constraints(self): + """ + Test inequality constraint standardization routine. + """ + model_data = self.build_simple_test_model_data() + working_model = model_data.working_model + m = working_model.user_model + + model_data.config.separation_priority_order = dict(c3=1, c5=2) + standardize_inequality_constraints(model_data) + + fs_ineq_cons = working_model.first_stage.inequality_cons + ss_ineq_cons = working_model.second_stage.inequality_cons + + self.assertEqual(len(fs_ineq_cons), 4) + self.assertEqual(len(ss_ineq_cons), 13) + + self.assertFalse(m.c1.active) + new_c1_con = fs_ineq_cons["ineq_con_c1"] + self.assertTrue(new_c1_con.active) + assertExpressionsEqual(self, new_c1_con.expr, m.x1 <= 1) + + # 1 <= m.x1 <= 2; first-stage constraint. no modification + self.assertFalse(m.c2.active) + new_c2_con = fs_ineq_cons["ineq_con_c2"] + self.assertTrue(new_c2_con.active) + assertExpressionsEqual( + self, new_c2_con.expr, RangedExpression((1, m.x1, 2), False) + ) + + # m.q <= m.x1; single second-stage inequality. modify in place + self.assertFalse(m.c3.active) + new_c3_con = ss_ineq_cons["ineq_con_c3_lower_bound_con"] + self.assertTrue(new_c3_con.active) + assertExpressionsEqual(self, new_c3_con.expr, -m.x1 <= -m.q) + self.assertEqual(model_data.separation_priority_order[new_c3_con.index()], 1) + + # m.x1 - 2 * m.q <= 0; + # single second-stage inequality. 
modify in place
+ # test case where uncertain param is in body,
+ # rather than bound, and rest of expression is first-stage
+ self.assertFalse(m.c3_up.active)
+ new_c3_up_con = ss_ineq_cons["ineq_con_c3_up_upper_bound_con"]
+ self.assertTrue(new_c3_up_con.active)
+ assertExpressionsEqual(self, new_c3_up_con.expr, m.x1 - 2 * m.q <= 0.0)
+
+ # log(m.p) <= m.x2 <= m.q
+ # lower bound is first-stage, upper bound second-stage
+ self.assertFalse(m.c4.active)
+ new_c4_lower_bound_con = fs_ineq_cons["ineq_con_c4_lower_bound_con"]
+ new_c4_upper_bound_con = ss_ineq_cons["ineq_con_c4_upper_bound_con"]
+ self.assertTrue(new_c4_lower_bound_con.active)
+ self.assertTrue(new_c4_upper_bound_con.active)
+ assertExpressionsEqual(self, new_c4_lower_bound_con.expr, log(m.p) <= m.x2)
+ assertExpressionsEqual(self, new_c4_upper_bound_con.expr, m.x2 <= m.q)
+
+ # m.q <= m.x2 <= 2 * m.q
+ # two second-stage constraints, one for each bound
+ self.assertFalse(m.c5.active)
+ new_c5_lower_bound_con = ss_ineq_cons["ineq_con_c5_lower_bound_con"]
+ new_c5_upper_bound_con = ss_ineq_cons["ineq_con_c5_upper_bound_con"]
+ self.assertTrue(new_c5_lower_bound_con.active)
+ self.assertTrue(new_c5_upper_bound_con.active)
+ assertExpressionsEqual(self, new_c5_lower_bound_con.expr, -m.x2 <= -m.q)
+ assertExpressionsEqual(self, new_c5_upper_bound_con.expr, m.x2 <= 2 * m.q)
+ self.assertEqual(
+ model_data.separation_priority_order[new_c5_lower_bound_con.index()], 2
+ )
+ self.assertEqual(
+ model_data.separation_priority_order[new_c5_upper_bound_con.index()], 2
+ )
+
+ # single second-stage inequality
+ self.assertFalse(m.c6.active)
+ new_c6_upper_bound_con = ss_ineq_cons["ineq_con_c6_upper_bound_con"]
+ self.assertTrue(new_c6_upper_bound_con.active)
+ assertExpressionsEqual(self, new_c6_upper_bound_con.expr, m.z1 <= 1.0)
+
+ # two new second-stage inequalities
+ self.assertFalse(m.c7.active)
+ new_c7_lower_bound_con = ss_ineq_cons["ineq_con_c7_lower_bound_con"]
+ new_c7_upper_bound_con = ss_ineq_cons["ineq_con_c7_upper_bound_con"]
+ self.assertTrue(new_c7_lower_bound_con.active)
+ self.assertTrue(new_c7_upper_bound_con.active)
+ assertExpressionsEqual(self, new_c7_lower_bound_con.expr, -m.z2 <= 0.0)
+ assertExpressionsEqual(self, new_c7_upper_bound_con.expr, m.z2 <= 1.0)
+
+ # m.p ** 0.5 <= m.y1 <= m.p
+ # two second-stage inequalities
+ self.assertFalse(m.c8.active)
+ new_c8_lower_bound_con = ss_ineq_cons["ineq_con_c8_lower_bound_con"]
+ new_c8_upper_bound_con = ss_ineq_cons["ineq_con_c8_upper_bound_con"]
+ self.assertTrue(new_c8_lower_bound_con.active)
+ self.assertTrue(new_c8_upper_bound_con.active)
+ assertExpressionsEqual(self, new_c8_lower_bound_con.expr, -m.y1 <= -m.p**0.5)
+ assertExpressionsEqual(self, new_c8_upper_bound_con.expr, m.y1 <= m.p)
+
+ # m.y1 - m.q <= 0
+ # one second-stage inequality
+ self.assertFalse(m.c9.active)
+ new_c9_upper_bound_con = ss_ineq_cons["ineq_con_c9_upper_bound_con"]
+ self.assertTrue(new_c9_upper_bound_con.active)
+ assertExpressionsEqual(self, new_c9_upper_bound_con.expr, m.y1 - m.q <= 0.0)
+
+ # m.y1 <= m.q ** 2
+ # single second-stage inequality
+ self.assertFalse(m.c10.active)
+ new_c10_upper_bound_con = ss_ineq_cons["ineq_con_c10_upper_bound_con"]
+ self.assertTrue(new_c10_upper_bound_con.active)
+ assertExpressionsEqual(self, new_c10_upper_bound_con.expr, m.y1 <= m.q**2)
+
+ # originally deactivated;
+ # no modification
+ self.assertFalse(m.c11.active)
+ assertExpressionsEqual(self, m.c11.expr, m.z2 <= m.q)
+
+ # lower bound second-stage; upper bound first-stage
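+ # (c12 is the ranged constraint m.q**2 <= m.x1 <= sin(m.p);
+ # the lower bound references the uncertain parameter q,
+ # while sin(m.p) is certain)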
+ self.assertFalse(m.c12.active)
+ new_c12_lower_bound_con = ss_ineq_cons["ineq_con_c12_lower_bound_con"]
+ new_c12_upper_bound_con = fs_ineq_cons["ineq_con_c12_upper_bound_con"]
+ self.assertTrue(new_c12_lower_bound_con.active)
+ self.assertTrue(new_c12_upper_bound_con.active)
+ assertExpressionsEqual(self, new_c12_lower_bound_con.expr, -m.x1 <= -m.q**2)
+ assertExpressionsEqual(self, new_c12_upper_bound_con.expr, m.x1 <= sin(m.p))
+
+ # check separation priorities
+ for con_name in ss_ineq_cons:
+ if "c3" not in con_name and "c5" not in con_name:
+ self.assertEqual(
+ model_data.separation_priority_order[con_name],
+ 0,
+ msg=(
+ f"Separation priority for entry {con_name!r} of second-stage "
+ "inequalities not as expected."
+ ),
+ )
+
+ def test_standardize_inequality_error(self):
+ """
+ Test exception raised by inequality constraint standardization
+ method if equality-type expression detected.
+ """
+ model_data = self.build_simple_test_model_data()
+ model_data.config.separation_priority_order = dict()
+ working_model = model_data.working_model
+ m = working_model.user_model
+
+ # change to equality constraint to trigger the exception
+ m.c6.set_value(m.z1 == 1)
+
+ exc_str = r"Found an equality bound.*1.0.*for the constraint.*c6'"
+ with self.assertRaisesRegex(ValueError, exc_str):
+ standardize_inequality_constraints(model_data)
+
+
+class TestStandardizeEqualityConstraints(unittest.TestCase):
+ """
+ Test standardization of equality constraints.
+ """
+
+ def build_simple_test_model_data(self):
+ """
+ Build model data object for testing constraint standardization
+ routines.
+ """
+ model_data = Bunch()
+ model_data.config = Bunch()
+ model_data.working_model = ConcreteModel()
+ model_data.working_model.user_model = m = Block()
+
+ m.x1 = Var()
+ m.x2 = Var()
+ m.z1 = Var()
+ m.z2 = Var()
+ m.y1 = Var()
+
+ m.p = Param(initialize=2, mutable=True)
+ m.q = Param(mutable=True, initialize=1)
+
+ # first-stage equalities
+ m.eq1 = Constraint(expr=m.x1 + log(m.p) == 1)
+ m.eq2 = Constraint(expr=(1, m.x2, 1))
+
+ # second-stage equalities
+ m.eq3 = Constraint(expr=m.x2 * m.q == 1)
+ m.eq4 = Constraint(expr=m.x2 - m.z1**2 == 0)
+ m.eq5 = Constraint(expr=m.q == m.y1)
+ m.eq6 = Constraint(expr=(m.q, m.y1, m.q))
+ m.eq7 = Constraint(expr=m.z2 == 0)
+
+ # make eq7 out of scope
+ m.eq7.deactivate()
+
+ model_data.working_model.uncertain_params = [m.q]
+
+ model_data.working_model.first_stage = Block()
+ model_data.working_model.first_stage.equality_cons = Constraint(Any)
+ model_data.working_model.second_stage = Block()
+ model_data.working_model.second_stage.equality_cons = Constraint(Any)
+
+ model_data.working_model.original_active_equality_cons = [
+ m.eq1,
+ m.eq2,
+ m.eq3,
+ m.eq4,
+ m.eq5,
+ m.eq6,
+ ]
+
+ ep = model_data.working_model.effective_var_partitioning = Bunch()
+ ep.first_stage_variables = [m.x1, m.x2]
+ ep.second_stage_variables = [m.z1, m.z2]
+ ep.state_variables = [m.y1]
+
+ return model_data
+
+ def test_standardize_equality_constraints(self):
+ """
+ Test equality constraint standardization routine.
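+ Equalities involving only nonadjustable variables and certain
+ parameters (eq1, eq2) should be recast to first-stage
+ equalities; the rest (eq3 through eq6) should be recast to
+ second-stage equalities, and the deactivated eq7 should be
+ left untouched.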
+ """ + model_data = self.build_simple_test_model_data() + working_model = model_data.working_model + m = working_model.user_model + + standardize_equality_constraints(model_data) + + first_stage_eq_cons = working_model.first_stage.equality_cons + second_stage_eq_cons = working_model.second_stage.equality_cons + + self.assertEqual(len(first_stage_eq_cons), 2) + self.assertEqual(len(second_stage_eq_cons), 4) + + self.assertFalse(m.eq1.active) + new_eq1_con = first_stage_eq_cons["eq_con_eq1"] + self.assertTrue(new_eq1_con.active) + assertExpressionsEqual(self, new_eq1_con.expr, m.x1 + log(m.p) == 1) + + self.assertFalse(m.eq2.active) + new_eq2_con = first_stage_eq_cons["eq_con_eq2"] + self.assertTrue(new_eq2_con.active) + assertExpressionsEqual( + self, new_eq2_con.expr, RangedExpression((1, m.x2, 1), False) + ) + + self.assertFalse(m.eq3.active) + new_eq3_con = second_stage_eq_cons["eq_con_eq3"] + self.assertTrue(new_eq3_con.active) + assertExpressionsEqual(self, new_eq3_con.expr, m.x2 * m.q == 1) + + self.assertFalse(m.eq4.active) + new_eq4_con = second_stage_eq_cons["eq_con_eq4"] + self.assertTrue(new_eq4_con) + assertExpressionsEqual(self, new_eq4_con.expr, m.x2 - m.z1**2 == 0) + + self.assertFalse(m.eq5.active) + new_eq5_con = second_stage_eq_cons["eq_con_eq5"] + self.assertTrue(new_eq5_con) + assertExpressionsEqual(self, new_eq5_con.expr, m.q == m.y1) + + self.assertFalse(m.eq6.active) + new_eq6_con = second_stage_eq_cons["eq_con_eq6"] + self.assertTrue(new_eq6_con.active) + assertExpressionsEqual( + self, new_eq6_con.expr, RangedExpression((m.q, m.y1, m.q), False) + ) + + # excluded from the list of active constraints; + # state should remain unchanged + self.assertFalse(m.eq7.active) + assertExpressionsEqual(self, m.eq7.expr, m.z2 == 0) + + +class TestStandardizeActiveObjective(unittest.TestCase): + """ + Test methods for standardization of the active objective. + """ + + def build_simple_test_model_data(self): + """ + Build simple model for testing active objective + standardization. + """ + model_data = Bunch() + model_data.config = Bunch() + model_data.working_model = ConcreteModel() + model_data.working_model.user_model = m = Block() + + m.x = Var(initialize=1) + m.z = Var(initialize=2) + m.y = Var() + + m.p = Param(initialize=1, mutable=True) + m.q = Param(initialize=1, mutable=True) + + m.obj1 = Objective( + expr=( + 10 + m.p + m.q + m.p * m.x + m.z * m.p + m.y**2 * m.q + m.y + log(m.x) + ) + ) + m.obj2 = Objective(expr=m.p + m.x * m.z + m.z**2) + + model_data.working_model.uncertain_params = [m.q] + + up = model_data.working_model.user_var_partitioning = Bunch() + up.first_stage_variables = [m.x] + up.second_stage_variables = [m.z] + up.state_variables = [m.y] + + ep = model_data.working_model.effective_var_partitioning = Bunch() + ep.first_stage_variables = [m.x, m.z] + ep.second_stage_variables = [] + ep.state_variables = [m.y] + + model_data.working_model.first_stage = Block() + model_data.working_model.first_stage.inequality_cons = Constraint(Any) + model_data.working_model.second_stage = Block() + model_data.working_model.second_stage.inequality_cons = Constraint(Any) + + model_data.separation_priority_order = dict() + + return model_data + + def test_declare_objective_expressions(self): + """ + Test method for identification/declaration + of per-stage objective summands. 
+ """ + model_data = self.build_simple_test_model_data() + working_model = model_data.working_model + m = model_data.working_model.user_model + + declare_objective_expressions(working_model, m.obj1) + assertExpressionsEqual( + self, + working_model.first_stage_objective.expr, + 10 + m.p + m.p * m.x + log(m.x), + ) + assertExpressionsEqual( + self, + working_model.second_stage_objective.expr, + m.q + m.z * m.p + m.y**2 * m.q + m.y, + ) + assertExpressionsEqual(self, working_model.full_objective.expr, m.obj1.expr) + + def test_declare_objective_expressions_maximization_obj(self): + """ + Test per-stage objective summand expressions are constructed + as expected when the objective is of a maximization sense. + """ + model_data = self.build_simple_test_model_data() + working_model = model_data.working_model + m = model_data.working_model.user_model + m.obj1.sense = maximize + + declare_objective_expressions(working_model, m.obj1) + assertExpressionsEqual( + self, + working_model.first_stage_objective.expr, + -10 - m.p - m.p * m.x - log(m.x), + ) + assertExpressionsEqual( + self, + working_model.second_stage_objective.expr, + -m.q - m.z * m.p - m.y**2 * m.q - m.y, + ) + assertExpressionsEqual(self, working_model.full_objective.expr, -m.obj1.expr) + + def test_standardize_active_obj_worst_case_focus(self): + """ + Test preprocessing step for standardization + of the active model objective. + """ + model_data = self.build_simple_test_model_data() + working_model = model_data.working_model + m = model_data.working_model.user_model + model_data.config.objective_focus = ObjectiveType.worst_case + + m.obj1.activate() + m.obj2.deactivate() + + standardize_active_objective(model_data) + + self.assertFalse( + m.obj1.active, + msg=( + f"Objective {m.obj1.name!r} should have been deactivated by " + f"{standardize_active_objective}." + ), + ) + assertExpressionsEqual( + self, + working_model.second_stage.inequality_cons["epigraph_con"].expr, + m.obj1.expr - working_model.first_stage.epigraph_var <= 0, + ) + self.assertEqual(model_data.separation_priority_order["epigraph_con"], 0) + + def test_standardize_active_obj_nominal_focus(self): + """ + Test standardization of active objective under nominal + objective focus. + """ + model_data = self.build_simple_test_model_data() + working_model = model_data.working_model + m = model_data.working_model.user_model + model_data.config.objective_focus = ObjectiveType.nominal + + m.obj1.activate() + m.obj2.deactivate() + + standardize_active_objective(model_data) + + self.assertFalse( + m.obj1.active, + msg=( + f"Objective {m.obj1.name!r} should have been deactivated by " + f"{standardize_active_objective}." 
+ ),
+ )
+ assertExpressionsEqual(
+ self,
+ working_model.first_stage.inequality_cons["epigraph_con"].expr,
+ m.obj1.expr - working_model.first_stage.epigraph_var <= 0,
+ )
+ self.assertNotIn("epigraph_con", model_data.separation_priority_order)
+
+ def test_standardize_active_obj_unsupported_focus(self):
+ """
+ Test standardization of active objective under
+ an objective focus currently not supported.
+ """
+ model_data = self.build_simple_test_model_data()
+ m = model_data.working_model.user_model
+ model_data.config.objective_focus = "bad_focus"
+
+ m.obj1.activate()
+ m.obj2.deactivate()
+
+ exc_str = r"Classification.*not implemented for objective focus 'bad_focus'"
+ with self.assertRaisesRegex(ValueError, exc_str):
+ standardize_active_objective(model_data)
+
+ def test_standardize_active_obj_nonadjustable_max(self):
+ """
+ Test standardize active objective for case in which
+ the objective is independent of the adjustable variables
+ and of a maximization sense.
+ """
+ model_data = self.build_simple_test_model_data()
+ working_model = model_data.working_model
+ m = working_model.user_model
+ model_data.config.objective_focus = ObjectiveType.worst_case
+
+ # assume all variables appearing in the objective are nonadjustable
+ ep = model_data.working_model.effective_var_partitioning
+ ep.first_stage_variables = [m.x, m.z]
+ ep.second_stage_variables = []
+ ep.state_variables = [m.y]
+
+ m.obj1.deactivate()
+ m.obj2.activate()
+ m.obj2.sense = maximize
+
+ standardize_active_objective(model_data)
+
+ self.assertFalse(
+ m.obj2.active,
+ msg=(
+ f"Objective {m.obj2.name!r} should have been deactivated by "
+ f"{standardize_active_objective}."
+ ),
+ )
+
+ assertExpressionsEqual(
+ self,
+ working_model.first_stage.inequality_cons["epigraph_con"].expr,
+ -m.obj2.expr - working_model.first_stage.epigraph_var <= 0,
+ )
+ self.assertNotIn("epigraph_con", model_data.separation_priority_order)
+
+
+class TestAddDecisionRuleVars(unittest.TestCase):
+ """
+ Test method for adding decision rule variables to working model.
+ There should be one indexed decision rule variable for every
+ effective second-stage variable.
+ The number of decision rule variables per effective second-stage
+ variable should depend on:
+
+ - the number of uncertain parameters in the model
+ - the decision rule order specified by the user.
+ """
+
+ def build_simple_test_model_data(self):
+ """
+ Make simple model data object for DR variable
+ declaration testing.
+ """
+ model_data = Bunch()
+ model_data.config = Bunch()
+ model_data.working_model = ConcreteModel()
+ model_data.working_model.user_model = m = Block()
+
+ # uncertain parameters
+ m.q = Param(range(3), initialize=0, mutable=True)
+
+ # model variables
+ m.x = Var()
+ m.z1 = Var([0, 1], initialize=0)
+ m.z2 = Var()
+ m.y = Var()
+
+ model_data.working_model.uncertain_params = list(m.q.values())
+
+ up = model_data.working_model.user_var_partitioning = Bunch()
+ up.first_stage_variables = [m.x]
+ up.second_stage_variables = [m.z1, m.z2]
+ up.state_variables = [m.y]
+
+ ep = model_data.working_model.effective_var_partitioning = Bunch()
+ ep.first_stage_variables = [m.x, m.z1]
+ ep.second_stage_variables = [m.z2]
+ ep.state_variables = [m.y]
+
+ model_data.working_model.first_stage = Block()
+
+ return model_data
+
+ def test_correct_num_dr_vars_static(self):
+ """
+ Test DR variable setup routines declare the correct
+ number of DR coefficient variables, static DR case.
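+ With a static DR, each effective second-stage variable should
+ receive exactly one DR coefficient variable (the constant
+ term), regardless of the number of uncertain parameters.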
+ """ + model_data = self.build_simple_test_model_data() + model_data.config.decision_rule_order = 0 + + add_decision_rule_variables(model_data) + + for indexed_dr_var in model_data.working_model.first_stage.decision_rule_vars: + self.assertEqual( + len(indexed_dr_var), + 1, + msg=( + "Number of decision rule coefficient variables " + f"in indexed Var object {indexed_dr_var.name!r}" + "does not match correct value." + ), + ) + + effective_second_stage_vars = ( + model_data.working_model.effective_var_partitioning.second_stage_variables + ) + self.assertEqual( + len(ComponentSet(model_data.working_model.first_stage.decision_rule_vars)), + len(effective_second_stage_vars), + msg=( + "Number of unique indexed DR variable components should equal " + "number of second-stage variables." + ), + ) + + # check mapping is as expected + ess_dr_var_zip = zip( + effective_second_stage_vars, + model_data.working_model.first_stage.decision_rule_vars, + ) + for ess_var, indexed_dr_var in ess_dr_var_zip: + mapped_dr_var = model_data.working_model.eff_ss_var_to_dr_var_map[ess_var] + self.assertIs( + mapped_dr_var, + indexed_dr_var, + msg=( + f"Second-stage var {ess_var.name!r} " + f"is mapped to DR var {mapped_dr_var.name!r}, " + f"but expected mapping to DR var {indexed_dr_var.name!r}." + ), + ) + + def test_correct_num_dr_vars_affine(self): + """ + Test DR variable setup routines declare the correct + number of DR coefficient variables, affine DR case. + """ + model_data = self.build_simple_test_model_data() + model_data.config.decision_rule_order = 1 + + add_decision_rule_variables(model_data) + + for indexed_dr_var in model_data.working_model.first_stage.decision_rule_vars: + self.assertEqual( + len(indexed_dr_var), + 1 + len(model_data.working_model.uncertain_params), + msg=( + "Number of decision rule coefficient variables " + f"in indexed Var object {indexed_dr_var.name!r}" + "does not match correct value." + ), + ) + + effective_second_stage_vars = ( + model_data.working_model.effective_var_partitioning.second_stage_variables + ) + self.assertEqual( + len(ComponentSet(model_data.working_model.first_stage.decision_rule_vars)), + len(effective_second_stage_vars), + msg=( + "Number of unique indexed DR variable components should equal " + "number of second-stage variables." + ), + ) + + # check mapping is as expected + ess_dr_var_zip = zip( + effective_second_stage_vars, + model_data.working_model.first_stage.decision_rule_vars, + ) + for ess_var, indexed_dr_var in ess_dr_var_zip: + mapped_dr_var = model_data.working_model.eff_ss_var_to_dr_var_map[ess_var] + self.assertIs( + mapped_dr_var, + indexed_dr_var, + msg=( + f"Second-stage var {ess_var.name!r} " + f"is mapped to DR var {mapped_dr_var.name!r}, " + f"but expected mapping to DR var {indexed_dr_var.name!r}." + ), + ) + + def test_correct_num_dr_vars_quadratic(self): + """ + Test DR variable setup routines declare the correct + number of DR coefficient variables, quadratic DR case. 
+ """ + model_data = self.build_simple_test_model_data() + model_data.config.decision_rule_order = 2 + + add_decision_rule_variables(model_data) + + num_params = len(model_data.working_model.uncertain_params) + + for indexed_dr_var in model_data.working_model.first_stage.decision_rule_vars: + self.assertEqual( + len(indexed_dr_var), + 1 + num_params # static term # affine terms + # quadratic terms + + sp.special.comb(num_params, 2, repetition=True, exact=True), + msg=( + "Number of decision rule coefficient variables " + f"in indexed Var object {indexed_dr_var.name!r}" + "does not match correct value." + ), + ) + + effective_second_stage_vars = ( + model_data.working_model.effective_var_partitioning.second_stage_variables + ) + self.assertEqual( + len(ComponentSet(model_data.working_model.first_stage.decision_rule_vars)), + len(effective_second_stage_vars), + msg=( + "Number of unique indexed DR variable components should equal " + "number of second-stage variables." + ), + ) + + # check mapping is as expected + ess_dr_var_zip = zip( + effective_second_stage_vars, + model_data.working_model.first_stage.decision_rule_vars, + ) + for ess_var, indexed_dr_var in ess_dr_var_zip: + mapped_dr_var = model_data.working_model.eff_ss_var_to_dr_var_map[ess_var] + self.assertIs( + mapped_dr_var, + indexed_dr_var, + msg=( + f"Second-stage var {ess_var.name!r} " + f"is mapped to DR var {mapped_dr_var.name!r}, " + f"but expected mapping to DR var {indexed_dr_var.name!r}." + ), + ) + + +class TestAddDecisionRuleConstraints(unittest.TestCase): + """ + Test method for adding decision rule equality constraints + to the working model. There should be as many decision + rule equality constraints as there are effective second-stage + variables, and each constraint should relate an effective + second-stage variable to the uncertain parameters and corresponding + decision rule variables. + """ + + def build_simple_test_model_data(self): + """ + Make simple test model for DR variable + declaration testing. + """ + model_data = Bunch() + model_data.config = Bunch() + model_data.working_model = ConcreteModel() + model_data.working_model.user_model = m = Block() + + # uncertain parameters + m.q = Param(range(3), initialize=0, mutable=True) + + # second-stage variables + m.x = Var() + m.z1 = Var([0, 1], initialize=0) + m.z2 = Var() + m.y = Var() + + model_data.working_model.uncertain_params = list(m.q.values()) + + up = model_data.working_model.user_var_partitioning = Bunch() + up.first_stage_variables = [m.x] + up.second_stage_variables = [m.z1, m.z2] + up.state_variables = [m.y] + + ep = model_data.working_model.effective_var_partitioning = Bunch() + ep.first_stage_variables = [m.x, m.z1] + ep.second_stage_variables = [m.z2] + ep.state_variables = [m.y] + + model_data.working_model.first_stage = Block() + model_data.working_model.second_stage = Block() + + return model_data + + def test_num_dr_eqns_added_correct(self): + """ + Check that number of DR equality constraints added + by constraint declaration routines matches the number + of second-stage variables in the model. 
+ """ + model_data = self.build_simple_test_model_data() + model_data.config.decision_rule_order = 2 + + add_decision_rule_variables(model_data) + add_decision_rule_constraints(model_data) + + effective_second_stage_vars = ( + model_data.working_model.effective_var_partitioning.second_stage_variables + ) + self.assertEqual( + len(model_data.working_model.second_stage.decision_rule_eqns), + len(effective_second_stage_vars), + msg=( + "Number of decision rule equations should match number of " + "effective second-stage variables." + ), + ) + + # check second-stage var to DR equation mapping is as expected + ess_dr_var_zip = zip( + effective_second_stage_vars, + model_data.working_model.second_stage.decision_rule_eqns.values(), + ) + for ess_var, dr_eqn in ess_dr_var_zip: + mapped_dr_eqn = model_data.working_model.eff_ss_var_to_dr_eqn_map[ess_var] + self.assertIs( + mapped_dr_eqn, + dr_eqn, + msg=( + f"Second-stage var {ess_var.name!r} " + f"is mapped to DR equation {mapped_dr_eqn.name!r}, " + f"but expected mapping to DR equation {dr_eqn.name!r}." + ), + ) + self.assertTrue(mapped_dr_eqn.active) + + def test_dr_eqns_form_correct(self): + """ + Check that form of decision rule equality constraints + is as expected. + + Decision rule equations should be of the standard form: + (sum of DR monomial terms) - (second-stage variable) == 0 + where each monomial term should be of form: + (product of uncertain parameters) * (decision rule variable) + + This test checks that the equality constraints are of this + standard form. + """ + model_data = self.build_simple_test_model_data() + working_model = model_data.working_model + m = model_data.working_model.user_model + + # set up simple config-like object + model_data.config.decision_rule_order = 2 + + # add DR variables and constraints + add_decision_rule_variables(model_data) + add_decision_rule_constraints(model_data) + + dr_zip = zip( + model_data.working_model.effective_var_partitioning.second_stage_variables, + model_data.working_model.first_stage.decision_rule_vars, + model_data.working_model.second_stage.decision_rule_eqns.values(), + ) + for ss_var, indexed_dr_var, dr_eq in dr_zip: + expected_dr_eq_expression = ( + indexed_dr_var[0] + + indexed_dr_var[1] * m.q[0] + + indexed_dr_var[2] * m.q[1] + + indexed_dr_var[3] * m.q[2] + + indexed_dr_var[4] * m.q[0] * m.q[0] + + indexed_dr_var[5] * m.q[0] * m.q[1] + + indexed_dr_var[6] * m.q[0] * m.q[2] + + indexed_dr_var[7] * m.q[1] * m.q[1] + + indexed_dr_var[8] * m.q[1] * m.q[2] + + indexed_dr_var[9] * m.q[2] * m.q[2] + - ss_var + == 0 + ) + assertExpressionsEqual(self, dr_eq.expr, expected_dr_eq_expression) + + expected_dr_var_to_exponent_map = ComponentMap( + ( + (indexed_dr_var[0], 0), + (indexed_dr_var[1], 1), + (indexed_dr_var[2], 1), + (indexed_dr_var[3], 1), + (indexed_dr_var[4], 2), + (indexed_dr_var[5], 2), + (indexed_dr_var[6], 2), + (indexed_dr_var[7], 2), + (indexed_dr_var[8], 2), + (indexed_dr_var[9], 2), + ) + ) + self.assertEqual( + working_model.dr_var_to_exponent_map, + expected_dr_var_to_exponent_map, + msg="DR variable to exponent map not as expected.", + ) + + +class TestReformulateStateVarIndependentEqCons(unittest.TestCase): + """ + Unit tests for routine that reformulates + state variable-independent second-stage equality constraints. + """ + + def setup_test_model_data(self): + """ + Set up simple test model for testing the reformulation + routine. 
+ """ + model_data = Bunch() + model_data.config = Bunch() + model_data.working_model = working_model = ConcreteModel() + model_data.working_model.user_model = m = Block() + + m.x1 = Var(initialize=0, bounds=(0, None)) + m.x2 = Var(initialize=0, bounds=(0, None)) + m.u = Param(initialize=1.125, mutable=True) + m.con = Constraint(expr=m.u ** (0.5) * m.x1 - m.u * m.x2 <= 2) + m.obj = Objective(expr=(m.x1 - 4) ** 2 + (m.x2 - 1) ** 2) + m.eq_con = Constraint( + expr=m.u**2 * (m.x2 - 1) + m.u * (m.x1**3 + 0.5) - 5 * m.u * m.x1 * m.x2 + == -m.u * (m.x1 + 2) + ) + + # mathematically redundant, but makes the tests more rigorous + # as we want to check that loops in the coefficient + # matching routine are exited appropriately + m.eq_con_2 = Constraint(expr=m.u * (m.x2 - 1) == 0) + + working_model.uncertain_params = [m.u] + + working_model.first_stage = Block() + working_model.first_stage.equality_cons = Constraint(Any) + working_model.second_stage = Block() + working_model.second_stage.equality_cons = Constraint(Any) + working_model.second_stage.inequality_cons = Constraint(Any) + + working_model.second_stage.equality_cons["eq_con"] = m.eq_con.expr + working_model.second_stage.equality_cons["eq_con_2"] = m.eq_con_2.expr + working_model.second_stage.inequality_cons["con"] = m.con.expr + + # deactivate constraints on user model, as these are not + # what the reformulation routine actually processes + m.eq_con.deactivate() + m.eq_con_2.deactivate() + m.con.deactivate() + + working_model.all_variables = [m.x1, m.x2] + ep = working_model.effective_var_partitioning = Bunch() + ep.first_stage_variables = [m.x1] + ep.second_stage_variables = [m.x2] + ep.state_variables = [] + + return model_data + + def test_coefficient_matching_correct_constraints_added(self): + """ + Test coefficient matching adds correct number of constraints + in event of successful use. 
+ """ + model_data = self.setup_test_model_data() + m = model_data.working_model.user_model + + # all vars first-stage + ep = model_data.working_model.effective_var_partitioning + ep.first_stage_variables = [m.x1, m.x2] + ep.second_stage_variables = [] + + model_data.config.decision_rule_order = 1 + model_data.config.progress_logger = logger + + model_data.working_model.first_stage.decision_rule_vars = [] + model_data.working_model.second_stage.decision_rule_eqns = [] + model_data.working_model.all_nonadjustable_variables = list( + ep.first_stage_variables + ) + + robust_infeasible = reformulate_state_var_independent_eq_cons(model_data) + + self.assertFalse( + robust_infeasible, + msg=( + "Coefficient matching unexpectedly detected" + "a robust infeasible constraint" + ), + ) + + first_stage_eq_cons = model_data.working_model.first_stage.equality_cons + self.assertEqual( + len(first_stage_eq_cons), + 3, + msg="Number of coefficient matching constraints not as expected.", + ) + self.assertEqual(len(model_data.working_model.second_stage.equality_cons), 0) + # we originally declared an inequality constraint on the model + self.assertEqual(len(model_data.working_model.second_stage.inequality_cons), 1) + + assertExpressionsEqual( + self, + first_stage_eq_cons["coeff_matching_eq_con_coeff_1"].expr, + m.x1**3 + 0.5 + 5 * m.x1 * m.x2 * (-1) + (-1) * (m.x1 + 2) * (-1) == 0, + ) + assertExpressionsEqual( + self, + first_stage_eq_cons["coeff_matching_eq_con_coeff_2"].expr, + m.x2 - 1 == 0, + ) + assertExpressionsEqual( + self, + first_stage_eq_cons["coeff_matching_eq_con_2_coeff_1"].expr, + m.x2 - 1 == 0, + ) + + def test_reformulate_nonlinear_state_var_independent_eq_con(self): + """ + Test routine appropriately performs coefficient matching + of polynomial-like constraints, + and recasting of nonlinear constraints to opposing equalities. 
+ """ + model_data = self.setup_test_model_data() + model_data.separation_priority_order = dict() + + model_data.config.decision_rule_order = 1 + model_data.config.progress_logger = logging.getLogger( + self.test_reformulate_nonlinear_state_var_independent_eq_con.__name__ + ) + model_data.config.progress_logger.setLevel(logging.DEBUG) + + add_decision_rule_variables(model_data) + add_decision_rule_constraints(model_data) + + ep = model_data.working_model.effective_var_partitioning + model_data.working_model.all_nonadjustable_variables = list( + ep.first_stage_variables + + list(model_data.working_model.first_stage.decision_rule_var_0.values()) + ) + + wm = model_data.working_model + m = model_data.working_model.user_model + + # we want only one of the constraints to be 'nonlinear' + # change eq_con_2 to give a valid matching constraint + wm.second_stage.equality_cons["eq_con_2"].set_value(m.u * (m.x1 - 1) == 0) + + with LoggingIntercept(level=logging.DEBUG) as LOG: + robust_infeasible = reformulate_state_var_independent_eq_cons(model_data) + + err_msg = LOG.getvalue() + self.assertRegex( + text=err_msg, + expected_regex=(r".*Equality constraint '.*eq_con.*'.*cannot be written.*"), + ) + + self.assertFalse( + robust_infeasible, + msg=( + "Coefficient matching unexpectedly detected" + "a robust infeasible constraint" + ), + ) + + # check constraint partitioning updated as expected + self.assertFalse(wm.second_stage.equality_cons) + self.assertEqual(len(wm.second_stage.inequality_cons), 3) + self.assertEqual(len(wm.first_stage.equality_cons), 1) + + second_stage_ineq_cons = wm.second_stage.inequality_cons + self.assertTrue(second_stage_ineq_cons["reform_lower_bound_from_eq_con"].active) + self.assertTrue(second_stage_ineq_cons["reform_upper_bound_from_eq_con"].active) + self.assertTrue( + wm.first_stage.equality_cons["coeff_matching_eq_con_2_coeff_1"].active + ) + + # expressions for the new opposing inequalities + # and coefficient matching constraint + assertExpressionsEqual( + self, + second_stage_ineq_cons["reform_lower_bound_from_eq_con"].expr, + -( + m.u**2 * (m.x2 - 1) + + m.u * (m.x1**3 + 0.5) + - ((5 * m.u * m.x1) * m.x2) + - (-m.u) * (m.x1 + 2) + ) + <= 0.0, + ) + assertExpressionsEqual( + self, + second_stage_ineq_cons["reform_upper_bound_from_eq_con"].expr, + ( + m.u**2 * (m.x2 - 1) + + m.u * (m.x1**3 + 0.5) + - ((5 * m.u * m.x1) * m.x2) + - (-m.u) * (m.x1 + 2) + <= 0.0 + ), + ) + assertExpressionsEqual( + self, + wm.first_stage.equality_cons["coeff_matching_eq_con_2_coeff_1"].expr, + m.x1 - 1 == 0, + ) + + # separation priorities were also updated + self.assertEqual( + model_data.separation_priority_order["reform_lower_bound_from_eq_con"], 0 + ) + self.assertEqual( + model_data.separation_priority_order["reform_upper_bound_from_eq_con"], 0 + ) + + def test_coefficient_matching_robust_infeasible_proof(self): + """ + Test coefficient matching detects robust infeasibility + as expected. 
+ """ + # Write the deterministic Pyomo model + model_data = self.setup_test_model_data() + m = model_data.working_model.user_model + model_data.working_model.second_stage.equality_cons["eq_con"].set_value( + expr=m.u * (m.x1**3 + 0.5) + - 5 * m.u * m.x1 * m.x2 + + m.u * (m.x1 + 2) + + m.u**2 + == 0 + ) + ep = model_data.working_model.effective_var_partitioning + ep.first_stage_variables = [m.x1, m.x2] + ep.second_stage_variables = [] + + model_data.config.decision_rule_order = 1 + model_data.config.progress_logger = logger + + model_data.working_model.all_nonadjustable_variables = list( + ep.first_stage_variables + ) + + with LoggingIntercept(level=logging.INFO) as LOG: + robust_infeasible = reformulate_state_var_independent_eq_cons(model_data) + + self.assertTrue( + robust_infeasible, + msg="Coefficient matching should be proven robust infeasible.", + ) + robust_infeasible_msg = LOG.getvalue() + self.assertRegex( + text=robust_infeasible_msg, + expected_regex=( + r"PyROS has determined that the model is robust infeasible\. " + r"One reason for this.*equality constraint '.*eq_con.*'.*" + ), + ) + + +class TestPreprocessModelData(unittest.TestCase): + """ + Test the PyROS preprocessor. + """ + + def build_test_model_data(self): + """ + Build model data object for the preprocessor. + """ + m = ConcreteModel() + + # PARAMS: p uncertain, q certain + m.p = Param(initialize=2, mutable=True) + m.q = Param(initialize=4.5, mutable=True) + + # first-stage variables + m.x1 = Var(bounds=(0, m.q), initialize=1) + m.x2 = Var(domain=NonNegativeReals, bounds=[m.p, m.p], initialize=m.p) + + # second-stage variables + m.z1 = Var(domain=RangeSet(2, 4, 0), bounds=[-m.p, m.q], initialize=2) + m.z2 = Var(bounds=(-2 * m.q**2, None), initialize=1) + m.z3 = Var(bounds=(-m.q, 0), initialize=0) + m.z4 = Var(initialize=5) + # the bounds produce an equality constraint + # that then leads to coefficient matching. + # problem is robust infeasible if DR static, else + # matching constraints are added + m.z5 = Var(domain=NonNegativeReals, bounds=(m.q, m.q)) + + # state variables + m.y1 = Var(domain=NonNegativeReals, initialize=0) + m.y2 = Var(initialize=10) + # note: y3 out-of-scope, as it will not appear in the active + # Objective and Constraint objects + m.y3 = Var(domain=RangeSet(0, 1, 0), bounds=(0.2, 0.5)) + + # fix some variables + m.z4.fix() + m.y2.fix() + + # EQUALITY CONSTRAINTS + # this will be reformulated by coefficient matching + m.eq1 = Constraint(expr=m.q * (m.z3 + m.x2) == 0) + # ranged constraints with identical bounds are considered equalities + # this makes z1 nonadjustable + m.eq2 = Constraint(expr=m.x1 - m.z1 == 0) + # pretriangular: makes z2 nonadjustable, so first-stage + m.eq3 = Constraint(expr=m.x1**2 + m.x2 + m.p * m.z2 == m.p) + # second-stage equality + m.eq4 = Constraint(expr=m.z3 + m.y1 == m.q) + + # INEQUALITY CONSTRAINTS + # since x1, z1 nonadjustable, LB is first-stage, + # but UB second-stage due to uncertain param q + m.ineq1 = Constraint(expr=(-m.p, m.x1 + m.z1, exp(m.q))) + # two first-stage inequalities + m.ineq2 = Constraint(expr=(0, m.x1 + m.x2, 10)) + # though the bounds are structurally equal, they are not + # identical objects, so this constitutes + # two second-stage inequalities + # note: these inequalities redundant, + # as collectively these constraints + # are mathematically identical to eq4 + m.ineq3 = Constraint(expr=(2 * m.q, 2 * (m.z3 + m.y1), 2 * m.q)) + # second-stage inequality. 
trivially satisfied/infeasible, + # since y2 is fixed + m.ineq4 = Constraint(expr=-m.q <= m.y2**2 + log(m.y2)) + + # out of scope: deactivated + m.ineq5 = Constraint(expr=m.y3 <= m.q) + m.ineq5.deactivate() + + # OBJECTIVE + # contains a rich combination of first-stage and second-stage terms + m.obj = Objective( + expr=( + m.p**2 + + 2 * m.p * m.q + + log(m.x1) + + 2 * m.p * m.x1 + + m.q**2 * m.x1 + + m.p**3 * (m.z1 + m.z2 + m.y1) + + m.z4 + + m.z5 + ) + ) + + model_data = ModelData(original_model=m, timing=None, config=Bunch()) + + # set up the var partitioning + user_var_partitioning = VariablePartitioning( + first_stage_variables=[m.x1, m.x2], + second_stage_variables=[m.z1, m.z2, m.z3, m.z4, m.z5], + # note: y3 out of scope, so excluded + state_variables=[m.y1, m.y2], + ) + + return model_data, user_var_partitioning + + def test_preprocessor_effective_var_partitioning_static_dr(self): + """ + Test preprocessor repartitions the variables + as expected. + """ + # setup + model_data, user_var_partitioning = self.build_test_model_data() + om = model_data.original_model + config = model_data.config + config.update( + dict( + uncertain_params=[om.q], + objective_focus=ObjectiveType.worst_case, + decision_rule_order=0, + progress_logger=logger, + separation_priority_order=dict(), + ) + ) + preprocess_model_data(model_data, user_var_partitioning) + ep = model_data.working_model.effective_var_partitioning + ublk = model_data.working_model.user_model + self.assertEqual( + ComponentSet(ep.first_stage_variables), + ComponentSet( + [ + # all second-stage variables are nonadjustable + # due to the DR + ublk.x1, + ublk.x2, + ublk.z1, + ublk.z2, + ublk.z3, + ublk.z4, + ublk.z5, + ublk.y2, + ] + ), + ) + self.assertEqual(ep.second_stage_variables, []) + self.assertEqual(ep.state_variables, [ublk.y1]) + + working_model = model_data.working_model + self.assertEqual( + ComponentSet(working_model.all_nonadjustable_variables), + ComponentSet( + [ublk.x1, ublk.x2, ublk.z1, ublk.z2, ublk.z3, ublk.z4, ublk.z5, ublk.y2] + + [working_model.first_stage.epigraph_var] + ), + ) + self.assertEqual( + ComponentSet(working_model.all_variables), + ComponentSet( + [ + ublk.x1, + ublk.x2, + ublk.z1, + ublk.z2, + ublk.z3, + ublk.z4, + ublk.z5, + ublk.y1, + ublk.y2, + ] + + [working_model.first_stage.epigraph_var] + ), + ) + + @parameterized.expand([["affine", 1], ["quadratic", 2]]) + def test_preprocessor_effective_var_partitioning_nonstatic_dr(self, name, dr_order): + """ + Test preprocessor repartitions the variables + as expected. 
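+ With a nonstatic DR, only z3 and z5 should remain effective
+ second-stage variables; z1, z2, and z4 are rendered
+ nonadjustable by eq2, the pretriangular eq3, and fixing,
+ respectively.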
+ """ + model_data, user_var_partitioning = self.build_test_model_data() + om = model_data.original_model + config = model_data.config + config.update( + dict( + uncertain_params=[om.q], + objective_focus=ObjectiveType.worst_case, + decision_rule_order=dr_order, + progress_logger=logger, + separation_priority_order=dict(), + ) + ) + preprocess_model_data(model_data, user_var_partitioning) + ep = model_data.working_model.effective_var_partitioning + ublk = model_data.working_model.user_model + self.assertEqual( + ComponentSet(ep.first_stage_variables), + ComponentSet([ublk.x1, ublk.x2, ublk.z1, ublk.z2, ublk.z4, ublk.y2]), + ) + self.assertEqual( + ComponentSet(ep.second_stage_variables), ComponentSet([ublk.z3, ublk.z5]) + ) + self.assertEqual(ComponentSet(ep.state_variables), ComponentSet([ublk.y1])) + working_model = model_data.working_model + self.assertEqual( + ComponentSet(working_model.all_nonadjustable_variables), + ComponentSet( + [ublk.x1, ublk.x2, ublk.z1, ublk.z2, ublk.z4, ublk.y2] + + [working_model.first_stage.epigraph_var] + + list(working_model.first_stage.decision_rule_var_0.values()) + + list(working_model.first_stage.decision_rule_var_1.values()) + ), + ) + self.assertEqual( + ComponentSet(working_model.all_variables), + ComponentSet( + [ + ublk.x1, + ublk.x2, + ublk.z1, + ublk.z2, + ublk.z3, + ublk.z4, + ublk.z5, + ublk.y1, + ublk.y2, + ] + + [working_model.first_stage.epigraph_var] + + list(working_model.first_stage.decision_rule_var_0.values()) + + list(working_model.first_stage.decision_rule_var_1.values()) + ), + ) + + @parameterized.expand( + [ + ["affine_nominal", 1, "nominal"], + ["affine_worst_case", 1, "worst_case"], + # eq1 doesn't get reformulated in coefficient matching + # as the polynomial degree is too high + ["quadratic_nominal", 2, "nominal"], + ["quadratic_worst_case", 2, "worst_case"], + ] + ) + def test_preprocessor_constraint_partitioning_nonstatic_dr( + self, name, dr_order, obj_focus + ): + """ + Test preprocessor partitions constraints as expected + for nonstatic DR. 
+ """ + model_data, user_var_partitioning = self.build_test_model_data() + om = model_data.original_model + model_data.config.update( + dict( + uncertain_params=[om.q], + objective_focus=ObjectiveType[obj_focus], + decision_rule_order=dr_order, + progress_logger=logger, + separation_priority_order=dict(ineq3=2), + ) + ) + preprocess_model_data(model_data, user_var_partitioning) + + working_model = model_data.working_model + ublk = working_model.user_model + + # list of expected coefficient matching constraint names + # equality bound constraint for z5 and/or eq1 are subject + # to reformulation + if dr_order == 1: + coeff_matching_con_names = [ + "coeff_matching_var_z5_uncertain_eq_bound_con_coeff_0", + "coeff_matching_var_z5_uncertain_eq_bound_con_coeff_1", + 'coeff_matching_eq_con_eq1_coeff_1', + 'coeff_matching_eq_con_eq1_coeff_2', + ] + else: + coeff_matching_con_names = [ + "coeff_matching_var_z5_uncertain_eq_bound_con_coeff_0", + "coeff_matching_var_z5_uncertain_eq_bound_con_coeff_1", + "coeff_matching_var_z5_uncertain_eq_bound_con_coeff_2", + ] + + self.assertEqual( + list(working_model.first_stage.inequality_cons), + ( + ["ineq_con_ineq1_lower_bound_con", "ineq_con_ineq2"] + + (["epigraph_con"] if obj_focus == "nominal" else []) + ), + ) + self.assertEqual( + list(working_model.first_stage.equality_cons), + ["eq_con_eq2", "eq_con_eq3"] + coeff_matching_con_names, + ) + self.assertEqual( + list(working_model.second_stage.inequality_cons), + ( + [ + "var_x1_uncertain_upper_bound_con", + "var_z1_uncertain_upper_bound_con", + "var_z2_uncertain_lower_bound_con", + "var_z3_certain_upper_bound_con", + "var_z3_uncertain_lower_bound_con", + "var_z5_certain_lower_bound_con", + "var_y1_certain_lower_bound_con", + "ineq_con_ineq1_upper_bound_con", + "ineq_con_ineq3_lower_bound_con", + "ineq_con_ineq3_upper_bound_con", + "ineq_con_ineq4_lower_bound_con", + ] + + (["epigraph_con"] if obj_focus == "worst_case" else []) + + ( + # for quadratic DR, + # eq1 gets reformulated to two inequality constraints + # since it is state variable independent and + # too nonlinear for coefficient matching + [ + "reform_lower_bound_from_eq_con_eq1", + "reform_upper_bound_from_eq_con_eq1", + ] + if dr_order == 2 + else [] + ) + ), + ) + self.assertEqual( + list(working_model.second_stage.equality_cons), + # eq1 doesn't get reformulated in coefficient matching + # when DR order is 2 as the polynomial degree is too high + ["eq_con_eq4"], + ) + + # verify the constraints are active + for fs_eq_con in working_model.first_stage.equality_cons.values(): + self.assertTrue(fs_eq_con.active, msg=f"{fs_eq_con.name} inactive") + for fs_ineq_con in working_model.first_stage.inequality_cons.values(): + self.assertTrue(fs_ineq_con.active, msg=f"{fs_ineq_con.name} inactive") + for perf_eq_con in working_model.second_stage.equality_cons.values(): + self.assertTrue(perf_eq_con.active, msg=f"{perf_eq_con.name} inactive") + for perf_ineq_con in working_model.second_stage.inequality_cons.values(): + self.assertTrue(perf_ineq_con.active, msg=f"{perf_ineq_con.name} inactive") + + # verify the constraint expressions + m = ublk + fs = working_model.first_stage + ss = working_model.second_stage + assertExpressionsEqual(self, m.x1.lower, 0) + assertExpressionsEqual( + self, + ss.inequality_cons["var_x1_uncertain_upper_bound_con"].expr, + m.x1 <= m.q, + ) + + assertExpressionsEqual( + self, + ss.inequality_cons["var_z1_uncertain_upper_bound_con"].expr, + m.z1 <= m.q, + ) + assertExpressionsEqual( + self, + 
ss.inequality_cons["var_z2_uncertain_lower_bound_con"].expr, + -m.z2 <= -(-2 * m.q**2), + ) + assertExpressionsEqual( + self, + ss.inequality_cons["var_z3_uncertain_lower_bound_con"].expr, + -m.z3 <= -(-m.q), + ) + assertExpressionsEqual( + self, ss.inequality_cons["var_z3_certain_upper_bound_con"].expr, m.z3 <= 0 + ) + assertExpressionsEqual( + self, ss.inequality_cons["var_z5_certain_lower_bound_con"].expr, -m.z5 <= 0 + ) + assertExpressionsEqual( + self, ss.inequality_cons["var_y1_certain_lower_bound_con"].expr, -m.y1 <= 0 + ) + assertExpressionsEqual( + self, + fs.inequality_cons["ineq_con_ineq1_lower_bound_con"].expr, + -m.p <= m.x1 + m.z1, + ) + assertExpressionsEqual( + self, + ss.inequality_cons["ineq_con_ineq1_upper_bound_con"].expr, + m.x1 + m.z1 <= exp(m.q), + ) + assertExpressionsEqual( + self, + fs.inequality_cons["ineq_con_ineq2"].expr, + RangedExpression((0, m.x1 + m.x2, 10), False), + ) + assertExpressionsEqual( + self, + ss.inequality_cons["ineq_con_ineq3_lower_bound_con"].expr, + -(2 * (m.z3 + m.y1)) <= -(2 * m.q), + ) + assertExpressionsEqual( + self, + ss.inequality_cons["ineq_con_ineq3_upper_bound_con"].expr, + 2 * (m.z3 + m.y1) <= 2 * m.q, + ) + assertExpressionsEqual( + self, + ss.inequality_cons["ineq_con_ineq4_lower_bound_con"].expr, + -(m.y2**2 + log(m.y2)) <= -(-m.q), + ) + self.assertFalse(m.ineq5.active) + + assertExpressionsEqual( + self, fs.equality_cons["eq_con_eq2"].expr, m.x1 - m.z1 == 0 + ) + assertExpressionsEqual( + self, + fs.equality_cons["eq_con_eq3"].expr, + m.x1**2 + m.x2 + m.p * m.z2 == m.p, + ) + if dr_order < 2: + # due to coefficient matching, this should have been deleted + self.assertNotIn("eq_con_eq1", ss.equality_cons) + + # user model block should have no active constraints + self.assertFalse(list(m.component_data_objects(Constraint, active=True))) + + # check separation priorities + for con_name, order in model_data.separation_priority_order.items(): + expected_order = 2 if "ineq3" in con_name else 0 + self.assertEqual( + order, + expected_order, + msg=( + "Separation priority order for second-stage inequality " + f"{con_name!r} not as expected." + ), + ) + + @parameterized.expand( + [["static", 0, True], ["affine", 1, False], ["quadratic", 2, False]] + ) + def test_preprocessor_coefficient_matching( + self, name, dr_order, expected_robust_infeas + ): + """ + Check preprocessor robust infeasibility return status. 
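+ Under a static DR, the matching constraints derived from the
+ equality bounds on z5 cannot be satisfied, so the preprocessor
+ should declare the problem robust infeasible; under affine or
+ quadratic DRs, it should not.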
+ """ + model_data, user_var_partitioning = self.build_test_model_data() + om = model_data.original_model + config = model_data.config + config.update( + dict( + uncertain_params=[om.q], + objective_focus=ObjectiveType.worst_case, + decision_rule_order=dr_order, + progress_logger=logger, + separation_priority_order=dict(), + ) + ) + + # for static DR, problem should be robust infeasible + # due to the coefficient matching constraints derived + # from bounds on z5 + robust_infeasible = preprocess_model_data(model_data, user_var_partitioning) + self.assertIsInstance(robust_infeasible, bool) + self.assertEqual(robust_infeasible, expected_robust_infeas) + + # check the coefficient matching constraint expressions + working_model = model_data.working_model + m = model_data.working_model.user_model + fs = working_model.first_stage + fs_eqs = working_model.first_stage.equality_cons + ss_ineqs = working_model.second_stage.inequality_cons + if config.decision_rule_order == 1: + # check the constraint expressions of eq1 and z5 bound + assertExpressionsEqual( + self, + fs_eqs["coeff_matching_var_z5_uncertain_eq_bound_con_coeff_0"].expr, + fs.decision_rule_vars[1][0] == 0, + ) + assertExpressionsEqual( + self, + fs_eqs["coeff_matching_var_z5_uncertain_eq_bound_con_coeff_1"].expr, + fs.decision_rule_vars[1][1] - 1 == 0, + ) + assertExpressionsEqual( + self, + fs_eqs["coeff_matching_eq_con_eq1_coeff_1"].expr, + fs.decision_rule_vars[0][0] + m.x2 == 0, + ) + assertExpressionsEqual( + self, + fs_eqs["coeff_matching_eq_con_eq1_coeff_2"].expr, + fs.decision_rule_vars[0][1] == 0, + ) + if config.decision_rule_order == 2: + # eq1 should be deactivated and refomulated to 2 inequalities + assertExpressionsEqual( + self, + ss_ineqs["reform_lower_bound_from_eq_con_eq1"].expr, + -(m.q * (m.z3 + m.x2)) <= 0.0, + ) + assertExpressionsEqual( + self, + ss_ineqs["reform_upper_bound_from_eq_con_eq1"].expr, + m.q * (m.z3 + m.x2) <= 0.0, + ) + + # check coefficient matching constraint expressions + assertExpressionsEqual( + self, + fs_eqs["coeff_matching_var_z5_uncertain_eq_bound_con_coeff_0"].expr, + fs.decision_rule_vars[1][0] == 0, + ) + assertExpressionsEqual( + self, + fs_eqs["coeff_matching_var_z5_uncertain_eq_bound_con_coeff_1"].expr, + fs.decision_rule_vars[1][1] - 1 == 0, + ) + assertExpressionsEqual( + self, + fs_eqs["coeff_matching_var_z5_uncertain_eq_bound_con_coeff_2"].expr, + fs.decision_rule_vars[1][2] == 0, + ) + + @parameterized.expand([["static", 0], ["affine", 1], ["quadratic", 2]]) + def test_preprocessor_objective_standardization(self, name, dr_order): + """ + Test preprocessor standardizes the active objective as + expected. 
+ """ + model_data, user_var_partitioning = self.build_test_model_data() + om = model_data.original_model + config = model_data.config + config.update( + dict( + uncertain_params=[om.q], + objective_focus=ObjectiveType.worst_case, + decision_rule_order=dr_order, + progress_logger=logger, + separation_priority_order=dict(), + ) + ) + preprocess_model_data(model_data, user_var_partitioning) + + ublk = model_data.working_model.user_model + working_model = model_data.working_model + + assertExpressionsEqual( + self, + working_model.second_stage.inequality_cons["epigraph_con"].expr, + ublk.obj.expr - working_model.first_stage.epigraph_var <= 0, + ) + assertExpressionsEqual(self, working_model.full_objective.expr, ublk.obj.expr) + + # recall: objective summands are classified according + # to dependence on uncertain parameters and variables + # the *user* considers adjustable, + # so the summands should be independent of the DR order + # (which itself affects the effective var partitioning) + assertExpressionsEqual( + self, + working_model.first_stage_objective.expr, + ublk.p**2 + log(ublk.x1) + 2 * ublk.p * ublk.x1, + ) + assertExpressionsEqual( + self, + working_model.second_stage_objective.expr, + ( + 2 * ublk.p * ublk.q + + ublk.q**2 * ublk.x1 + + ublk.p**3 * (ublk.z1 + ublk.z2 + ublk.y1) + + ublk.z4 + + ublk.z5 + ), + ) + + @parameterized.expand([["nominal"], ["worst_case"]]) + def test_preprocessor_log_model_statistics_affine_dr(self, obj_focus): + """ + Test statistics of the preprocessed working model are + logged as expected. + """ + model_data, user_var_partitioning = self.build_test_model_data() + om = model_data.original_model + config = model_data.config + config.update( + dict( + uncertain_params=[om.q], + objective_focus=ObjectiveType[obj_focus], + decision_rule_order=1, + progress_logger=logger, + separation_priority_order=dict(), + ) + ) + preprocess_model_data(model_data, user_var_partitioning) + + # expected model stats worked out by hand + expected_log_str = textwrap.dedent( + f""" + Model Statistics: + Number of variables : 14 + Epigraph variable : 1 + First-stage variables : 2 + Second-stage variables : 5 (2 adj.) + State variables : 2 (1 adj.) + Decision rule variables : 4 + Number of uncertain parameters : 1 + Number of constraints : 23 + Equality constraints : 9 + Coefficient matching constraints : 4 + Other first-stage equations : 2 + Second-stage equations : 1 + Decision rule equations : 2 + Inequality constraints : 14 + First-stage inequalities : {3 if obj_focus == 'nominal' else 2} + Second-stage inequalities : {11 if obj_focus == 'nominal' else 12} + """ + ) + + with LoggingIntercept(level=logging.INFO) as LOG: + log_model_statistics(model_data) + log_str = LOG.getvalue() + + log_lines = log_str.splitlines()[1:] + expected_log_lines = expected_log_str.splitlines()[1:] + + self.assertEqual(len(log_lines), len(expected_log_lines)) + for line, expected_line in zip(log_lines, expected_log_lines): + self.assertEqual(line, expected_line) + + @parameterized.expand([["nominal"], ["worst_case"]]) + def test_preprocessor_log_model_statistics_quadratic_dr(self, obj_focus): + """ + Test statistics of the preprocessed working model are + logged as expected. 
+ """ + model_data, user_var_partitioning = self.build_test_model_data() + om = model_data.original_model + config = model_data.config + config.update( + dict( + uncertain_params=[om.q], + objective_focus=ObjectiveType[obj_focus], + decision_rule_order=2, + progress_logger=logger, + separation_priority_order=dict(), + ) + ) + preprocess_model_data(model_data, user_var_partitioning) + + # expected model stats worked out by hand + expected_log_str = textwrap.dedent( + f""" + Model Statistics: + Number of variables : 16 + Epigraph variable : 1 + First-stage variables : 2 + Second-stage variables : 5 (2 adj.) + State variables : 2 (1 adj.) + Decision rule variables : 6 + Number of uncertain parameters : 1 + Number of constraints : 24 + Equality constraints : 8 + Coefficient matching constraints : 3 + Other first-stage equations : 2 + Second-stage equations : 1 + Decision rule equations : 2 + Inequality constraints : 16 + First-stage inequalities : {3 if obj_focus == 'nominal' else 2} + Second-stage inequalities : {13 if obj_focus == 'nominal' else 14} + """ + ) + + with LoggingIntercept(level=logging.INFO) as LOG: + log_model_statistics(model_data) + log_str = LOG.getvalue() + + log_lines = log_str.splitlines()[1:] + expected_log_lines = expected_log_str.splitlines()[1:] + + self.assertEqual(len(log_lines), len(expected_log_lines)) + for line, expected_line in zip(log_lines, expected_log_lines): + self.assertEqual(line, expected_line) + + +if __name__ == "__main__": + unittest.main() diff --git a/pyomo/contrib/pyros/tests/test_separation.py b/pyomo/contrib/pyros/tests/test_separation.py new file mode 100644 index 00000000000..576a98c39fb --- /dev/null +++ b/pyomo/contrib/pyros/tests/test_separation.py @@ -0,0 +1,305 @@ +# ___________________________________________________________________________ +# +# Pyomo: Python Optimization Modeling Objects +# Copyright (c) 2008-2024 +# National Technology and Engineering Solutions of Sandia, LLC +# Under the terms of Contract DE-NA0003525 with National Technology and +# Engineering Solutions of Sandia, LLC, the U.S. Government retains certain +# rights in this software. +# This software is distributed under the 3-clause BSD License. +# ___________________________________________________________________________ + +""" +Test separation problem construction methods. +""" + + +import logging +import pyomo.common.unittest as unittest + +from pyomo.common.collections import Bunch +from pyomo.common.dependencies import numpy as np, numpy_available, scipy_available +from pyomo.core.base import ConcreteModel, Constraint, Objective, Param, Var +from pyomo.core.expr import exp, RangedExpression +from pyomo.core.expr.compare import assertExpressionsEqual + +from pyomo.contrib.pyros.separation_problem_methods import ( + construct_separation_problem, + group_ss_ineq_constraints_by_priority, +) +from pyomo.contrib.pyros.uncertainty_sets import BoxSet, FactorModelSet +from pyomo.contrib.pyros.util import ( + ModelData, + preprocess_model_data, + ObjectiveType, + VariablePartitioning, +) + + +if not (numpy_available and scipy_available): + raise unittest.SkipTest("Packages numpy and scipy must both be available.") + + +logger = logging.getLogger(__name__) + + +def build_simple_model_data(objective_focus="worst_case"): + """ + Build simple model data object for master problem construction. 
+ """ + m = ConcreteModel() + m.u = Param(initialize=0.5, mutable=True) + m.u2 = Param(initialize=0, mutable=True) + m.x1 = Var(bounds=[-1000, 1000]) + m.x2 = Var(bounds=[-1000, 1000]) + m.x3 = Var(bounds=[-1000, 1000]) + m.con = Constraint(expr=exp(m.u - 1) - m.x1 - m.x2 * m.u - m.x3 * m.u**2 <= 0) + + # this makes x2 nonadjustable + m.eq_con = Constraint(expr=m.x2 - 1 == 0) + + m.obj = Objective(expr=m.x1 + m.x2 / 2 + m.x3 / 3 + m.u + m.u2) + + config = Bunch( + uncertain_params=[m.u, m.u2], + objective_focus=ObjectiveType[objective_focus], + decision_rule_order=1, + progress_logger=logger, + nominal_uncertain_param_vals=[0.5, 0], + uncertainty_set=BoxSet([[0, 1], [0, 0]]), + separation_priority_order=dict(con=2), + ) + model_data = ModelData(original_model=m, timing=None, config=config) + user_var_partitioning = VariablePartitioning( + first_stage_variables=[m.x1], + second_stage_variables=[m.x2, m.x3], + state_variables=[], + ) + + preprocess_model_data(model_data, user_var_partitioning) + + return model_data + + +class TestConstructSeparationProblem(unittest.TestCase): + """ + Test method for construction of separation problem. + """ + + def test_construct_separation_problem_nonadj_components(self): + """ + Check first-stage variables and constraints of the + separation problem are fixed and deactivated, + respectively. + """ + model_data = build_simple_model_data(objective_focus="worst_case") + separation_model = construct_separation_problem(model_data) + + # check nonadjustable components fixed/deactivated + self.assertTrue(separation_model.user_model.x1.fixed) + self.assertTrue(separation_model.first_stage.epigraph_var.fixed) + for indexed_var in separation_model.first_stage.decision_rule_vars: + for dr_var in indexed_var.values(): + self.assertTrue(dr_var.fixed, msg=f"DR var {dr_var.name!r} not fixed") + + # first-stage equality constraints should be inactive + self.assertFalse(separation_model.user_model.eq_con.active) + for coeff_con in separation_model.first_stage.coefficient_matching_cons: + self.assertFalse( + coeff_con.active, + msg=f"Coefficient matching constraint {coeff_con.name!r} active.", + ) + + def test_construct_separation_problem_ss_ineq_cons(self): + """ + Check second-stage inequality constraints are deactivated + and replaced with objectives, as appropriate. + """ + model_data = build_simple_model_data(objective_focus="worst_case") + separation_model = construct_separation_problem(model_data) + + # check expression of second-stage ineq cons correct + # check these individually + # (i.e. 
uncertain params have been replaced)
+        m = separation_model.user_model
+        u1_var = separation_model.uncertainty.uncertain_param_var_list[0]
+        u2_var = separation_model.uncertainty.uncertain_param_var_list[1]
+        assertExpressionsEqual(
+            self,
+            separation_model.second_stage.inequality_cons["epigraph_con"].expr,
+            (
+                m.x1
+                + m.x2 / 2
+                + m.x3 / 3
+                + u1_var
+                + u2_var
+                - separation_model.first_stage.epigraph_var
+                <= 0
+            ),
+        )
+
+        self.assertFalse(
+            separation_model.second_stage.inequality_cons["epigraph_con"].active
+        )
+        # the user model constraint and each second-stage inequality
+        # should be deactivated
+        self.assertFalse(m.con.active)
+        self.assertFalse(
+            separation_model.second_stage.inequality_cons[
+                "ineq_con_con_upper_bound_con"
+            ].active
+        )
+        self.assertFalse(
+            separation_model.second_stage.inequality_cons[
+                "var_x3_certain_lower_bound_con"
+            ].active
+        )
+        self.assertFalse(
+            separation_model.second_stage.inequality_cons[
+                "var_x3_certain_upper_bound_con"
+            ].active
+        )
+
+        # check second-stage ineq con expressions match obj expressions
+        # (loop through the con to obj map)
+        self.assertEqual(
+            len(separation_model.second_stage_ineq_con_to_obj_map),
+            len(separation_model.second_stage.inequality_cons),
+        )
+        for ineq_con, obj in separation_model.second_stage_ineq_con_to_obj_map.items():
+            assertExpressionsEqual(self, ineq_con.body - ineq_con.upper, obj.expr)
+
+    def test_construct_separation_problem_ss_eq_and_dr_cons(self):
+        """
+        Check second-stage and DR equations are appropriately handled
+        by the separation problems.
+        """
+        # check DR equation is active
+        model_data = build_simple_model_data(objective_focus="worst_case")
+        separation_model = construct_separation_problem(model_data)
+
+        self.assertTrue(separation_model.second_stage.decision_rule_eqns[0].active)
+
+        u1_var = separation_model.uncertainty.uncertain_param_var_list[0]
+        u2_var = separation_model.uncertainty.uncertain_param_var_list[1]
+        assertExpressionsEqual(
+            self,
+            separation_model.second_stage.decision_rule_eqns[0].expr,
+            (
+                separation_model.first_stage.decision_rule_vars[0][0]
+                + u1_var * separation_model.first_stage.decision_rule_vars[0][1]
+                + u2_var * separation_model.first_stage.decision_rule_vars[0][2]
+                - separation_model.user_model.x3
+                == 0
+            ),
+        )
+
+    def test_construct_separation_problem_uncertainty_components(self):
+        """
+        Test separation problem handles uncertain parameter variable
+        components as expected.
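+
+        In particular, an uncertain parameter variable whose bounds
+        coincide (here, the second parameter, with bounds [0, 0])
+        is expected to be fixed by the separation problem
+        constructor.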
+ """ + model_data = build_simple_model_data(objective_focus="worst_case") + separation_model = construct_separation_problem(model_data) + uncertainty_blk = separation_model.uncertainty + boxcon1, boxcon2 = uncertainty_blk.uncertainty_cons_list + paramvar1, paramvar2 = uncertainty_blk.uncertain_param_var_list + + self.assertEqual(uncertainty_blk.auxiliary_var_list, []) + self.assertEqual(len(uncertainty_blk.uncertainty_cons_list), 2) + assertExpressionsEqual( + self, + boxcon1.expr, + RangedExpression((np.int_(0), paramvar1, np.int_(1)), False), + ) + assertExpressionsEqual( + self, + boxcon2.expr, + RangedExpression((np.int_(0), paramvar2, np.int_(0)), False), + ) + self.assertTrue(boxcon1.active) + self.assertTrue(boxcon2.active) + + # u, bounds [0, 1] + self.assertFalse(paramvar1.fixed) + # bounds [0, 0]; separation constructor should fix the Var + self.assertTrue(paramvar2.fixed) + + self.assertEqual(paramvar1.bounds, (0, 1)) + self.assertEqual(paramvar2.bounds, (0, 0)) + + def test_construct_separation_problem_uncertain_factor_param_components(self): + """ + Test separation problem uncertainty components for uncertainty + set requiring auxiliary variables. + """ + model_data = build_simple_model_data(objective_focus="worst_case") + model_data.config.uncertainty_set = FactorModelSet( + origin=[1, 0], beta=1, number_of_factors=2, psi_mat=[[1, 2.5], [0, 1]] + ) + separation_model = construct_separation_problem(model_data) + uncertainty_blk = separation_model.uncertainty + *matrix_product_cons, aux_sum_con = uncertainty_blk.uncertainty_cons_list + paramvar1, paramvar2 = uncertainty_blk.uncertain_param_var_list + auxvar1, auxvar2 = uncertainty_blk.auxiliary_var_list + + self.assertEqual(len(matrix_product_cons), 2) + self.assertTrue(matrix_product_cons[0].active) + self.assertTrue(matrix_product_cons[1].active) + self.assertTrue(aux_sum_con.active) + assertExpressionsEqual( + self, aux_sum_con.expr, RangedExpression((-2, auxvar1 + auxvar2, 2), False) + ) + assertExpressionsEqual( + self, matrix_product_cons[0].expr, auxvar1 + 2.5 * auxvar2 + 1 == paramvar1 + ) + assertExpressionsEqual( + self, matrix_product_cons[1].expr, 0.0 * auxvar1 + auxvar2 == paramvar2 + ) + + # none of the vars should be fixed + self.assertFalse(paramvar1.fixed) + self.assertFalse(paramvar2.fixed) + self.assertFalse(auxvar1.fixed) + self.assertFalse(auxvar2.fixed) + + # factor set auxiliary variables + self.assertEqual(auxvar1.bounds, (-1, 1)) + self.assertEqual(auxvar2.bounds, (-1, 1)) + + # factor set bounds are tighter + self.assertEqual(paramvar1.bounds, (-2.5, 4.5)) + self.assertEqual(paramvar2.bounds, (-1.0, 1.0)) + + +class TestGroupSecondStageIneqConsByPriority(unittest.TestCase): + def test_group_ss_ineq_constraints_by_priority(self): + model_data = build_simple_model_data() + separation_model = construct_separation_problem(model_data) + + # build mock separation data-like object + # since we are testing only the grouping method + separation_data = Bunch( + separation_model=separation_model, + separation_priority_order=model_data.separation_priority_order, + ) + + priority_groups = group_ss_ineq_constraints_by_priority(separation_data) + + self.assertEqual(list(priority_groups.keys()), [2, 0]) + ss_ineq_cons = separation_model.second_stage.inequality_cons + self.assertEqual( + priority_groups[2], [ss_ineq_cons["ineq_con_con_upper_bound_con"]] + ) + self.assertEqual( + priority_groups[0], + [ + ss_ineq_cons["var_x3_certain_lower_bound_con"], + ss_ineq_cons["var_x3_certain_upper_bound_con"], + 
ss_ineq_cons["epigraph_con"], + ], + ) + + +if __name__ == "__main__": + unittest.main() diff --git a/pyomo/contrib/pyros/tests/test_uncertainty_sets.py b/pyomo/contrib/pyros/tests/test_uncertainty_sets.py new file mode 100644 index 00000000000..e6ad46ce740 --- /dev/null +++ b/pyomo/contrib/pyros/tests/test_uncertainty_sets.py @@ -0,0 +1,2415 @@ +# ___________________________________________________________________________ +# +# Pyomo: Python Optimization Modeling Objects +# Copyright (c) 2008-2024 +# National Technology and Engineering Solutions of Sandia, LLC +# Under the terms of Contract DE-NA0003525 with National Technology and +# Engineering Solutions of Sandia, LLC, the U.S. Government retains certain +# rights in this software. +# This software is distributed under the 3-clause BSD License. +# ___________________________________________________________________________ + +""" +Tests for the PyROS UncertaintySet class and subclasses. +""" + +import itertools as it +import pyomo.common.unittest as unittest + +from pyomo.common.dependencies import ( + attempt_import, + numpy as np, + numpy_available, + scipy_available, +) +from pyomo.environ import SolverFactory +from pyomo.core.base import ConcreteModel, Param, Var +from pyomo.core.expr import RangedExpression +from pyomo.core.expr.compare import assertExpressionsEqual + +from pyomo.contrib.pyros.uncertainty_sets import ( + AxisAlignedEllipsoidalSet, + BoxSet, + BudgetSet, + CardinalitySet, + DiscreteScenarioSet, + EllipsoidalSet, + FactorModelSet, + IntersectionSet, + PolyhedralSet, + UncertaintySet, + UncertaintyQuantification, + Geometry, + _setup_standard_uncertainty_set_constraint_block, +) + +import logging + +logger = logging.getLogger(__name__) + +parameterized, param_available = attempt_import('parameterized') + +if not (numpy_available and scipy_available and param_available): + raise unittest.SkipTest( + 'PyROS preprocessor unit tests require parameterized, numpy, and scipy' + ) +parameterized = parameterized.parameterized + +# === Config args for testing +global_solver = 'baron' +global_solver_args = dict() + +_baron = SolverFactory('baron') +baron_available = _baron.available(exception_flag=False) +if baron_available: + baron_license_is_valid = _baron.license_is_valid() + baron_version = _baron.version() +else: + baron_license_is_valid = False + baron_version = (0, 0, 0) + + +class TestBoxSet(unittest.TestCase): + """ + Tests for the BoxSet. + """ + + def test_normal_construction_and_update(self): + """ + Test BoxSet constructor and setter work normally + when bounds are appropriate. + """ + bounds = [[1, 2], [3, 4]] + bset = BoxSet(bounds=bounds) + np.testing.assert_allclose( + bounds, bset.bounds, err_msg="BoxSet bounds not as expected" + ) + + # check bounds update + new_bounds = [[3, 4], [5, 6]] + bset.bounds = new_bounds + np.testing.assert_allclose( + new_bounds, bset.bounds, err_msg="BoxSet bounds not as expected" + ) + + def test_error_on_box_set_dim_change(self): + """ + BoxSet dimension is considered immutable. + Test ValueError raised when attempting to alter the + box set dimension (i.e. number of rows of `bounds`). + """ + bounds = [[1, 2], [3, 4]] + bset = BoxSet(bounds=bounds) # 2-dimensional set + + exc_str = r"Attempting to set.*dimension 2 to a value of dimension 3" + with self.assertRaisesRegex(ValueError, exc_str): + bset.bounds = [[1, 2], [3, 4], [5, 6]] + + def test_error_on_lb_exceeds_ub(self): + """ + Test exception raised when an LB exceeds a UB. 
+ """ + bad_bounds = [[1, 2], [4, 3]] + + exc_str = r"Lower bound 4 exceeds upper bound 3" + + # assert error on construction + with self.assertRaisesRegex(ValueError, exc_str): + BoxSet(bad_bounds) + + # construct a valid box set + bset = BoxSet([[1, 2], [3, 4]]) + + # assert error on update + with self.assertRaisesRegex(ValueError, exc_str): + bset.bounds = bad_bounds + + def test_error_on_ragged_bounds_array(self): + """ + Test ValueError raised on attempting to set BoxSet bounds + to a ragged array. + + This test also validates `uncertainty_sets.is_ragged` for all + pre-defined array-like attributes of all set-types, as the + `is_ragged` method is used throughout. + """ + # example ragged arrays + ragged_arrays = ( + [[1, 2], 3], # list and int in same sequence + [[1, 2], [3, [4, 5]]], # 2nd row ragged (list and int) + [[1, 2], [3]], # variable row lengths + ) + + # construct valid box set + bset = BoxSet(bounds=[[1, 2], [3, 4]]) + + # exception message should match this regex + exc_str = r"Argument `bounds` should not be a ragged array-like.*" + for ragged_arr in ragged_arrays: + # assert error on construction + with self.assertRaisesRegex(ValueError, exc_str): + BoxSet(bounds=ragged_arr) + + # assert error on update + with self.assertRaisesRegex(ValueError, exc_str): + bset.bounds = ragged_arr + + def test_error_on_invalid_bounds_shape(self): + """ + Test ValueError raised when attempting to set + Box set bounds to array of incorrect shape + (should be a 2-D array with 2 columns). + """ + # 3d array + three_d_arr = [[[1, 2], [3, 4], [5, 6]]] + exc_str = ( + r"Argument `bounds` must be a 2-dimensional.*" + r"\(detected 3 dimensions.*\)" + ) + + # assert error on construction + with self.assertRaisesRegex(ValueError, exc_str): + BoxSet(three_d_arr) + + # construct valid box set + bset = BoxSet([[1, 2], [3, 4], [5, 6]]) + + # assert error on update + with self.assertRaisesRegex(ValueError, exc_str): + bset.bounds = three_d_arr + + def test_error_on_wrong_number_columns(self): + """ + BoxSet bounds should be a 2D array-like with 2 columns. + ValueError raised if number columns wrong + """ + three_col_arr = [[1, 2, 3], [4, 5, 6]] + exc_str = ( + r"Attribute 'bounds' should be of shape \(\.{3},2\), " + r"but detected shape \(\.{3},3\)" + ) + + # assert error on construction + with self.assertRaisesRegex(ValueError, exc_str): + BoxSet(three_col_arr) + + # construct a valid box set + bset = BoxSet([[1, 2], [3, 4]]) + + # assert error on update + with self.assertRaisesRegex(ValueError, exc_str): + bset.bounds = three_col_arr + + def test_error_on_empty_last_dimension(self): + """ + Check ValueError raised when last dimension of BoxSet bounds is + empty. + """ + empty_2d_arr = [[], [], []] + exc_str = ( + r"Last dimension of argument `bounds` must be non-empty " + r"\(detected shape \(3, 0\)\)" + ) + + # assert error on construction + with self.assertRaisesRegex(ValueError, exc_str): + BoxSet(bounds=empty_2d_arr) + + # create a valid box set + bset = BoxSet([[1, 2], [3, 4]]) + + # assert error on update + with self.assertRaisesRegex(ValueError, exc_str): + bset.bounds = empty_2d_arr + + def test_error_on_non_numeric_bounds(self): + """ + Test that ValueError is raised if box set bounds + are set to array-like with entries of a non-numeric + type (such as int, float). 
+ """ + # invalid bounds (contains an entry type str) + new_bounds = [[1, "test"], [3, 2]] + + exc_str = ( + r"Entry 'test' of the argument `bounds` " + r"is not a valid numeric type \(provided type 'str'\)" + ) + + # assert error on construction + with self.assertRaisesRegex(TypeError, exc_str): + BoxSet(new_bounds) + + # construct a valid box set + bset = BoxSet(bounds=[[1, 2], [3, 4]]) + + # assert error on update + with self.assertRaisesRegex(TypeError, exc_str): + bset.bounds = new_bounds + + def test_error_on_bounds_with_nan_or_inf(self): + """ + Box set bounds set to array-like with inf or nan. + """ + # construct a valid box set + bset = BoxSet(bounds=[[1, 2], [3, 4]]) + + for val_str in ["inf", "nan"]: + bad_bounds = [[1, float(val_str)], [2, 3]] + exc_str = ( + fr"Entry '{val_str}' of the argument `bounds` " + fr"is not a finite numeric value" + ) + # assert error on construction + with self.assertRaisesRegex(ValueError, exc_str): + BoxSet(bad_bounds) + + # assert error on update + with self.assertRaisesRegex(ValueError, exc_str): + bset.bounds = bad_bounds + + def test_set_as_constraint(self): + """ + Test method for setting up constraints works correctly. + """ + m = ConcreteModel() + box_set = BoxSet(bounds=[[1, 2], [3, 4]]) + uq = box_set.set_as_constraint(uncertain_params=None, block=m) + + self.assertEqual(uq.auxiliary_vars, []) + self.assertIs(uq.block, m) + con1, con2 = uq.uncertainty_cons + var1, var2 = uq.uncertain_param_vars + + assertExpressionsEqual( + self, con1.expr, RangedExpression((np.int_(1), var1, np.int_(2)), False) + ) + assertExpressionsEqual( + self, con2.expr, RangedExpression((np.int_(3), var2, np.int_(4)), False) + ) + + def test_set_as_constraint_dim_mismatch(self): + """ + Check exception raised if number of uncertain param vars + does not match the dimension. + """ + m = ConcreteModel() + m.v1 = Var(initialize=0) + box_set = BoxSet(bounds=[[1, 2], [3, 4]]) + with self.assertRaisesRegex(ValueError, ".*dimension"): + box_set.set_as_constraint(uncertain_params=[m.v1], block=m) + + def test_set_as_constraint_type_mismatch(self): + """ + Check exception raised if uncertain parameter variables + are of invalid type. + """ + m = ConcreteModel() + m.p1 = Param([0, 1], initialize=0, mutable=True) + box_set = BoxSet(bounds=[[1, 2], [3, 4]]) + with self.assertRaisesRegex(TypeError, ".*valid component type"): + box_set.set_as_constraint(uncertain_params=[m.p1[0], m.p1[1]], block=m) + + with self.assertRaisesRegex(TypeError, ".*valid component type"): + box_set.set_as_constraint(uncertain_params=m.p1, block=m) + + @unittest.skipUnless(baron_available, "BARON is not available.") + def test_compute_parameter_bounds(self): + """ + Test parameter bounds computations give expected results. + """ + box_set = BoxSet([[1, 2], [3, 4]]) + computed_bounds = box_set._compute_parameter_bounds(SolverFactory("baron")) + np.testing.assert_allclose(computed_bounds, [[1, 2], [3, 4]]) + np.testing.assert_allclose(computed_bounds, box_set.parameter_bounds) + + def test_point_in_set(self): + """ + Test point in set check works as expected. 
+ """ + box_set = BoxSet(bounds=[[1, 2], [3, 4]]) + + in_set_points = [(1, 3), (1, 4), (2, 3), (2, 4), (1.5, 3.5)] + out_of_set_points = [(0, 0), (0, 3), (0, 4), (1, 2), (3, 4)] + for point in in_set_points: + self.assertTrue( + box_set.point_in_set(point), + msg=f"Point {point} should not be in uncertainty set {box_set}.", + ) + for point in out_of_set_points: + self.assertFalse( + box_set.point_in_set(point), + msg=f"Point {point} should not be in uncertainty set {box_set}.", + ) + + # check what happens if dimensions are off + with self.assertRaisesRegex(ValueError, ".*to match the set dimension.*"): + box_set.point_in_set([1, 2, 3]) + + def test_add_bounds_on_uncertain_parameters(self): + m = ConcreteModel() + m.uncertain_param_vars = Var([0, 1], initialize=0) + box_set = BoxSet(bounds=[(1, 2), (3, 4)]) + + box_set._add_bounds_on_uncertain_parameters( + global_solver=None, uncertain_param_vars=m.uncertain_param_vars + ) + self.assertEqual(m.uncertain_param_vars[0].bounds, (1, 2)) + self.assertEqual(m.uncertain_param_vars[1].bounds, (3, 4)) + + +class TestBudgetSet(unittest.TestCase): + """ + Tests for the BudgetSet. + """ + + def test_normal_budget_construction_and_update(self): + """ + Test BudgetSet constructor and attribute setters work + appropriately. + """ + budget_mat = [[1, 0, 1], [0, 1, 0]] + budget_rhs_vec = [1, 3] + + # check attributes are as expected + buset = BudgetSet(budget_mat, budget_rhs_vec) + + np.testing.assert_allclose(budget_mat, buset.budget_membership_mat) + np.testing.assert_allclose(budget_rhs_vec, buset.budget_rhs_vec) + np.testing.assert_allclose( + [[1, 0, 1], [0, 1, 0], [-1, 0, 0], [0, -1, 0], [0, 0, -1]], + buset.coefficients_mat, + ) + np.testing.assert_allclose([1, 3, 0, 0, 0], buset.rhs_vec) + np.testing.assert_allclose(np.zeros(3), buset.origin) + + # update the set + buset.budget_membership_mat = [[1, 1, 0], [0, 0, 1]] + buset.budget_rhs_vec = [3, 4] + + # check updates work + np.testing.assert_allclose([[1, 1, 0], [0, 0, 1]], buset.budget_membership_mat) + np.testing.assert_allclose([3, 4], buset.budget_rhs_vec) + np.testing.assert_allclose( + [[1, 1, 0], [0, 0, 1], [-1, 0, 0], [0, -1, 0], [0, 0, -1]], + buset.coefficients_mat, + ) + np.testing.assert_allclose([3, 4, 0, 0, 0], buset.rhs_vec) + + # update origin + buset.origin = [1, 0, -1.5] + np.testing.assert_allclose([1, 0, -1.5], buset.origin) + + def test_error_on_budget_set_dim_change(self): + """ + BudgetSet dimension is considered immutable. + Test ValueError raised when attempting to alter the + budget set dimension. + """ + budget_mat = [[1, 0, 1], [0, 1, 0]] + budget_rhs_vec = [1, 3] + bu_set = BudgetSet(budget_mat, budget_rhs_vec) + + # error on budget incidence matrix update + exc_str = ( + r".*must have 3 columns to match set dimension \(provided.*1 columns\)" + ) + with self.assertRaisesRegex(ValueError, exc_str): + bu_set.budget_membership_mat = [[1], [1]] + + # error on origin update + exc_str = ( + r".*must have 3 entries to match set dimension \(provided.*4 entries\)" + ) + with self.assertRaisesRegex(ValueError, exc_str): + bu_set.origin = [1, 2, 1, 0] + + def test_error_on_budget_member_mat_row_change(self): + """ + Number of rows of budget membership mat is immutable. + Hence, size of budget_rhs_vec is also immutable. 
+ """ + budget_mat = [[1, 0, 1], [0, 1, 0]] + budget_rhs_vec = [1, 3] + bu_set = BudgetSet(budget_mat, budget_rhs_vec) + + exc_str = ( + r".*must have 2 rows to match shape of attribute 'budget_rhs_vec' " + r"\(provided.*1 rows\)" + ) + with self.assertRaisesRegex(ValueError, exc_str): + bu_set.budget_membership_mat = [[1, 0, 1]] + + exc_str = ( + r".*must have 2 entries to match shape of attribute " + r"'budget_membership_mat' \(provided.*1 entries\)" + ) + with self.assertRaisesRegex(ValueError, exc_str): + bu_set.budget_rhs_vec = [1] + + def test_error_on_neg_budget_rhs_vec_entry(self): + """ + Test ValueError raised if budget RHS vec has entry + with negative value entry. + """ + budget_mat = [[1, 0, 1], [1, 1, 0]] + neg_val_rhs_vec = [1, -1] + + exc_str = r"Entry -1 of.*'budget_rhs_vec' is negative*" + + # assert error on construction + with self.assertRaisesRegex(ValueError, exc_str): + BudgetSet(budget_mat, neg_val_rhs_vec) + + # construct a valid budget set + buset = BudgetSet(budget_mat, [1, 1]) + + # assert error on update + with self.assertRaisesRegex(ValueError, exc_str): + buset.budget_rhs_vec = neg_val_rhs_vec + + def test_error_on_non_bool_budget_mat_entry(self): + """ + Test ValueError raised if budget membership mat has + entry which is not a 0-1 value. + """ + invalid_budget_mat = [[1, 0, 1], [1, 1, 0.1]] + budget_rhs_vec = [1, 1] + + exc_str = r"Attempting.*entries.*not 0-1 values \(example: 0.1\).*" + + # assert error on construction + with self.assertRaisesRegex(ValueError, exc_str): + BudgetSet(invalid_budget_mat, budget_rhs_vec) + + # construct a valid budget set + buset = BudgetSet([[1, 0, 1], [1, 1, 0]], budget_rhs_vec) + + # assert error on update + with self.assertRaisesRegex(ValueError, exc_str): + buset.budget_membership_mat = invalid_budget_mat + + def test_error_on_budget_mat_all_zero_rows(self): + """ + Test ValueError raised if budget membership mat + has a row with all zeros. + """ + invalid_row_mat = [[0, 0, 0], [1, 1, 1], [0, 0, 0]] + budget_rhs_vec = [1, 1, 2] + + exc_str = r".*all entries zero in rows at indexes: 0, 2.*" + + # assert error on construction + with self.assertRaisesRegex(ValueError, exc_str): + BudgetSet(invalid_row_mat, budget_rhs_vec) + + # construct a valid budget set + buset = BudgetSet([[1, 0, 1], [1, 1, 0], [1, 1, 1]], budget_rhs_vec) + + # assert error on update + with self.assertRaisesRegex(ValueError, exc_str): + buset.budget_membership_mat = invalid_row_mat + + def test_error_on_budget_mat_all_zero_columns(self): + """ + Test ValueError raised if budget membership mat + has a column with all zeros. + """ + invalid_col_mat = [[0, 0, 1], [0, 0, 1], [0, 0, 1]] + budget_rhs_vec = [1, 1, 2] + + exc_str = r".*all entries zero in columns at indexes: 0, 1.*" + + # assert error on construction + with self.assertRaisesRegex(ValueError, exc_str): + BudgetSet(invalid_col_mat, budget_rhs_vec) + + # construct a valid budget set + buset = BudgetSet([[1, 0, 1], [1, 1, 0], [1, 1, 1]], budget_rhs_vec) + + # assert error on update + with self.assertRaisesRegex(ValueError, exc_str): + buset.budget_membership_mat = invalid_col_mat + + @unittest.skipUnless(baron_available, "BARON is not available") + def test_compute_parameter_bounds(self): + """ + Test parameter bounds computations give expected results. 
+ """ + solver = SolverFactory("baron") + + buset1 = BudgetSet([[1, 1], [0, 1]], rhs_vec=[2, 3], origin=None) + np.testing.assert_allclose( + buset1.parameter_bounds, buset1._compute_parameter_bounds(solver) + ) + + # this also checks that the list entries are tuples + self.assertEqual(buset1.parameter_bounds, [(0, 2), (0, 2)]) + + buset2 = BudgetSet([[1, 0], [1, 1]], rhs_vec=[3, 2], origin=[1, 2]) + self.assertEqual( + buset2.parameter_bounds, buset2._compute_parameter_bounds(solver) + ) + np.testing.assert_allclose( + buset2.parameter_bounds, buset2._compute_parameter_bounds(solver) + ) + self.assertEqual(buset2.parameter_bounds, [(1, 3), (2, 4)]) + + def test_set_as_constraint(self): + """ + Test method for setting up constraints works correctly. + """ + m = ConcreteModel() + m.v1 = Var(initialize=0) + m.v2 = Var(initialize=0) + buset = BudgetSet([[1, 0], [1, 1]], rhs_vec=[3, 2], origin=[1, 3]) + + uq = buset.set_as_constraint(uncertain_params=[m.v1, m.v2], block=m) + self.assertEqual(uq.auxiliary_vars, []) + self.assertIs(uq.block, m) + self.assertEqual(len(uq.uncertain_param_vars), 2) + self.assertIs(uq.uncertain_param_vars[0], m.v1) + self.assertIs(uq.uncertain_param_vars[1], m.v2) + self.assertEqual(len(uq.uncertainty_cons), 4) + + assertExpressionsEqual( + self, uq.uncertainty_cons[0].expr, m.v1 + np.float64(0) * m.v2 <= np.int_(4) + ) + assertExpressionsEqual( + self, uq.uncertainty_cons[1].expr, m.v1 + m.v2 <= np.int_(6) + ) + assertExpressionsEqual( + self, + uq.uncertainty_cons[2].expr, + -np.float64(1.0) * m.v1 - np.float64(0) * m.v2 <= np.int_(-1), + ) + assertExpressionsEqual( + self, + uq.uncertainty_cons[3].expr, + -np.float64(0) * m.v1 + np.float64(-1.0) * m.v2 <= np.int_(-3), + ) + + def test_set_as_constraint_dim_mismatch(self): + """ + Check exception raised if number of uncertain parameters + does not match the dimension. + """ + m = ConcreteModel() + m.v1 = Var(initialize=0) + buset = BudgetSet([[1, 0], [1, 1]], rhs_vec=[3, 2], origin=[1, 3]) + with self.assertRaisesRegex(ValueError, ".*dimension"): + buset.set_as_constraint(uncertain_params=[m.v1], block=m) + + def test_set_as_constraint_type_mismatch(self): + """ + Check exception raised if uncertain parameter variables + are of invalid type. + """ + m = ConcreteModel() + m.p1 = Param([0, 1], initialize=0, mutable=True) + buset = BudgetSet([[1, 0], [1, 1]], rhs_vec=[3, 2], origin=[1, 3]) + with self.assertRaisesRegex(TypeError, ".*valid component type"): + buset.set_as_constraint(uncertain_params=[m.p1[0], m.p1[1]], block=m) + + with self.assertRaisesRegex(TypeError, ".*valid component type"): + buset.set_as_constraint(uncertain_params=m.p1, block=m) + + def test_point_in_set(self): + """ + Test point in set checks work as expected. + """ + buset = BudgetSet([[1, 0], [1, 1]], rhs_vec=[3, 2], origin=[1, 3]) + self.assertTrue(buset.point_in_set([1, 3])) + self.assertTrue(buset.point_in_set([3, 3])) + self.assertTrue(buset.point_in_set([2, 4])) + self.assertFalse(buset.point_in_set([0, 0])) + self.assertFalse(buset.point_in_set([0, 3])) + self.assertFalse(buset.point_in_set([4, 2])) + + # check what happens if dimensions are off + with self.assertRaisesRegex(ValueError, ".*to match the set dimension.*"): + buset.point_in_set([1, 2, 3, 4]) + + def test_add_bounds_on_uncertain_parameters(self): + """ + Test method for adding bounds on uncertain params + works as expected. 
+ """ + m = ConcreteModel() + m.v = Var([0, 1], initialize=0.5) + buset = BudgetSet([[1, 0], [1, 1]], rhs_vec=[3, 2], origin=[1, 3]) + buset._add_bounds_on_uncertain_parameters( + global_solver=None, uncertain_param_vars=m.v + ) + self.assertEqual(m.v[0].bounds, (1, 3)) + self.assertEqual(m.v[1].bounds, (3, 5)) + + +class TestFactorModelSet(unittest.TestCase): + """ + Tests for the FactorModelSet. + """ + + def test_normal_factor_model_construction_and_update(self): + """ + Test FactorModelSet constructor and setter work normally + when attribute values are appropriate. + """ + # valid inputs + fset = FactorModelSet( + origin=[0, 0, 1], + number_of_factors=2, + psi_mat=[[1, 2], [0, 1], [1, 0]], + beta=0.1, + ) + + # check attributes are as expected + np.testing.assert_allclose(fset.origin, [0, 0, 1]) + np.testing.assert_allclose(fset.psi_mat, [[1, 2], [0, 1], [1, 0]]) + np.testing.assert_allclose(fset.number_of_factors, 2) + np.testing.assert_allclose(fset.beta, 0.1) + self.assertEqual(fset.dim, 3) + + # update the set + fset.origin = [1, 1, 0] + fset.psi_mat = [[1, 0], [0, 1], [1, 1]] + fset.beta = 0.5 + + # check updates work + np.testing.assert_allclose(fset.origin, [1, 1, 0]) + np.testing.assert_allclose(fset.psi_mat, [[1, 0], [0, 1], [1, 1]]) + np.testing.assert_allclose(fset.beta, 0.5) + + def test_error_on_factor_model_set_dim_change(self): + """ + Test ValueError raised when attempting to change FactorModelSet + dimension (by changing number of entries in origin + or number of rows of psi_mat). + """ + origin = [0, 0, 0] + number_of_factors = 2 + psi_mat = [[1, 0], [0, 1], [1, 1]] + beta = 0.5 + + # construct factor model set + fset = FactorModelSet(origin, number_of_factors, psi_mat, beta) + + # assert error on psi mat update + exc_str = ( + r"should be of shape \(3, 2\) to match.*dimensions " + r"\(provided shape \(2, 2\)\)" + ) + with self.assertRaisesRegex(ValueError, exc_str): + fset.psi_mat = [[1, 0], [1, 2]] + + # assert error on origin update + exc_str = r"Attempting.*factor model set of dimension 3 to value of dimension 2" + with self.assertRaisesRegex(ValueError, exc_str): + fset.origin = [1, 3] + + def test_error_on_invalid_number_of_factors(self): + """ + Test ValueError raised if number of factors + is negative int, or AttributeError + if attempting to update (should be immutable). 
+ """ + exc_str = r".*'number_of_factors' must be a positive int \(provided value -1\)" + with self.assertRaisesRegex(ValueError, exc_str): + FactorModelSet( + origin=[0], number_of_factors=-1, psi_mat=[[1, 2], [1, 1]], beta=0.1 + ) + + fset = FactorModelSet( + origin=[0, 1], number_of_factors=2, psi_mat=[[1, 2], [1, 1]], beta=0.1 + ) + + exc_str = r".*'number_of_factors' is immutable" + with self.assertRaisesRegex(AttributeError, exc_str): + fset.number_of_factors = 3 + + def test_error_on_invalid_beta(self): + """ + Test ValueError raised if beta is invalid (exceeds 1 or + is negative) + """ + origin = [0, 0, 0] + number_of_factors = 2 + psi_mat = [[1, 0], [0, 1], [1, 1]] + neg_beta = -0.5 + big_beta = 1.5 + + # assert error on construction + neg_exc_str = ( + r".*must be a real number between 0 and 1.*\(provided value -0.5\)" + ) + big_exc_str = r".*must be a real number between 0 and 1.*\(provided value 1.5\)" + with self.assertRaisesRegex(ValueError, neg_exc_str): + FactorModelSet(origin, number_of_factors, psi_mat, neg_beta) + with self.assertRaisesRegex(ValueError, big_exc_str): + FactorModelSet(origin, number_of_factors, psi_mat, big_beta) + + # create a valid factor model set + fset = FactorModelSet(origin, number_of_factors, psi_mat, 1) + + # assert error on update + with self.assertRaisesRegex(ValueError, neg_exc_str): + fset.beta = neg_beta + with self.assertRaisesRegex(ValueError, big_exc_str): + fset.beta = big_beta + + def test_error_on_rank_deficient_psi_mat(self): + """ + Test exception raised if factor loading matrix `psi_mat` + is rank-deficient. + """ + with self.assertRaisesRegex(ValueError, r"full column rank.*\(2, 3\)"): + # more columns than rows + FactorModelSet( + origin=[0, 0], + number_of_factors=3, + psi_mat=[[1, -1, 1], [1, 0.1, 1]], + beta=1 / 6, + ) + with self.assertRaisesRegex(ValueError, r"full column rank.*\(2, 2\)"): + # linearly dependent columns + FactorModelSet( + origin=[0, 0], + number_of_factors=2, + psi_mat=[[1, -1], [1, -1]], + beta=1 / 6, + ) + + @unittest.skipUnless(baron_available, "BARON is not available") + @parameterized.expand( + [ + # map beta to expected parameter bounds + ["beta0", 0, [(-2.0, 2.0), (0.1, 1.9), (-5.0, 9.0), (-4.0, 10.0)]], + ["beta1ov6", 1 / 6, [(-2.5, 2.5), (-0.4, 2.4), (-8.0, 12.0), (-7.0, 13.0)]], + [ + "beta1ov3", + 1 / 3, + [(-3.0, 3.0), (-0.9, 2.9), (-11.0, 15.0), (-10.0, 16.0)], + ], + [ + "beta1ov2", + 1 / 2, + [(-3.0, 3.0), (-0.95, 2.95), (-11.5, 15.5), (-10.5, 16.5)], + ], + [ + "beta2ov3", + 2 / 3, + [(-3.0, 3.0), (-1.0, 3.0), (-12.0, 16.0), (-11.0, 17.0)], + ], + [ + "beta7ov9", + 7 / 9, + [ + (-3.0, 3.0), + (-31 / 30, 91 / 30), + (-37 / 3, 49 / 3), + (-34 / 3, 52 / 3), + ], + ], + ["beta1", 1, [(-3.0, 3.0), (-1.1, 3.1), (-13.0, 17.0), (-12.0, 18.0)]], + ] + ) + def test_compute_parameter_bounds(self, name, beta, expected_param_bounds): + """ + Test parameter bounds computations give expected results. 
+ """ + solver = SolverFactory("baron") + + fset = FactorModelSet( + origin=[0, 1, 2, 3], + number_of_factors=3, + psi_mat=[[1, -1, 1], [1, 0.1, 1], [-1, -6, -8], [1, 6, 8]], + beta=beta, + ) + + param_bounds = fset.parameter_bounds + # won't be exactly equal, + np.testing.assert_allclose(param_bounds, expected_param_bounds, atol=1e-13) + + # check parameter bounds matches LP results + # exactly for each case + solver_param_bounds = fset._compute_parameter_bounds(solver) + np.testing.assert_allclose( + solver_param_bounds, + param_bounds, + err_msg=( + "Parameter bounds not consistent with LP values for " + "FactorModelSet with parameterization:\n" + f"F={fset.number_of_factors},\n" + f"beta={fset.beta},\n" + f"psi_mat={fset.psi_mat},\n" + f"origin={fset.origin}." + ), + # account for solver tolerances and numerical errors + atol=1e-4, + ) + + def test_set_as_constraint(self): + """ + Test method for setting up constraints works correctly. + """ + fset = FactorModelSet( + origin=[0, 1, 2, 3], + number_of_factors=3, + psi_mat=[[1, -1, 1], [1, 0.1, 1], [-1, -6, -8], [1, 6, 8]], + beta=1 / 6, + ) + uq = fset.set_as_constraint(uncertain_params=None) + + self.assertEqual(len(uq.auxiliary_vars), 3) + self.assertEqual(uq.auxiliary_vars[0].bounds, (-1, 1)) + self.assertEqual(uq.auxiliary_vars[1].bounds, (-1, 1)) + self.assertEqual(uq.auxiliary_vars[2].bounds, (-1, 1)) + + *factor_model_matrix_cons, betaf_abs_val_con = uq.uncertainty_cons + + self.assertEqual(len(factor_model_matrix_cons), 4) + assertExpressionsEqual( + self, + factor_model_matrix_cons[0].expr, + ( + uq.auxiliary_vars[0] + + (-1.0) * uq.auxiliary_vars[1] + + uq.auxiliary_vars[2] + == uq.uncertain_param_vars[0] + ), + ) + assertExpressionsEqual( + self, + factor_model_matrix_cons[1].expr, + ( + uq.auxiliary_vars[0] + + 0.1 * uq.auxiliary_vars[1] + + uq.auxiliary_vars[2] + + 1 + == uq.uncertain_param_vars[1] + ), + ) + assertExpressionsEqual( + self, + factor_model_matrix_cons[2].expr, + ( + (-1.0) * uq.auxiliary_vars[0] + + (-6.0) * uq.auxiliary_vars[1] + + (-8.0) * uq.auxiliary_vars[2] + + 2 + == uq.uncertain_param_vars[2] + ), + ) + assertExpressionsEqual( + self, + factor_model_matrix_cons[3].expr, + ( + (1.0) * uq.auxiliary_vars[0] + + (6.0) * uq.auxiliary_vars[1] + + (8.0) * uq.auxiliary_vars[2] + + 3 + == uq.uncertain_param_vars[3] + ), + ) + + betaf_abs_val_con = uq.uncertainty_cons[-1] + assertExpressionsEqual( + self, + betaf_abs_val_con.expr, + RangedExpression((-0.5, sum(uq.auxiliary_vars), 0.5), False), + ) + + def test_set_as_constraint_dim_mismatch(self): + """ + Check exception raised if number of uncertain parameters + does not match the dimension. + """ + m = ConcreteModel() + m.v1 = Var(initialize=0) + box_set = BoxSet(bounds=[[1, 2], [3, 4]]) + with self.assertRaisesRegex(ValueError, ".*dimension"): + box_set.set_as_constraint(uncertain_params=[m.v1], block=m) + + def test_set_as_constraint_type_mismatch(self): + """ + Check exception raised if uncertain parameter variables + are of invalid type. + """ + m = ConcreteModel() + m.p1 = Param([0, 1], initialize=0, mutable=True) + box_set = BoxSet(bounds=[[1, 2], [3, 4]]) + with self.assertRaisesRegex(TypeError, ".*valid component type"): + box_set.set_as_constraint(uncertain_params=[m.p1[0], m.p1[1]], block=m) + + with self.assertRaisesRegex(TypeError, ".*valid component type"): + box_set.set_as_constraint(uncertain_params=m.p1, block=m) + + def test_point_in_set(self): + """ + Test point in set check works if psi matrix is skinny. 
+ """ + fset = FactorModelSet( + origin=[0, 0, 0, 0], + number_of_factors=3, + psi_mat=[[1, -1, 1], [1, 0.1, 1], [2, 0.3, 1], [4, 5, 1]], + beta=1 / 6, + ) + + self.assertTrue(fset.point_in_set(fset.origin)) + + for aux_space_pt in it.permutations([1, 0.5, -1]): + fset_pt_from_crit = fset.origin + fset.psi_mat @ aux_space_pt + self.assertTrue( + fset.point_in_set(fset_pt_from_crit), + msg=( + f"Point {fset_pt_from_crit} generated from critical point " + f"{aux_space_pt} of the auxiliary variable space " + "is not in the set." + ), + ) + + fset_pt_from_neg_crit = fset.origin - fset.psi_mat @ aux_space_pt + self.assertTrue( + fset.point_in_set(fset_pt_from_neg_crit), + msg=( + f"Point {fset_pt_from_neg_crit} generated from critical point " + f"{aux_space_pt} of the auxiliary variable space " + "is not in the set." + ), + ) + + # some points transformed from hypercube vertices. + # since F - k = 2 < 1 = k, no such point should be in the set + self.assertFalse(fset.point_in_set(fset.origin + fset.psi_mat @ [1, 1, 1])) + self.assertFalse(fset.point_in_set(fset.origin + fset.psi_mat @ [1, 1, -1])) + self.assertFalse(fset.point_in_set(fset.origin + fset.psi_mat @ [1, -1, -1])) + self.assertFalse(fset.point_in_set(fset.origin + fset.psi_mat @ [-1, -1, -1])) + + # check what happens if dimensions are off + with self.assertRaisesRegex(ValueError, ".*to match the set dimension.*"): + fset.point_in_set([1, 2, 3]) + + def test_add_bounds_on_uncertain_parameters(self): + m = ConcreteModel() + m.uncertain_param_vars = Var(range(4), initialize=0) + fset = FactorModelSet( + origin=[0, 1, 2, 3], + number_of_factors=3, + psi_mat=[[1, -1, 1], [1, 0.1, 1], [-1, -6, -8], [1, 6, 8]], + beta=1, + ) + + fset._add_bounds_on_uncertain_parameters( + global_solver=None, uncertain_param_vars=m.uncertain_param_vars + ) + self.assertEqual(m.uncertain_param_vars[0].bounds, (-3.0, 3.0)) + self.assertEqual(m.uncertain_param_vars[1].bounds, (-1.1, 3.1)) + self.assertEqual(m.uncertain_param_vars[2].bounds, (-13.0, 17.0)) + self.assertEqual(m.uncertain_param_vars[3].bounds, (-12.0, 18.0)) + + +class TestIntersectionSet(unittest.TestCase): + """ + Tests for the IntersectionSet. + """ + + def test_normal_construction_and_update(self): + """ + Test IntersectionSet constructor and setter + work normally when arguments are appropriate. + """ + bset = BoxSet(bounds=[[-1, 1], [-1, 1], [-1, 1]]) + aset = AxisAlignedEllipsoidalSet([0, 0, 0], [1, 1, 1]) + + iset = IntersectionSet(box_set=bset, axis_aligned_set=aset) + self.assertIn( + bset, + iset.all_sets, + msg=( + "IntersectionSet 'all_sets' attribute does not" + "contain expected BoxSet" + ), + ) + self.assertIn( + aset, + iset.all_sets, + msg=( + "IntersectionSet 'all_sets' attribute does not" + "contain expected AxisAlignedEllipsoidalSet" + ), + ) + + def test_error_on_intersecting_wrong_dims(self): + """ + Test ValueError raised if IntersectionSet sets + are not of same dimension. 
+ """ + bset = BoxSet(bounds=[[-1, 1], [-1, 1]]) + aset = AxisAlignedEllipsoidalSet([0, 0], [2, 2]) + wrong_aset = AxisAlignedEllipsoidalSet([0, 0, 0], [1, 1, 1]) + + exc_str = r".*of dimension 2, but attempting to add set of dimension 3" + + # assert error on construction + with self.assertRaisesRegex(ValueError, exc_str): + IntersectionSet(box_set=bset, axis_set=aset, wrong_set=wrong_aset) + + # construct a valid intersection set + iset = IntersectionSet(box_set=bset, axis_set=aset) + # assert error on construction + with self.assertRaisesRegex(ValueError, exc_str): + iset.all_sets.append(wrong_aset) + + def test_type_error_on_invalid_arg(self): + """ + Test TypeError raised if an argument not of type + UncertaintySet is passed to the IntersectionSet + constructor or appended to 'all_sets'. + """ + bset = BoxSet(bounds=[[-1, 1], [-1, 1]]) + aset = AxisAlignedEllipsoidalSet([0, 0], [2, 2]) + + exc_str = ( + r"Entry '1' of the argument `all_sets` is not An `UncertaintySet` " + r"object.*\(provided type 'int'\)" + ) + + # assert error on construction + with self.assertRaisesRegex(TypeError, exc_str): + IntersectionSet(box_set=bset, axis_set=aset, invalid_arg=1) + + # construct a valid intersection set + iset = IntersectionSet(box_set=bset, axis_set=aset) + + # assert error on update + with self.assertRaisesRegex(TypeError, exc_str): + iset.all_sets.append(1) + + def test_error_on_intersection_dim_change(self): + """ + IntersectionSet dimension is considered immutable. + Test ValueError raised when attempting to set the + constituent sets to a different dimension. + """ + bset = BoxSet(bounds=[[-1, 1], [-1, 1]]) + aset = AxisAlignedEllipsoidalSet([0, 0], [2, 2]) + + # construct the set + iset = IntersectionSet(box_set=bset, axis_set=aset) + + exc_str = r"Attempting to set.*dimension 2 to a sequence.* of dimension 1" + + # assert error on update + with self.assertRaisesRegex(ValueError, exc_str): + # attempt to set to 1-dimensional sets + iset.all_sets = [BoxSet([[1, 1]]), AxisAlignedEllipsoidalSet([0], [1])] + + def test_error_on_too_few_sets(self): + """ + Check ValueError raised if too few sets are passed + to the intersection set. + """ + exc_str = r"Attempting.*minimum required length 2.*iterable of length 1" + + # assert error on construction + with self.assertRaisesRegex(ValueError, exc_str): + IntersectionSet(bset=BoxSet([[1, 2]])) + + # construct a valid intersection set + iset = IntersectionSet( + box_set=BoxSet([[1, 2]]), axis_set=AxisAlignedEllipsoidalSet([0], [1]) + ) + + # assert error on update + with self.assertRaisesRegex(ValueError, exc_str): + # attempt to set to 1-dimensional sets + iset.all_sets = [BoxSet([[1, 1]])] + + def test_intersection_uncertainty_set_list_behavior(self): + """ + Test the 'all_sets' attribute of the IntersectionSet + class behaves like a regular Python list. + """ + iset = IntersectionSet( + bset=BoxSet([[0, 2]]), aset=AxisAlignedEllipsoidalSet([0], [1]) + ) + + # an UncertaintySetList of length 2. + # should behave like a list of length 2 + all_sets = iset.all_sets + + # test append + all_sets.append(BoxSet([[1, 2]])) + del all_sets[2:] + + # test extend + all_sets.extend([BoxSet([[1, 2]]), EllipsoidalSet([0], [[1]], 2)]) + del all_sets[2:] + + # index in range. 
Allow slicing as well + # none of these should result in exception + all_sets[0] + all_sets[1] + all_sets[100:] + all_sets[0:2:20] + all_sets[0:2:1] + all_sets[-20:-1:2] + + # index out of range + self.assertRaises(IndexError, lambda: all_sets[2]) + self.assertRaises(IndexError, lambda: all_sets[-3]) + + # assert min length ValueError if attempting to clear + # list to length less than 2 + with self.assertRaisesRegex(ValueError, r"Length.* must be at least 2"): + all_sets[:] = all_sets[0] + with self.assertRaisesRegex(ValueError, r"Length.* must be at least 2"): + del all_sets[1] + with self.assertRaisesRegex(ValueError, r"Length.* must be at least 2"): + del all_sets[1:] + with self.assertRaisesRegex(ValueError, r"Length.* must be at least 2"): + del all_sets[:] + with self.assertRaisesRegex(ValueError, r"Length.* must be at least 2"): + all_sets.clear() + with self.assertRaisesRegex(ValueError, r"Length.* must be at least 2"): + all_sets[0:] = [] + + # assignment out of range + with self.assertRaisesRegex(IndexError, r"assignment index out of range"): + all_sets[-3] = BoxSet([[1, 1.5]]) + with self.assertRaisesRegex(IndexError, r"assignment index out of range"): + all_sets[2] = BoxSet([[1, 1.5]]) + + # assigning to slices should work fine + all_sets[3:] = [BoxSet([[1, 1.5]]), BoxSet([[1, 3]])] + + def test_set_as_constraint(self): + """ + Test method for setting up constraints works correctly. + """ + m = ConcreteModel() + m.v1 = Var(initialize=0) + m.v2 = Var(initialize=0) + + i_set = IntersectionSet( + set1=BoxSet([(-0.5, 0.5), (-0.5, 0.5)]), + set2=FactorModelSet( + origin=[0, 0], number_of_factors=2, beta=0.75, psi_mat=[[1, 1], [1, 2]] + ), + set3=CardinalitySet([-0.5, -0.5], [2, 2], 2), + # ellipsoid. this is enclosed in all the other sets + set4=AxisAlignedEllipsoidalSet([0, 0], [0.25, 0.25]), + ) + + uq = i_set.set_as_constraint(uncertain_params=[m.v1, m.v2], block=m) + + self.assertIs(uq.block, m) + self.assertEqual(uq.uncertain_param_vars, [m.v1, m.v2]) + self.assertEqual(len(uq.auxiliary_vars), 4) + self.assertEqual(len(uq.uncertainty_cons), 9) + + # box set constraints + assertExpressionsEqual( + self, + uq.uncertainty_cons[0].expr, + RangedExpression((np.float64(-0.5), m.v1, np.float64(0.5)), False), + ) + assertExpressionsEqual( + self, + uq.uncertainty_cons[1].expr, + RangedExpression((np.float64(-0.5), m.v2, np.float64(0.5)), False), + ) + + # factor model constraints + aux_vars = uq.auxiliary_vars + assertExpressionsEqual( + self, uq.uncertainty_cons[2].expr, aux_vars[0] + aux_vars[1] == m.v1 + ) + assertExpressionsEqual( + self, uq.uncertainty_cons[3].expr, aux_vars[0] + 2 * aux_vars[1] == m.v2 + ) + assertExpressionsEqual( + self, + uq.uncertainty_cons[4].expr, + RangedExpression((-1.5, aux_vars[0] + aux_vars[1], 1.5), False), + ) + self.assertEqual(aux_vars[0].bounds, (-1, 1)) + self.assertEqual(aux_vars[1].bounds, (-1, 1)) + + # cardinality set constraints + assertExpressionsEqual( + self, uq.uncertainty_cons[5].expr, -0.5 + 2 * aux_vars[2] == m.v1 + ) + assertExpressionsEqual( + self, uq.uncertainty_cons[6].expr, -0.5 + 2 * aux_vars[3] == m.v2 + ) + assertExpressionsEqual( + self, uq.uncertainty_cons[7].expr, sum(aux_vars[2:4]) <= 2 + ) + self.assertEqual(aux_vars[2].bounds, (0, 1)) + self.assertEqual(uq.auxiliary_vars[3].bounds, (0, 1)) + + # axis-aligned ellipsoid constraint + assertExpressionsEqual( + self, + uq.uncertainty_cons[8].expr, + m.v1**2 / np.float64(0.0625) + m.v2**2 / np.float64(0.0625) <= 1, + ) + + def test_set_as_constraint_dim_mismatch(self): + """ 
+ Check exception raised if number of uncertain parameters + does not match the dimension. + """ + m = ConcreteModel() + m.v1 = Var(initialize=0) + i_set = IntersectionSet( + set1=BoxSet(bounds=[[1, 2], [3, 4]]), + set2=AxisAlignedEllipsoidalSet([0, 1], [5, 5]), + ) + with self.assertRaisesRegex(ValueError, ".*dimension"): + i_set.set_as_constraint(uncertain_params=[m.v1], block=m) + + def test_set_as_constraint_type_mismatch(self): + """ + Check exception raised if uncertain parameter variables + are of invalid type. + """ + m = ConcreteModel() + m.p1 = Param([0, 1], initialize=0, mutable=True) + i_set = IntersectionSet( + set1=BoxSet(bounds=[[1, 2], [3, 4]]), + set2=AxisAlignedEllipsoidalSet([0, 1], [5, 5]), + ) + with self.assertRaisesRegex(TypeError, ".*valid component type"): + i_set.set_as_constraint(uncertain_params=[m.p1[0], m.p1[1]], block=m) + + with self.assertRaisesRegex(TypeError, ".*valid component type"): + i_set.set_as_constraint(uncertain_params=m.p1, block=m) + + @unittest.skipUnless(baron_available, "BARON is not available.") + def test_compute_parameter_bounds(self): + """ + Test parameter bounds computations give expected results. + """ + i_set = IntersectionSet( + set1=BoxSet([(-0.5, 0.5), (-0.5, 0.5)]), + set2=FactorModelSet( + origin=[0, 0], number_of_factors=2, beta=0.75, psi_mat=[[1, 1], [1, 2]] + ), + # another origin-centered square + set3=CardinalitySet([-0.5, -0.5], [2, 2], 2), + # ellipsoid. this is enclosed in all the other sets + set4=AxisAlignedEllipsoidalSet([0, 0], [0.25, 0.25]), + ) + + # ellipsoid is enclosed by everyone else, so + # that determines the bounds + computed_bounds = i_set._compute_parameter_bounds(SolverFactory("baron")) + np.testing.assert_allclose(computed_bounds, [[-0.25, 0.25], [-0.25, 0.25]]) + + # returns empty list + self.assertFalse(i_set.parameter_bounds) + + def test_point_in_set(self): + """ + Test point in set check for intersection set. + """ + i_set = IntersectionSet( + set1=BoxSet([(-0.5, 0.5), (-0.5, 0.5)]), + # this is just an origin-centered square + set2=FactorModelSet( + origin=[0, 0], number_of_factors=2, beta=0.75, psi_mat=[[1, 1], [1, 2]] + ), + set3=CardinalitySet([-0.5, -0.5], [2, 2], 2), + # ellipsoid. this is enclosed in all the other sets + set4=AxisAlignedEllipsoidalSet([0, 0], [0.25, 0.25]), + ) + + # ellipsoid points + self.assertTrue(i_set.point_in_set([0, 0])) + self.assertTrue(i_set.point_in_set([0, 0.25])) + self.assertTrue(i_set.point_in_set([0, -0.25])) + self.assertTrue(i_set.point_in_set([0.25, 0])) + self.assertTrue(i_set.point_in_set([-0.25, 0])) + + # box vertex + self.assertFalse(i_set.point_in_set([0.5, 0.5])) + # cardinality set origin and vertex of the box + # are outside the ellipse + self.assertFalse(i_set.point_in_set([-0.5, -0.5])) + + @unittest.skipUnless(baron_available, "Global NLP solver is not available.") + def test_add_bounds_on_uncertain_parameters(self): + m = ConcreteModel() + m.uncertain_param_vars = Var([0, 1], initialize=0) + iset = IntersectionSet( + set1=BoxSet([(-0.5, 0.5), (-0.5, 0.5)]), + set2=FactorModelSet( + origin=[0, 0], number_of_factors=2, beta=0.75, psi_mat=[[1, 1], [1, 2]] + ), + set3=CardinalitySet([-0.5, -0.5], [2, 2], 2), + # ellipsoid. 
this is enclosed in all the other sets + set4=AxisAlignedEllipsoidalSet([0, 0], [0.25, 0.25]), + ) + + iset._add_bounds_on_uncertain_parameters( + global_solver=SolverFactory("baron"), + uncertain_param_vars=m.uncertain_param_vars, + ) + + # account for imprecision + np.testing.assert_allclose(m.uncertain_param_vars[0].bounds, (-0.25, 0.25)) + np.testing.assert_allclose(m.uncertain_param_vars[1].bounds, (-0.25, 0.25)) + + # check what happens if dimensions are off + with self.assertRaisesRegex(ValueError, ".*to match the set dimension.*"): + iset.point_in_set([1, 2, 3]) + + +class TestCardinalitySet(unittest.TestCase): + """ + Tests for the CardinalitySet. + """ + + def test_normal_cardinality_construction_and_update(self): + """ + Test CardinalitySet constructor and setter work normally + when bounds are appropriate. + """ + # valid inputs + cset = CardinalitySet(origin=[0, 0], positive_deviation=[1, 3], gamma=2) + + # check attributes are as expected + np.testing.assert_allclose(cset.origin, [0, 0]) + np.testing.assert_allclose(cset.positive_deviation, [1, 3]) + np.testing.assert_allclose(cset.gamma, 2) + self.assertEqual(cset.dim, 2) + + # update the set + cset.origin = [1, 2] + cset.positive_deviation = [3, 0] + cset.gamma = 0.5 + + # check updates work + np.testing.assert_allclose(cset.origin, [1, 2]) + np.testing.assert_allclose(cset.positive_deviation, [3, 0]) + np.testing.assert_allclose(cset.gamma, 0.5) + + def test_error_on_neg_positive_deviation(self): + """ + Cardinality set positive deviation attribute should + contain nonnegative numerical entries. + + Check ValueError raised if any negative entries provided. + """ + origin = [0, 0] + positive_deviation = [1, -2] # invalid + gamma = 2 + + exc_str = r"Entry -2 of attribute 'positive_deviation' is negative value" + + # assert error on construction + with self.assertRaisesRegex(ValueError, exc_str): + cset = CardinalitySet(origin, positive_deviation, gamma) + + # construct a valid cardinality set + cset = CardinalitySet(origin, [1, 1], gamma) + + # assert error on update + with self.assertRaisesRegex(ValueError, exc_str): + cset.positive_deviation = positive_deviation + + def test_error_on_invalid_gamma(self): + """ + Cardinality set gamma attribute should be a float-like + between 0 and the set dimension. + + Check ValueError raised if gamma attribute is set + to an invalid value. + """ + origin = [0, 0] + positive_deviation = [1, 1] + gamma = 3 # should be invalid + + exc_str = ( + r".*attribute 'gamma' must be a real number " + r"between 0 and dimension 2 \(provided value 3\)" + ) + + # assert error on construction + with self.assertRaisesRegex(ValueError, exc_str): + CardinalitySet(origin, positive_deviation, gamma) + + # construct a valid cardinality set + cset = CardinalitySet(origin, positive_deviation, gamma=2) + + # assert error on update + with self.assertRaisesRegex(ValueError, exc_str): + cset.gamma = gamma + + def test_error_on_cardinality_set_dim_change(self): + """ + Dimension is considered immutable. + Test ValueError raised when attempting to alter the + set dimension (i.e. number of entries of `origin`). 
+ """ + # construct a valid cardinality set + cset = CardinalitySet(origin=[0, 0], positive_deviation=[1, 1], gamma=2) + + exc_str = r"Attempting to set.*dimension 2 to value of dimension 3" + + # assert error on update + with self.assertRaisesRegex(ValueError, exc_str): + cset.origin = [0, 0, 0] + with self.assertRaisesRegex(ValueError, exc_str): + cset.positive_deviation = [1, 1, 1] + + def test_set_as_constraint(self): + """ + Test method for setting up constraints works correctly. + """ + m = ConcreteModel() + cset = CardinalitySet([-0.5, 1, 2], [2.5, 3, 0], 1.5) + uq = cset.set_as_constraint(uncertain_params=None, block=m) + + self.assertEqual(len(uq.uncertainty_cons), 4) + self.assertEqual(len(uq.auxiliary_vars), 3) + self.assertEqual(len(uq.uncertain_param_vars), 3) + self.assertIs(uq.block, m) + + *hadamard_cons, gamma_con = uq.uncertainty_cons + var1, var2, var3 = uq.uncertain_param_vars + auxvar1, auxvar2, auxvar3 = uq.auxiliary_vars + + assertExpressionsEqual( + self, hadamard_cons[0].expr, -0.5 + 2.5 * auxvar1 == var1 + ) + assertExpressionsEqual(self, hadamard_cons[1].expr, 1.0 + 3.0 * auxvar2 == var2) + assertExpressionsEqual(self, hadamard_cons[2].expr, 2.0 + 0.0 * auxvar3 == var3) + assertExpressionsEqual(self, gamma_con.expr, auxvar1 + auxvar2 + auxvar3 <= 1.5) + + def test_set_as_constraint_dim_mismatch(self): + """ + Check exception raised if number of uncertain parameters + does not match the dimension. + """ + m = ConcreteModel() + m.v1 = Var(initialize=0) + cset = CardinalitySet([-0.5, 1, 2], [2.5, 3, 0], 1.5) + with self.assertRaisesRegex(ValueError, ".*dimension"): + cset.set_as_constraint(uncertain_params=[m.v1], block=m) + + def test_set_as_constraint_type_mismatch(self): + """ + Check exception raised if uncertain parameter variables + are of invalid type. + """ + m = ConcreteModel() + m.p1 = Param([0, 1, 2], initialize=0, mutable=True) + cset = CardinalitySet([-0.5, 1, 2], [2.5, 3, 0], 1.5) + with self.assertRaisesRegex(TypeError, ".*valid component type"): + cset.set_as_constraint(uncertain_params=[m.p1[0], m.p1[1]], block=m) + + with self.assertRaisesRegex(TypeError, ".*valid component type"): + cset.set_as_constraint(uncertain_params=m.p1, block=m) + + def test_point_in_set(self): + cset = CardinalitySet( + origin=[-0.5, 1, 2], positive_deviation=[2.5, 3, 0], gamma=1.5 + ) + + self.assertTrue(cset.point_in_set(cset.origin)) + + # first param full deviation + self.assertTrue(cset.point_in_set([-0.5, 4, 2])) + # second param full deviation + self.assertTrue(cset.point_in_set([2, 1, 2])) + # one and a half deviations (max) + self.assertTrue(cset.point_in_set([2, 2.5, 2])) + + # over one and a half deviations; out of set + self.assertFalse(cset.point_in_set([2.05, 2.5, 2])) + self.assertFalse(cset.point_in_set([2, 2.55, 2])) + + # deviation in dimension that has been fixed + self.assertFalse(cset.point_in_set([-0.25, 4, 2.01])) + + # check what happens if dimensions are off + with self.assertRaisesRegex(ValueError, ".*to match the set dimension.*"): + cset.point_in_set([1, 2, 3, 4]) + + @unittest.skipUnless(baron_available, "BARON is not available.") + def test_compute_parameter_bounds(self): + """ + Test parameter bounds computations give expected results. 
+ """ + cset = CardinalitySet( + origin=[-0.5, 1, 2], positive_deviation=[2.5, 3, 0], gamma=1.5 + ) + computed_bounds = cset._compute_parameter_bounds(SolverFactory("baron")) + np.testing.assert_allclose(computed_bounds, [[-0.5, 2], [1, 4], [2, 2]]) + np.testing.assert_allclose(computed_bounds, cset.parameter_bounds) + + def test_add_bounds_on_uncertain_parameters(self): + m = ConcreteModel() + m.uncertain_param_vars = Var([0, 1, 2], initialize=0) + cset = CardinalitySet( + origin=[-0.5, 1, 2], positive_deviation=[2.5, 3, 0], gamma=1.5 + ) + + cset._add_bounds_on_uncertain_parameters( + global_solver=None, uncertain_param_vars=m.uncertain_param_vars + ) + self.assertEqual(m.uncertain_param_vars[0].bounds, (-0.5, 2)) + self.assertEqual(m.uncertain_param_vars[1].bounds, (1, 4)) + self.assertEqual(m.uncertain_param_vars[2].bounds, (2, 2)) + + +class TestDiscreteScenarioSet(unittest.TestCase): + """ + Tests for the DiscreteScenarioSet. + """ + + def test_normal_discrete_set_construction_and_update(self): + """ + Test DiscreteScenarioSet constructor and setter work normally + when scenarios are appropriate. + """ + scenarios = [[0, 0, 0], [1, 2, 3]] + + # normal construction should work + dset = DiscreteScenarioSet(scenarios) + + # check scenarios added appropriately + np.testing.assert_allclose(scenarios, dset.scenarios) + + # check scenarios updated appropriately + new_scenarios = [[0, 1, 2], [1, 2, 0], [3, 5, 4]] + dset.scenarios = new_scenarios + np.testing.assert_allclose(new_scenarios, dset.scenarios) + + def test_error_on_discrete_set_dim_change(self): + """ + Test ValueError raised when attempting to update + DiscreteScenarioSet dimension. + """ + scenarios = [[1, 2], [3, 4]] + dset = DiscreteScenarioSet(scenarios) # 2-dimensional set + + exc_str = ( + r".*must have 2 columns.* to match set dimension " + r"\(provided.*with 3 columns\)" + ) + with self.assertRaisesRegex(ValueError, exc_str): + dset.scenarios = [[1, 2, 3], [4, 5, 6]] + + def test_set_as_constraint(self): + """ + Test method for setting up constraints works correctly. + """ + m = ConcreteModel() + m.v1 = Var([0, 1], initialize=0) + dset = DiscreteScenarioSet([[1, 2], [3, 4]]) + uq = dset.set_as_constraint(block=m, uncertain_params=m.v1) + self.assertEqual(uq.uncertain_param_vars, [m.v1[0], m.v1[1]]) + self.assertEqual(uq.uncertainty_cons, []) + self.assertEqual(uq.auxiliary_vars, []) + self.assertIs(uq.block, m) + + def test_set_as_constraint_dim_mismatch(self): + """ + Check exception raised if number of uncertain parameters + does not match the dimension. + """ + m = ConcreteModel() + m.v1 = Var(initialize=0) + dset = DiscreteScenarioSet([[1, 2], [3, 4]]) + with self.assertRaisesRegex(ValueError, ".*dimension"): + dset.set_as_constraint(uncertain_params=[m.v1], block=m) + + def test_set_as_constraint_type_mismatch(self): + """ + Check exception raised if uncertain parameter variables + are of invalid type. 
+ """ + m = ConcreteModel() + m.p1 = Param([0, 1], initialize=0, mutable=True) + dset = DiscreteScenarioSet([[1, 2], [3, 4]]) + with self.assertRaisesRegex(TypeError, ".*valid component type"): + dset.set_as_constraint(uncertain_params=[m.p1[0], m.p1[1]], block=m) + + with self.assertRaisesRegex(TypeError, ".*valid component type"): + dset.set_as_constraint(uncertain_params=m.p1, block=m) + + def test_point_in_set(self): + dset = DiscreteScenarioSet([(0, 0), (1.5, 0), (0, 1), (1, 1), (2, 0)]) + self.assertTrue(dset.point_in_set([0, 0])) + self.assertTrue(dset.point_in_set([1.5, 0])) + self.assertTrue(dset.point_in_set([0, 1.0])) + self.assertTrue(dset.point_in_set([1, 1.0])) + self.assertTrue(dset.point_in_set([2, 0])) + self.assertFalse(dset.point_in_set([2, 2])) + + # check precision: slight deviations from (0, 0) + self.assertTrue(dset.point_in_set([4.9e-9, 4.9e-9])) + self.assertFalse(dset.point_in_set([5.1e-9, 5.1e-9])) + self.assertFalse(dset.point_in_set([1e-7, 1e-7])) + + # check what happens if dimensions are off + with self.assertRaisesRegex(ValueError, ".*to match the set dimension.*"): + dset.point_in_set([1, 2, 3]) + + def test_add_bounds_on_uncertain_parameters(self): + m = ConcreteModel() + m.uncertain_param_vars = Var([0, 1], initialize=0) + dset = DiscreteScenarioSet([(0, 0), (1.5, 0), (0, 1), (1, 1), (2, 0)]) + + dset._add_bounds_on_uncertain_parameters( + global_solver=None, uncertain_param_vars=m.uncertain_param_vars + ) + self.assertEqual(m.uncertain_param_vars[0].bounds, (0, 2)) + self.assertEqual(m.uncertain_param_vars[1].bounds, (0, 1.0)) + + +class TestAxisAlignedEllipsoidalSet(unittest.TestCase): + """ + Tests for the AxisAlignedEllipsoidalSet. + """ + + def test_normal_construction_and_update(self): + """ + Test AxisAlignedEllipsoidalSet constructor and setter + work normally when bounds are appropriate. + """ + center = [0, 0] + half_lengths = [1, 3] + aset = AxisAlignedEllipsoidalSet(center, half_lengths) + np.testing.assert_allclose( + center, + aset.center, + err_msg="AxisAlignedEllipsoidalSet center not as expected", + ) + np.testing.assert_allclose( + half_lengths, + aset.half_lengths, + err_msg="AxisAlignedEllipsoidalSet half-lengths not as expected", + ) + + # check attributes update + new_center = [-1, -3] + new_half_lengths = [0, 1] + aset.center = new_center + aset.half_lengths = new_half_lengths + + np.testing.assert_allclose( + new_center, + aset.center, + err_msg="AxisAlignedEllipsoidalSet center update not as expected", + ) + np.testing.assert_allclose( + new_half_lengths, + aset.half_lengths, + err_msg=("AxisAlignedEllipsoidalSet half lengths update not as expected"), + ) + + def test_error_on_axis_aligned_dim_change(self): + """ + AxisAlignedEllipsoidalSet dimension is considered immutable. + Test ValueError raised when attempting to alter the + box set dimension (i.e. number of rows of `bounds`). + """ + center = [0, 0] + half_lengths = [1, 3] + aset = AxisAlignedEllipsoidalSet(center, half_lengths) + + exc_str = r"Attempting to set.*dimension 2 to value of dimension 3" + with self.assertRaisesRegex(ValueError, exc_str): + aset.center = [0, 0, 1] + + with self.assertRaisesRegex(ValueError, exc_str): + aset.half_lengths = [0, 0, 1] + + def test_error_on_negative_axis_aligned_half_lengths(self): + """ + Test ValueError if half lengths for AxisAlignedEllipsoidalSet + contains a negative value. 
+ """ + center = [1, 1] + invalid_half_lengths = [1, -1] + exc_str = r"Entry -1 of.*'half_lengths' is negative.*" + + # assert error on construction + with self.assertRaisesRegex(ValueError, exc_str): + AxisAlignedEllipsoidalSet(center, invalid_half_lengths) + + # construct a valid axis-aligned ellipsoidal set + aset = AxisAlignedEllipsoidalSet(center, [1, 0]) + + # assert error on update + with self.assertRaisesRegex(ValueError, exc_str): + aset.half_lengths = invalid_half_lengths + + def test_set_as_constraint(self): + """ + Test method for setting up constraints works correctly. + """ + m = ConcreteModel() + m.v = Var([0, 1, 2]) + aeset = AxisAlignedEllipsoidalSet(center=[0, 1.5, 1], half_lengths=[1.5, 2, 0]) + uq = aeset.set_as_constraint(uncertain_params=m.v, block=m) + + self.assertEqual(len(uq.uncertainty_cons), 2) + self.assertEqual(len(uq.uncertain_param_vars), 3) + self.assertEqual(uq.auxiliary_vars, []) + self.assertIs(uq.block, m) + + con1, con2 = uq.uncertainty_cons + + assertExpressionsEqual(self, con1.expr, m.v[2] == np.float64(1.0)) + assertExpressionsEqual( + self, + con2.expr, + m.v[0] ** 2 / np.float64(2.25) + + (m.v[1] - np.float64(1.5)) ** 2 / np.float64(4) + <= 1, + ) + + def test_set_as_constraint_dim_mismatch(self): + """ + Check exception raised if number of uncertain parameters + does not match the dimension. + """ + m = ConcreteModel() + m.v1 = Var(initialize=0) + aeset = AxisAlignedEllipsoidalSet(center=[0, 1.5, 1], half_lengths=[1.5, 2, 0]) + with self.assertRaisesRegex(ValueError, ".*dimension"): + aeset.set_as_constraint(uncertain_params=[m.v1], block=m) + + def test_set_as_constraint_type_mismatch(self): + """ + Check exception raised if uncertain parameter variables + are of invalid type. + """ + m = ConcreteModel() + m.p1 = Param([0, 1, 2], initialize=0, mutable=True) + aeset = AxisAlignedEllipsoidalSet(center=[0, 1.5, 1], half_lengths=[1.5, 2, 0]) + with self.assertRaisesRegex(TypeError, ".*valid component type"): + aeset.set_as_constraint(uncertain_params=[m.p1[0], m.p1[1]], block=m) + + with self.assertRaisesRegex(TypeError, ".*valid component type"): + aeset.set_as_constraint(uncertain_params=m.p1, block=m) + + @unittest.skipUnless(baron_available, "BARON is not available.") + def test_compute_parameter_bounds(self): + """ + Test parameter bounds computations give expected results. 
+ """ + aeset = AxisAlignedEllipsoidalSet(center=[0, 1.5, 1], half_lengths=[1.5, 2, 0]) + computed_bounds = aeset._compute_parameter_bounds(SolverFactory("baron")) + np.testing.assert_allclose(computed_bounds, [[-1.5, 1.5], [-0.5, 3.5], [1, 1]]) + np.testing.assert_allclose(computed_bounds, aeset.parameter_bounds) + + def test_point_in_set(self): + aeset = AxisAlignedEllipsoidalSet(center=[0, 0, 1], half_lengths=[1.5, 2, 0]) + + self.assertTrue(aeset.point_in_set([0, 0, 1])) + self.assertTrue(aeset.point_in_set([0, 2, 1])) + self.assertTrue(aeset.point_in_set([0, -2, 1])) + self.assertTrue(aeset.point_in_set([1.5, 0, 1])) + self.assertTrue(aeset.point_in_set([-1.5, 0, 1])) + self.assertFalse(aeset.point_in_set([0, 0, 1.05])) + self.assertFalse(aeset.point_in_set([1.505, 0, 1])) + self.assertFalse(aeset.point_in_set([0, 2.05, 1])) + + # check what happens if dimensions are off + with self.assertRaisesRegex(ValueError, ".*to match the set dimension.*"): + aeset.point_in_set([1, 2, 3, 4]) + + def test_add_bounds_on_uncertain_parameters(self): + m = ConcreteModel() + m.uncertain_param_vars = Var([0, 1, 2], initialize=0) + aeset = AxisAlignedEllipsoidalSet(center=[0, 1.5, 1], half_lengths=[1.5, 2, 0]) + aeset._add_bounds_on_uncertain_parameters( + global_solver=None, uncertain_param_vars=m.uncertain_param_vars + ) + self.assertEqual(m.uncertain_param_vars[0].bounds, (-1.5, 1.5)) + self.assertEqual(m.uncertain_param_vars[1].bounds, (-0.5, 3.5)) + self.assertEqual(m.uncertain_param_vars[2].bounds, (1, 1)) + + +class TestEllipsoidalSet(unittest.TestCase): + """ + Tests for the EllipsoidalSet. + """ + + def test_normal_construction_and_update(self): + """ + Test EllipsoidalSet constructor and setter + work normally when arguments are appropriate. + """ + center = [0, 0] + shape_matrix = [[1, 0], [0, 2]] + scale = 2 + eset = EllipsoidalSet(center, shape_matrix, scale) + np.testing.assert_allclose( + center, eset.center, err_msg="EllipsoidalSet center not as expected" + ) + np.testing.assert_allclose( + shape_matrix, + eset.shape_matrix, + err_msg="EllipsoidalSet shape matrix not as expected", + ) + np.testing.assert_allclose( + scale, eset.scale, err_msg="EllipsoidalSet scale not as expected" + ) + + # check attributes update + new_center = [-1, -3] + new_shape_matrix = [[2, 1], [1, 3]] + new_scale = 1 + + eset.center = new_center + eset.shape_matrix = new_shape_matrix + eset.scale = new_scale + + np.testing.assert_allclose( + new_center, + eset.center, + err_msg="EllipsoidalSet center update not as expected", + ) + np.testing.assert_allclose( + new_shape_matrix, + eset.shape_matrix, + err_msg="EllipsoidalSet shape matrix update not as expected", + ) + np.testing.assert_allclose( + new_scale, eset.scale, err_msg="EllipsoidalSet scale update not as expected" + ) + + def test_error_on_ellipsoidal_dim_change(self): + """ + EllipsoidalSet dimension is considered immutable. + Test ValueError raised when center size is not equal + to set dimension. + """ + shape_matrix = [[1, 0], [0, 1]] + scale = 2 + + eset = EllipsoidalSet([0, 0], shape_matrix, scale) + + exc_str = r"Attempting to set.*dimension 2 to value of dimension 3" + + # assert error on update + with self.assertRaisesRegex(ValueError, exc_str): + eset.center = [0, 0, 0] + + def test_error_on_neg_scale(self): + """ + Test ValueError raised if scale attribute set to negative + value. 
+ """ + center = [0, 0] + shape_matrix = [[1, 0], [0, 2]] + neg_scale = -1 + + exc_str = r".*must be a non-negative real \(provided.*-1\)" + + # assert error on construction + with self.assertRaisesRegex(ValueError, exc_str): + EllipsoidalSet(center, shape_matrix, neg_scale) + + # construct a valid EllipsoidalSet + eset = EllipsoidalSet(center, shape_matrix, scale=2) + + # assert error on update + with self.assertRaisesRegex(ValueError, exc_str): + eset.scale = neg_scale + + def test_error_on_shape_matrix_with_wrong_size(self): + """ + Test error in event EllipsoidalSet shape matrix + is not in accordance with set dimension. + """ + center = [0, 0] + invalid_shape_matrix = [[1, 0]] + scale = 1 + + exc_str = r".*must be a square matrix of size 2.*\(provided.*shape \(1, 2\)\)" + + # assert error on construction + with self.assertRaisesRegex(ValueError, exc_str): + EllipsoidalSet(center, invalid_shape_matrix, scale) + + # construct a valid EllipsoidalSet + eset = EllipsoidalSet(center, [[1, 0], [0, 1]], scale) + + # assert error on update + with self.assertRaisesRegex(ValueError, exc_str): + eset.shape_matrix = invalid_shape_matrix + + def test_error_on_invalid_shape_matrix(self): + """ + Test exceptional cases of invalid square shape matrix + arguments + """ + center = [0, 0] + scale = 3 + + # assert error on construction + with self.assertRaisesRegex( + ValueError, + r"Shape matrix must be symmetric", + msg="Asymmetric shape matrix test failed", + ): + EllipsoidalSet(center, [[1, 1], [0, 1]], scale) + with self.assertRaises( + np.linalg.LinAlgError, msg="Singular shape matrix test failed" + ): + EllipsoidalSet(center, [[0, 0], [0, 0]], scale) + with self.assertRaisesRegex( + ValueError, + r"Non positive-definite.*", + msg="Indefinite shape matrix test failed", + ): + EllipsoidalSet(center, [[1, 0], [0, -2]], scale) + + # construct a valid EllipsoidalSet + eset = EllipsoidalSet(center, [[1, 0], [0, 2]], scale) + + # assert error on update + with self.assertRaisesRegex( + ValueError, + r"Shape matrix must be symmetric", + msg="Asymmetric shape matrix test failed", + ): + eset.shape_matrix = [[1, 1], [0, 1]] + with self.assertRaises( + np.linalg.LinAlgError, msg="Singular shape matrix test failed" + ): + eset.shape_matrix = [[0, 0], [0, 0]] + with self.assertRaisesRegex( + ValueError, + r"Non positive-definite.*", + msg="Indefinite shape matrix test failed", + ): + eset.shape_matrix = [[1, 0], [0, -2]] + + def test_set_as_constraint(self): + """ + Test method for setting up constraints works correctly. + """ + m = ConcreteModel() + eset = EllipsoidalSet( + center=[1, 1.5], shape_matrix=[[1, 0.5], [0.5, 1]], scale=2.5 + ) + uq = eset.set_as_constraint(uncertain_params=None, block=m) + + self.assertEqual(uq.auxiliary_vars, []) + self.assertEqual(len(uq.uncertain_param_vars), 2) + self.assertEqual(len(uq.uncertainty_cons), 1) + self.assertIs(uq.block, m) + + var1, var2 = uq.uncertain_param_vars + + assertExpressionsEqual( + self, + uq.uncertainty_cons[0].expr, + ( + np.float64(4 / 3) * (var1 - np.float64(1.0)) * (var1 - np.float64(1.0)) + + np.float64(-2 / 3) + * (var1 - np.float64(1.0)) + * (var2 - np.float64(1.5)) + + np.float64(-2 / 3) + * (var2 - np.float64(1.5)) + * (var1 - np.float64(1.0)) + + np.float64(4 / 3) + * (var2 - np.float64(1.5)) + * (var2 - np.float64(1.5)) + <= 2.5 + ), + ) + + def test_set_as_constraint_dim_mismatch(self): + """ + Check exception raised if number of uncertain parameters + does not match the dimension. 
+ """ + m = ConcreteModel() + m.v1 = Var(initialize=0) + eset = EllipsoidalSet( + center=[1, 1.5], shape_matrix=[[1, 0.5], [0.5, 1]], scale=2.5 + ) + with self.assertRaisesRegex(ValueError, ".*dimension"): + eset.set_as_constraint(uncertain_params=[m.v1], block=m) + + def test_set_as_constraint_type_mismatch(self): + """ + Check exception raised if uncertain parameter variables + are of invalid type. + """ + m = ConcreteModel() + m.p1 = Param([0, 1], initialize=0, mutable=True) + eset = EllipsoidalSet( + center=[1, 1.5], shape_matrix=[[1, 0.5], [0.5, 1]], scale=2.5 + ) + with self.assertRaisesRegex(TypeError, ".*valid component type"): + eset.set_as_constraint(uncertain_params=[m.p1[0], m.p1[1]], block=m) + + with self.assertRaisesRegex(TypeError, ".*valid component type"): + eset.set_as_constraint(uncertain_params=m.p1, block=m) + + def test_point_in_set(self): + eset = EllipsoidalSet( + center=[1, 1.5], shape_matrix=[[1, 0.5], [0.5, 1]], scale=2.5 + ) + sqrt_mat = np.linalg.cholesky(eset.shape_matrix) + sqrt_scale = eset.scale**0.5 + center = eset.center + self.assertTrue(eset.point_in_set(eset.center)) + + # some boundary points + self.assertTrue(eset.point_in_set(center + sqrt_mat @ [0, sqrt_scale])) + self.assertTrue(eset.point_in_set(center + sqrt_mat @ [sqrt_scale, 0])) + self.assertTrue(eset.point_in_set(center + sqrt_mat @ [0, -sqrt_scale])) + self.assertTrue(eset.point_in_set(center + sqrt_mat @ [-sqrt_scale, 0])) + + self.assertFalse(eset.point_in_set(center + sqrt_mat @ [0, sqrt_scale * 2])) + self.assertFalse(eset.point_in_set(center + sqrt_mat @ [sqrt_scale * 2, 0])) + self.assertFalse(eset.point_in_set(center + sqrt_mat @ [0, -sqrt_scale * 2])) + self.assertFalse(eset.point_in_set(center + sqrt_mat @ [-sqrt_scale * 2, 0])) + + # test singleton + eset.scale = 0 + self.assertTrue(eset.point_in_set(eset.center)) + self.assertTrue(eset.point_in_set(eset.center + [5e-9, 0])) + self.assertFalse(eset.point_in_set(eset.center + [1e-4, 1e-4])) + + # check what happens if dimensions are off + with self.assertRaisesRegex(ValueError, ".*to match the set dimension.*"): + eset.point_in_set([1, 2, 3, 4]) + + @unittest.skipUnless(baron_available, "BARON is not available.") + def test_compute_parameter_bounds(self): + """ + Test parameter bounds computations give expected results. 
+ """ + baron = SolverFactory("baron") + eset = EllipsoidalSet( + center=[1, 1.5], shape_matrix=[[1, 0.5], [0.5, 1]], scale=0.25 + ) + computed_bounds = eset._compute_parameter_bounds(baron) + np.testing.assert_allclose(computed_bounds, [[0.5, 1.5], [1.0, 2.0]]) + np.testing.assert_allclose(computed_bounds, eset.parameter_bounds) + + eset2 = EllipsoidalSet( + center=[1, 1.5], shape_matrix=[[1, 0.5], [0.5, 1]], scale=2.25 + ) + computed_bounds_2 = eset2._compute_parameter_bounds(baron) + + # add absolute tolerance to account from + # matrix inversion and roundoff errors + np.testing.assert_allclose(computed_bounds_2, [[-0.5, 2.5], [0, 3]], atol=1e-8) + np.testing.assert_allclose(computed_bounds_2, eset2.parameter_bounds, atol=1e-8) + + def test_add_bounds_on_uncertain_parameters(self): + m = ConcreteModel() + m.uncertain_param_vars = Var([0, 1], initialize=0) + eset = EllipsoidalSet( + center=[1, 1.5], shape_matrix=[[1, 0.5], [0.5, 1]], scale=0.25 + ) + eset._add_bounds_on_uncertain_parameters( + global_solver=None, uncertain_param_vars=m.uncertain_param_vars + ) + self.assertEqual(m.uncertain_param_vars[0].bounds, (0.5, 1.5)) + self.assertEqual(m.uncertain_param_vars[1].bounds, (1, 2)) + + +class TestPolyhedralSet(unittest.TestCase): + """ + Tests for the PolyhedralSet. + """ + + def test_normal_construction_and_update(self): + """ + Test PolyhedralSet constructor and attribute setters work + appropriately. + """ + lhs_coefficients_mat = [[1, 2, 3], [4, 5, 6]] + rhs_vec = [1, 3] + + pset = PolyhedralSet(lhs_coefficients_mat, rhs_vec) + + # check attributes are as expected + np.testing.assert_allclose(lhs_coefficients_mat, pset.coefficients_mat) + np.testing.assert_allclose(rhs_vec, pset.rhs_vec) + + # update the set + pset.coefficients_mat = [[1, 0, 1], [1, 1, 1.5]] + pset.rhs_vec = [3, 4] + + # check updates work + np.testing.assert_allclose([[1, 0, 1], [1, 1, 1.5]], pset.coefficients_mat) + np.testing.assert_allclose([3, 4], pset.rhs_vec) + + def test_error_on_polyhedral_set_dim_change(self): + """ + PolyhedralSet dimension (number columns of 'coefficients_mat') + is considered immutable. + Test ValueError raised if attempt made to change dimension. + """ + # construct valid set + pset = PolyhedralSet([[1, 2, 3], [4, 5, 6]], [1, 3]) + + exc_str = ( + r".*must have 3 columns to match set dimension \(provided.*2 columns\)" + ) + + # assert error on update + with self.assertRaisesRegex(ValueError, exc_str): + pset.coefficients_mat = [[1, 2], [3, 4]] + + def test_error_on_inconsistent_rows(self): + """ + Number of rows of budget membership mat is immutable. + Similarly, size of rhs_vec is immutable. + Check ValueError raised in event of attempted change. 
+ """ + coeffs_mat_exc_str = ( + r".*must have 2 rows to match shape of attribute 'rhs_vec' " + r"\(provided.*3 rows\)" + ) + rhs_vec_exc_str = ( + r".*must have 2 entries to match shape of attribute " + r"'coefficients_mat' \(provided.*3 entries\)" + ) + # assert error on construction + with self.assertRaisesRegex(ValueError, rhs_vec_exc_str): + PolyhedralSet([[1, 2], [3, 4]], rhs_vec=[1, 3, 3]) + + # construct a valid polyhedral set + # (2 x 2 coefficients, 2-vector for RHS) + pset = PolyhedralSet([[1, 2], [3, 4]], rhs_vec=[1, 3]) + + # assert error on update + with self.assertRaisesRegex(ValueError, coeffs_mat_exc_str): + # 3 x 2 matrix row mismatch + pset.coefficients_mat = [[1, 2], [3, 4], [5, 6]] + with self.assertRaisesRegex(ValueError, rhs_vec_exc_str): + # 3-vector mismatches 2 rows + pset.rhs_vec = [1, 3, 2] + + def test_error_on_empty_set(self): + """ + Check ValueError raised if nonemptiness check performed + at construction returns a negative result. + """ + exc_str = r"PolyhedralSet.*is empty.*" + + # assert error on construction + with self.assertRaisesRegex(ValueError, exc_str): + PolyhedralSet([[1], [-1]], rhs_vec=[1, -3]) + + def test_error_on_polyhedral_mat_all_zero_columns(self): + """ + Test ValueError raised if budget membership mat + has a column with all zeros. + """ + invalid_col_mat = [[0, 0, 1], [0, 0, 1], [0, 0, 1]] + rhs_vec = [1, 1, 2] + + exc_str = r".*all entries zero in columns at indexes: 0, 1.*" + + # assert error on construction + with self.assertRaisesRegex(ValueError, exc_str): + PolyhedralSet(invalid_col_mat, rhs_vec) + + # construct a valid budget set + pset = PolyhedralSet([[1, 0, 1], [1, 1, 0], [1, 1, 1]], rhs_vec) + + # assert error on update + with self.assertRaisesRegex(ValueError, exc_str): + pset.coefficients_mat = invalid_col_mat + + def test_set_as_constraint(self): + """ + Test method for setting up constraints works correctly. + """ + m = ConcreteModel() + pset = PolyhedralSet( + lhs_coefficients_mat=[[1, 0], [-1, 1], [-1, -1]], rhs_vec=[2, -1, -1] + ) + uq = pset.set_as_constraint(uncertain_params=None, block=m) + + self.assertEqual(uq.auxiliary_vars, []) + self.assertEqual(len(uq.uncertain_param_vars), 2) + self.assertEqual(len(uq.uncertainty_cons), 3) + self.assertIs(uq.block, m) + + var1, var2 = uq.uncertain_param_vars + + assertExpressionsEqual( + self, uq.uncertainty_cons[0].expr, var1 + np.int_(0) * var2 <= np.int_(2) + ) + assertExpressionsEqual( + self, + uq.uncertainty_cons[1].expr, + np.int_(-1) * var1 + np.int_(1) * var2 <= np.int_(-1), + ) + assertExpressionsEqual( + self, + uq.uncertainty_cons[2].expr, + np.int_(-1) * var1 + np.int_(-1) * var2 <= np.int_(-1), + ) + + def test_set_as_constraint_dim_mismatch(self): + """ + Check exception raised if number of uncertain parameters + does not match the dimension. + """ + m = ConcreteModel() + m.v1 = Var(initialize=0) + pset = PolyhedralSet( + lhs_coefficients_mat=[[1, 0], [-1, 1], [-1, -1]], rhs_vec=[2, -1, -1] + ) + with self.assertRaisesRegex(ValueError, ".*dimension"): + pset.set_as_constraint(uncertain_params=[m.v1], block=m) + + def test_set_as_constraint_type_mismatch(self): + """ + Check exception raised if uncertain parameter variables + are of invalid type. 
+ """ + m = ConcreteModel() + m.p1 = Param([0, 1], initialize=0, mutable=True) + pset = PolyhedralSet( + lhs_coefficients_mat=[[1, 0], [-1, 1], [-1, -1]], rhs_vec=[2, -1, -1] + ) + with self.assertRaisesRegex(TypeError, ".*valid component type"): + pset.set_as_constraint(uncertain_params=[m.p1[0], m.p1[1]], block=m) + + with self.assertRaisesRegex(TypeError, ".*valid component type"): + pset.set_as_constraint(uncertain_params=m.p1, block=m) + + @unittest.skipUnless(baron_available, "BARON is not available.") + def test_compute_parameter_bounds(self): + """ + Test parameter bounds computations give expected results. + """ + pset = PolyhedralSet( + lhs_coefficients_mat=[[1, 0], [-1, 1], [-1, -1]], rhs_vec=[2, -1, -1] + ) + self.assertEqual(pset.parameter_bounds, []) + computed_bounds = pset._compute_parameter_bounds(SolverFactory("baron")) + self.assertEqual(computed_bounds, [(1, 2), (-1, 1)]) + + def test_point_in_set(self): + """ + Test point in set checks work as expected. + """ + pset = PolyhedralSet( + lhs_coefficients_mat=[[1, 0], [-1, 1], [-1, -1]], rhs_vec=[2, -1, -1] + ) + self.assertTrue(pset.point_in_set([1, 0])) + self.assertTrue(pset.point_in_set([2, 1])) + self.assertTrue(pset.point_in_set([2, -1])) + self.assertFalse(pset.point_in_set([1, 1])) + self.assertFalse(pset.point_in_set([-1, 0])) + self.assertFalse(pset.point_in_set([0, 0])) + + # check what happens if dimensions are off + with self.assertRaisesRegex(ValueError, ".*to match the set dimension.*"): + pset.point_in_set([1, 2, 3, 4]) + + @unittest.skipUnless(baron_available, "Global NLP solver is not available.") + def test_add_bounds_on_uncertain_parameters(self): + m = ConcreteModel() + m.uncertain_param_vars = Var([0, 1], initialize=0) + pset = PolyhedralSet( + lhs_coefficients_mat=[[1, 0], [-1, 1], [-1, -1]], rhs_vec=[2, -1, -1] + ) + pset._add_bounds_on_uncertain_parameters( + global_solver=SolverFactory("baron"), + uncertain_param_vars=m.uncertain_param_vars, + ) + self.assertEqual(m.uncertain_param_vars[0].bounds, (1, 2)) + self.assertEqual(m.uncertain_param_vars[1].bounds, (-1, 1)) + + +class CustomUncertaintySet(UncertaintySet): + """ + Test simple custom uncertainty set subclass. + """ + + def __init__(self, dim): + self._dim = dim + + @property + def geometry(self): + self.geometry = Geometry.LINEAR + + @property + def dim(self): + return self._dim + + def set_as_constraint(self, uncertain_params=None, block=None): + blk, param_var_list, conlist, aux_vars = ( + _setup_standard_uncertainty_set_constraint_block( + block=block, + uncertain_param_vars=uncertain_params, + dim=self.dim, + num_auxiliary_vars=None, + ) + ) + conlist.add(sum(param_var_list) <= 0) + for var in param_var_list: + conlist.add(-1 <= var) + + return UncertaintyQuantification( + block=blk, + uncertainty_cons=list(conlist.values()), + uncertain_param_vars=param_var_list, + auxiliary_vars=aux_vars, + ) + + def point_in_set(self, point): + point_arr = np.array(point) + return point_arr.sum() <= 0 and np.all(-1 <= point_arr) + + @property + def parameter_bounds(self): + return [(-1, 1)] * self.dim + + +class TestCustomUncertaintySet(unittest.TestCase): + """ + Test for a custom uncertainty set subclass. + """ + + def test_set_as_constraint(self): + """ + Test method for setting up constraints works correctly. 
+ """ + m = ConcreteModel() + custom_set = CustomUncertaintySet(dim=2) + uq = custom_set.set_as_constraint(uncertain_params=None, block=m) + + con1, con2, con3 = uq.uncertainty_cons + var1, var2 = uq.uncertain_param_vars + self.assertEqual(uq.auxiliary_vars, []) + self.assertIs(uq.block, m) + self.assertEqual(len(uq.uncertainty_cons), 3) + self.assertEqual(len(uq.uncertain_param_vars), 2) + + def test_compute_parameter_bounds(self): + """ + Test parameter bounds computations give expected results. + """ + baron = SolverFactory("baron") + custom_set = CustomUncertaintySet(dim=2) + self.assertEqual(custom_set.parameter_bounds, [(-1, 1)] * 2) + self.assertEqual(custom_set._compute_parameter_bounds(baron), [(-1, 1)] * 2) + + +if __name__ == "__main__": + unittest.main() diff --git a/pyomo/contrib/pyros/uncertainty_sets.py b/pyomo/contrib/pyros/uncertainty_sets.py index 9eff366681e..ed03aa7553a 100644 --- a/pyomo/contrib/pyros/uncertainty_sets.py +++ b/pyomo/contrib/pyros/uncertainty_sets.py @@ -10,68 +10,191 @@ # ___________________________________________________________________________ """ -Abstract and pre-defined classes for representing uncertainty sets (or -uncertain parameter spaces) of two-stage nonlinear robust optimization -models. +This module defines the :class:`~UncertaintySet` abstract base class, +used for representing the uncertainty set of a robust +optimization problem to be solved with PyROS, +and a suite of pre-implemented concrete subclasses, +based on uncertainty sets often used in the robust optimization +literature. +""" -Along with a ``ConcreteModel`` object representing a deterministic model -formulation, an uncertainty set object may be passed to the PyROS solver -to obtain a solution to the model's two-stage robust optimization -counterpart. +import abc +import math +import functools +from numbers import Integral +from collections import namedtuple +from collections.abc import Iterable, MutableSequence +from enum import Enum -Classes -------- -``UncertaintySet`` - Abstract base class for a generic uncertainty set. All other set - types defined in this module are subclasses. A user may implement - their own uncertainty set type as a custom-written subclass. +from pyomo.common.dependencies import numpy as np, scipy as sp +from pyomo.common.modeling import unique_component_name +from pyomo.core.base import ( + Block, + ConstraintList, + ConcreteModel, + maximize, + minimize, + Var, + VarData, +) +from pyomo.core.expr import mutable_expression, native_numeric_types, value +from pyomo.core.util import quicksum, dot_product +from pyomo.opt.results import check_optimal_termination +from pyomo.contrib.pyros.util import ( + copy_docstring, + POINT_IN_UNCERTAINTY_SET_TOL, + standardize_component_data, +) -``EllipsoidalSet`` - A hyperellipsoid. -``AxisAlignedEllipsoidalSet`` - An axis-aligned hyperellipsoid. +valid_num_types = tuple(native_numeric_types) -``PolyhedralSet`` - A bounded convex polyhedron/polytope. -``BoxSet`` - A hyperrectangle. +def standardize_uncertain_param_vars(obj, dim): + """ + Standardize an object castable to a list of VarData objects + representing uncertain model parameters, + and check that the length of the resulting list is equal + to the specified dimension. -``BudgetSet`` - A budget set. + Parameters + ---------- + obj : Var, VarData, or iterable of Var/VarData + Object to standardize. + dim : int + Specified dimension. -``CardinalitySet`` - A cardinality set (or gamma set). 
+    Returns
+    -------
+    var_data_list : list of VarData
+        Standard variable list.
+    """
+    var_data_list = standardize_component_data(
+        obj=obj,
+        valid_ctype=Var,
+        valid_cdatatype=VarData,
+        ctype_validator=None,
+        cdatatype_validator=None,
+        allow_repeats=False,
+        from_iterable=obj,
+    )
+    if len(var_data_list) != dim:
+        raise ValueError(
+            f"Passed {len(var_data_list)} VarData objects representing "
+            "the uncertain parameters, but the uncertainty set is of "
+            f"dimension {dim}."
+        )
 
-``DiscreteScenarioSet``
-    A discrete set of finitely many points.
+    return var_data_list
 
-``FactorModelSet``
-    A factor model set (or net-alpha model set).
 
-``IntersectionSet``
-    An intersection of two or more sets, each represented by an
-    ``UncertaintySet`` object.
-"""
+def _setup_standard_uncertainty_set_constraint_block(
+    block, uncertain_param_vars, dim, num_auxiliary_vars=None
+):
+    """
+    Set up block to prepare for declaration of uncertainty
+    set constraints.
 
-import abc
-import math
-import functools
-from numbers import Integral
-from collections.abc import Iterable, MutableSequence
-from enum import Enum
+    Parameters
+    ----------
+    block : BlockData or None
+        Block to be prepared. If `None`, a new concrete block
+        is instantiated.
+    uncertain_param_vars : list of VarData or None
+        Variables representing the main uncertain parameters.
+        If `None`, then a new IndexedVar object consisting of
+        `dim` members is declared on `block`.
+    dim : int
+        Dimension of the uncertainty set of interest.
+    num_auxiliary_vars : int or None, optional
+        Number of variables representing auxiliary uncertain
+        parameters to be declared. If `None`, then no auxiliary
+        variables are declared.
+
+    Returns
+    -------
+    block : BlockData
+        Prepared block.
+    param_var_data_list : list of VarData
+        Variable data objects representing the main uncertain
+        parameters.
+    con_list : ConstraintList
+        Empty ConstraintList, to which the uncertainty set constraints
+        should be added later.
+    auxiliary_var_list : list of VarData
+        Variable data objects representing the auxiliary uncertain
+        parameters.
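+
+    Notes
+    -----
+    A rough usage sketch (all names illustrative only)::
+
+        blk, param_vars, conlist, aux_vars = (
+            _setup_standard_uncertainty_set_constraint_block(
+                block=None, uncertain_param_vars=None, dim=2
+            )
+        )
+        # two new Var members and an empty ConstraintList are now
+        # declared on `blk`; uncertainty set constraints can be added
+        conlist.add(param_vars[0] + param_vars[1] <= 1)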
+ """ + if block is None: + block = Block(concrete=True) + + if uncertain_param_vars is None: + uncertain_param_indexed_var = Var(range(dim)) + block.add_component( + unique_component_name(block, "uncertain_param_indexed_var"), + uncertain_param_indexed_var, + ) + param_var_data_list = list(uncertain_param_indexed_var.values()) + else: + # resolve arguments + param_var_data_list = standardize_uncertain_param_vars( + uncertain_param_vars, dim=dim + ) + con_list = ConstraintList() + block.add_component( + unique_component_name(block, "uncertainty_set_conlist"), con_list + ) + + auxiliary_var_list = [] + if num_auxiliary_vars is not None: + auxiliary_param_var = Var(range(num_auxiliary_vars)) + block.add_component( + unique_component_name(block, "auxiliary_param_var"), auxiliary_param_var + ) + auxiliary_var_list = list(auxiliary_param_var.values()) -from pyomo.common.dependencies import numpy as np, scipy as sp -from pyomo.core.base import ConcreteModel, Objective, maximize, minimize, Block -from pyomo.core.base.constraint import ConstraintList -from pyomo.core.base.var import Var, IndexedVar -from pyomo.core.expr.numvalue import value, native_numeric_types -from pyomo.opt.results import check_optimal_termination -from pyomo.contrib.pyros.util import add_bounds_for_uncertain_parameters + return block, param_var_data_list, con_list, auxiliary_var_list -valid_num_types = tuple(native_numeric_types) +UncertaintyQuantification = namedtuple( + "UncertaintyQuantification", + ("block", "uncertainty_cons", "uncertain_param_vars", "auxiliary_vars"), +) +UncertaintyQuantification.__doc__ = """ + A collection of modeling components + generated or addressed by the `set_as_constraint` method of + an uncertainty set object. + + The UncertaintyQuantification class was generated using + the Python :py:func:`~collections.namedtuple` factory function, + so the standard :py:func:`~collections.namedtuple` + attributes and methods + (e.g., :py:meth:`~collections.somenamedtuple._asdict`) + are available. + + Parameters + ---------- + block : BlockData + Block on which the uncertainty set constraints + were added. + uncertainty_cons : list of ConstraintData + The added uncertainty set constraints. + uncertain_param_vars : list of VarData + Variables representing the (main) uncertain parameters. + auxiliary_vars : list of VarData + Variables representing the auxiliary uncertain parameters. +""" +UncertaintyQuantification.block.__doc__ = ( + "Block on which the uncertainty set constraints were added." +) +UncertaintyQuantification.uncertainty_cons.__doc__ = ( + "The added uncertainty set constraints." +) +UncertaintyQuantification.uncertain_param_vars.__doc__ = ( + "Variables representing the (main) uncertain parameters." +) +UncertaintyQuantification.auxiliary_vars.__doc__ = ( + "Variables representing the auxiliary uncertain parameters." +) def validate_arg_type( @@ -157,11 +280,24 @@ def validate_arg_type( def is_ragged(arr, arr_types=None): """ - Determine whether an array-like (such as a list or Numpy ndarray) - is ragged. + Return True if the array-like `arr` is ragged, False otherwise. NOTE: if Numpy ndarrays are considered to be arr types, then zero-dimensional arrays are not considered to be as such. + + Parameters + ---------- + arr : array_like + Array to check. + arr_types : None or iterable of type + Types of entries of `arr` to be considered subarrays. + If `None` is specified, then this is set to + ``(list, numpy.ndarray, tuple)``. + + Returns + ------- + bool + True if ragged, False otherwise. 
""" arr_types = (list, np.ndarray, tuple) if arr_types is None else arr_types @@ -192,7 +328,23 @@ def is_ragged(arr, arr_types=None): def validate_dimensions(arr_name, arr, dim, display_value=False): """ Validate dimension of an array-like object. - Raise Exception if validation fails. + + Parameters + ---------- + arr_name : str + Name of the array to validate. + arr : array_like + Array to validate. + dim : int + Required dimension of the array. + display_value : bool, optional + True to include the array string representation + in exception messages, False otherwise. + + Raises + ------ + ValueError + If `arr` is ragged or not of the required dimension `dim`. """ if is_ragged(arr): raise ValueError( @@ -217,7 +369,13 @@ def validate_dimensions(arr_name, arr, dim, display_value=False): def validate_array( - arr, arr_name, dim, valid_types, valid_type_desc=None, required_shape=None + arr, + arr_name, + dim, + valid_types, + valid_type_desc=None, + required_shape=None, + required_shape_qual="", ): """ Validate shape and entry types of an array-like object. @@ -244,6 +402,15 @@ def validate_array( corresponding to the position of the entry or `None` (meaning no requirement for the length in the corresponding dimension). + required_shape_qual : str, optional + Clause/phrase expressing reason `arr` should be of shape + `required_shape`, e.g. "to match the set dimension". + + Raises + ------ + ValueError + If the Numpy array to which `arr` is cast is not of shape + `required_shape`. """ np_arr = np.array(arr, dtype=object) validate_dimensions(arr_name, np_arr, dim, display_value=False) @@ -267,9 +434,15 @@ def generate_shape_str(shape, required_shape): if size is not None and size != np_arr.shape[idx]: req_shape_str = generate_shape_str(required_shape, required_shape) actual_shape_str = generate_shape_str(np_arr.shape, required_shape) + required_shape_qual = ( + # add a preceding space, if needed + f" {required_shape_qual}" + if required_shape_qual + else "" + ) raise ValueError( f"Attribute '{arr_name}' should be of shape " - f"{req_shape_str}, but detected shape " + f"{req_shape_str}{required_shape_qual}, but detected shape " f"{actual_shape_str}" ) @@ -283,11 +456,6 @@ def generate_shape_str(shape, required_shape): ) -def column(matrix, i): - # Get column i of a given multi-dimensional list - return [row[i] for row in matrix] - - class Geometry(Enum): """ Geometry classifications for PyROS uncertainty set objects. @@ -337,33 +505,26 @@ def parameter_bounds(self): """ raise NotImplementedError - def bounding_model(self, config=None): + def _create_bounding_model(self): """ Make uncertain parameter value bounding problems (optimize value of each uncertain parameter subject to constraints on the uncertain parameters). - Parameters - ---------- - config : None or ConfigDict, optional - If a ConfigDict is provided, then it contains - arguments passed to the PyROS solver. - Returns ------- model : ConcreteModel - Bounding problem, with all Objectives deactivated. + Bounding model, with an indexed mimimization sense + Objective with name 'param_var_objectives' consisting + of `N` entries, all of which have been deactivated. 
""" model = ConcreteModel() - model.util = Block() # construct param vars, initialize to nominal point model.param_vars = Var(range(self.dim)) # add constraints - model.cons = self.set_as_constraint( - uncertain_params=model.param_vars, model=model, config=config - ) + self.set_as_constraint(uncertain_params=model.param_vars, block=model) @model.Objective(range(self.dim)) def param_var_objectives(self, idx): @@ -400,32 +561,19 @@ def is_bounded(self, config): This method is invoked during the validation step of a PyROS solver call. """ - bounding_model = self.bounding_model(config=config) - solver = config.global_solver - # initialize uncertain parameter variables - for param, param_var in zip( - config.uncertain_params, bounding_model.param_vars.values() - ): - param_var.set_value(param.value, skip_validation=True) - - for idx, obj in bounding_model.param_var_objectives.items(): - # activate objective for corresponding dimension - obj.activate() - - # solve for lower bound, then upper bound - for sense in (minimize, maximize): - obj.sense = sense - res = solver.solve(bounding_model, load_solutions=False, tee=False) - - if not check_optimal_termination(res): - return False + param_bounds_arr = np.array( + self._compute_parameter_bounds(solver=config.global_solver) + ) - # ensure sense is minimize when done, deactivate - obj.sense = minimize - obj.deactivate() + all_bounds_finite = np.all(np.isfinite(param_bounds_arr)) + if not all_bounds_finite: + config.progress_logger.info( + "Computed coordinate value bounds are not all finite. " + f"Got bounds: {param_bounds_arr}" + ) - return True + return all_bounds_finite def is_nonempty(self, config): """ @@ -441,21 +589,28 @@ def is_valid(self, config): return self.is_nonempty(config=config) and self.is_bounded(config=config) @abc.abstractmethod - def set_as_constraint(self, **kwargs): + def set_as_constraint(self, uncertain_params=None, block=None): """ - Construct a (sequence of) mathematical constraint(s) - (represented by Pyomo `Constraint` objects) on the uncertain - parameters to represent the uncertainty set for use in a - two-stage robust optimization problem or subproblem (such as a - PyROS separation subproblem). + Construct a block of Pyomo constraint(s) defining the + uncertainty set on variables representing the uncertain + parameters, for use in a two-stage robust optimization + problem or subproblem (such as a PyROS separation subproblem). Parameters ---------- - **kwargs : dict - Keyword arguments containing, at the very least, a sequence - of `Param` or `Var` objects representing the uncertain - parameters of interest, and any additional information - needed to generate the constraints. + uncertain_params : None, Var, or list of Var, optional + Variable objects representing the (main) uncertain + parameters. If `None` is passed, then + new variable objects are constructed. + block : BlockData or None, optional + Block on which to declare the constraints and any + new variable objects. If `None` is passed, then a new + block is constructed. + + Returns + ------- + UncertaintyQuantification + A collection of the components added or addressed. """ pass @@ -480,55 +635,124 @@ def point_in_set(self, point): determine whether a user-specified nominal parameter realization lies in the uncertainty set. """ - - # === Ensure point is of correct dimensionality as the uncertain parameters - if len(point) != self.dim: - raise AttributeError( - "Point must have same dimensions as uncertain parameters." 
-            )
+        validate_array(
+            arr=point,
+            arr_name="point",
+            dim=1,
+            valid_types=valid_num_types,
+            valid_type_desc="numeric type",
+            required_shape=[self.dim],
+            required_shape_qual="to match the set dimension",
+        )
 
         m = ConcreteModel()
-        the_params = []
-        for i in range(self.dim):
-            m.add_component("x_%s" % i, Var(initialize=point[i]))
-            the_params.append(getattr(m, "x_%s" % i))
+        uncertainty_quantification = self.set_as_constraint(block=m)
+        for var, val in zip(uncertainty_quantification.uncertain_param_vars, point):
+            var.set_value(val)
+
+        # since constraint expressions are relational,
+        # `value()` returns True if constraint satisfied, False otherwise
+        # NOTE: this check may be inaccurate if there are auxiliary
+        # variables and they have not been initialized to
+        # feasible values
+        is_in_set = all(
+            value(con.expr) for con in uncertainty_quantification.uncertainty_cons
+        )
 
-        # === Generate constraint for set
-        set_constraint = self.set_as_constraint(uncertain_params=the_params)
+        return is_in_set
 
-        # === value() returns True if the constraint is satisfied, False else.
-        is_in_set = all(value(con.expr) for con in set_constraint.values())
+    def _compute_parameter_bounds(self, solver):
+        """
+        Compute coordinate value bounds for every dimension
+        of `self` by solving a bounding model.
+        """
+        bounding_model = self._create_bounding_model()
+        param_bounds = []
+        for idx, obj in bounding_model.param_var_objectives.items():
+            # activate objective for corresponding dimension
+            obj.activate()
+            bounds = []
 
-        return is_in_set
+            # solve for lower bound, then upper bound;
+            # each solve should terminate with an optimal solution
+            for sense in (minimize, maximize):
+                obj.sense = sense
+                res = solver.solve(bounding_model, load_solutions=False)
+                if check_optimal_termination(res):
+                    bounding_model.solutions.load_from(res)
+                else:
+                    raise ValueError(
+                        "Could not compute "
+                        f"{'lower' if sense == minimize else 'upper'} "
+                        f"bound in dimension {idx + 1} of {self.dim}. "
+                        f"Solver status summary:\n {res.solver}."
+                    )
+                bounds.append(value(obj))
 
-    @staticmethod
-    def add_bounds_on_uncertain_parameters(**kwargs):
+            # add parameter bounds for current dimension
+            param_bounds.append(tuple(bounds))
+
+            # ensure sense is minimize when done, deactivate
+            obj.sense = minimize
+            obj.deactivate()
+
+        return param_bounds
+
+    def _add_bounds_on_uncertain_parameters(
+        self, uncertain_param_vars, global_solver=None
+    ):
         """
-        Specify the numerical bounds for the uncertain parameters
-        restricted by the set. Each uncertain parameter is represented
-        by a Pyomo `Var` object in a model passed to this method,
-        and the numerical bounds are specified by setting the
-        `.lb()` and `.ub()` attributes of the `Var` object.
+        Specify declared bounds for Vars representing the uncertain
+        parameters constrained to an uncertainty set.
 
         Parameters
         ----------
-        kwargs : dict
-            Keyword arguments consisting of a Pyomo `ConfigDict` and a
-            Pyomo `ConcreteModel` object, representing a PyROS solver
-            configuration and the optimization model of interest.
+        uncertain_param_vars : Var, VarData, or list of Var/VarData
+            Variables representing the uncertain parameter objects.
+        global_solver : None or Pyomo solver, optional
+            Optimizer capable of solving bounding problems to
+            global optimality. If the coordinate bounds for the
+            set can be retrieved through `self.parameter_bounds`,
+            then None can be passed.
 
         Notes
        -----
         This method is invoked in advance of a PyROS separation
         subproblem.
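+
+        For illustration, a sketch using a two-dimensional BoxSet
+        (component names are illustrative only):
+
+        >>> m = ConcreteModel()
+        >>> m.q = Var(range(2))
+        >>> BoxSet([[0, 1], [2, 3]])._add_bounds_on_uncertain_parameters(
+        ...     uncertain_param_vars=m.q
+        ... )
+        >>> m.q[0].bounds == (0, 1) and m.q[1].bounds == (2, 3)
+        True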
""" - config = kwargs.pop('config') - model = kwargs.pop('model') - _set = config.uncertainty_set - parameter_bounds = _set.parameter_bounds - for i, p in enumerate(model.util.uncertain_param_vars.values()): - p.setlb(parameter_bounds[i][0]) - p.setub(parameter_bounds[i][1]) + uncertain_param_vars = standardize_uncertain_param_vars( + uncertain_param_vars, self.dim + ) + + parameter_bounds = self.parameter_bounds + if not parameter_bounds: + parameter_bounds = self._compute_parameter_bounds(global_solver) + + for (lb, ub), param_var in zip(parameter_bounds, uncertain_param_vars): + param_var.setlb(lb) + param_var.setub(ub) + + def compute_auxiliary_uncertain_param_vals(self, point, solver=None): + """ + Compute auxiliary uncertain parameter values for a given point. + The point need not be in the uncertainty set. + + Parameters + ---------- + point : (N,) array-like + Point of interest. + solver : Pyomo solver, optional + If needed, a Pyomo solver with which to compute the + auxiliary values. + + Returns + ------- + aux_space_pt : numpy.ndarray + Computed auxiliary uncertain parameter values. + """ + raise NotImplementedError( + f"Auxiliary parameter computation not supported for {type(self).__name__}." + ) class UncertaintySetList(MutableSequence): @@ -835,35 +1059,27 @@ def parameter_bounds(self): """ return [tuple(bound) for bound in self.bounds] - def set_as_constraint(self, uncertain_params, **kwargs): - """ - Construct a list of box constraints on a given sequence - of uncertain parameter objects. - - Parameters - ---------- - uncertain_params : list of Param or list of Var - Uncertain parameter objects upon which the constraints - are imposed. - **kwargs : dict, optional - Additional arguments. These arguments are currently - ignored. - - Returns - ------- - conlist : ConstraintList - The constraints on the uncertain parameters. - """ - conlist = ConstraintList() - conlist.construct() - - set_i = list(range(len(uncertain_params))) + @copy_docstring(UncertaintySet.set_as_constraint) + def set_as_constraint(self, uncertain_params=None, block=None): + block, param_var_list, uncertainty_conlist, aux_var_list = ( + _setup_standard_uncertainty_set_constraint_block( + block=block, + uncertain_param_vars=uncertain_params, + dim=self.dim, + num_auxiliary_vars=None, + ) + ) - for i in set_i: - conlist.add(uncertain_params[i] >= self.bounds[i][0]) - conlist.add(uncertain_params[i] <= self.bounds[i][1]) + vardata_bound_zip = zip(param_var_list, self.bounds) + for idx, (param_var, (lb, ub)) in enumerate(vardata_bound_zip): + uncertainty_conlist.add((lb, param_var, ub)) - return conlist + return UncertaintyQuantification( + block=block, + uncertain_param_vars=param_var_list, + uncertainty_cons=list(uncertainty_conlist.values()), + auxiliary_vars=aux_var_list, + ) class CardinalitySet(UncertaintySet): @@ -1046,49 +1262,58 @@ def parameter_bounds(self): ] return parameter_bounds - def set_as_constraint(self, uncertain_params, **kwargs): - """ - Construct a list of cardinality set constraints on - a sequence of uncertain parameter objects. 
+ @copy_docstring(UncertaintySet.set_as_constraint) + def set_as_constraint(self, uncertain_params=None, block=None): + # resolve arguments + block, param_var_data_list, conlist, aux_var_list = ( + _setup_standard_uncertainty_set_constraint_block( + block=block, + uncertain_param_vars=uncertain_params, + dim=self.dim, + num_auxiliary_vars=self.dim, + ) + ) - Parameters - ---------- - uncertain_params : list of Param or list of Var - Uncertain parameter objects upon which the constraints - are imposed. - **kwargs : dict - Additional arguments. This dictionary should consist - of a `model` entry, which maps to a `ConcreteModel` - object representing the model of interest (parent model - of the uncertain parameter objects). + cardinality_zip = zip( + self.origin, self.positive_deviation, aux_var_list, param_var_data_list + ) + for orig_val, pos_dev, auxvar, param_var in cardinality_zip: + conlist.add(orig_val + pos_dev * auxvar == param_var) - Returns - ------- - conlist : ConstraintList - The constraints on the uncertain parameters. - """ - # === Ensure dimensions - if len(uncertain_params) != len(self.origin): - raise AttributeError( - "Dimensions of origin and uncertain_param lists must be equal." - ) + conlist.add(quicksum(aux_var_list) <= self.gamma) - model = kwargs['model'] - set_i = list(range(len(uncertain_params))) - model.util.cassi = Var(set_i, initialize=0, bounds=(0, 1)) - - # Make n equality constraints - conlist = ConstraintList() - conlist.construct() - for i in set_i: - conlist.add( - self.origin[i] + self.positive_deviation[i] * model.util.cassi[i] - == uncertain_params[i] - ) + for aux_var in aux_var_list: + aux_var.setlb(0) + aux_var.setub(1) - conlist.add(sum(model.util.cassi[i] for i in set_i) <= self.gamma) + return UncertaintyQuantification( + block=block, + uncertain_param_vars=param_var_data_list, + uncertainty_cons=list(conlist.values()), + auxiliary_vars=aux_var_list, + ) - return conlist + @copy_docstring(UncertaintySet.compute_auxiliary_uncertain_param_vals) + def compute_auxiliary_uncertain_param_vals(self, point, solver=None): + validate_array( + arr=point, + arr_name="point", + dim=1, + valid_types=valid_num_types, + valid_type_desc="numeric type", + required_shape=[self.dim], + required_shape_qual="to match the set dimension", + ) + point_arr = np.array(point) + + is_dev_nonzero = self.positive_deviation != 0 + aux_space_pt = np.empty(self.dim) + aux_space_pt[is_dev_nonzero] = ( + point_arr[is_dev_nonzero] - self.origin[is_dev_nonzero] + ) / self.positive_deviation[is_dev_nonzero] + aux_space_pt[self.positive_deviation == 0] = 0 + + return aux_space_pt def point_in_set(self, point): """ @@ -1104,17 +1329,13 @@ def point_in_set(self, point): : bool True if the point lies in the set, False otherwise. 
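+
+        Example sketch (values chosen for illustration):
+
+        >>> cset = CardinalitySet([0, 0], [1, 1], 1)
+        >>> bool(cset.point_in_set([0, 1]))
+        True
+        >>> bool(cset.point_in_set([1, 1]))
+        False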
""" - cassis = [] - for i in range(self.dim): - if self.positive_deviation[i] > 0: - cassis.append((point[i] - self.origin[i]) / self.positive_deviation[i]) - - if sum(cassi for cassi in cassis) <= self.gamma and all( - cassi >= 0 and cassi <= 1 for cassi in cassis - ): - return True - else: - return False + aux_space_pt = self.compute_auxiliary_uncertain_param_vals(point) + return ( + np.all(point == self.origin + self.positive_deviation * aux_space_pt) + and aux_space_pt.sum() <= self.gamma + and np.all(0 <= aux_space_pt) + and np.all(aux_space_pt <= 1) + ) class PolyhedralSet(UncertaintySet): @@ -1177,7 +1398,7 @@ def _validate(self): c=np.zeros(self.coefficients_mat.shape[1]), A_ub=self.coefficients_mat, b_ub=self.rhs_vec, - method="simplex", + method="highs", bounds=(None, None), ) @@ -1185,7 +1406,7 @@ def _validate(self): if res.status == 1 or res.status == 4: raise ValueError( "Could not verify nonemptiness of the " - "polyhedral set (`scipy.optimize.linprog(method=simplex)` " + "polyhedral set (`scipy.optimize.linprog(method='highs')` " f" status {res.status}) " ) elif res.status == 2: @@ -1318,68 +1539,24 @@ def parameter_bounds(self): """ return [] - def set_as_constraint(self, uncertain_params, **kwargs): - """ - Construct a list of polyhedral constraints on a given sequence - of uncertain parameter objects. - - Parameters - ---------- - uncertain_params : list of Param or list of Var - Uncertain parameter objects upon which the constraints - are imposed. - **kwargs : dict, optional - Additional arguments. These arguments are currently - ignored. - - Returns - ------- - conlist : ConstraintList - The constraints on the uncertain parameters. - """ - - # === Ensure valid dimensions of lhs and rhs w.r.t uncertain_params - if np.asarray(self.coefficients_mat).shape[1] != len(uncertain_params): - raise AttributeError( - "Columns of coefficients_mat matrix " - "must equal length of uncertain parameters list." + @copy_docstring(UncertaintySet.set_as_constraint) + def set_as_constraint(self, uncertain_params=None, block=None): + block, param_var_data_list, conlist, aux_var_list = ( + _setup_standard_uncertainty_set_constraint_block( + block=block, uncertain_param_vars=uncertain_params, dim=self.dim ) + ) - set_i = list(range(len(self.coefficients_mat))) - - conlist = ConstraintList() - conlist.construct() - - for i in set_i: - constraint = 0 - for j in range(len(uncertain_params)): - constraint += float(self.coefficients_mat[i][j]) * uncertain_params[j] - conlist.add(constraint <= float(self.rhs_vec[i])) - - return conlist - - @staticmethod - def add_bounds_on_uncertain_parameters(model, config): - """ - Specify the numerical bounds for each of a sequence of uncertain - parameters, represented by Pyomo `Var` objects, in a modeling - object. The numerical bounds are specified through the `.lb()` - and `.ub()` attributes of the `Var` objects. - - Parameters - ---------- - model : ConcreteModel - Model of interest (parent model of the uncertain parameter - objects for which to specify bounds). - config : ConfigDict - PyROS solver config. + for row, rhs_val in zip(self.coefficients_mat, self.rhs_vec): + lhs_expr = dot_product(row, param_var_data_list, index=range(row.size)) + conlist.add(lhs_expr <= rhs_val) - Notes - ----- - This method is invoked in advance of a PyROS separation - subproblem. 
- """ - add_bounds_for_uncertain_parameters(model=model, config=config) + return UncertaintyQuantification( + block=block, + uncertain_param_vars=param_var_data_list, + uncertainty_cons=list(conlist.values()), + auxiliary_vars=aux_var_list, + ) class BudgetSet(UncertaintySet): @@ -1661,59 +1838,9 @@ def parameter_bounds(self): return bounds - def set_as_constraint(self, uncertain_params, **kwargs): - """ - Construct a list of the constraints defining the budget - set on a given sequence of uncertain parameter objects. - - Parameters - ---------- - uncertain_params : list of Param or list of Var - Uncertain parameter objects upon which the constraints - are imposed. - **kwargs : dict, optional - Additional arguments. These arguments are currently - ignored. - - Returns - ------- - conlist : ConstraintList - The constraints on the uncertain parameters. - """ - # === Ensure matrix cols == len uncertain params - if self.dim != len(uncertain_params): - raise ValueError( - f"Argument 'uncertain_params' must contain {self.dim}" - "Param objects to match BudgetSet dimension" - f"(provided {len(uncertain_params)} objects)" - ) - - return PolyhedralSet.set_as_constraint(self, uncertain_params) - - @staticmethod - def add_bounds_on_uncertain_parameters(model, config): - """ - Specify the numerical bounds for each of a sequence of uncertain - parameters, represented by Pyomo `Var` objects, in a modeling - object. The numerical bounds are specified through the `.lb()` - and `.ub()` attributes of the `Var` objects. - - Parameters - ---------- - model : ConcreteModel - Model of interest (parent model of the uncertain parameter - objects for which to specify bounds). - config : ConfigDict - PyROS solver config. - - Notes - ----- - This method is invoked in advance of a PyROS separation - subproblem. - """ - # In this case, we use the UncertaintySet class method - # because we have numerical parameter_bounds - UncertaintySet.add_bounds_on_uncertain_parameters(model=model, config=config) + @copy_docstring(UncertaintySet.set_as_constraint) + def set_as_constraint(self, **kwargs): + return PolyhedralSet.set_as_constraint(self, **kwargs) class FactorModelSet(UncertaintySet): @@ -1726,14 +1853,17 @@ class FactorModelSet(UncertaintySet): Uncertain parameter values around which deviations are restrained. number_of_factors : int - Natural number representing the dimensionality of the + Natural number representing the dimension of the space to which the set projects. psi_mat : (N, F) array_like - Matrix designating each uncertain parameter's contribution to - each factor. Each row is associated with a separate uncertain + Matrix, of full column rank, designating each uncertain + parameter's contribution to each factor. + Each row is associated with a separate uncertain parameter. Each column is associated with a separate factor. Number of columns `F` of `psi_mat` should be equal to `number_of_factors`. + Since `psi_mat` is expected to be full column rank, + we require `F <= N`. beta : numeric type Real value between 0 and 1 specifying the fraction of the independent factors that can simultaneously attain @@ -1748,7 +1878,7 @@ class FactorModelSet(UncertaintySet): >>> fset = FactorModelSet( ... origin=np.zeros(4), ... number_of_factors=2, - ... psi_mat=np.full(shape=(4, 2), fill_value=0.1), + ... psi_mat=[[0, 0.1], [0, 0.1], [0.1, 0], [0.1, 0]], ... beta=0.5, ... 
) >>> fset.origin @@ -1756,10 +1886,10 @@ class FactorModelSet(UncertaintySet): >>> fset.number_of_factors 2 >>> fset.psi_mat - array([[0.1, 0.1], - [0.1, 0.1], - [0.1, 0.1], - [0.1, 0.1]]) + array([[0. , 0.1], + [0. , 0.1], + [0.1, 0. ], + [0.1, 0. ]]) >>> fset.beta 0.5 """ @@ -1811,13 +1941,15 @@ def origin(self, val): @property def number_of_factors(self): """ - int : Natural number representing the dimensionality `F` + int : Natural number representing the dimension `F` of the space to which the set projects. - This attribute is immutable, and may only be set at - object construction. Typically, the number of factors - is significantly less than the set dimension, but no - restriction to that end is imposed here. + This attribute is immutable, may only be set at + object construction, and must be equal to the number of + columns of the factor loading matrix ``self.psi_mat``. + Therefore, since we also require that ``self.psi_mat`` + be full column rank, `number_of_factors` + must not exceed the set dimension. """ return self._number_of_factors @@ -1838,10 +1970,12 @@ def number_of_factors(self, val): @property def psi_mat(self): """ - (N, F) numpy.ndarray : Matrix designating each - uncertain parameter's contribution to each factor. Each row is - associated with a separate uncertain parameter. Each column with - a separate factor. + (N, F) numpy.ndarray : Factor loading matrix, i.e., a full + column rank matrix for which each entry indicates how strongly + the factor corresponding to the entry's column is related + to the uncertain parameter corresponding to the entry's row. + Since `psi_mat` is expected to be full column rank, + we require `F <= N`. """ return self._psi_mat @@ -1868,13 +2002,15 @@ def psi_mat(self, val): f"(provided shape {psi_mat_arr.shape})" ) - # check values acceptable - for column in psi_mat_arr.T: - if np.allclose(column, 0): - raise ValueError( - "Each column of attribute 'psi_mat' should have at least " - "one nonzero entry" - ) + psi_mat_rank = np.linalg.matrix_rank(psi_mat_arr) + is_full_column_rank = psi_mat_rank == self.number_of_factors + if not is_full_column_rank: + raise ValueError( + "Attribute 'psi_mat' should be full column rank. " + f"(Got a matrix of shape {psi_mat_arr.shape} and rank {psi_mat_rank}.) " + "Ensure `psi_mat` does not have more columns than rows, " + "and the columns of `psi_mat` are linearly independent." + ) self._psi_mat = psi_mat_arr @@ -1889,7 +2025,7 @@ def beta(self): that as many factors will be above 0 as there will be below 0 (i.e., "zero-net-alpha" model). If ``beta = 1``, then the set is numerically equivalent to a `BoxSet` with bounds - ``[origin - psi @ np.ones(F), origin + psi @ np.ones(F)].T``. + ``[self.origin - psi @ np.ones(F), self.origin + psi @ np.ones(F)].T``. """ return self._beta @@ -1974,57 +2110,60 @@ def parameter_bounds(self): return parameter_bounds - def set_as_constraint(self, uncertain_params, **kwargs): - """ - Construct a list of factor model constraints on a given sequence - of uncertain parameter objects. - - Parameters - ---------- - uncertain_params : list of Param or list of Var - Uncertain parameter objects upon which the constraints - are imposed. - **kwargs : dict - Additional arguments. This dictionary should consist - of a `model` entry, which maps to a `ConcreteModel` - object representing the model of interest (parent model - of the uncertain parameter objects). - - Returns - ------- - conlist : ConstraintList - The constraints on the uncertain parameters. 
- """ - model = kwargs['model'] - - # === Ensure dimensions - if len(uncertain_params) != len(self.origin): - raise AttributeError( - "Dimensions of origin and uncertain_param lists must be equal." + @copy_docstring(UncertaintySet.set_as_constraint) + def set_as_constraint(self, uncertain_params=None, block=None): + block, param_var_data_list, uncertainty_conlist, aux_var_list = ( + _setup_standard_uncertainty_set_constraint_block( + block=block, + uncertain_param_vars=uncertain_params, + dim=self.dim, + num_auxiliary_vars=self.number_of_factors, ) + ) - # Make F-dim cassi variable - n = list(range(self.number_of_factors)) - model.util.cassi = Var(n, initialize=0, bounds=(-1, 1)) + factor_zip = zip(self.origin, self.psi_mat, param_var_data_list) + for orig_val, psi_row, param_var in factor_zip: + psi_dot_product = dot_product( + psi_row, aux_var_list, index=range(self.number_of_factors) + ) + uncertainty_conlist.add(orig_val + psi_dot_product == param_var) - conlist = ConstraintList() - conlist.construct() + # absolute value constraints on sum of auxiliary vars + beta_F = self.beta * self.number_of_factors + uncertainty_conlist.add((-beta_F, quicksum(aux_var_list), beta_F)) - disturbances = [ - sum(self.psi_mat[i][j] * model.util.cassi[j] for j in n) - for i in range(len(uncertain_params)) - ] + for var in aux_var_list: + var.setlb(-1) + var.setub(1) - # Make n equality constraints - for i in range(len(uncertain_params)): - conlist.add(self.origin[i] + disturbances[i] == uncertain_params[i]) - conlist.add( - sum(model.util.cassi[i] for i in n) <= +self.beta * self.number_of_factors + return UncertaintyQuantification( + block=block, + uncertain_param_vars=param_var_data_list, + uncertainty_cons=list(uncertainty_conlist.values()), + auxiliary_vars=aux_var_list, ) - conlist.add( - sum(model.util.cassi[i] for i in n) >= -self.beta * self.number_of_factors + + @copy_docstring(UncertaintySet.compute_auxiliary_uncertain_param_vals) + def compute_auxiliary_uncertain_param_vals(self, point, solver=None): + validate_array( + arr=point, + arr_name="point", + dim=1, + valid_types=valid_num_types, + valid_type_desc="numeric type", + required_shape=[self.dim], + required_shape_qual="to match the set dimension", ) - return conlist + point_arr = np.array(point) + + # protect against cases where + # `psi_mat` was recently modified entrywise + # to a matrix that is not full column rank + self.psi_mat = self.psi_mat + + # since `psi_mat` is full column rank, + # the pseudoinverse uniquely determines the auxiliary values + return np.linalg.pinv(self.psi_mat) @ (point_arr - self.origin) def point_in_set(self, point): """ @@ -2040,18 +2179,13 @@ def point_in_set(self, point): : bool True if the point lies in the set, False otherwise. 
""" - inv_psi = np.linalg.pinv(self.psi_mat) - diff = np.asarray(list(point[i] - self.origin[i] for i in range(len(point)))) - cassis = np.dot(inv_psi, np.transpose(diff)) - - if abs( - sum(cassi for cassi in cassis) - ) <= self.beta * self.number_of_factors and all( - cassi >= -1 and cassi <= 1 for cassi in cassis - ): - return True - else: - return False + aux_space_pt = self.compute_auxiliary_uncertain_param_vals(point) + tol = POINT_IN_UNCERTAINTY_SET_TOL + return abs( + aux_space_pt.sum() + ) <= self.beta * self.number_of_factors + tol and np.all( + np.abs(aux_space_pt) <= 1 + tol + ) class AxisAlignedEllipsoidalSet(UncertaintySet): @@ -2198,63 +2332,37 @@ def parameter_bounds(self): ] return parameter_bounds - def set_as_constraint(self, uncertain_params, model=None, config=None): - """ - Construct a list of ellipsoidal constraints on a given sequence - of uncertain parameter objects. - - Parameters - ---------- - uncertain_params : {IndexedParam, IndexedVar, list of Param/Var} - Uncertain parameter objects upon which the constraints - are imposed. Indexed parameters are accepted, and - are unpacked for constraint generation. - **kwargs : dict, optional - Additional arguments. These arguments are currently - ignored. - - Returns - ------- - conlist : ConstraintList - The constraints on the uncertain parameters. - """ - all_params = list() - - # expand all uncertain parameters to a list. - # this accounts for the cases in which `uncertain_params` - # consists of indexed model components, - # or is itself a single indexed component - if not isinstance(uncertain_params, (tuple, list)): - uncertain_params = [uncertain_params] - - all_params = [] - for uparam in uncertain_params: - all_params.extend(uparam.values()) - - if len(all_params) != len(self.center): - raise AttributeError( - f"Center of ellipsoid is of dimension {len(self.center)}," - f" but vector of uncertain parameters is of dimension" - f" {len(all_params)}" + @copy_docstring(UncertaintySet.set_as_constraint) + def set_as_constraint(self, uncertain_params=None, block=None): + block, param_var_data_list, uncertainty_conlist, aux_var_list = ( + _setup_standard_uncertainty_set_constraint_block( + block=block, + uncertain_param_vars=uncertain_params, + dim=self.dim, + num_auxiliary_vars=None, ) - - zip_all = zip(all_params, self.center, self.half_lengths) - diffs_squared = list() + ) # now construct the constraints - conlist = ConstraintList() - conlist.construct() + diffs_squared = list() + zip_all = zip(param_var_data_list, self.center, self.half_lengths) for param, ctr, half_len in zip_all: if half_len > 0: diffs_squared.append((param - ctr) ** 2 / (half_len) ** 2) else: # equality constraints for parameters corresponding to # half-lengths of zero - conlist.add(param == ctr) + uncertainty_conlist.add(param == ctr) - conlist.add(sum(diffs_squared) <= 1) + if diffs_squared: + uncertainty_conlist.add(quicksum(diffs_squared) <= 1) - return conlist + return UncertaintyQuantification( + block=block, + uncertain_param_vars=param_var_data_list, + uncertainty_cons=list(uncertainty_conlist.values()), + auxiliary_vars=aux_var_list, + ) class EllipsoidalSet(UncertaintySet): @@ -2496,54 +2604,54 @@ def parameter_bounds(self): ] return parameter_bounds - def set_as_constraint(self, uncertain_params, **kwargs): - """ - Construct a list of ellipsoidal constraints on a given sequence - of uncertain parameter objects. 
- - Parameters - ---------- - uncertain_params : {IndexedParam, IndexedVar, list of Param/Var} - Uncertain parameter objects upon which the constraints - are imposed. Indexed parameters are accepted, and - are unpacked for constraint generation. - **kwargs : dict, optional - Additional arguments. These arguments are currently - ignored. - - Returns - ------- - conlist : ConstraintList - The constraints on the uncertain parameters. - """ - inv_covar = np.linalg.inv(self.shape_matrix) + @copy_docstring(UncertaintySet.point_in_set) + def point_in_set(self, point): + validate_array( + arr=point, + arr_name="point", + dim=1, + valid_types=valid_num_types, + valid_type_desc="numeric type", + required_shape=[self.dim], + required_shape_qual="to match the set dimension", + ) + off_center = point - self.center + normalized_pt_radius = np.sqrt( + off_center @ np.linalg.inv(self.shape_matrix) @ off_center + ) + normalized_boundary_radius = np.sqrt(self.scale) + return ( + normalized_pt_radius + <= normalized_boundary_radius + POINT_IN_UNCERTAINTY_SET_TOL + ) - if len(uncertain_params) != len(self.center): - raise AttributeError( - "Center of ellipsoid must be same dimensions as vector of uncertain parameters." + @copy_docstring(UncertaintySet.set_as_constraint) + def set_as_constraint(self, uncertain_params=None, block=None): + block, param_var_data_list, uncertainty_conlist, aux_var_list = ( + _setup_standard_uncertainty_set_constraint_block( + block=block, + uncertain_param_vars=uncertain_params, + dim=self.dim, + num_auxiliary_vars=None, ) + ) - # Calculate row vector of differences - diff = [] - # === Assume VarList uncertain_param_vars - for idx, i in enumerate(uncertain_params): - if uncertain_params[idx].is_indexed(): - for index in uncertain_params[idx]: - diff.append(uncertain_params[idx][index] - self.center[idx]) - else: - diff.append(uncertain_params[idx] - self.center[idx]) - - # Calculate inner product of difference vector and covar matrix - product1 = [ - sum([x * y for x, y in zip(diff, column(inv_covar, i))]) - for i in range(len(inv_covar)) - ] - constraint = sum([x * y for x, y in zip(product1, diff)]) + inv_shape_mat = np.linalg.inv(self.shape_matrix) + with mutable_expression() as expr: + for (idx1, idx2), mat_entry in np.ndenumerate(inv_shape_mat): + expr += ( + mat_entry + * (param_var_data_list[idx1] - self.center[idx1]) + * (param_var_data_list[idx2] - self.center[idx2]) + ) + uncertainty_conlist.add(expr <= self.scale) - conlist = ConstraintList() - conlist.construct() - conlist.add(constraint <= self.scale) - return conlist + return UncertaintyQuantification( + block=block, + uncertain_param_vars=param_var_data_list, + uncertainty_cons=list(uncertainty_conlist.values()), + auxiliary_vars=aux_var_list, + ) class DiscreteScenarioSet(UncertaintySet): @@ -2659,41 +2767,27 @@ def is_bounded(self, config): """ return True - def set_as_constraint(self, uncertain_params, **kwargs): - """ - Construct a list of constraints on a given sequence - of uncertain parameter objects. - - Parameters - ---------- - uncertain_params : list of Param or list of Var - Uncertain parameter objects upon which the constraints - are imposed. - **kwargs : dict, optional - Additional arguments. These arguments are currently - ignored. - - Returns - ------- - conlist : ConstraintList - The constraints on the uncertain parameters. 
- """ + @copy_docstring(UncertaintySet.set_as_constraint) + def set_as_constraint(self, uncertain_params=None, block=None): # === Ensure point is of correct dimensionality as the uncertain parameters - dim = len(uncertain_params) - if any(len(d) != dim for d in self.scenarios): - raise AttributeError( - "All scenarios must have same dimensions as uncertain parameters." + block, param_var_data_list, uncertainty_conlist, aux_var_list = ( + _setup_standard_uncertainty_set_constraint_block( + block=block, + uncertain_param_vars=uncertain_params, + dim=self.dim, + num_auxiliary_vars=None, ) + ) - conlist = ConstraintList() - conlist.construct() - - for n in list(range(len(self.scenarios))): - for i in list(range(len(uncertain_params))): - conlist.add(uncertain_params[i] == self.scenarios[n][i]) + # no constraints declared for the discrete set; + # instead, the param vars are fixed during separation - conlist.deactivate() - return conlist + return UncertaintyQuantification( + block=block, + uncertainty_cons=list(uncertainty_conlist.values()), + uncertain_param_vars=param_var_data_list, + auxiliary_vars=aux_var_list, + ) def point_in_set(self, point): """ @@ -2710,14 +2804,20 @@ def point_in_set(self, point): : bool True if the point lies in the set, False otherwise. """ - # Round all double precision to a tolerance - num_decimals = 8 - rounded_scenarios = list( - list(round(num, num_decimals) for num in d) for d in self.scenarios + validate_array( + arr=point, + arr_name="point", + dim=1, + valid_types=valid_num_types, + valid_type_desc="numeric type", + required_shape=[self.dim], + required_shape_qual="to match the set dimension", ) - rounded_point = list(round(num, num_decimals) for num in point) - - return any(rounded_point == rounded_d for rounded_d in rounded_scenarios) + # Round all double precision to a tolerance + num_decimals = round(-np.log10(POINT_IN_UNCERTAINTY_SET_TOL)) + rounded_scenarios = np.round(self.scenarios, decimals=num_decimals) + rounded_point = np.round(point, decimals=num_decimals) + return np.any(np.all(rounded_point == rounded_scenarios, axis=1)) class IntersectionSet(UncertaintySet): @@ -2841,61 +2941,13 @@ def point_in_set(self, point): else: return False - def is_empty_intersection(self, uncertain_params, nlp_solver): - """ - Determine if intersection is empty. - - Arguments - --------- - uncertain_params : list of Param or list of Var - List of uncertain parameter objects. - nlp_solver : Pyomo SolverFactory object - NLP solver. - - Returns - ------- - is_empty_intersection : bool - True if the intersection is certified to be empty, - and False otherwise. 
-        """
-
-        # === Non-emptiness check for the set intersection
-        is_empty_intersection = True
-        if any(a_set.type == "discrete" for a_set in self.all_sets):
-            disc_sets = (a_set for a_set in self.all_sets if a_set.type == "discrete")
-            disc_set = min(
-                disc_sets, key=lambda x: len(x.scenarios)
-            )  # minimum set of scenarios
-            # === Ensure there is at least one scenario from this discrete set which is a member of all other sets
-            for scenario in disc_set.scenarios:
-                if all(a_set.point_in_set(point=scenario) for a_set in self.all_sets):
-                    is_empty_intersection = False
-                    break
-        else:
-            # === Compile constraints and solve NLP
-            m = ConcreteModel()
-            m.obj = Objective(expr=0)  # dummy objective required if using baron
-            m.param_vars = Var(uncertain_params.index_set())
-            for a_set in self.all_sets:
-                m.add_component(
-                    a_set.type + "_constraints",
-                    a_set.set_as_constraint(uncertain_params=m.param_vars),
-                )
-            try:
-                res = nlp_solver.solve(m)
-            except:
-                raise ValueError(
-                    "Solver terminated with an error while checking set intersection non-emptiness."
-                )
-            if check_optimal_termination(res):
-                is_empty_intersection = False
-        return is_empty_intersection
-
     # === Define pairwise intersection function
     @staticmethod
     def intersect(Q1, Q2):
         """
-        Obtain the intersection of two uncertainty sets.
+        Obtain the intersection of two uncertainty sets,
+        accounting for the case where either of the two sets
+        is discrete.

         Parameters
         ----------
@@ -2904,113 +2956,52 @@

         Returns
         -------
-        : DiscreteScenarioSet or IntersectionSet
+        DiscreteScenarioSet or IntersectionSet
             Intersection of the sets. A `DiscreteScenarioSet` is
             returned if both operand sets are `DiscreteScenarioSet`
             instances; otherwise, an `IntersectionSet` is returned.
         """
-        constraints = ConstraintList()
-        constraints.construct()
-
-        for set in (Q1, Q2):
-            other = Q1 if set is Q2 else Q2
-            if set.type == "discrete":
-                intersected_scenarios = []
-                for point in set.scenarios:
-                    if other.point_in_set(point=point):
-                        intersected_scenarios.append(point)
-                return DiscreteScenarioSet(scenarios=intersected_scenarios)
+        for set1, set2 in zip((Q1, Q2), (Q2, Q1)):
+            if isinstance(set1, DiscreteScenarioSet):
+                return DiscreteScenarioSet(
+                    scenarios=[pt for pt in set1.scenarios if set2.point_in_set(pt)]
+                )

         # === This case is if both sets are continuous
         return IntersectionSet(set1=Q1, set2=Q2)
-        return
-
-    def set_as_constraint(self, uncertain_params, **kwargs):
-        """
-        Construct a list of constraints on a given sequence
-        of uncertain parameter objects. In advance of constructing
-        the constraints, a check is performed to determine whether
-        the set is empty.
-
-        Parameters
-        ----------
-        uncertain_params : list of Param or list of Var
-            Uncertain parameter objects upon which the constraints
-            are imposed.
-        **kwargs : dict
-            Additional arguments. Must contain a `config` entry,
-            which maps to a `ConfigDict` containing an entry
-            entitled `global_solver`. The `global_solver`
-            key maps to an NLP solver, purportedly with global
-            optimization capabilities.
-
-        Returns
-        -------
-        conlist : ConstraintList
-            The constraints on the uncertain parameters.
-
-        Raises
-        ------
-        AttributeError
-            If the intersection set is found to be empty.
-        """
-        try:
-            nlp_solver = kwargs["config"].global_solver
-        except:
-            raise AttributeError(
-                "set_as_constraint for SetIntersection requires access to an NLP solver via"
-                "the PyROS Solver config."
+ @copy_docstring(UncertaintySet.set_as_constraint) + def set_as_constraint(self, uncertain_params=None, block=None): + block, param_var_data_list, uncertainty_conlist, aux_var_list = ( + _setup_standard_uncertainty_set_constraint_block( + block=block, + uncertain_param_vars=uncertain_params, + dim=self.dim, + num_auxiliary_vars=None, ) - is_empty_intersection = self.is_empty_intersection( - uncertain_params=uncertain_params, nlp_solver=nlp_solver ) - def _intersect(Q1, Q2): - return self.intersect(Q1, Q2) - - if not is_empty_intersection: - Qint = functools.reduce(_intersect, self.all_sets) - - if Qint.type == "discrete": - return Qint.set_as_constraint(uncertain_params=uncertain_params) - else: - conlist = ConstraintList() - conlist.construct() - for set in Qint.all_sets: - for con in list( - set.set_as_constraint( - uncertain_params=uncertain_params - ).values() - ): - conlist.add(con.expr) - return conlist - else: - raise AttributeError( - "Set intersection is empty, cannot proceed with PyROS robust optimization." + intersection_set = functools.reduce(self.intersect, self.all_sets) + if isinstance(intersection_set, DiscreteScenarioSet): + return intersection_set.set_as_constraint( + uncertain_params=uncertain_params, block=block ) - @staticmethod - def add_bounds_on_uncertain_parameters(model, config): - """ - Specify the numerical bounds for each of a sequence of uncertain - parameters, represented by Pyomo `Var` objects, in a modeling - object. The numerical bounds are specified through the `.lb()` - and `.ub()` attributes of the `Var` objects. - - Parameters - ---------- - model : ConcreteModel - Model of interest (parent model of the uncertain parameter - objects for which to specify bounds). - config : ConfigDict - PyROS solver config. - - Notes - ----- - This method is invoked in advance of a PyROS separation - subproblem. 
- """ - - add_bounds_for_uncertain_parameters(model=model, config=config) - return + all_cons, all_aux_vars = [], [] + for idx, unc_set in enumerate(intersection_set.all_sets): + sub_block = Block() + block.add_component( + unique_component_name(block, f"sub_block_{idx}"), sub_block + ) + set_quantification = unc_set.set_as_constraint( + block=sub_block, uncertain_params=param_var_data_list + ) + all_cons.extend(set_quantification.uncertainty_cons) + all_aux_vars.extend(set_quantification.auxiliary_vars) + + return UncertaintyQuantification( + block=block, + uncertain_param_vars=param_var_data_list, + uncertainty_cons=all_cons, + auxiliary_vars=all_aux_vars, + ) diff --git a/pyomo/contrib/pyros/util.py b/pyomo/contrib/pyros/util.py index e1d25e573f1..e95be5b7b30 100644 --- a/pyomo/contrib/pyros/util.py +++ b/pyomo/contrib/pyros/util.py @@ -13,51 +13,50 @@ Utility functions for the PyROS solver ''' -import copy +from collections import namedtuple +from collections.abc import Iterable +from contextlib import contextmanager from enum import Enum, auto -from pyomo.common.collections import ComponentSet, ComponentMap -from pyomo.common.errors import ApplicationError +import functools +import itertools as it +import logging +import math +import timeit + +from pyomo.common.collections import ComponentMap, ComponentSet +from pyomo.common.dependencies import scipy as sp +from pyomo.common.errors import ApplicationError, InvalidValueError +from pyomo.common.log import Preformatted from pyomo.common.modeling import unique_component_name -from pyomo.common.timing import TicTocTimer +from pyomo.common.timing import HierarchicalTimer, TicTocTimer from pyomo.core.base import ( + Any, + Block, + Component, + ConcreteModel, Constraint, - Var, - ConstraintList, - Objective, - minimize, Expression, - ConcreteModel, + Objective, maximize, - Block, - Param, + minimize, + Reals, + Var, + value, ) -from pyomo.core.util import prod -from pyomo.core.base.var import IndexedVar -from pyomo.core.base.set_types import Reals -from pyomo.opt import TerminationCondition as tc -from pyomo.core.expr import value -from pyomo.core.expr.numeric_expr import NPV_MaxExpression, NPV_MinExpression -from pyomo.repn.standard_repn import generate_standard_repn -import pyomo.repn.plugins.nl_writer as pyomo_nl_writer -import pyomo.repn.ampl as pyomo_ampl_repn +from pyomo.core.expr.numeric_expr import SumExpression +from pyomo.core.expr.numvalue import native_types from pyomo.core.expr.visitor import ( identify_variables, identify_mutable_parameters, replace_expressions, ) -from pyomo.common.dependencies import scipy as sp -from pyomo.core.expr.numvalue import native_types +from pyomo.core.util import prod +from pyomo.opt import SolverFactory +import pyomo.repn.ampl as pyomo_ampl_repn +from pyomo.repn.parameterized_quadratic import ParameterizedQuadraticRepnVisitor +import pyomo.repn.plugins.nl_writer as pyomo_nl_writer +from pyomo.repn.util import OrderedVarRecorder from pyomo.util.vars_from_expressions import get_vars_from_components -from pyomo.core.expr.numeric_expr import SumExpression -from pyomo.environ import SolverFactory - -import itertools as it -import timeit -from contextlib import contextmanager -import logging -import math -from pyomo.common.timing import HierarchicalTimer -from pyomo.common.log import Preformatted # Tolerances used in the code @@ -66,8 +65,13 @@ COEFF_MATCH_REL_TOL = 1e-6 COEFF_MATCH_ABS_TOL = 0 ABS_CON_CHECK_FEAS_TOL = 1e-5 +PRETRIANGULAR_VAR_COEFF_TOL = 1e-6 +POINT_IN_UNCERTAINTY_SET_TOL = 1e-8 
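The tolerance constants added above back the numerical comparisons introduced in this change; in particular, ``POINT_IN_UNCERTAINTY_SET_TOL`` governs the tolerance-based ``point_in_set`` membership checks seen earlier in this diff. As an editorial aside, a minimal usage sketch of the reworked uncertainty set interface follows; it is illustrative only (not part of the change set) and assumes the names introduced in this diff, namely ``BoxSet.set_as_constraint`` and the ``UncertaintyQuantification`` record::

    import numpy as np

    from pyomo.environ import ConcreteModel
    from pyomo.contrib.pyros.uncertainty_sets import BoxSet

    box_set = BoxSet(bounds=[(0.0, 1.0), (2.0, 4.0)])

    # tolerance-based membership check
    assert box_set.point_in_set(np.array([0.5, 3.0]))

    # set_as_constraint now declares the uncertain parameter Vars and
    # uncertainty constraints on a Block, and returns an
    # UncertaintyQuantification record rather than a bare ConstraintList
    m = ConcreteModel()
    quantification = box_set.set_as_constraint(block=m)
    print(quantification.uncertain_param_vars)  # two VarData objects
    print(quantification.uncertainty_cons)  # one ranged constraint per dimension
    print(quantification.auxiliary_vars)  # empty for BoxSet (no auxiliary vars)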
+DR_POLISHING_PARAM_PRODUCT_ZERO_TOL = 1e-10 + TIC_TOC_SOLVE_TIME_ATTR = "pyros_tic_toc_time" DEFAULT_LOGGER_NAME = "pyomo.contrib.pyros" +DEFAULT_SEPARATION_PRIORITY = 0 class TimingData: @@ -544,287 +548,71 @@ class ObjectiveType(Enum): nominal = auto() -def recast_to_min_obj(model, obj): +def standardize_component_data( + obj, + valid_ctype, + valid_cdatatype, + ctype_validator=None, + cdatatype_validator=None, + allow_repeats=False, + from_iterable=None, +): """ - Recast model objective to a minimization objective, as necessary. - - Parameters - ---------- - model : ConcreteModel - Model of interest. - obj : ScalarObjective - Objective of interest. - """ - if obj.sense is not minimize: - if isinstance(obj.expr, SumExpression): - # ensure additive terms in objective - # are split in accordance with user declaration - obj.expr = sum(-term for term in obj.expr.args) - else: - obj.expr = -obj.expr - obj.sense = minimize - - -def turn_bounds_to_constraints(variable, model, config=None): - ''' - Turn the variable in question's "bounds" into direct inequality constraints on the model. - :param variable: the variable with bounds to be turned to None and made into constraints. - :param model: the model in which the variable resides - :param config: solver config - :return: the list of inequality constraints that are the bounds - ''' - lb, ub = variable.lower, variable.upper - if variable.domain is not Reals: - variable.domain = Reals - - if isinstance(lb, NPV_MaxExpression): - lb_args = lb.args - else: - lb_args = (lb,) - - if isinstance(ub, NPV_MinExpression): - ub_args = ub.args - else: - ub_args = (ub,) - - count = 0 - for arg in lb_args: - if arg is not None: - name = unique_component_name( - model, variable.name + f"_lower_bound_con_{count}" - ) - model.add_component(name, Constraint(expr=arg - variable <= 0)) - count += 1 - variable.setlb(None) - - count = 0 - for arg in ub_args: - if arg is not None: - name = unique_component_name( - model, variable.name + f"_upper_bound_con_{count}" - ) - model.add_component(name, Constraint(expr=variable - arg <= 0)) - count += 1 - variable.setub(None) - - -def get_time_from_solver(results): + Standardize object to a list of component data objects. """ - Obtain solver time from a Pyomo `SolverResults` object. - - Returns - ------- - : float - Solver time. May be CPU time or elapsed time, - depending on the solver. If no time attribute - is found, then `float("nan")` is returned. - - NOTE - ---- - This method attempts to access solver time through the - attributes of `results.solver` in the following order - of precedence: - - 1) Attribute with name ``pyros.util.TIC_TOC_SOLVE_TIME_ATTR``. - This attribute is an estimate of the elapsed solve time - obtained using the Pyomo `TicTocTimer` at the point the - solver from which the results object is derived was invoked. - Preferred over other time attributes, as other attributes - may be in CPUs, and for purposes of evaluating overhead - time, we require wall s. - 2) `'user_time'` if the results object was returned by a GAMS - solver, `'time'` otherwise. - """ - solver_name = getattr(results.solver, "name", None) - - # is this sufficient to confirm GAMS solver used? 
- from_gams = solver_name is not None and str(solver_name).startswith("GAMS ") - time_attr_name = "user_time" if from_gams else "time" - for attr_name in [TIC_TOC_SOLVE_TIME_ATTR, time_attr_name]: - solve_time = getattr(results.solver, attr_name, None) - if solve_time is not None: - break - - return float("nan") if solve_time is None else solve_time - - -def add_bounds_for_uncertain_parameters(model, config): - ''' - This function solves a set of optimization problems to determine bounds on the uncertain parameters - given the uncertainty set description. These bounds will be added as additional constraints to the uncertainty_set_constr - constraint. Should only be called once set_as_constraint() has been called on the separation_model object. - :param separation_model: the model on which to add the bounds - :param config: solver config - :return: - ''' - # === Determine bounds on all uncertain params - uncertain_param_bounds = [] - bounding_model = ConcreteModel() - bounding_model.util = Block() - bounding_model.util.uncertain_param_vars = IndexedVar( - model.util.uncertain_param_vars.index_set() - ) - for tup in model.util.uncertain_param_vars.items(): - bounding_model.util.uncertain_param_vars[tup[0]].set_value( - tup[1].value, skip_validation=True + if isinstance(obj, valid_ctype): + if ctype_validator is not None: + ctype_validator(obj) + return list(obj.values()) + elif isinstance(obj, valid_cdatatype): + if cdatatype_validator is not None: + cdatatype_validator(obj) + return [obj] + elif isinstance(obj, Component): + # deal with this case separately from general + # iterables to prevent iteration over an invalid + # component type + raise TypeError( + f"Input object {obj!r} " + "is not of valid component type " + f"{valid_ctype.__name__} or component data type " + f"(got type {type(obj).__name__})." ) - - bounding_model.add_component( - "uncertainty_set_constraint", - config.uncertainty_set.set_as_constraint( - uncertain_params=bounding_model.util.uncertain_param_vars, - model=bounding_model, - config=config, - ), - ) - - for idx, param in enumerate( - list(bounding_model.util.uncertain_param_vars.values()) - ): - bounding_model.add_component( - "lb_obj_" + str(idx), Objective(expr=param, sense=minimize) + elif isinstance(obj, Iterable) and not isinstance(obj, str): + ans = [] + for item in obj: + ans.extend( + standardize_component_data( + item, + valid_ctype=valid_ctype, + valid_cdatatype=valid_cdatatype, + ctype_validator=ctype_validator, + cdatatype_validator=cdatatype_validator, + allow_repeats=allow_repeats, + from_iterable=obj, + ) + ) + else: + from_iterable_qual = ( + f" (entry of iterable {from_iterable})" if from_iterable is not None else "" ) - bounding_model.add_component( - "ub_obj_" + str(idx), Objective(expr=param, sense=maximize) + raise TypeError( + f"Input object {obj!r}{from_iterable_qual} " + "is not of valid component type " + f"{valid_ctype.__name__} or component data type " + f"{valid_cdatatype.__name__} (got type {type(obj).__name__})." 
) - for o in bounding_model.component_data_objects(Objective): - o.deactivate() - - for i in range(len(bounding_model.util.uncertain_param_vars)): - bounds = [] - for limit in ("lb", "ub"): - getattr(bounding_model, limit + "_obj_" + str(i)).activate() - res = config.global_solver.solve(bounding_model, tee=False) - bounds.append(bounding_model.util.uncertain_param_vars[i].value) - getattr(bounding_model, limit + "_obj_" + str(i)).deactivate() - uncertain_param_bounds.append(bounds) - - # === Add bounds as constraints to uncertainty_set_constraint ConstraintList - for idx, bound in enumerate(uncertain_param_bounds): - model.util.uncertain_param_vars[idx].setlb(bound[0]) - model.util.uncertain_param_vars[idx].setub(bound[1]) - - return - - -def transform_to_standard_form(model): - """ - Recast all model inequality constraints of the form `a <= g(v)` (`<= b`) - to the 'standard' form `a - g(v) <= 0` (and `g(v) - b <= 0`), - in which `v` denotes all model variables and `a` and `b` are - contingent on model parameters. - - Parameters - ---------- - model : ConcreteModel - The model to search for constraints. This will descend into all - active Blocks and sub-Blocks as well. - - Note - ---- - If `a` and `b` are identical and the constraint is not classified as an - equality (i.e. the `equality` attribute of the constraint object - is `False`), then the constraint is recast to the equality `g(v) == a`. - """ - # Note: because we will be adding / modifying the number of - # constraints, we want to resolve the generator to a list before - # starting. - cons = list( - model.component_data_objects(Constraint, descend_into=True, active=True) - ) - for con in cons: - if not con.equality: - has_lb = con.lower is not None - has_ub = con.upper is not None - - if has_lb and has_ub: - if con.lower is con.upper: - # recast as equality Constraint - con.set_value(con.lower == con.body) - else: - # range inequality; split into two Constraints. - uniq_name = unique_component_name(model, con.name + '_lb') - model.add_component( - uniq_name, Constraint(expr=con.lower - con.body <= 0) - ) - con.set_value(con.body - con.upper <= 0) - elif has_lb: - # not in standard form; recast. - con.set_value(con.lower - con.body <= 0) - elif has_ub: - # move upper bound to body. - con.set_value(con.body - con.upper <= 0) - else: - # unbounded constraint: deactivate - con.deactivate() - - -def get_vars_from_component(block, ctype): - """Determine all variables used in active components within a block. - - Parameters - ---------- - block: Block - The block to search for components. This is a recursive - generator and will descend into any active sub-Blocks as well. - ctype: class - The component type (typically either :py:class:`Constraint` or - :py:class:`Objective` to search for). - - """ - - return get_vars_from_components(block, ctype, active=True, descend_into=True) - - -def replace_uncertain_bounds_with_constraints(model, uncertain_params): - """ - For variables of which the bounds are dependent on the parameters - in the list `uncertain_params`, remove the bounds and add - explicit variable bound inequality constraints. 
- - :param model: Model in which to make the bounds/constraint replacements - :type model: class:`pyomo.core.base.PyomoModel.ConcreteModel` - :param uncertain_params: List of uncertain model parameters - :type uncertain_params: list - """ - uncertain_param_set = ComponentSet(uncertain_params) - - # component for explicit inequality constraints - uncertain_var_bound_constrs = ConstraintList() - model.add_component( - unique_component_name(model, 'uncertain_var_bound_cons'), - uncertain_var_bound_constrs, - ) + # check for duplicates if desired + if not allow_repeats and len(ans) != len(ComponentSet(ans)): + comp_name_list = [comp.name for comp in ans] + raise ValueError( + f"Standardized component list {comp_name_list} " + f"derived from input {obj} " + "contains duplicate entries." + ) - # get all variables in active objective and constraint expression(s) - vars_in_cons = ComponentSet(get_vars_from_component(model, Constraint)) - vars_in_obj = ComponentSet(get_vars_from_component(model, Objective)) - - for v in vars_in_cons | vars_in_obj: - # get mutable parameters in variable bounds expressions - ub = v.upper - mutable_params_ub = ComponentSet(identify_mutable_parameters(ub)) - lb = v.lower - mutable_params_lb = ComponentSet(identify_mutable_parameters(lb)) - - # add explicit inequality constraint(s), remove variable bound(s) - if mutable_params_ub & uncertain_param_set: - if type(ub) is NPV_MinExpression: - upper_bounds = ub.args - else: - upper_bounds = (ub,) - for u_bnd in upper_bounds: - uncertain_var_bound_constrs.add(v - u_bnd <= 0) - v.setub(None) - if mutable_params_lb & uncertain_param_set: - if type(ub) is NPV_MaxExpression: - lower_bounds = lb.args - else: - lower_bounds = (lb,) - for l_bnd in lower_bounds: - uncertain_var_bound_constrs.add(l_bnd - v <= 0) - v.setlb(None) + return ans def check_components_descended_from_model(model, components, components_name, config): @@ -863,44 +651,12 @@ def check_components_descended_from_model(model, components, components_name, co f"{comp_names_str}" ) raise ValueError( - f"Found entries of {components_name} " + f"Found {components_name} " "not descended from input model. " "Check logger output messages." ) -def get_state_vars(blk, first_stage_variables, second_stage_variables): - """ - Get state variables of a modeling block. - - The state variables with respect to `blk` are the unfixed - `VarData` objects participating in the active objective - or constraints descended from `blk` which are not - first-stage variables or second-stage variables. - - Parameters - ---------- - blk : ScalarBlock - Block of interest. - first_stage_variables : Iterable of VarData - First-stage variables. - second_stage_variables : Iterable of VarData - Second-stage variables. - - Yields - ------ - VarData - State variable. 
- """ - dof_var_set = ComponentSet(first_stage_variables) | ComponentSet( - second_stage_variables - ) - for var in get_vars_from_component(blk, (Objective, Constraint)): - is_state_var = not var.fixed and var not in dof_var_set - if is_state_var: - yield var - - def check_variables_continuous(model, vars, config): """ Check that all DOF and state variables of the model @@ -981,6 +737,12 @@ def validate_model(model, config): ) +VariablePartitioning = namedtuple( + "VariablePartitioning", + ("first_stage_variables", "second_stage_variables", "state_variables"), +) + + def validate_variable_partitioning(model, config): """ Check that partitioning of the first-stage variables, @@ -1029,27 +791,33 @@ def validate_variable_partitioning(model, config): "contain at least one common Var object." ) - state_vars = list( - get_state_vars( - model, - first_stage_variables=config.first_stage_variables, - second_stage_variables=config.second_stage_variables, + active_model_vars = ComponentSet( + get_vars_from_components( + block=model, + active=True, + include_fixed=False, + descend_into=True, + ctype=(Objective, Constraint), ) ) - var_type_list_map = { - "first-stage variables": config.first_stage_variables, - "second-stage variables": config.second_stage_variables, - "state variables": state_vars, - } - for desc, vars in var_type_list_map.items(): - check_components_descended_from_model( - model=model, components=vars, components_name=desc, config=config - ) + check_components_descended_from_model( + model=model, + components=active_model_vars, + components_name=( + "Vars participating in the " + "active model Objective/Constraint expressions " + ), + config=config, + ) + check_variables_continuous(model, active_model_vars, config) - all_vars = config.first_stage_variables + config.second_stage_variables + state_vars - check_variables_continuous(model, all_vars, config) + first_stage_vars = ComponentSet(config.first_stage_variables) & active_model_vars + second_stage_vars = ComponentSet(config.second_stage_variables) & active_model_vars + state_vars = active_model_vars - (first_stage_vars | second_stage_vars) - return state_vars + return VariablePartitioning( + list(first_stage_vars), list(second_stage_vars), list(state_vars) + ) def validate_uncertainty_specification(model, config): @@ -1159,513 +927,951 @@ def validate_pyros_inputs(model, config): Input deterministic model. config : ConfigDict PyROS solver options. + + Returns + ------- + user_var_partitioning : VariablePartitioning + Partitioning of the in-scope model variables into + first-stage, second-stage, and state variables, + according to user specification of the first-stage + and second-stage variables. """ validate_model(model, config) - state_vars = validate_variable_partitioning(model, config) + user_var_partitioning = validate_variable_partitioning(model, config) validate_uncertainty_specification(model, config) validate_separation_problem_options(model, config) - return state_vars - - -def substitute_ssv_in_dr_constraints(model, constraint): - ''' - Generate the standard_repn for the dr constraints. Generate new expression with replace_expression to ignore - the ssv component. - Then, replace_expression with substitution_map between ssv and the new expression. - Deactivate or del_component the original dr equation. - Then, return modified model and do coefficient matching as normal. - :param model: the working_model - :param constraint: an equality constraint from the working model identified to be of the form h(x,z,q) = 0. 
- :return: - ''' - dr_eqns = model.util.decision_rule_eqns - fsv = ComponentSet(model.util.first_stage_variables) - if not hasattr(model, "dr_substituted_constraints"): - model.dr_substituted_constraints = ConstraintList() - - substitution_map = {} - for eqn in dr_eqns: - repn = generate_standard_repn(eqn.body, compute_values=False) - new_expression = 0 - map_linear_coeff_to_var = [ - x - for x in zip(repn.linear_coefs, repn.linear_vars) - if x[1] in ComponentSet(fsv) - ] - map_quad_coeff_to_var = [ - x - for x in zip(repn.quadratic_coefs, repn.quadratic_vars) - if x[1] in ComponentSet(fsv) - ] - if repn.linear_coefs: - for coeff, var in map_linear_coeff_to_var: - new_expression += coeff * var - if repn.quadratic_coefs: - for coeff, var in map_quad_coeff_to_var: - new_expression += coeff * var[0] * var[1] # var here is a 2-tuple - - substitution_map[id(repn.linear_vars[-1])] = new_expression - - model.dr_substituted_constraints.add( - replace_expressions(expr=constraint.lower, substitution_map=substitution_map) - == replace_expressions(expr=constraint.body, substitution_map=substitution_map) - ) - - # === Delete the original constraint - model.del_component(constraint.name) - - return model.dr_substituted_constraints[ - max(model.dr_substituted_constraints.keys()) - ] - - -def is_certain_parameter(uncertain_param_index, config): - ''' - If an uncertain parameter's inferred LB and UB are within a relative tolerance, - then the parameter is considered certain. - :param uncertain_param_index: index of the parameter in the config.uncertain_params list - :param config: solver config - :return: True if param is effectively "certain," else return False - ''' - if config.uncertainty_set.parameter_bounds: - param_bounds = config.uncertainty_set.parameter_bounds[uncertain_param_index] - return math.isclose( - a=param_bounds[0], - b=param_bounds[1], - rel_tol=PARAM_IS_CERTAIN_REL_TOL, - abs_tol=PARAM_IS_CERTAIN_ABS_TOL, - ) - else: - return False # cannot be determined without bounds - - -def coefficient_matching(model, constraint, uncertain_params, config): - ''' - :param model: master problem model - :param constraint: the constraint from the master problem model - :param uncertain_params: the list of uncertain parameters - :param first_stage_variables: the list of effective first-stage variables (includes ssv if decision_rule_order = 0) - :return: True if the coefficient matching was successful, False if its proven robust_infeasible due to - constraints of the form 1 == 0 - ''' - # === Returned flags - successful_matching = True - robust_infeasible = False - - # === Efficiency for q_LB = q_UB - actual_uncertain_params = [] - - for i in range(len(uncertain_params)): - if not is_certain_parameter(uncertain_param_index=i, config=config): - actual_uncertain_params.append(uncertain_params[i]) - - # === Add coefficient matching constraint list - if not hasattr(model, "coefficient_matching_constraints"): - model.coefficient_matching_constraints = ConstraintList() - if not hasattr(model, "swapped_constraints"): - model.swapped_constraints = ConstraintList() - - variables_in_constraint = ComponentSet(identify_variables(constraint.expr)) - params_in_constraint = ComponentSet(identify_mutable_parameters(constraint.expr)) - first_stage_variables = model.util.first_stage_variables - second_stage_variables = model.util.second_stage_variables - - # === Determine if we need to do DR expression/ssv substitution to - # make h(x,z,q) == 0 into h(x,d,q) == 0 (which is just h(x,q) == 0) - if all( - v in 
ComponentSet(first_stage_variables) for v in variables_in_constraint - ) and any(q in ComponentSet(actual_uncertain_params) for q in params_in_constraint): - # h(x, q) == 0 - pass - elif all( - v in ComponentSet(first_stage_variables + second_stage_variables) - for v in variables_in_constraint - ) and any(q in ComponentSet(actual_uncertain_params) for q in params_in_constraint): - constraint = substitute_ssv_in_dr_constraints( - model=model, constraint=constraint - ) + return user_var_partitioning - variables_in_constraint = ComponentSet(identify_variables(constraint.expr)) - params_in_constraint = ComponentSet( - identify_mutable_parameters(constraint.expr) - ) - else: - pass - - if all( - v in ComponentSet(first_stage_variables) for v in variables_in_constraint - ) and any(q in ComponentSet(actual_uncertain_params) for q in params_in_constraint): - # Swap param objects for variable objects in this constraint - model.param_set = [] - for i in range(len(list(variables_in_constraint))): - # Initialize Params to non-zero value due to standard_repn bug - model.add_component("p_%s" % i, Param(initialize=1, mutable=True)) - model.param_set.append(getattr(model, "p_%s" % i)) - - model.variable_set = [] - for i in range(len(list(actual_uncertain_params))): - model.add_component("x_%s" % i, Var(initialize=1)) - model.variable_set.append(getattr(model, "x_%s" % i)) - - original_var_to_param_map = list( - zip(list(variables_in_constraint), model.param_set) - ) - original_param_to_vap_map = list( - zip(list(actual_uncertain_params), model.variable_set) - ) - var_to_param_substitution_map_forward = {} - # Separation problem initialized to nominal uncertain parameter values - for var, param in original_var_to_param_map: - var_to_param_substitution_map_forward[id(var)] = param - - param_to_var_substitution_map_forward = {} - # Separation problem initialized to nominal uncertain parameter values - for param, var in original_param_to_vap_map: - param_to_var_substitution_map_forward[id(param)] = var - - var_to_param_substitution_map_reverse = {} - # Separation problem initialized to nominal uncertain parameter values - for var, param in original_var_to_param_map: - var_to_param_substitution_map_reverse[id(param)] = var - - param_to_var_substitution_map_reverse = {} - # Separation problem initialized to nominal uncertain parameter values - for param, var in original_param_to_vap_map: - param_to_var_substitution_map_reverse[id(var)] = param - - model.swapped_constraints.add( - replace_expressions( - expr=replace_expressions( - expr=constraint.lower, - substitution_map=param_to_var_substitution_map_forward, - ), - substitution_map=var_to_param_substitution_map_forward, - ) - == replace_expressions( - expr=replace_expressions( - expr=constraint.body, - substitution_map=param_to_var_substitution_map_forward, - ), - substitution_map=var_to_param_substitution_map_forward, - ) - ) +class ModelData: + """ + Container for modeling objects from which the PyROS + subproblems are constructed. - swapped = model.swapped_constraints[max(model.swapped_constraints.keys())] + Parameters + ---------- + original_model : ConcreteModel + Original user-provided model. + timing : TimingData + Main timing data object. - val = generate_standard_repn(swapped.body, compute_values=False) + Attributes + ---------- + original_model : ConcreteModel + Original user-provided model. + timing : TimingData + Main PyROS solver timing data object. 
+ working_model : ConcreteModel + Preprocessed clone of `original_model` from which + the PyROS cutting set subproblems are to be + constructed. + separation_priority_order : dict + Mapping from constraint names to separation priority + values. + """ - if val.constant is not None: - if type(val.constant) not in native_types: - temp_expr = replace_expressions( - val.constant, substitution_map=var_to_param_substitution_map_reverse - ) - # We will use generate_standard_repn to generate a - # simplified expression (in particular, to remove any - # "0*..." terms) - temp_expr = generate_standard_repn(temp_expr).to_expression() - if temp_expr.__class__ not in native_types: - model.coefficient_matching_constraints.add(expr=temp_expr == 0) - elif math.isclose( - value(temp_expr), - 0, - rel_tol=COEFF_MATCH_REL_TOL, - abs_tol=COEFF_MATCH_ABS_TOL, - ): - pass - else: - successful_matching = False - robust_infeasible = True - elif math.isclose( - value(val.constant), - 0, - rel_tol=COEFF_MATCH_REL_TOL, - abs_tol=COEFF_MATCH_ABS_TOL, - ): - pass - else: - successful_matching = False - robust_infeasible = True - if val.linear_coefs is not None: - for coeff in val.linear_coefs: - if type(coeff) not in native_types: - temp_expr = replace_expressions( - coeff, substitution_map=var_to_param_substitution_map_reverse - ) - # We will use generate_standard_repn to generate a - # simplified expression (in particular, to remove any - # "0*..." terms) - temp_expr = generate_standard_repn(temp_expr).to_expression() - if temp_expr.__class__ not in native_types: - model.coefficient_matching_constraints.add(expr=temp_expr == 0) - elif math.isclose( - value(temp_expr), - 0, - rel_tol=COEFF_MATCH_REL_TOL, - abs_tol=COEFF_MATCH_ABS_TOL, - ): - pass - else: - successful_matching = False - robust_infeasible = True - elif math.isclose( - value(coeff), - 0, - rel_tol=COEFF_MATCH_REL_TOL, - abs_tol=COEFF_MATCH_ABS_TOL, - ): - pass - else: - successful_matching = False - robust_infeasible = True - if val.quadratic_coefs: - for coeff in val.quadratic_coefs: - if type(coeff) not in native_types: - temp_expr = replace_expressions( - coeff, substitution_map=var_to_param_substitution_map_reverse - ) - # We will use generate_standard_repn to generate a - # simplified expression (in particular, to remove any - # "0*..." terms) - temp_expr = generate_standard_repn(temp_expr).to_expression() - if temp_expr.__class__ not in native_types: - model.coefficient_matching_constraints.add(expr=temp_expr == 0) - elif math.isclose( - value(temp_expr), - 0, - rel_tol=COEFF_MATCH_REL_TOL, - abs_tol=COEFF_MATCH_ABS_TOL, - ): - pass - else: - successful_matching = False - robust_infeasible = True - elif math.isclose( - value(coeff), - 0, - rel_tol=COEFF_MATCH_REL_TOL, - abs_tol=COEFF_MATCH_ABS_TOL, - ): - pass - else: - successful_matching = False - robust_infeasible = True - if val.nonlinear_expr is not None: - successful_matching = False - robust_infeasible = False + def __init__(self, original_model, config, timing): + self.original_model = original_model + self.timing = timing + self.config = config + self.separation_priority_order = dict() + # working model will be addressed by preprocessing + self.working_model = None - if successful_matching: - model.util.h_x_q_constraints.add(constraint) + def preprocess(self, user_var_partitioning): + """ + Preprocess model data. - for i in range(len(list(variables_in_constraint))): - model.del_component("p_%s" % i) + See :meth:`~preprocess_model_data`. 
-    for i in range(len(list(params_in_constraint))):
-        model.del_component("x_%s" % i)
+        Returns
+        -------
+        bool
+            True if robust infeasibility detected, False otherwise.
+        """
+        return preprocess_model_data(self, user_var_partitioning)

-    model.del_component("swapped_constraints")
-    model.del_component("swapped_constraints_index")

-    return successful_matching, robust_infeasible

+def setup_quadratic_expression_visitor(
+    wrt, subexpression_cache=None, var_map=None, var_order=None, sorter=None
+):
+    """Set up a parameterized quadratic expression walker."""
+    visitor = ParameterizedQuadraticRepnVisitor(
+        subexpression_cache={} if subexpression_cache is None else subexpression_cache,
+        var_recorder=OrderedVarRecorder(
+            var_map={} if var_map is None else var_map,
+            var_order={} if var_order is None else var_order,
+            sorter=sorter,
+        ),
+        wrt=wrt,
+    )
+    visitor.expand_nonlinear_products = True
+    return visitor

-def selective_clone(block, first_stage_vars):
+class BoundType:
     """
-    Clone everything in a base_model except for the first-stage variables
-    :param block: the block of the model to be clones
-    :param first_stage_vars: the variables which should not be cloned
-    :return:
+    Indicator for whether a bound on a variable/constraint
+    is a lower bound, "equality" bound, or upper bound.
     """
-    memo = {'__block_scope__': {id(block): True, id(None): False}}
-    for v in first_stage_vars:
-        memo[id(v)] = v
-    new_block = copy.deepcopy(block, memo)
-    new_block._parent = None
-    return new_block
+    LOWER = "lower"
+    EQ = "eq"
+    UPPER = "upper"

-def add_decision_rule_variables(model_data, config):
+def get_var_bound_pairs(var):
     """
-    Add variables for polynomial decision rules to the working
-    model.
+    Get the domain and declared lower/upper
+    bound pairs of a variable data object.

     Parameters
     ----------
-    model_data : ROSolveResults
-        Model data.
-    config : config_dict
-        PyROS solver options.
+    var : VarData
+        Variable data object of interest.

-    Note
-    ----
-    Decision rule variables are considered first-stage decision
-    variables which do not get copied at each iteration.
-    PyROS currently supports static (zeroth order),
-    affine (first-order), and quadratic DR.
+    Returns
+    -------
+    domain_bounds : 2-tuple of None or numeric type
+        Domain (lower, upper) bound pair.
+    declared_bounds : 2-tuple of None, numeric type, or NumericExpression
+        Declared (lower, upper) bound pair.
+        Bounds of type `NumericExpression`
+        are either constant or mutable expressions.
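+
+    Example
+    -------
+    Illustrative sketch (names hypothetical): for a variable
+    declared as ``m.x = Var(domain=NonNegativeReals, bounds=(2, m.q))``,
+    where ``m.q`` is a mutable ``Param``, the domain bound pair is
+    ``(0, None)``, and the declared bound pair is, roughly,
+    ``(2, m.q)``, with the upper bound returned as a mutable
+    expression in ``m.q``.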
""" - second_stage_variables = model_data.working_model.util.second_stage_variables - first_stage_variables = model_data.working_model.util.first_stage_variables - decision_rule_vars = [] - - # since DR expression is a general polynomial in the uncertain - # parameters, the exact number of DR variables per second-stage - # variable depends on DR order and uncertainty set dimension - degree = config.decision_rule_order - num_uncertain_params = len(model_data.working_model.util.uncertain_params) - num_dr_vars = sp.special.comb( - N=num_uncertain_params + degree, k=degree, exact=True, repetition=False - ) - - for idx, ss_var in enumerate(second_stage_variables): - # declare DR coefficients for current second-stage variable - indexed_dr_var = Var( - range(num_dr_vars), initialize=0, bounds=(None, None), domain=Reals - ) - model_data.working_model.add_component( - f"decision_rule_var_{idx}", indexed_dr_var - ) + # temporarily set domain to Reals to cleanly retrieve + # the declared bound expressions + orig_var_domain = var.domain + var.domain = Reals - # index 0 entry of the IndexedVar is the static - # DR term. initialize to user-provided value of - # the corresponding second-stage variable. - # all other entries remain initialized to 0. - indexed_dr_var[0].set_value(value(ss_var, exception=False)) + domain_bounds = orig_var_domain.bounds() + declared_bounds = var.lower, var.upper - # update attributes - first_stage_variables.extend(indexed_dr_var.values()) - decision_rule_vars.append(indexed_dr_var) + # ensure state of variable object is ultimately left unchanged + var.domain = orig_var_domain - model_data.working_model.util.decision_rule_vars = decision_rule_vars + return domain_bounds, declared_bounds -def add_decision_rule_constraints(model_data, config): +def determine_certain_and_uncertain_bound( + domain_bound, declared_bound, uncertain_params, bound_type +): """ - Add decision rule equality constraints to the working model. + Determine the certain and uncertain lower or upper + bound for a variable object, based on the specified + domain and declared bound. Parameters ---------- - model_data : ROSolveResults - Model data. - config : ConfigDict - PyROS solver options. + domain_bound : numeric type, NumericExpression, or None + Domain bound. + declared_bound : numeric type, NumericExpression, or None + Declared bound. + uncertain_params : iterable of ParamData + Uncertain model parameters. + bound_type : {BoundType.LOWER, BoundType.UPPER} + Indication of whether the domain bound and declared bound + specify lower or upper bounds for the variable value. + + Returns + ------- + certain_bound : numeric type, NumericExpression, or None + Bound that independent of the uncertain parameters. + uncertain_bound : numeric expression or None + Bound that is dependent on the uncertain parameters. """ + if bound_type not in {BoundType.LOWER, BoundType.UPPER}: + raise ValueError( + f"Argument {bound_type=!r} should be either " + f"'{BoundType.LOWER}' or '{BoundType.UPPER}'." 
+ ) - second_stage_variables = model_data.working_model.util.second_stage_variables - uncertain_params = model_data.working_model.util.uncertain_params - decision_rule_eqns = [] - decision_rule_vars_list = model_data.working_model.util.decision_rule_vars - degree = config.decision_rule_order + if declared_bound is not None: + uncertain_params_in_declared_bound = ComponentSet( + uncertain_params + ) & ComponentSet(identify_mutable_parameters(declared_bound)) + else: + uncertain_params_in_declared_bound = False - # keeping track of degree of monomial in which each - # DR coefficient participates will be useful for later - dr_var_to_exponent_map = ComponentMap() + if not uncertain_params_in_declared_bound: + uncertain_bound = None - # set up uncertain parameter combinations for - # construction of the monomials of the DR expressions - monomial_param_combos = [] - for power in range(degree + 1): - power_combos = it.combinations_with_replacement(uncertain_params, power) - monomial_param_combos.extend(power_combos) + if declared_bound is None: + certain_bound = domain_bound + elif domain_bound is None: + certain_bound = declared_bound + else: + if bound_type == BoundType.LOWER: + certain_bound = ( + declared_bound + if value(declared_bound) >= domain_bound + else domain_bound + ) + else: + certain_bound = ( + declared_bound + if value(declared_bound) <= domain_bound + else domain_bound + ) + else: + uncertain_bound = declared_bound + certain_bound = domain_bound - # now construct DR equations and declare them on the working model - second_stage_dr_var_zip = zip(second_stage_variables, decision_rule_vars_list) - for idx, (ss_var, indexed_dr_var) in enumerate(second_stage_dr_var_zip): - # for each DR equation, the number of coefficients should match - # the number of monomial terms exactly - if len(monomial_param_combos) != len(indexed_dr_var.index_set()): - raise ValueError( - f"Mismatch between number of DR coefficient variables " - f"and number of DR monomials for DR equation index {idx}, " - f"corresponding to second-stage variable {ss_var.name!r}. " - f"({len(indexed_dr_var.index_set())}!= {len(monomial_param_combos)})" - ) + return certain_bound, uncertain_bound - # construct the DR polynomial - dr_expression = 0 - for dr_var, param_combo in zip(indexed_dr_var.values(), monomial_param_combos): - dr_expression += dr_var * prod(param_combo) - # map decision rule var to degree (exponent) of the - # associated monomial with respect to the uncertain params - dr_var_to_exponent_map[dr_var] = len(param_combo) +BoundTriple = namedtuple( + "BoundTriple", (BoundType.LOWER, BoundType.EQ, BoundType.UPPER) +) - # declare constraint on model - dr_eqn = Constraint(expr=dr_expression - ss_var == 0) - model_data.working_model.add_component(f"decision_rule_eqn_{idx}", dr_eqn) - # append to list of DR equality constraints - decision_rule_eqns.append(dr_eqn) +def rearrange_bound_pair_to_triple(lower_bound, upper_bound): + """ + Rearrange a lower/upper bound pair into a lower/equality/upper + bound triple, according to whether or not the lower and upper + bound are identical numerical values or expressions. - # finally, add attributes to util block - model_data.working_model.util.decision_rule_eqns = decision_rule_eqns - model_data.working_model.util.dr_var_to_exponent_map = dr_var_to_exponent_map + Parameters + ---------- + lower_bound : numeric type, NumericExpression, or None + Lower bound. + upper_bound : numeric type, NumericExpression, or None + Upper bound. 
+    Returns
+    -------
+    BoundTriple
+        Lower/equality/upper bound triple. If `lower_bound` and
+        `upper_bound` are identical (i.e., the same numeric
+        constant or ``NumericExpression`` object), then the
+        equality bound is set to `upper_bound` and the lower and
+        upper bounds of the triple are returned as None;
+        otherwise, the equality bound is None.

-def enforce_dr_degree(blk, config, degree):
+    Note
+    ----
+    This function is meant to behave in a manner akin to that of
+    ConstraintData.equality, in which a ranged inequality
+    constraint may be considered an equality constraint if
+    the `lower` and `upper` attributes of the constraint
+    are identical and not None.
     """
-    Make decision rule polynomials of a given degree
-    by fixing value of the appropriate subset of the decision
-    rule coefficients to 0.
+    if lower_bound is not None and lower_bound is upper_bound:
+        eq_bound = upper_bound
+        lower_bound = None
+        upper_bound = None
+    else:
+        eq_bound = None
+
+    return BoundTriple(lower_bound, eq_bound, upper_bound)
+
+
+def get_var_certain_uncertain_bounds(var, uncertain_params):
+    """
+    Determine the certain and uncertain lower/equality/upper bound
+    triples for a variable data object, based on that variable's
+    domain and declared bounds.

     Parameters
     ----------
-    blk : ScalarBlock
-        Working model, or master problem block.
-    config : ConfigDict
-        PyROS solver options.
-    degree : int
-        Degree of the DR polynomials that is to be enforced.
+    var : VarData
+        Variable data object of interest.
+    uncertain_params : iterable of ParamData
+        Uncertain model parameters.
+
+    Returns
+    -------
+    certain_bounds : BoundTriple
+        The certain lower/equality/upper bound triple.
+    uncertain_bounds : BoundTriple
+        The uncertain lower/equality/upper bound triple.
     """
-    second_stage_vars = blk.util.second_stage_variables
-    indexed_dr_vars = blk.util.decision_rule_vars
-    dr_var_to_exponent_map = blk.util.dr_var_to_exponent_map
+    (domain_lb, domain_ub), (declared_lb, declared_ub) = get_var_bound_pairs(var)

-    for ss_var, indexed_dr_var in zip(second_stage_vars, indexed_dr_vars):
-        for dr_var in indexed_dr_var.values():
-            dr_var_degree = dr_var_to_exponent_map[dr_var]
+    certain_lb, uncertain_lb = determine_certain_and_uncertain_bound(
+        domain_bound=domain_lb,
+        declared_bound=declared_lb,
+        uncertain_params=uncertain_params,
+        bound_type=BoundType.LOWER,
+    )
+    certain_ub, uncertain_ub = determine_certain_and_uncertain_bound(
+        domain_bound=domain_ub,
+        declared_bound=declared_ub,
+        uncertain_params=uncertain_params,
+        bound_type=BoundType.UPPER,
+    )

-            if dr_var_degree > degree:
-                dr_var.fix(0)
-            else:
-                dr_var.unfix()
+    certain_bounds = rearrange_bound_pair_to_triple(
+        lower_bound=certain_lb, upper_bound=certain_ub
+    )
+    uncertain_bounds = rearrange_bound_pair_to_triple(
+        lower_bound=uncertain_lb, upper_bound=uncertain_ub
+    )
+    return certain_bounds, uncertain_bounds

-def identify_objective_functions(model, objective):
+
+def get_effective_var_partitioning(model_data):
     """
-    Identify the first and second-stage portions of an Objective
-    expression, subject to user-provided variable partitioning and
-    uncertain parameter choice. In doing so, the first and second-stage
-    objective expressions are added to the model as `Expression`
-    attributes.
+    Partition the in-scope variables of the input model
+    according to known nonadjustability to the uncertain parameters.
+    The result is referred to as the "effective" variable
+    partitioning.
+ + In addition to the first-stage variables, + some of the variables considered second-stage variables + or state variables according to the user-provided variable + partitioning may be nonadjustable. This method analyzes + the decision rule order, fixed variables, and, + through an iterative pretriangularization method, + the equality constraints, to identify nonadjustable variables. Parameters ---------- - model : ConcreteModel - Model of interest. - objective : Objective - Objective to be resolved into first and second-stage parts. + model_data : model data object + Main model data object. + + Returns + ------- + effective_partitioning : VariablePartitioning + Effective variable partitioning. + """ + config = model_data.config + working_model = model_data.working_model + user_var_partitioning = model_data.working_model.user_var_partitioning + + # truly nonadjustable variables + nonadjustable_var_set = ComponentSet() + + # the following variables are immediately known to be nonadjustable: + # - first-stage variables + # - (if decision rule order is 0) second-stage variables + # - all variables fixed to a constant (independent of the uncertain + # parameters) explicitly by user or implicitly by bounds + var_type_list_pairs = ( + ("first-stage", user_var_partitioning.first_stage_variables), + ("second-stage", user_var_partitioning.second_stage_variables), + ("state", user_var_partitioning.state_variables), + ) + for vartype, varlist in var_type_list_pairs: + for wvar in varlist: + certain_var_bounds, _ = get_var_certain_uncertain_bounds( + wvar, working_model.uncertain_params + ) + + is_var_nonadjustable = ( + vartype == "first-stage" + or (config.decision_rule_order == 0 and vartype == "second-stage") + or wvar.fixed + or certain_var_bounds.eq is not None + ) + if is_var_nonadjustable: + nonadjustable_var_set.add(wvar) + config.progress_logger.debug( + f"The {vartype} variable {wvar.name!r} " + "is nonadjustable, for the following reason(s):" + ) + + if vartype == "first-stage": + config.progress_logger.debug(f" the variable has a {vartype} status") + + if config.decision_rule_order == 0 and vartype == "second-stage": + config.progress_logger.debug( + f" the variable is {vartype} and the decision rules are static " + ) + + if wvar.fixed: + config.progress_logger.debug(" the variable is fixed explicitly") + + if certain_var_bounds.eq is not None: + config.progress_logger.debug(" the variable is fixed by domain/bounds") + + uncertain_params_set = ComponentSet(working_model.uncertain_params) + + # determine constraints that are potentially applicable for + # pretriangularization + certain_eq_cons = ComponentSet() + for wcon in working_model.component_data_objects(Constraint, active=True): + if not wcon.equality: + continue + uncertain_params_in_expr = ( + ComponentSet(identify_mutable_parameters(wcon.expr)) & uncertain_params_set + ) + if uncertain_params_in_expr: + continue + certain_eq_cons.add(wcon) + + pretriangular_con_var_map = ComponentMap() + for num_passes in it.count(1): + config.progress_logger.debug( + f"Performing pass number {num_passes} over the certain constraints." 
+            )
+        new_pretriangular_con_var_map = ComponentMap()
+        for ccon in certain_eq_cons:
+            vars_in_con = ComponentSet(identify_variables(ccon.body - ccon.upper))
+            adj_vars_in_con = vars_in_con - nonadjustable_var_set
+
+            # conditions for pretriangularization of constraint
+            # with no uncertain params:
+            # - only one adjustable variable in the constraint
+            # - the adjustable variable appears only linearly,
+            #   and the linear coefficient exceeds our specified
+            #   tolerance.
+            if len(adj_vars_in_con) == 1:
+                adj_var_in_con = next(iter(adj_vars_in_con))
+                visitor = setup_quadratic_expression_visitor(wrt=[])
+                ccon_expr_repn = visitor.walk_expression(expr=ccon.body - ccon.upper)
+                adj_var_appears_linearly = adj_var_in_con not in ComponentSet(
+                    identify_variables(ccon_expr_repn.nonlinear)
+                ) and id(adj_var_in_con) in ComponentSet(ccon_expr_repn.linear)
+                if adj_var_appears_linearly:
+                    adj_var_linear_coeff = ccon_expr_repn.linear[id(adj_var_in_con)]
+                    if abs(adj_var_linear_coeff) > PRETRIANGULAR_VAR_COEFF_TOL:
+                        new_pretriangular_con_var_map[ccon] = adj_var_in_con
+                        config.progress_logger.debug(
+                            f"  The variable {adj_var_in_con.name!r} is "
+                            "made nonadjustable by the pretriangular constraint "
+                            f"{ccon.name!r}."
+                        )
+
+        nonadjustable_var_set.update(new_pretriangular_con_var_map.values())
+        pretriangular_con_var_map.update(new_pretriangular_con_var_map)
+        if not new_pretriangular_con_var_map:
+            config.progress_logger.debug(
+                "No new pretriangular constraint/variable pairs found. "
+                "Terminating pretriangularization loop."
+            )
+            break
+
+        for pcon in new_pretriangular_con_var_map:
+            certain_eq_cons.remove(pcon)
+
+    pretriangular_vars = ComponentSet(pretriangular_con_var_map.values())
+    config.progress_logger.debug(
+        f"Identified {len(pretriangular_con_var_map)} pretriangular "
+        f"constraints and {len(pretriangular_vars)} pretriangular variables "
+        f"in {num_passes} passes over the certain constraints."
+    )
+
+    effective_first_stage_vars = list(nonadjustable_var_set)
+    effective_second_stage_vars = [
+        var
+        for var in user_var_partitioning.second_stage_variables
+        if var not in nonadjustable_var_set
+    ]
+    effective_state_vars = [
+        var
+        for var in user_var_partitioning.state_variables
+        if var not in nonadjustable_var_set
+    ]
+    num_vars = len(
+        effective_first_stage_vars + effective_second_stage_vars + effective_state_vars
+    )
+
+    config.progress_logger.debug("Effective partitioning statistics:")
+    config.progress_logger.debug(f"  Variables: {num_vars}")
+    config.progress_logger.debug(
+        f"  Effective first-stage variables: {len(effective_first_stage_vars)}"
+    )
+    config.progress_logger.debug(
+        f"  Effective second-stage variables: {len(effective_second_stage_vars)}"
+    )
+    config.progress_logger.debug(
+        f"  Effective state variables: {len(effective_state_vars)}"
+    )
+
+    return VariablePartitioning(
+        first_stage_variables=effective_first_stage_vars,
+        second_stage_variables=effective_second_stage_vars,
+        state_variables=effective_state_vars,
+    )
+
+
+def add_effective_var_partitioning(model_data):
+    """
+    Obtain a repartitioning of the in-scope variables of the
+    working model according to known adjustability to the
+    uncertain parameters, and add this repartitioning to the
+    working model.
+
+    Parameters
+    ----------
+    model_data : model data object
+        Main model data object.
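+
+    Example
+    -------
+    Illustrative sketch (names hypothetical): if ``m.x`` is a
+    user-designated first-stage variable, and the model contains
+    the certain equality constraint ``m.x + 2 * m.z == 5``,
+    in which the user-designated second-stage variable ``m.z``
+    is the only adjustable variable and appears linearly,
+    then pretriangularization marks ``m.z`` nonadjustable,
+    so ``m.z`` is placed in the effective first-stage partition.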
""" - expr_to_split = objective.expr + effective_partitioning = get_effective_var_partitioning(model_data) + model_data.working_model.effective_var_partitioning = VariablePartitioning( + **effective_partitioning._asdict() + ) + + +def create_bound_constraint_expr(expr, bound, bound_type, standardize=True): + """ + Create a relational expression establishing a bound + for a numeric expression of interest. + + If desired, the expression is such that `bound` appears on the + right-hand side of the relational (inequality/equality) + operator. + + Parameters + ---------- + expr : NumericValue + Expression for which a bound is to be imposed. + This can be a Pyomo expression, Var, or Param. + bound : native numeric type or NumericValue + Bound for `expr`. This should be a numeric constant, + Param, or constant/mutable Pyomo expression. + bound_type : BoundType + Indicator for whether `expr` is to be lower bounded, + equality bounded, or upper bounded, by `bound`. + standardize : bool, optional + True to ensure `expr` appears on the left-hand side of the + relational operator, False otherwise. - has_args = hasattr(expr_to_split, "args") - is_sum = isinstance(expr_to_split, SumExpression) + Returns + ------- + RelationalExpression + Establishes a bound on `expr`. + """ + if bound_type == BoundType.LOWER: + return -expr <= -bound if standardize else bound <= expr + elif bound_type == BoundType.EQ: + return expr == bound + elif bound_type == BoundType.UPPER: + return expr <= bound + else: + raise ValueError(f"Bound type {bound_type!r} not supported.") + + +def remove_var_declared_bound(var, bound_type): + """ + Remove the specified declared bound(s) of a variable data object. - # determine additive terms of the objective expression - # additive terms are in accordance with user declaration - if has_args and is_sum: - obj_args = expr_to_split.args + Parameters + ---------- + var : VarData + Variable data object of interest. + bound_type : BoundType + Indicator for the declared bound(s) to remove. + Note: if BoundType.EQ is specified, then both the + lower and upper bounds are removed. + """ + if bound_type == BoundType.LOWER: + var.setlb(None) + elif bound_type == BoundType.EQ: + var.setlb(None) + var.setub(None) + elif bound_type == BoundType.UPPER: + var.setub(None) else: - obj_args = [expr_to_split] + raise ValueError( + f"Bound type {bound_type!r} not supported. " + f"Bound type must be '{BoundType.LOWER}', " + f"'{BoundType.EQ}, or '{BoundType.UPPER}'." + ) + + +def remove_all_var_bounds(var): + """ + Remove all the domain and declared bounds for a specified + variable data object. + """ + var.setlb(None) + var.setub(None) + var.domain = Reals + + +def turn_nonadjustable_var_bounds_to_constraints(model_data): + """ + Reformulate uncertain bounds for the nonadjustable + (i.e. effective first-stage) variables of the working + model to constraints. + + Only uncertain declared bounds are reformulated to + constraints, as these are the only bounds we need to + reformulate to properly construct the subproblems. + Consequently, all constraints added to the working model + in this method are considered second-stage constraints. + + Parameters + ---------- + model_data : model data object + Main model data object. 
+ """ + working_model = model_data.working_model + nonadjustable_vars = working_model.effective_var_partitioning.first_stage_variables + uncertain_params_set = ComponentSet(working_model.uncertain_params) + for var in nonadjustable_vars: + _, declared_bounds = get_var_bound_pairs(var) + declared_bound_triple = rearrange_bound_pair_to_triple(*declared_bounds) + var_name = var.getname( + relative_to=working_model.user_model, fully_qualified=True + ) + for btype, bound in declared_bound_triple._asdict().items(): + is_bound_uncertain = bound is not None and ( + ComponentSet(identify_mutable_parameters(bound)) & uncertain_params_set + ) + if is_bound_uncertain: + new_con_expr = create_bound_constraint_expr(var, bound, btype) + new_con_name = f"var_{var_name}_uncertain_{btype}_bound_con" + remove_var_declared_bound(var, btype) + if btype == BoundType.EQ: + working_model.second_stage.equality_cons[new_con_name] = ( + new_con_expr + ) + else: + working_model.second_stage.inequality_cons[new_con_name] = ( + new_con_expr + ) + # can't specify custom priorities for variable bounds + model_data.separation_priority_order[new_con_name] = ( + DEFAULT_SEPARATION_PRIORITY + ) + + # for subsequent developments: return a mapping + # from each variable to the corresponding binding constraints? + # we will add this as needed when changes are made to + # the interface for separation priority ordering + + +def turn_adjustable_var_bounds_to_constraints(model_data): + """ + Reformulate domain and declared bounds for the + adjustable (i.e., effective second-stage and effective state) + variables of the working model to explicit constraints. + + The domain and declared bounds for every adjustable variable + are unconditionally reformulated to constraints, + as this is required for appropriate construction of the + subproblems later. + Since these constraints depend on adjustable variables, + they are taken to be (effective) second-stage constraints. + + Parameters + ---------- + model_data : model data object + Main model data object. + """ + working_model = model_data.working_model + + adjustable_vars = ( + working_model.effective_var_partitioning.second_stage_variables + + working_model.effective_var_partitioning.state_variables + ) + for var in adjustable_vars: + cert_bound_triple, uncert_bound_triple = get_var_certain_uncertain_bounds( + var, working_model.uncertain_params + ) + var_name = var.getname( + relative_to=working_model.user_model, fully_qualified=True + ) + cert_uncert_bound_zip = ( + ("certain", cert_bound_triple), + ("uncertain", uncert_bound_triple), + ) + for certainty_desc, bound_triple in cert_uncert_bound_zip: + for btype, bound in bound_triple._asdict().items(): + if bound is not None: + new_con_name = f"var_{var_name}_{certainty_desc}_{btype}_bound_con" + new_con_expr = create_bound_constraint_expr(var, bound, btype) + if btype == BoundType.EQ: + working_model.second_stage.equality_cons[new_con_name] = ( + new_con_expr + ) + else: + working_model.second_stage.inequality_cons[new_con_name] = ( + new_con_expr + ) + # no custom separation priorities for Var + # bound constraints + model_data.separation_priority_order[new_con_name] = ( + DEFAULT_SEPARATION_PRIORITY + ) + + remove_all_var_bounds(var) + + # for subsequent developments: return a mapping + # from each variable to the corresponding binding constraints? 
+ # we will add this as needed when changes are made to + # the interface for separation priority ordering + + +def setup_working_model(model_data, user_var_partitioning): + """ + Set up (construct) the working model based on user inputs, + and add it to the model data object. + + Parameters + ---------- + model_data : model data object + Main model data object. + user_var_partitioning : VariablePartitioning + User-based partitioning of the in-scope + variables of the input model. + """ + config = model_data.config + original_model = model_data.original_model + + # add temporary block to help keep track of variables + # and uncertain parameters after cloning + temp_util_block_attr_name = unique_component_name(original_model, "util") + original_model.add_component(temp_util_block_attr_name, Block()) + orig_temp_util_block = getattr(original_model, temp_util_block_attr_name) + orig_temp_util_block.uncertain_params = config.uncertain_params + orig_temp_util_block.user_var_partitioning = VariablePartitioning( + **user_var_partitioning._asdict() + ) + + # now set up working model + model_data.working_model = working_model = ConcreteModel() + + # stagewise blocks for containing stagewise constraints + working_model.first_stage = Block() + working_model.first_stage.equality_cons = Constraint(Any) + working_model.first_stage.inequality_cons = Constraint(Any) + working_model.second_stage = Block() + working_model.second_stage.equality_cons = Constraint(Any) + working_model.second_stage.inequality_cons = Constraint(Any) + + # original user model will be a sub-block of working model, + # in order to avoid attribute name clashes later + working_model.user_model = original_model.clone() + + # facilitate later retrieval of the user var partitioning + working_temp_util_block = getattr( + working_model.user_model, temp_util_block_attr_name + ) + model_data.working_model.uncertain_params = ( + working_temp_util_block.uncertain_params.copy() + ) + working_model.user_var_partitioning = VariablePartitioning( + **working_temp_util_block.user_var_partitioning._asdict() + ) + + # we are done with the util blocks + delattr(original_model, temp_util_block_attr_name) + delattr(working_model.user_model, temp_util_block_attr_name) + + # keep track of the original active constraints + working_model.original_active_equality_cons = [] + working_model.original_active_inequality_cons = [] + for con in working_model.component_data_objects(Constraint, active=True): + if con.equality: + # note: ranged constraints with identical LHS and RHS + # objects are considered equality constraints + working_model.original_active_equality_cons.append(con) + else: + working_model.original_active_inequality_cons.append(con) + + +def standardize_inequality_constraints(model_data): + """ + Standardize the inequality constraints of the working model, + and classify them as first-stage inequalities or second-stage + inequalities. + + Parameters + ---------- + model_data : model data object + Main model data object, containing the working model. 
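+
+    Example
+    -------
+    Illustrative sketch (names hypothetical): a ranged constraint
+    with expression ``1 <= m.q * m.z <= 5``, where ``m.q`` is
+    uncertain and ``m.z`` is adjustable, is recast to the two
+    standardized second-stage inequalities ``-(m.q * m.z) <= -1``
+    and ``m.q * m.z <= 5``, and the original constraint is
+    deactivated.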
+    """
+    config = model_data.config
+    working_model = model_data.working_model
+    uncertain_params_set = ComponentSet(working_model.uncertain_params)
+    adjustable_vars_set = ComponentSet(
+        working_model.effective_var_partitioning.second_stage_variables
+        + working_model.effective_var_partitioning.state_variables
+    )
+    for con in working_model.original_active_inequality_cons:
+        uncertain_params_in_con_expr = (
+            ComponentSet(identify_mutable_parameters(con.expr)) & uncertain_params_set
+        )
+        adjustable_vars_in_con_body = (
+            ComponentSet(identify_variables(con.body)) & adjustable_vars_set
+        )
+        con_rel_name = con.getname(
+            relative_to=working_model.user_model, fully_qualified=True
+        )
+
+        if uncertain_params_in_con_expr | adjustable_vars_in_con_body:
+            con_bounds_triple = rearrange_bound_pair_to_triple(
+                lower_bound=con.lower, upper_bound=con.upper
+            )
+            finite_bounds = {
+                btype: bd
+                for btype, bd in con_bounds_triple._asdict().items()
+                if bd is not None
+            }
+            for btype, bound in finite_bounds.items():
+                if btype == BoundType.EQ:
+                    # no equality bounds should be identified here.
+                    # an equality bound may be identified if:
+                    # 1. the bound rearrangement method has a bug
+                    # 2. ConstraintData.equality is changed;
+                    #    such a change would affect this method
+                    #    only indirectly
+                    raise ValueError(
+                        f"Found an equality bound {bound} for the constraint "
+                        f"with name {con.name!r}. "
+                        "Either the bound or the constraint has been misclassified. "
+                        "Report this case to the Pyomo/PyROS developers."
+                    )
+
+                std_con_expr = create_bound_constraint_expr(
+                    expr=con.body, bound=bound, bound_type=btype, standardize=True
+                )
+                new_con_name = f"ineq_con_{con_rel_name}_{btype}_bound_con"
+
+                uncertain_params_in_std_expr = uncertain_params_set & ComponentSet(
+                    identify_mutable_parameters(std_con_expr)
+                )
+                if adjustable_vars_in_con_body | uncertain_params_in_std_expr:
+                    working_model.second_stage.inequality_cons[new_con_name] = (
+                        std_con_expr
+                    )
+                    # account for user-specified priority specifications
+                    model_data.separation_priority_order[new_con_name] = (
+                        config.separation_priority_order.get(
+                            con_rel_name, DEFAULT_SEPARATION_PRIORITY
+                        )
+                    )
+                else:
+                    # we do not want to modify the arrangement of the
+                    # lower bound for first-stage inequalities, so
+                    # pass `standardize=False`
+                    working_model.first_stage.inequality_cons[new_con_name] = (
+                        create_bound_constraint_expr(
+                            expr=con.body,
+                            bound=bound,
+                            bound_type=btype,
+                            standardize=False,
+                        )
+                    )
+
+            # constraint has now been moved over to stagewise blocks
+            con.deactivate()
+        else:
+            # constraint depends on the nonadjustable variables only
+            working_model.first_stage.inequality_cons[f"ineq_con_{con_rel_name}"] = (
+                con.expr
+            )
+            con.deactivate()
+
+
+def standardize_equality_constraints(model_data):
+    """
+    Classify the original active equality constraints of the
+    working model as first-stage or second-stage constraints.
+
+    Parameters
+    ----------
+    model_data : model data object
+        Main model data object, containing the working model.
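+
+    Example
+    -------
+    Illustrative sketch (names hypothetical): an equality
+    constraint referencing an uncertain parameter or an
+    adjustable variable, such as ``m.z == m.q * m.x``,
+    is declared on the second-stage block, while an equality
+    in the nonadjustable variables only, such as
+    ``m.x1 + m.x2 == 1``, is declared on the first-stage block;
+    the constraint expressions themselves are not modified.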
+ """ + working_model = model_data.working_model + uncertain_params_set = ComponentSet(working_model.uncertain_params) + adjustable_vars_set = ComponentSet( + working_model.effective_var_partitioning.second_stage_variables + + working_model.effective_var_partitioning.state_variables + ) + for con in working_model.original_active_equality_cons: + uncertain_params_in_con_expr = ( + ComponentSet(identify_mutable_parameters(con.expr)) & uncertain_params_set + ) + adjustable_vars_in_con_body = ( + ComponentSet(identify_variables(con.body)) & adjustable_vars_set + ) + + # note: none of the equality constraint expressions are modified + con_rel_name = con.getname( + relative_to=working_model.user_model, fully_qualified=True + ) + if uncertain_params_in_con_expr | adjustable_vars_in_con_body: + working_model.second_stage.equality_cons[f"eq_con_{con_rel_name}"] = ( + con.expr + ) + else: + working_model.first_stage.equality_cons[f"eq_con_{con_rel_name}"] = con.expr + + # definitely don't want active duplicate + con.deactivate() + + +def get_summands(expr): + """ + Recursively gather the individual summands of a numeric expression. + + Parameters + ---------- + expr : native numeric type or NumericValue + Expression to be analyzed. + + Returns + ------- + summands : list of expression-like + The summands. + """ + if isinstance(expr, SumExpression): + # note: NPV_SumExpression and LinearExpression + # are subclasses of SumExpression, + # so those instances are decomposed here, as well. + summands = [] + for arg in expr.args: + summands.extend(get_summands(arg)) + else: + summands = [expr] + return summands + + +def declare_objective_expressions(working_model, objective, sense=minimize): + """ + Identify the per-stage summands of an objective of interest, + according to the user-based variable partitioning. + + Two Expressions are declared on the working model to contain + the per-stage summands: + + - ``first_stage_objective``: Sum of additive terms of `objective` + that are non-uncertain constants or depend only on the + user-defined first-stage variables. + - ``second_stage_objective``: Sum of all other additive terms of + `objective`. + + To facilitate retrieval of the original objective expression + (modified to account for the sense), an Expression called + ``full_objective`` is also declared on the working model. + + Parameters + ---------- + working_model : ConcreteModel + Working model, constructed during a PyROS solver run. + objective : ObjectiveData + Objective of which summands are to be identified. + sense : {common.enums.minimize, common.enums.maximize}, optional + Desired sense of the objective; default is minimize. + """ + if sense not in {minimize, maximize}: + raise ValueError( + f"Objective sense {sense} not supported. " + f"Ensure sense is {minimize} (minimize) or {maximize} (maximize)." 
+ ) + + obj_expr = objective.expr + + obj_args = get_summands(obj_expr) # initialize first and second-stage cost expressions - first_stage_cost_expr = 0 - second_stage_cost_expr = 0 + first_stage_expr = 0 + second_stage_expr = 0 - first_stage_var_set = ComponentSet(model.util.first_stage_variables) - uncertain_param_set = ComponentSet(model.util.uncertain_params) + first_stage_var_set = ComponentSet( + working_model.user_var_partitioning.first_stage_variables + ) + uncertain_param_set = ComponentSet(working_model.uncertain_params) + obj_sense = objective.sense for term in obj_args: non_first_stage_vars_in_term = ComponentSet( v for v in identify_variables(term) if v not in first_stage_var_set @@ -1676,102 +1882,775 @@ def identify_objective_functions(model, objective): if param in uncertain_param_set ) + # account for objective sense + + # update all expressions + std_term = term if obj_sense == sense else -term if non_first_stage_vars_in_term or uncertain_params_in_term: - second_stage_cost_expr += term + second_stage_expr += std_term else: - first_stage_cost_expr += term + first_stage_expr += std_term - model.first_stage_objective = Expression(expr=first_stage_cost_expr) - model.second_stage_objective = Expression(expr=second_stage_cost_expr) + working_model.first_stage_objective = Expression(expr=first_stage_expr) + working_model.second_stage_objective = Expression(expr=second_stage_expr) + # useful for later + working_model.full_objective = Expression( + expr=obj_expr if sense == obj_sense else -obj_expr + ) -def load_final_solution(model_data, master_soln, config): - ''' - load the final solution into the original model object - :param model_data: model data container object - :param master_soln: results data container object returned to user - :return: - ''' - if config.objective_focus == ObjectiveType.nominal: - model = model_data.original_model - soln = master_soln.nominal_block - elif config.objective_focus == ObjectiveType.worst_case: - model = model_data.original_model - indices = range(len(master_soln.master_model.scenarios)) - k = max( - indices, - key=lambda i: value( - master_soln.master_model.scenarios[i, 0].first_stage_objective - + master_soln.master_model.scenarios[i, 0].second_stage_objective - ), - ) - soln = master_soln.master_model.scenarios[k, 0] - - src_vars = getattr(model, 'tmp_var_list') - local_vars = getattr(soln, 'tmp_var_list') - varMap = list(zip(src_vars, local_vars)) - - for src, local in varMap: - src.set_value(local.value, skip_validation=True) - - return - - -def process_termination_condition_master_problem(config, results): - ''' - :param config: pyros config - :param results: solver results object - :return: tuple (try_backups (True/False) - pyros_return_code (default NONE or robust_infeasible or subsolver_error)) - ''' - locally_acceptable = [tc.optimal, tc.locallyOptimal, tc.globallyOptimal] - globally_acceptable = [tc.optimal, tc.globallyOptimal] - robust_infeasible = [tc.infeasible] - try_backups = [ - tc.feasible, - tc.maxTimeLimit, - tc.maxIterations, - tc.maxEvaluations, - tc.minStepLength, - tc.minFunctionValue, - tc.other, - tc.solverFailure, - tc.internalSolverError, - tc.error, - tc.unbounded, - tc.infeasibleOrUnbounded, - tc.invalidProblem, - tc.intermediateNonInteger, - tc.noSolution, - tc.unknown, - ] - termination_condition = results.solver.termination_condition - if config.solve_master_globally == False: - if termination_condition in locally_acceptable: - return (False, None) - elif termination_condition in robust_infeasible: - return 
(False, pyrosTerminationCondition.robust_infeasible)
-        elif termination_condition in try_backups:
-            return (True, None)

+def standardize_active_objective(model_data):
+    """
+    Standardize the active objective of the working model.
+
+    This method involves declaration of:
+
+    - named expressions for the full active objective
+      (in a minimization sense), the first-stage objective summand,
+      and the second-stage objective summand.
+    - an epigraph variable and constraint.
+
+    The epigraph constraint is considered a first-stage
+    inequality provided that it is independent of the
+    adjustable (i.e., effective second-stage and effective state)
+    variables and the uncertain parameters.
+
+    Parameters
+    ----------
+    model_data : model data object
+        Main model data object.
+    """
+    config = model_data.config
+    working_model = model_data.working_model
+
+    active_obj = next(
+        working_model.component_data_objects(Objective, active=True, descend_into=True)
+    )
+    model_data.active_obj_original_sense = active_obj.sense
+
+    # per-stage summands will be useful for reporting later
+    declare_objective_expressions(working_model=working_model, objective=active_obj)
+
+    # useful for later
+    working_model.first_stage.epigraph_var = Var(
+        initialize=value(active_obj, exception=False)
+    )
+
+    # we add the epigraph objective later, as needed,
+    # on a per subproblem basis;
+    # doing so is more efficient than adding the objective now
+    active_obj.deactivate()
+
+    # add the epigraph constraint
+    adjustable_vars = (
+        working_model.effective_var_partitioning.second_stage_variables
+        + working_model.effective_var_partitioning.state_variables
+    )
+    uncertain_params_in_obj = ComponentSet(
+        identify_mutable_parameters(active_obj.expr)
+    ) & ComponentSet(working_model.uncertain_params)
+    adjustable_vars_in_obj = (
+        ComponentSet(identify_variables(active_obj.expr)) & adjustable_vars
+    )
+    if uncertain_params_in_obj | adjustable_vars_in_obj:
+        if config.objective_focus == ObjectiveType.worst_case:
+            working_model.second_stage.inequality_cons["epigraph_con"] = (
+                working_model.full_objective.expr
+                - working_model.first_stage.epigraph_var
+                <= 0
+            )
+            model_data.separation_priority_order["epigraph_con"] = (
+                DEFAULT_SEPARATION_PRIORITY
+            )
+        elif config.objective_focus == ObjectiveType.nominal:
+            working_model.first_stage.inequality_cons["epigraph_con"] = (
+                working_model.full_objective.expr
+                - working_model.first_stage.epigraph_var
+                <= 0
+            )
         else:
-            raise NotImplementedError(
-                "This solver return termination condition (%s) "
-                "is currently not supported by PyROS." % termination_condition
+            raise ValueError(
+                "Classification of the epigraph constraint with uncertain "
+                "and/or adjustable components not implemented "
+                f"for objective focus {config.objective_focus!r}."
             )
     else:
-        if termination_condition in globally_acceptable:
-            return (False, None)
-        elif termination_condition in robust_infeasible:
-            return (False, pyrosTerminationCondition.robust_infeasible)
-        elif termination_condition in try_backups:
-            return (True, None)
-        else:
-            raise NotImplementedError(
-                "This solver return termination condition (%s) "
-                "is currently not supported by PyROS." % termination_condition
+        working_model.first_stage.inequality_cons["epigraph_con"] = (
+            working_model.full_objective.expr - working_model.first_stage.epigraph_var
+            <= 0
+        )
+
+
+def get_all_nonadjustable_variables(working_model):
+    """
+    Get all nonadjustable variables of the working model.
+
+    The nonadjustable variables comprise the:
+
+    - epigraph variable
+    - decision rule variables
+    - effective first-stage variables
+    """
+    epigraph_var = working_model.first_stage.epigraph_var
+    decision_rule_vars = list(
+        generate_all_decision_rule_var_data_objects(working_model)
+    )
+    effective_first_stage_vars = (
+        working_model.effective_var_partitioning.first_stage_variables
+    )
+
+    return [epigraph_var] + decision_rule_vars + effective_first_stage_vars
+
+
+def get_all_adjustable_variables(working_model):
+    """
+    Get all variables considered adjustable.
+    """
+    return (
+        working_model.effective_var_partitioning.second_stage_variables
+        + working_model.effective_var_partitioning.state_variables
+    )
+
+
+def generate_all_decision_rule_var_data_objects(working_blk):
+    """
+    Generate a sequence of all decision rule variable data
+    objects.
+
+    Parameters
+    ----------
+    working_blk : BlockData
+        Block with a structure similar to the working model
+        created during preprocessing.
+
+    Yields
+    ------
+    VarData
+        Decision rule variable.
+    """
+    for indexed_var in working_blk.first_stage.decision_rule_vars:
+        yield from indexed_var.values()
+
+
+def generate_all_decision_rule_eqns(working_blk):
+    """
+    Generate sequence of all decision rule equations.
+    """
+    yield from working_blk.second_stage.decision_rule_eqns.values()
+
+
+def get_dr_expression(working_blk, second_stage_var):
+    """
+    Get DR expression corresponding to given second-stage variable.
+
+    Parameters
+    ----------
+    working_blk : BlockData
+        Block with a structure similar to the working model
+        created during preprocessing.
+
+    Returns
+    -------
+    VarData, LinearExpression, or SumExpression
+        The corresponding DR expression.
+    """
+    dr_con = working_blk.eff_ss_var_to_dr_eqn_map[second_stage_var]
+    return sum(dr_con.body.args[:-1])
+
+
+def get_dr_var_to_monomial_map(working_blk):
+    """
+    Get mapping from all decision rule variables in the working
+    block to their corresponding DR equation monomials.
+
+    Parameters
+    ----------
+    working_blk : BlockData
+        Working model Block, containing the decision rule
+        components.
+
+    Returns
+    -------
+    ComponentMap
+        The desired mapping.
+    """
+    dr_var_to_monomial_map = ComponentMap()
+    for ss_var in working_blk.effective_var_partitioning.second_stage_variables:
+        dr_expr = get_dr_expression(working_blk, ss_var)
+        for dr_monomial in dr_expr.args:
+            if dr_monomial.is_expression_type():
+                # degree >= 1 monomial expression of form
+                # (product of uncertain params) * dr variable
+                dr_var_in_term = dr_monomial.args[-1]
+            else:
+                # the static term (intercept)
+                dr_var_in_term = dr_monomial
+
+            dr_var_to_monomial_map[dr_var_in_term] = dr_monomial
+
+    return dr_var_to_monomial_map
+
+
+def check_time_limit_reached(timing_data, config):
+    """
+    Return True if the PyROS solver time limit is reached,
+    False otherwise.
+
+    Returns
+    -------
+    bool
+        True if time limit reached, False otherwise.
+    """
+    return (
+        config.time_limit is not None
+        and timing_data.get_main_elapsed_time() >= config.time_limit
+    )
+
+
+def reformulate_state_var_independent_eq_cons(model_data):
+    """
+    Reformulate second-stage equality constraints that are
+    independent of the state variables.
+
+    The state variable-independent second-stage equality
+    constraints that can be rewritten as polynomials
+    in terms of the uncertain parameters
+    are reformulated to first-stage equalities
+    through matching of the polynomial coefficients.
+    Hence, this reformulation technique is referred to as
+    coefficient matching.
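+    For instance, if ``q`` is the only uncertain parameter and
+    ``x1``, ``x2`` are nonadjustable, then the equality
+    ``q * x1 + x2 - q == 0``, rewritten as
+    ``q * (x1 - 1) + x2 == 0``, is robustly satisfied if and only
+    if the first-stage equalities ``x2 == 0`` (the constant term)
+    and ``x1 - 1 == 0`` (the coefficient of ``q``) both hold.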
+ In some cases, matching of the coefficients may lead to + a certificate of robust infeasibility. + + All other state variable-independent second-stage equality + constraints are recast to pairs of opposing second-stage inequality + constraints, as they would otherwise over-constrain the uncertain + parameters in the separation subproblems. + + Parameters + ---------- + model_data : model data object + Main model data object. + + Returns + ------- + robust_infeasible : bool + True if model found to be robust infeasible, + False otherwise. + """ + config = model_data.config + working_model = model_data.working_model + ep = working_model.effective_var_partitioning + + effective_second_stage_var_set = ComponentSet(ep.second_stage_variables) + effective_state_var_set = ComponentSet(ep.state_variables) + all_vars_set = ComponentSet(working_model.all_variables) + originally_unfixed_vars = [var for var in all_vars_set if not var.fixed] + + # we will need this to substitute DR expressions for + # second-stage variables later + ssvar_id_to_dr_expr_map = { + id(ss_var): get_dr_expression(working_model, ss_var) + for ss_var in effective_second_stage_var_set + } + + # goal: examine constraint expressions in terms of the + # uncertain params. we will use standard repn to do this. + # standard repn analyzes expressions in terms of Var components, + # but the uncertain params are implemented as mutable Param objects + # so we temporarily define Var components to be briefly substituted + # for the uncertain parameters as the constraints are analyzed + uncertain_params_set = ComponentSet(working_model.uncertain_params) + working_model.temp_param_vars = temp_param_vars = Var( + range(len(uncertain_params_set)), + initialize={ + idx: value(param) for idx, param in enumerate(uncertain_params_set) + }, + ) + uncertain_param_to_temp_var_map = ComponentMap( + (param, param_var) + for param, param_var in zip(uncertain_params_set, temp_param_vars.values()) + ) + uncertain_param_id_to_temp_var_map = { + id(param): var for param, var in uncertain_param_to_temp_var_map.items() + } + + # copy the items iterable, + # as we will be modifying the constituents of the constraint + # in place + working_model.first_stage.coefficient_matching_cons = coefficient_matching_cons = [] + for con_idx, con in list(working_model.second_stage.equality_cons.items()): + vars_in_con = ComponentSet(identify_variables(con.expr)) + mutable_params_in_con = ComponentSet(identify_mutable_parameters(con.expr)) + + second_stage_vars_in_con = vars_in_con & effective_second_stage_var_set + state_vars_in_con = vars_in_con & effective_state_var_set + uncertain_params_in_con = mutable_params_in_con & uncertain_params_set + + coefficient_matching_applicable = not state_vars_in_con and ( + uncertain_params_in_con or second_stage_vars_in_con + ) + if coefficient_matching_applicable: + con_expr_after_dr_substitution = replace_expressions( + expr=con.body - con.upper, substitution_map=ssvar_id_to_dr_expr_map + ) + + # substitute temporarily defined vars for uncertain params. + # note: this is performed after, rather than along with, + # the DR expression substitution, as the DR expressions + # contain uncertain params + con_expr_after_all_substitutions = replace_expressions( + expr=con_expr_after_dr_substitution, + substitution_map=uncertain_param_id_to_temp_var_map, ) + # analyze the expression with respect to the + # uncertain parameters only. 
thus, only the proxy
+            # variables for the uncertain parameters are unfixed
+            # during the analysis
+            visitor = setup_quadratic_expression_visitor(wrt=originally_unfixed_vars)
+            expr_repn = visitor.walk_expression(con_expr_after_all_substitutions)
+
+            if expr_repn.nonlinear is not None:
+                config.progress_logger.debug(
+                    f"Equality constraint {con.name!r} "
+                    "is state-variable independent, but cannot be written "
+                    "as a polynomial in the uncertain parameters with "
+                    "the currently available expression analyzers "
+                    "and selected decision rules "
+                    f"(decision_rule_order={config.decision_rule_order}). "
+                    "We are unable to write a coefficient matching reformulation "
+                    "of this constraint. "
+                    "Recasting to two inequality constraints."
+                )
+
+                # keeping this constraint as an equality is not appropriate,
+                # as it effectively constrains the uncertain parameters
+                # in the separation problems, since the effective DOF
+                # variables and DR variables are fixed.
+                # hence, we reformulate to inequalities
+                for bound_type in [BoundType.LOWER, BoundType.UPPER]:
+                    std_con_expr = create_bound_constraint_expr(
+                        expr=con.body, bound=con.upper, bound_type=bound_type
+                    )
+                    new_con_name = f"reform_{bound_type}_bound_from_{con_idx}"
+                    working_model.second_stage.inequality_cons[new_con_name] = (
+                        std_con_expr
+                    )
+                    # no custom priorities specified
+                    model_data.separation_priority_order[new_con_name] = (
+                        DEFAULT_SEPARATION_PRIORITY
+                    )
+            else:
+                polynomial_repn_coeffs = (
+                    [expr_repn.constant]
+                    + list(expr_repn.linear.values())
+                    + (
+                        []
+                        if expr_repn.quadratic is None
+                        else list(expr_repn.quadratic.values())
+                    )
+                )
+                for coeff_idx, coeff_expr in enumerate(polynomial_repn_coeffs):
+                    # for robust satisfaction of the original equality
+                    # constraint, all polynomial coefficients must be
+                    # equal to zero. so for each coefficient,
+                    # we either check for trivial robust
+                    # feasibility/infeasibility, or add a constraint
+                    # restricting the coefficient expression to value 0
+                    if isinstance(coeff_expr, tuple(native_types)):
+                        # coefficient is a constant;
+                        # check value to determine
+                        # trivial feasibility/infeasibility
+                        robust_infeasible = not math.isclose(
+                            a=coeff_expr,
+                            b=0,
+                            rel_tol=COEFF_MATCH_REL_TOL,
+                            abs_tol=COEFF_MATCH_ABS_TOL,
+                        )
+                        if robust_infeasible:
+                            config.progress_logger.info(
+                                "PyROS has determined that the model is "
+                                "robust infeasible. "
+                                "One reason for this is that "
+                                f"the equality constraint {con.name!r} "
+                                "cannot be satisfied against all realizations "
+                                "of uncertainty, "
+                                "given the current partitioning into "
+                                "first-stage, second-stage, and state variables. "
+                                "Consider editing this constraint to reference some "
+                                "(additional) second-stage and/or state variable(s)."
+                            )
+
+                            # robust infeasibility found;
+                            # that is sufficient for termination of PyROS.
+                            return robust_infeasible
+
+                    else:
+                        # coefficient is dependent on model first-stage
+                        # and DR variables. add matching constraint
+                        new_con_name = f"coeff_matching_{con_idx}_coeff_{coeff_idx}"
+                        working_model.first_stage.equality_cons[new_con_name] = (
+                            coeff_expr == 0
+                        )
+                        new_con = working_model.first_stage.equality_cons[new_con_name]
+                        coefficient_matching_cons.append(new_con)
+
+                        config.progress_logger.debug(
+                            f"Derived from constraint {con.name!r} a coefficient "
+                            f"matching constraint named {new_con_name!r} "
+                            "with expression: \n "
+                            f"{new_con.expr}."
+                        )
+
+        # remove rather than deactivate, since:
+        # - we no longer need this constraint anywhere
+        # - removal facilitates accurate counting of active constraints
+        del working_model.second_stage.equality_cons[con_idx]
+
+    # we no longer need these auxiliary components
+    working_model.del_component(temp_param_vars)
+    working_model.del_component(temp_param_vars.index_set())
+
+    return False
+
+
+def preprocess_model_data(model_data, user_var_partitioning):
+    """
+    Preprocess user inputs to modeling objects from which
+    PyROS subproblems can be efficiently constructed.
+
+    Parameters
+    ----------
+    model_data : model data object
+        Main model data object.
+    user_var_partitioning : VariablePartitioning
+        User-based partitioning of the in-scope
+        variables of the input model.
+
+    Returns
+    -------
+    robust_infeasible : bool
+        True if RO problem was found to be robust infeasible,
+        False otherwise.
+    """
+    config = model_data.config
+    setup_working_model(model_data, user_var_partitioning)
+
+    # extract as many truly nonadjustable variables as possible
+    # from the second-stage and state variables
+    config.progress_logger.debug("Repartitioning variables by nonadjustability...")
+    add_effective_var_partitioning(model_data)
+
+    # different treatment for effective first-stage
+    # than for effective second-stage and state variables
+    config.progress_logger.debug("Turning some variable bounds to constraints...")
+    turn_nonadjustable_var_bounds_to_constraints(model_data)
+    turn_adjustable_var_bounds_to_constraints(model_data)
+
+    config.progress_logger.debug("Standardizing the model constraints...")
+    standardize_inequality_constraints(model_data)
+    standardize_equality_constraints(model_data)
+
+    # includes epigraph reformulation
+    config.progress_logger.debug("Standardizing the active objective...")
+    standardize_active_objective(model_data)
+
+    # DR components are added only per effective second-stage variable
+    config.progress_logger.debug("Adding decision rule components...")
+    add_decision_rule_variables(model_data)
+    add_decision_rule_constraints(model_data)
+
+    # the epigraph and DR variables are also first-stage
+    config.progress_logger.debug("Finalizing nonadjustable variables...")
+    model_data.working_model.all_nonadjustable_variables = (
+        get_all_nonadjustable_variables(model_data.working_model)
+    )
+    model_data.working_model.all_adjustable_variables = get_all_adjustable_variables(
+        model_data.working_model
+    )
+    model_data.working_model.all_variables = (
+        model_data.working_model.all_nonadjustable_variables
+        + model_data.working_model.all_adjustable_variables
+    )
+
+    config.progress_logger.debug(
+        "Reformulating state variable-independent second-stage equality constraints..."
+    )
+    robust_infeasible = reformulate_state_var_independent_eq_cons(model_data)
+
+    return robust_infeasible
+
+
+def log_model_statistics(model_data):
+    """
+    Log statistics for the preprocessed model.
+
+    Parameters
+    ----------
+    model_data : model data object
+        Main model data object.
+    """
+    config = model_data.config
+    working_model = model_data.working_model
+
+    ep = working_model.effective_var_partitioning
+    up = working_model.user_var_partitioning
+
+    # variables; we log the user partitioning
+    num_vars = len(working_model.all_variables)
+    num_epigraph_vars = 1
+    num_first_stage_vars = len(up.first_stage_variables)
+    num_second_stage_vars = len(up.second_stage_variables)
+    num_state_vars = len(up.state_variables)
+    num_eff_second_stage_vars = len(ep.second_stage_variables)
+    num_eff_state_vars = len(ep.state_variables)
+    num_dr_vars = len(list(generate_all_decision_rule_var_data_objects(working_model)))
+
+    # uncertain parameters
+    num_uncertain_params = len(working_model.uncertain_params)
+
+    # constraints
+    num_cons = len(list(working_model.component_data_objects(Constraint, active=True)))
+
+    # # equality constraints
+    num_eq_cons = (
+        len(working_model.first_stage.equality_cons)
+        + len(working_model.second_stage.equality_cons)
+        + len(working_model.second_stage.decision_rule_eqns)
+    )
+    num_first_stage_eq_cons = len(working_model.first_stage.equality_cons)
+    num_coeff_matching_cons = len(working_model.first_stage.coefficient_matching_cons)
+    num_other_first_stage_eqns = num_first_stage_eq_cons - num_coeff_matching_cons
+    num_second_stage_eq_cons = len(working_model.second_stage.equality_cons)
+    num_dr_eq_cons = len(working_model.second_stage.decision_rule_eqns)
+
+    # # inequality constraints
+    num_ineq_cons = len(working_model.first_stage.inequality_cons) + len(
+        working_model.second_stage.inequality_cons
+    )
+    num_first_stage_ineq_cons = len(working_model.first_stage.inequality_cons)
+    num_second_stage_ineq_cons = len(working_model.second_stage.inequality_cons)
+
+    info_log_func = config.progress_logger.info
+
+    IterationLogRecord.log_header_rule(info_log_func)
+    info_log_func("Model Statistics:")
+
+    info_log_func(f"  Number of variables : {num_vars}")
+    info_log_func(f"    Epigraph variable : {num_epigraph_vars}")
+    info_log_func(f"    First-stage variables : {num_first_stage_vars}")
+    info_log_func(
+        f"    Second-stage variables : {num_second_stage_vars} "
+        f"({num_eff_second_stage_vars} adj.)"
+    )
+    info_log_func(
+        f"    State variables : {num_state_vars} " f"({num_eff_state_vars} adj.)"
+    )
+    info_log_func(f"    Decision rule variables : {num_dr_vars}")
+
+    info_log_func(f"  Number of uncertain parameters : {num_uncertain_params}")
+
+    info_log_func(f"  Number of constraints : {num_cons}")
+    info_log_func(f"    Equality constraints : {num_eq_cons}")
+    info_log_func(f"      Coefficient matching constraints : {num_coeff_matching_cons}")
+    info_log_func(f"      Other first-stage equations : {num_other_first_stage_eqns}")
+    info_log_func(f"      Second-stage equations : {num_second_stage_eq_cons}")
+    info_log_func(f"      Decision rule equations : {num_dr_eq_cons}")
+    info_log_func(f"    Inequality constraints : {num_ineq_cons}")
+    info_log_func(f"      First-stage inequalities : {num_first_stage_ineq_cons}")
+    info_log_func(f"      Second-stage inequalities : {num_second_stage_ineq_cons}")
+
+
+def add_decision_rule_variables(model_data):
+    """
+    Add variables parameterizing the (polynomial)
+    decision rules to the working model.
+
+    Parameters
+    ----------
+    model_data : model data object
+        Model data.
+
+    Notes
+    -----
+    1. One set of decision rule variables is added for each
+       effective second-stage variable.
+    2. For efficiency, no decision rule variables
+       are added for the nonadjustable, user-defined second-stage
+       variables, since the decision rules for such variables
+       are necessarily static.
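+    3. The number of decision rule variables added per effective
+       second-stage variable is ``comb(n + k, k)``, where ``n`` is
+       the number of uncertain parameters and ``k`` is the decision
+       rule order. For instance, with two uncertain parameters and
+       ``decision_rule_order=2``, each effective second-stage
+       variable receives ``comb(4, 2) == 6`` coefficients, one for
+       each of the monomials ``1, q1, q2, q1**2, q1*q2, q2**2``.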
+ """ + config = model_data.config + effective_second_stage_vars = ( + model_data.working_model.effective_var_partitioning.second_stage_variables + ) + model_data.working_model.first_stage.decision_rule_vars = decision_rule_vars = [] + + # facilitate matching of effective second-stage vars to DR vars later + model_data.working_model.eff_ss_var_to_dr_var_map = eff_ss_var_to_dr_var_map = ( + ComponentMap() + ) + + # since DR expression is a general polynomial in the uncertain + # parameters, the exact number of DR variables + # per effective second-stage variable + # depends only on the DR order and uncertainty set dimension + degree = config.decision_rule_order + num_uncertain_params = len(model_data.working_model.uncertain_params) + num_dr_vars = sp.special.comb( + N=num_uncertain_params + degree, k=degree, exact=True, repetition=False + ) + + for idx, eff_ss_var in enumerate(effective_second_stage_vars): + indexed_dr_var = Var( + range(num_dr_vars), initialize=0, bounds=(None, None), domain=Reals + ) + model_data.working_model.first_stage.add_component( + f"decision_rule_var_{idx}", indexed_dr_var + ) + + # index 0 entry of the IndexedVar is the static + # DR term. initialize to user-provided value of + # the corresponding second-stage variable. + # all other entries remain initialized to 0. + indexed_dr_var[0].set_value(value(eff_ss_var, exception=False)) + + # update attributes + decision_rule_vars.append(indexed_dr_var) + eff_ss_var_to_dr_var_map[eff_ss_var] = indexed_dr_var + + +def add_decision_rule_constraints(model_data): + """ + Add decision rule equality constraints to the working model. + + Parameters + ---------- + model_data : model data object + Main model data object. + """ + config = model_data.config + effective_second_stage_vars = ( + model_data.working_model.effective_var_partitioning.second_stage_variables + ) + indexed_dr_var_list = model_data.working_model.first_stage.decision_rule_vars + uncertain_params = model_data.working_model.uncertain_params + degree = config.decision_rule_order + + model_data.working_model.second_stage.decision_rule_eqns = decision_rule_eqns = ( + Constraint(range(len(effective_second_stage_vars))) + ) + + # keeping track of degree of monomial + # (in terms of the uncertain parameters) + # in which each DR coefficient participates will be useful for + # later + model_data.working_model.dr_var_to_exponent_map = dr_var_to_exponent_map = ( + ComponentMap() + ) + + # facilitate retrieval of DR equation for a given + # effective second-stage variable later + model_data.working_model.eff_ss_var_to_dr_eqn_map = eff_ss_var_to_dr_eqn_map = ( + ComponentMap() + ) + + # set up uncertain parameter combinations for + # construction of the monomials of the DR expressions + monomial_param_combos = [] + for power in range(degree + 1): + power_combos = it.combinations_with_replacement(uncertain_params, power) + monomial_param_combos.extend(power_combos) + + # now construct DR equations and declare them on the working model + second_stage_dr_var_zip = zip(effective_second_stage_vars, indexed_dr_var_list) + for idx, (eff_ss_var, indexed_dr_var) in enumerate(second_stage_dr_var_zip): + # for each DR equation, the number of coefficients should match + # the number of monomial terms exactly + if len(monomial_param_combos) != len(indexed_dr_var.index_set()): + raise ValueError( + f"Mismatch between number of DR coefficient variables " + f"and number of DR monomials for DR equation index {idx}, " + "corresponding to effective second-stage variable " + 
f"{eff_ss_var.name!r}. " + f"({len(indexed_dr_var.index_set())}!= {len(monomial_param_combos)})" + ) + + # construct the DR polynomial + dr_expression = 0 + for dr_var, param_combo in zip(indexed_dr_var.values(), monomial_param_combos): + dr_expression += dr_var * prod(param_combo) + + # map decision rule var to degree (exponent) of the + # associated monomial with respect to the uncertain params + dr_var_to_exponent_map[dr_var] = len(param_combo) + + # declare constraint on model + decision_rule_eqns[idx] = dr_expression - eff_ss_var == 0 + eff_ss_var_to_dr_eqn_map[eff_ss_var] = decision_rule_eqns[idx] + + +def enforce_dr_degree(working_blk, config, degree): + """ + Make decision rule polynomials of a given degree + by fixing value of the appropriate subset of the decision + rule coefficients to 0. + + Parameters + ---------- + blk : ScalarBlock + Working model, or master problem block. + config : ConfigDict + PyROS solver options. + degree : int + Degree of the DR polynomials that is to be enforced. + """ + for indexed_dr_var in working_blk.first_stage.decision_rule_vars: + for dr_var in indexed_dr_var.values(): + dr_var_degree = working_blk.dr_var_to_exponent_map[dr_var] + if dr_var_degree > degree: + dr_var.fix(0) + else: + dr_var.unfix() + + +def load_final_solution(model_data, master_soln, original_user_var_partitioning): + """ + Load variable values from the master problem to the + original model. + + Parameters + ---------- + master_soln : MasterResults + Master solution object, containing the master model. + original_user_var_partitioning : VariablePartitioning + User partitioning of the variables of the original + model. + """ + config = model_data.config + if config.objective_focus == ObjectiveType.nominal: + soln_master_blk = master_soln.master_model.scenarios[0, 0] + elif config.objective_focus == ObjectiveType.worst_case: + soln_master_blk = max( + master_soln.master_model.scenarios.values(), + key=lambda blk: value(blk.full_objective), + ) + + original_model_vars = ( + original_user_var_partitioning.first_stage_variables + + original_user_var_partitioning.second_stage_variables + + original_user_var_partitioning.state_variables + ) + master_soln_vars = ( + soln_master_blk.user_var_partitioning.first_stage_variables + + soln_master_blk.user_var_partitioning.second_stage_variables + + soln_master_blk.user_var_partitioning.state_variables + ) + for orig_var, master_blk_var in zip(original_model_vars, master_soln_vars): + orig_var.set_value(master_blk_var.value, skip_validation=True) + def call_solver(model, solver, config, timing_obj, timer_name, err_msg): """ @@ -1839,7 +2718,7 @@ def call_solver(model, solver, config, timing_obj, timer_name, err_msg): load_solutions=False, symbolic_solver_labels=config.symbolic_solver_labels, ) - except ApplicationError: + except (ApplicationError, InvalidValueError): # account for possible external subsolver errors # (such as segmentation faults, function evaluation # errors, etc.) @@ -1882,7 +2761,7 @@ class IterationLogRecord: dr_polishing_success : bool or None, optional True if DR polishing solved successfully, False otherwise. num_violated_cons : int or None, optional - Number of performance constraints found to be violated + Number of second-stage constraints found to be violated during separation step. 
    all_sep_problems_solved : int or None, optional
        True if all separation problems were solved successfully,
@@ -1893,7 +2772,7 @@ class IterationLogRecord:
        True if separation problems were solved with the subordinate
        global optimizer(s), False otherwise.
    max_violation : int or None
-        Maximum scaled violation of any performance constraint
+        Maximum scaled violation of any second-stage constraint
        found during separation step.
    elapsed_time : float, optional
        Total time elapsed up to the current iteration, in seconds.
@@ -1921,7 +2800,7 @@ class IterationLogRecord:
    dr_polishing_success : bool or None
        True if DR polishing was solved successfully, False otherwise.
    num_violated_cons : int or None
-        Number of performance constraints found to be violated
+        Number of second-stage constraints found to be violated
        during separation step.
    all_sep_problems_solved : int or None
        True if all separation problems were solved successfully,
@@ -1932,7 +2811,7 @@ class IterationLogRecord:
        True if separation problems were solved with the subordinate
        global optimizer(s), False otherwise.
    max_violation : int or None
-        Maximum scaled violation of any performance constraint
+        Maximum scaled violation of any second-stage constraint
        found during separation step.
    elapsed_time : float
        Total time elapsed up to the current iteration, in seconds.
@@ -2062,3 +2941,25 @@ def log_header(log_func, with_rules=True, **log_func_kwargs):
 def log_header_rule(log_func, fillchar="-", **log_func_kwargs):
     """Log header rule."""
     log_func(fillchar * IterationLogRecord._LINE_LENGTH, **log_func_kwargs)
+
+
+def copy_docstring(source_func):
+    """
+    Create a decorator which copies the docstring of the callable
+    `source_func` to a target callable passed to the decorator.
+
+    Parameters
+    ----------
+    source_func : callable
+        Callable from which the docstring is to be copied.
+
+    Returns
+    -------
+    decorator_doc : callable
+        Decorator of interest.
+    """
+
+    def decorator_doc(func):
+        @functools.wraps(func)
+        def wrapper(*args, **kwargs):
+            return func(*args, **kwargs)
+
+        wrapper.__doc__ = source_func.__doc__
+        return wrapper
+
+    return decorator_doc
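+
+
+# Example usage of `copy_docstring` (an illustrative sketch; the names
+# below are hypothetical and not part of this module):
+#
+#     def original(x):
+#         """Return the square of x."""
+#         return x**2
+#
+#     @copy_docstring(original)
+#     def fast_square(x):
+#         return x * x
+#
+#     # fast_square now carries the docstring of `original`
+#     assert fast_square.__doc__ == original.__doc__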