diff --git a/docs/src/user/algorithms.rst b/docs/src/user/algorithms.rst index cfa0b156e..d0ec0213b 100644 --- a/docs/src/user/algorithms.rst +++ b/docs/src/user/algorithms.rst @@ -56,9 +56,10 @@ Configuration seed: null -``seed`` +.. autoclass:: orion.algo.random.Random + :noindex: + :exclude-members: space, state_dict, set_state, suggest, observe, is_done, seed_rng -Seed for the random number generator used to sample new trials. Default is ``None``. .. _grid-search: @@ -95,11 +96,12 @@ Configuration n_values: 100 -``n_values`` +.. autoclass:: orion.algo.gridsearch.GridSearch + :noindex: + :exclude-members: space, state_dict, set_state, suggest, observe, is_done, seed_rng, + configuration, requires_dist, requires_type, build_grid + -Number of different values to use for each dimensions to build the grid. Can be either -1. An integer. The same number will be used for all dimensions -2. A dictionary many dimension names to integers. Each dimension will have its own number of values. .. _hyperband-algorithm: @@ -152,16 +154,13 @@ Configuration algorithms. See :ref:`StubParallelStrategy` for more information. -``seed`` - -Seed for the random number generator used to sample new trials. Default is ``None``. - -``repetitions`` +.. autoclass:: orion.algo.hyperband.Hyperband + :noindex: + :exclude-members: space, state_dict, set_state, suggest, observe, is_done, seed_rng, + configuration, sample_from_bracket, append_brackets, create_bracket, + create_brackets, promote, register_samples, sample, seed_brackets, + executed_times -Number of executions for Hyperband. A single execution of Hyperband takes a finite -budget of ``(log(R)/log(eta) + 1) * (log(R)/log(eta) + 1) * R``, and ``repetitions`` allows you -to run multiple executions of Hyperband. Default is ``numpy.inf`` which means to run Hyperband -until no new trials can be suggested. .. _ASHA: @@ -220,36 +219,19 @@ Configuration Notice the additional ``strategy`` in configuration which is not mandatory for most other algorithms. See :ref:`StubParallelStrategy` for more information. - -``seed`` - -Seed for the random number generator used to sample new trials. Default is ``None``. - - -``num_rungs`` - -Number of rungs for the largest bracket. If not defined, it will be equal to ``(base + 1)`` of the -fidelity dimension. In the original paper, -``num_rungs == log(fidelity.high/fidelity.low) / log(fidelity.base) + 1``. - -``num_brackets`` - -Using a grace period that is too small may bias ASHA too strongly towards fast -converging trials that do not lead to best results at convergence (stragglers). -To overcome this, you can increase the number of brackets, which increases the amount of resources -required for optimisation but decreases the bias towards stragglers. Default is 1. +.. autoclass:: orion.algo.asha.ASHA + :noindex: + :exclude-members: space, state_dict, set_state, suggest, observe, is_done, seed_rng, + configuration, sample_from_bracket, append_brackets, create_bracket, + create_brackets, promote, register_samples, sample, seed_brackets, + executed_times, compute_bracket_idx -``repetitions`` - -Number of execution of ASHA. Default is ``numpy.inf`` which means to -run ASHA until no new trials can be suggested. - .. 
_tpe-algorithm: TPE ---------- +--- `Tree-structured Parzen Estimator`_ (TPE) algorithm is one of Sequential Model-Based Global Optimization (SMBO) algorithms, which will build models to propose new points based @@ -291,35 +273,12 @@ Configuration full_weight_num: 25 -``seed`` - -Seed to sample initial points and candidates points. Default is ``None``. - -``n_initial_points`` - -Number of initial points randomly sampled. Default is ``20``. - -``n_ei_candidates`` - -Number of candidates points sampled for ei compute. Default is ``24``. - -``gamma`` +.. autoclass:: orion.algo.tpe.TPE + :noindex: + :exclude-members: space, state_dict, set_state, suggest, observe, is_done, seed_rng, + configuration, sample_one_dimension, split_trials, requires_type -Ratio to split the observed trials into good and bad distributions. Default is ``0.25``. -``equal_weight`` - -True to set equal weights for observed points. Default is ``False``. - -``prior_weight`` - -The weight given to the prior point of the input space. Default is ``1.0``. - -``full_weight_num`` - -The number of the most recent trials which get the full weight where the others will be -applied with a linear ramp from 0 to 1.0. It will only take effect if ``equal_weight`` -is ``False``. Default is ``25``. .. _evolution-es algorithm: @@ -382,116 +341,48 @@ Configuration strategy: StubParallelStrategy -``seed`` - -Seed for the random number generator used to sample new trials. Default is ``None``. - -``repetitions`` - -Number of executions for Hyperband. A single execution of Hyperband takes a finite -budget of ``(log(R)/log(eta) + 1) * (log(R)/log(eta) + 1) * R``, and ``repetitions`` allows you -to run multiple executions of Hyperband. Default is ``numpy.inf`` which means to run Hyperband -until no new trials can be suggested. - -``nums_population`` -Number of population for EvolutionES. Larger number of population often gets better performance -but causes more computation. So there is a trade-off according to -the search space and required budget of your problems. - -``mutate`` - -In the mutate part, one can define the customized mutate function with its mutate factors, -such as multiply factor (times/divides by a multiply factor) and add factor -(add/subtract by a multiply factor). We support the default mutate function. +.. autoclass:: orion.algo.evolution_es.EvolutionES + :noindex: + :exclude-members: space, state_dict, set_state, suggest, observe, is_done, seed_rng, + requires_dist, requires_type Algorithm Plugins ================= -.. _scikit-bayesopt: +Plugins documentation is hosted separately. See short documentations below to find +links to full plugins documentation. -Scikit Bayesian Optimizer -------------------------- +.. _skopt-plugin: -``orion.algo.skopt`` provides a wrapper for `Bayesian optimizer`_ using Gaussian process implemented -in `scikit optimize`_. +Scikit-Optimize +--------------- -.. _scikit optimize: https://scikit-optimize.github.io/ -.. _bayesian optimizer: https://scikit-optimize.github.io/#skopt.Optimizer +This package is a plugin providing a wrapper for +`skopt `__ optimizers. -Installation -~~~~~~~~~~~~ +For more information, you can find the documentation at +`orionalgoskopt.readthedocs.io `__. -.. code-block:: sh - pip install orion.algo.skopt +.. _robo-plugin: -Configuration -~~~~~~~~~~~~~ +Robust Bayesian Optimization +---------------------------- -.. code-block:: yaml +This package is a plugin providing a wrapper for +`RoBO `__ optimizers. 
- experiment: - algorithms: - BayesianOptimizer: - seed: null - n_initial_points: 10 - acq_func: gp_hedge - alpha: 1.0e-10 - n_restarts_optimizer: 0 - noise: "gaussian" - normalize_y: False - -``seed`` - -``n_initial_points`` - -Number of evaluations of ``func`` with initialization points -before approximating it with ``base_estimator``. Points provided as -``x0`` count as initialization points. If len(x0) < n_initial_points -additional points are sampled at random. - -``acq_func`` - -Function to minimize over the posterior distribution. Can be: -``["LCB", "EI", "PI", "gp_hedge", "EIps", "PIps"]``. Check skopt -docs for details. - -``alpha`` - -Value added to the diagonal of the kernel matrix during fitting. -Larger values correspond to increased noise level in the observations -and reduce potential numerical issues during fitting. If an array is -passed, it must have the same number of entries as the data used for -fitting and is used as datapoint-dependent noise level. Note that this -is equivalent to adding a WhiteKernel with c=alpha. Allowing to specify -the noise level directly as a parameter is mainly for convenience and -for consistency with Ridge. - -``n_restarts_optimizer`` - -The number of restarts of the optimizer for finding the kernel's -parameters which maximize the log-marginal likelihood. The first run -of the optimizer is performed from the kernel's initial parameters, -the remaining ones (if any) from thetas sampled log-uniform randomly -from the space of allowed theta-values. If greater than 0, all bounds -must be finite. Note that n_restarts_optimizer == 0 implies that one -run is performed. - -``noise`` - -If set to "gaussian", then it is assumed that y is a noisy estimate of f(x) where the -noise is gaussian. - -``normalize_y`` - -Whether the target values y are normalized, i.e., the mean of the -observed target values become zero. This parameter should be set to -True if the target values' mean is expected to differ considerable from -zero. When enabled, the normalization effectively modifies the GP's -prior based on the data, which contradicts the likelihood principle; -normalization is thus disabled per default. +You will find in this plugin many models for Bayesian Optimization: +`Gaussian Process `__, +`Gaussian Process with MCMC `__, +`Random Forest `__, +`DNGO `__ and +`BOHAMIANN `__. + +For more information, you can find the documentation at +`epistimio.github.io/orion.algo.robo `__. .. _parallel-strategies: diff --git a/setup.py b/setup.py index 9e2912457..07f67bcaf 100644 --- a/setup.py +++ b/setup.py @@ -59,6 +59,7 @@ "legacy = orion.storage.legacy:Legacy", ], "Executor": [ + "singleexecutor = orion.executor.single_backend:SingleExecutor", "joblib = orion.executor.joblib_backend:Joblib", "dask = orion.executor.dask_backend:Dask", ], diff --git a/src/orion/algo/asha.py b/src/orion/algo/asha.py index d678359c5..0923fc313 100644 --- a/src/orion/algo/asha.py +++ b/src/orion/algo/asha.py @@ -102,10 +102,10 @@ class ASHA(Hyperband): Seed for the random number generator used to sample new trials. Default: ``None`` num_rungs: int, optional - Number of rungs for the largest bracket. If not defined, it will be equal to (base + 1) of - the fidelity dimension. In the original paper, - num_rungs == log(fidelity.high/fidelity.low) / log(fidelity.base) + 1. - Default: log(fidelity.high/fidelity.low) / log(fidelity.base) + 1 + Number of rungs for the largest bracket. If not defined, it will be equal to ``(base + 1)`` + of the fidelity dimension. 
In the original paper, + ``num_rungs == log(fidelity.high/fidelity.low) / log(fidelity.base) + 1``. + Default: ``log(fidelity.high/fidelity.low) / log(fidelity.base) + 1`` num_brackets: int Using a grace period that is too small may bias ASHA too strongly towards fast converging trials that do not lead to best results at convergence (stagglers). To diff --git a/src/orion/algo/evolution_es.py b/src/orion/algo/evolution_es.py index 30395dc89..ddd8ae73a 100644 --- a/src/orion/algo/evolution_es.py +++ b/src/orion/algo/evolution_es.py @@ -100,6 +100,17 @@ class EvolutionES(Hyperband): repetitions: int Number of execution of Hyperband. Default is numpy.inf which means to run Hyperband until no new trials can be suggested. + nums_population: int + Number of population for EvolutionES. Larger number of population often gets better + performance but causes more computation. So there is a trade-off according to the search + space and required budget of your problems. + Default: 20 + mutate: str or None, optional + In the mutate part, one can define the customized mutate function with its mutate factors, + such as multiply factor (times/divides by a multiply factor) and add factor + (add/subtract by a multiply factor). The function must be defined by + an importable string. If None, default + mutate function is used: ``orion.algo.mutate_functions.default_mutate``. """ diff --git a/src/orion/algo/hyperband.py b/src/orion/algo/hyperband.py index 6d49f2f29..742513e1e 100644 --- a/src/orion/algo/hyperband.py +++ b/src/orion/algo/hyperband.py @@ -137,8 +137,10 @@ class Hyperband(BaseAlgorithm): Seed for the random number generator used to sample new trials. Default: ``None`` repetitions: int - Number of execution of Hyperband. Default is numpy.inf which means to - run Hyperband until no new trials can be suggested. + Number of executions for Hyperband. A single execution of Hyperband takes a finite budget of + ``(log(R)/log(eta) + 1) * (log(R)/log(eta) + 1) * R``, and ``repetitions`` allows you to run + multiple executions of Hyperband. Default is ``numpy.inf`` which means to run Hyperband + until no new trials can be suggested. """ diff --git a/src/orion/algo/random.py b/src/orion/algo/random.py index e3d7c9080..6b57c3292 100644 --- a/src/orion/algo/random.py +++ b/src/orion/algo/random.py @@ -12,15 +12,19 @@ class Random(BaseAlgorithm): - """Implement a algorithm that samples randomly from the problem's space.""" + """An algorithm that samples randomly from the problem's space. - def __init__(self, space, seed=None): - """Random sampler takes no other hyperparameter than the problem's space - itself. + Parameters + ---------- + space: `orion.algo.space.Space` + Optimisation space with priors for each dimension. + seed: None, int or sequence of int + Seed for the random number generator used to sample new trials. + Default: ``None`` - :param space: `orion.algo.space.Space` of optimization. - :param seed: Integer seed for the random number generator. 
- """ + """ + + def __init__(self, space, seed=None): super(Random, self).__init__(space, seed=seed) def seed_rng(self, seed): diff --git a/src/orion/algo/space.py b/src/orion/algo/space.py index d738d22e5..51c40ee6f 100644 --- a/src/orion/algo/space.py +++ b/src/orion/algo/space.py @@ -35,6 +35,7 @@ import numpy from scipy.stats import distributions +from orion.core.utils import float_to_digits_list from orion.core.utils.points import flatten_dims, regroup_dims logger = logging.getLogger(__name__) @@ -319,7 +320,7 @@ def shape(self): _, _, _, size = self.prior._parse_args_rvs( *self._args, # pylint:disable=protected-access size=self._shape, - **self._kwargs + **self._kwargs, ) return size @@ -470,14 +471,61 @@ def cast(self, point): return casted_point @staticmethod - def get_cardinality(shape, interval): + def get_cardinality(shape, interval, precision, prior_name): """Return the number of all the possible points based and shape and interval""" - return numpy.inf + if precision is None or prior_name not in ["loguniform", "reciprocal"]: + return numpy.inf + + # If loguniform, compute every possible combinations based on precision + # for each orders of magnitude. + + def format_number(number): + """Turn number into an array of digits, the size of the precision""" + + formated_number = numpy.zeros(precision) + digits_list = float_to_digits_list(number) + lenght = min(len(digits_list), precision) + formated_number[:lenght] = digits_list[:lenght] + + return formated_number + + min_number = format_number(interval[0]) + max_number = format_number(interval[1]) + + # Compute the number of orders of magnitude spanned by lower and upper bounds + # (if lower and upper bounds on same order of magnitude, span is equal to 1) + lower_order = numpy.floor(numpy.log10(numpy.abs(interval[0]))) + upper_order = numpy.floor(numpy.log10(numpy.abs(interval[1]))) + order_span = upper_order - lower_order + 1 + + # Total number of possibilities for an order of magnitude + full_cardinality = 9 * 10 ** (precision - 1) + + def num_below(number): + + return ( + numpy.clip(number, a_min=0, a_max=9) + * 10 ** numpy.arange(precision - 1, -1, -1) + ).sum() + + # Number of values out of lower bound on lowest order of magnitude + cardinality_below = num_below(min_number) + # Number of values out of upper bound on highest order of magnitude. + # Remove 1 to be inclusive. + cardinality_above = full_cardinality - num_below(max_number) - 1 + + # Full cardinality on all orders of magnitude, minus those out of bounds. 
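+        # Illustrative example (added for clarity, values computed from the code
+        # above): with precision=2 and interval=(1.0, 10.0), order_span is 2,
+        # full_cardinality is 90, cardinality_below is 10 and cardinality_above
+        # is 79, so the expression below gives 90 * 2 - 10 - 79 = 91, i.e. the
+        # 91 representable values 1.0, 1.1, ..., 9.9, 10.0.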
+ cardinality = ( + full_cardinality * order_span - cardinality_below - cardinality_above + ) + return int(cardinality) ** int(numpy.prod(shape) if shape else 1) @property def cardinality(self): """Return the number of all the possible points from Integer `Dimension`""" - return Real.get_cardinality(self.shape, self.interval()) + return Real.get_cardinality( + self.shape, self.interval(), self.precision, self._prior_name + ) class _Discrete(Dimension): diff --git a/src/orion/algo/tpe.py b/src/orion/algo/tpe.py index c4bb384bc..ce0287c3d 100644 --- a/src/orion/algo/tpe.py +++ b/src/orion/algo/tpe.py @@ -475,7 +475,7 @@ def _sample_real_point(self, dimension, below_points, above_points, is_log=False def _sample_int_point(self, dimension, below_points, above_points): """Sample one value for integer dimension based on the observed good and bad points""" low, high = dimension.interval() - choices = range(low, high) + choices = range(low, high + 1) below_points = numpy.array(below_points).astype(int) - low above_points = numpy.array(above_points).astype(int) - low @@ -589,7 +589,7 @@ def sample(self, num=1, attempts=10): f"Failed to sample in interval ({self.low}, {self.high})" ) pt = new_points.pop(0) - if self.low <= pt < self.high: + if self.low <= pt <= self.high: point.append(pt) break diff --git a/src/orion/benchmark/__init__.py b/src/orion/benchmark/__init__.py index 075eb5851..0cfee9844 100644 --- a/src/orion/benchmark/__init__.py +++ b/src/orion/benchmark/__init__.py @@ -328,7 +328,7 @@ def execute(self, n_workers=1): for _, experiment in self.experiments_info: # TODO: it is a blocking call - experiment.workon(self.task, max_trials, n_workers) + experiment.workon(self.task, n_workers=n_workers, max_trials=max_trials) def status(self): """Return status of the study""" diff --git a/src/orion/client/__init__.py b/src/orion/client/__init__.py index 0994d00e4..4b99b6496 100644 --- a/src/orion/client/__init__.py +++ b/src/orion/client/__init__.py @@ -335,7 +335,8 @@ def workon( producer = Producer(experiment) experiment_client = ExperimentClient(experiment, producer) - experiment_client.workon(function, n_workers=1, max_trials=max_trials) + with experiment_client.tmp_executor("singleexecutor", n_workers=1): + experiment_client.workon(function, n_workers=1, max_trials=max_trials) finally: # Restore singletons diff --git a/src/orion/client/experiment.py b/src/orion/client/experiment.py index f0a17485a..a7054104f 100644 --- a/src/orion/client/experiment.py +++ b/src/orion/client/experiment.py @@ -294,6 +294,15 @@ def fetch_trials_by_status(self, status, with_evc_tree=False): status, with_evc_tree=with_evc_tree ) + def fetch_pending_trials(self, with_evc_tree=False): + """Fetch all trials with status new, interrupted or suspended + + Trials are sorted based on ``Trial.submit_time`` + + :return: list of :class:`orion.core.worker.trial.Trial` objects + """ + return self._experiment.fetch_pending_trials(with_evc_tree=with_evc_tree) + def fetch_noncompleted_trials(self, with_evc_tree=False): """Fetch non-completed trials of this `Experiment` instance. diff --git a/src/orion/core/__init__.py b/src/orion/core/__init__.py index cf0c20df3..1da4d4248 100644 --- a/src/orion/core/__init__.py +++ b/src/orion/core/__init__.py @@ -300,6 +300,14 @@ def define_evc_config(config): # TODO: This should be built automatically like get_branching_args_group # After this, the cmdline parser should be built based on config. 
+ evc_config.add_option( + "enable", + option_type=bool, + default=False, + env_var="ORION_EVC_ENABLE", + help="Enable the Experiment Version Control. Defaults to False.", + ) + evc_config.add_option( "auto_resolution", option_type=bool, diff --git a/src/orion/core/cli/db/upgrade.py b/src/orion/core/cli/db/upgrade.py index e2263da85..bafcdc748 100644 --- a/src/orion/core/cli/db/upgrade.py +++ b/src/orion/core/cli/db/upgrade.py @@ -12,7 +12,6 @@ import sys import orion.core.io.experiment_builder as experiment_builder -import orion.core.utils.backward as backward from orion.core.io.database.ephemeraldb import EphemeralCollection from orion.core.io.database.mongodb import MongoDB from orion.core.io.database.pickleddb import PickledDB @@ -126,7 +125,6 @@ def upgrade_documents(storage): """Upgrade scheme of the documents""" for experiment in storage.fetch_experiments({}): add_version(experiment) - add_space(experiment) storage.update_experiment(uid=experiment.pop("_id"), **experiment) @@ -135,11 +133,6 @@ def add_version(experiment): experiment.setdefault("version", 1) -def add_space(experiment): - """Add space to metadata if not present""" - backward.populate_space(experiment) - - def update_indexes(database): """Remove user from unique indices. diff --git a/src/orion/core/cli/evc.py b/src/orion/core/cli/evc.py index 6cab43a8a..b616075d4 100644 --- a/src/orion/core/cli/evc.py +++ b/src/orion/core/cli/evc.py @@ -10,6 +10,15 @@ from orion.core.evc.conflicts import Resolution +def _add_enable_argument(parser): + parser.add_argument( + "--enable-evc", + action="store_true", + default=None, + help="Enable the Experiment Version Control.", + ) + + def _add_auto_resolution_argument(parser): parser.add_argument( "--auto-resolution", @@ -106,6 +115,7 @@ def _add_branch_to_argument(parser, resolution_class): resolution_arguments = { + "enable": _add_enable_argument, "auto_resolution": _add_auto_resolution_argument, "manual_resolution": _add_manual_resolution_argument, "non_monitored_arguments": _add_non_monitored_arguments_argument, @@ -133,6 +143,7 @@ def get_branching_args_group(parser): description="Arguments to automatically resolved branching events.", ) + _add_enable_argument(branching_args_group) _add_manual_resolution_argument(branching_args_group) _add_non_monitored_arguments_argument(branching_args_group) _add_ignore_code_changes_argument(branching_args_group) diff --git a/src/orion/core/cli/hunt.py b/src/orion/core/cli/hunt.py index e88dfc9b6..4f4c1b8a3 100644 --- a/src/orion/core/cli/hunt.py +++ b/src/orion/core/cli/hunt.py @@ -198,8 +198,10 @@ def main(args): signal.signal(signal.SIGTERM, _handler) - workon( - experiment, - ignore_code_changes=config["branching"].get("ignore_code_changes"), - **worker_config - ) + # If EVC is not enabled, we force Consumer to ignore code changes. 
+ if not config["branching"].get("enable", orion.core.config.evc.enable): + ignore_code_changes = True + else: + ignore_code_changes = config["branching"].get("ignore_code_changes") + + workon(experiment, ignore_code_changes=ignore_code_changes, **worker_config) diff --git a/src/orion/core/evc/adapters.py b/src/orion/core/evc/adapters.py index 9ea958efb..842008db6 100644 --- a/src/orion/core/evc/adapters.py +++ b/src/orion/core/evc/adapters.py @@ -33,6 +33,7 @@ import copy from abc import ABCMeta, abstractmethod +from orion.algo.space import Dimension from orion.core.io.space_builder import DimensionBuilder from orion.core.utils import Factory from orion.core.worker.trial import Trial @@ -278,6 +279,9 @@ def forward(self, trials): :meth:`orion.core.evc.adapters.BaseAdapter.forward` """ + if self.param.value is Dimension.NO_DEFAULT_VALUE: + return [] + adapted_trials = [] for trial in trials: diff --git a/src/orion/core/evc/conflicts.py b/src/orion/core/evc/conflicts.py index 93772d985..a330072ff 100644 --- a/src/orion/core/evc/conflicts.py +++ b/src/orion/core/evc/conflicts.py @@ -1169,9 +1169,15 @@ def detect(cls, old_config, new_config, branching_config=None): old_hash_commit = old_config["metadata"].get("VCS", None) new_hash_commit = new_config["metadata"].get("VCS") - ignore_code_changes = branching_config is not None and branching_config.get( - "ignore_code_changes", False - ) + # Will be overriden by global config if not set in branching_config + ignore_code_changes = None + # Try using user defined ignore_code_changes + if branching_config is not None: + ignore_code_changes = branching_config.get("ignore_code_changes", None) + # Otherwise use global conf's ignore_code_changes + if ignore_code_changes is None: + ignore_code_changes = orion.core.config.evc.ignore_code_changes + if ignore_code_changes: log.debug("Ignoring code changes") if ( @@ -1318,7 +1324,7 @@ def get_nameless_args( log.debug("User script config: %s", user_script_config) log.debug("Non monitored arguments: %s", non_monitored_arguments) - parser = OrionCmdlineParser(user_script_config) + parser = OrionCmdlineParser(user_script_config, allow_non_existing_files=True) parser.set_state_dict(config["metadata"]["parser"]) priors = parser.priors_to_normal() nameless_keys = set(parser.parser.arguments.keys()) - set(priors.keys()) @@ -1478,7 +1484,7 @@ def get_nameless_config(cls, config, user_script_config=None, **branching_kwargs if user_script_config is None: user_script_config = orion.core.config.worker.user_script_config - parser = OrionCmdlineParser(user_script_config) + parser = OrionCmdlineParser(user_script_config, allow_non_existing_files=True) parser.set_state_dict(config["metadata"]["parser"]) nameless_config = dict( diff --git a/src/orion/core/evc/experiment.py b/src/orion/core/evc/experiment.py index 35e5a40ca..c8d8b5ca3 100644 --- a/src/orion/core/evc/experiment.py +++ b/src/orion/core/evc/experiment.py @@ -14,6 +14,7 @@ analyzing an EVC tree. """ +import functools import logging from orion.core.evc.tree import TreeNode @@ -192,10 +193,11 @@ def retrieve_trials(node, parent_or_children): children_trials.set_parent(parent_trials) adapt_trials(children_trials) + return sum([node.item["trials"] for node in children_trials.root], []) -def _adapt_parent_trials(node, parent_trials_node): +def _adapt_parent_trials(node, parent_trials_node, ids): """Adapt trials from the parent recursively .. 
note:: @@ -203,11 +205,29 @@ def _adapt_parent_trials(node, parent_trials_node): To call with node.map(fct, node.parent) to connect with parents """ + # Ids from children are passed to prioritized them if they are also present in parent nodes. + node_ids = ( + set( + trial.compute_trial_hash(trial, ignore_lie=True, ignore_experiment=True) + for trial in node.item["trials"] + ) + | ids + ) if parent_trials_node is not None: adapter = node.item["experiment"].refers["adapter"] for parent in parent_trials_node.root: parent.item["trials"] = adapter.forward(parent.item["trials"]) + # if trial is in current exp, filter out + parent.item["trials"] = [ + trial + for trial in parent.item["trials"] + if trial.compute_trial_hash( + trial, ignore_lie=True, ignore_experiment=True + ) + not in node_ids + ] + return node.item, parent_trials_node @@ -219,15 +239,38 @@ def _adapt_children_trials(node, children_trials_nodes): To call with node.map(fct, node.children) to connect with children """ + ids = set( + trial.compute_trial_hash(trial, ignore_lie=True, ignore_experiment=True) + for trial in node.item["trials"] + ) + for child in children_trials_nodes: adapter = child.item["experiment"].refers["adapter"] for subchild in child: # Includes child itself subchild.item["trials"] = adapter.backward(subchild.item["trials"]) + # if trial is in current node, filter out + subchild.item["trials"] = [ + trial + for trial in subchild.item["trials"] + if trial.compute_trial_hash( + trial, ignore_lie=True, ignore_experiment=True + ) + not in ids + ] + return node.item, children_trials_nodes def adapt_trials(trials_tree): """Adapt trials recursively so that they are all compatible with current experiment.""" - trials_tree.map(_adapt_parent_trials, trials_tree.parent) trials_tree.map(_adapt_children_trials, trials_tree.children) + ids = set() + for child in trials_tree.children: + for trial in child.item["trials"]: + ids.add( + trial.compute_trial_hash(trial, ignore_lie=True, ignore_experiment=True) + ) + trials_tree.map( + functools.partial(_adapt_parent_trials, ids=ids), trials_tree.parent + ) diff --git a/src/orion/core/io/experiment_branch_builder.py b/src/orion/core/io/experiment_branch_builder.py index 953082af4..806add91d 100644 --- a/src/orion/core/io/experiment_branch_builder.py +++ b/src/orion/core/io/experiment_branch_builder.py @@ -54,7 +54,9 @@ class ExperimentBranchBuilder: """ - def __init__(self, conflicts, manual_resolution=None, **branching_arguments): + def __init__( + self, conflicts, enabled=True, manual_resolution=None, **branching_arguments + ): # TODO: handle all other arguments if manual_resolution is None: manual_resolution = orion.core.config.evc.manual_resolution @@ -69,7 +71,8 @@ def __init__(self, conflicts, manual_resolution=None, **branching_arguments): self.branching_arguments = branching_arguments self.conflicting_config.update(branching_arguments) - self.resolve_conflicts() + if enabled: + self.resolve_conflicts() @property def experiment_config(self): diff --git a/src/orion/core/io/experiment_builder.py b/src/orion/core/io/experiment_builder.py index 5af39330a..57e2fc263 100644 --- a/src/orion/core/io/experiment_builder.py +++ b/src/orion/core/io/experiment_builder.py @@ -200,28 +200,14 @@ def build(name, version=None, branching=None, **config): conflicts = _get_conflicts(experiment, branching) must_branch = len(conflicts.get()) > 1 or branching.get("branch_to") - if must_branch: - if len(conflicts.get()) > 1: - log.debug("Experiment must branch because of conflicts") - else: - assert 
branching.get("branch_to") - log.debug("Experiment branching forced with ``branch_to``") - branched_experiment = _branch_experiment( - experiment, conflicts, version, branching - ) - log.debug("Now attempting registration of branched experiment in DB.") - try: - _register_experiment(branched_experiment) - log.debug("Branched experiment successfully registered in DB.") - except DuplicateKeyError as e: - log.debug( - "Experiment registration failed. This is likely due to a race condition " - "during branching. Now rolling back and re-attempting building " - "the branched experiment." - ) - raise RaceCondition("There was a race condition during branching.") from e - return branched_experiment + if must_branch and branching.get("enable", orion.core.config.evc.enable): + return _attempt_branching(conflicts, experiment, version, branching) + elif must_branch: + log.warning( + "Running experiment in a different state:\n%s", + _get_branching_status_string(conflicts, branching), + ) log.debug("No branching required.") @@ -609,6 +595,36 @@ def _update_experiment(experiment): log.debug("Experiment configuration successfully updated in DB.") +def _attempt_branching(conflicts, experiment, version, branching): + if len(conflicts.get()) > 1: + log.debug("Experiment must branch because of conflicts") + else: + assert branching.get("branch_to") + log.debug("Experiment branching forced with ``branch_to``") + branched_experiment = _branch_experiment(experiment, conflicts, version, branching) + log.debug("Now attempting registration of branched experiment in DB.") + try: + _register_experiment(branched_experiment) + log.debug("Branched experiment successfully registered in DB.") + except DuplicateKeyError as e: + log.debug( + "Experiment registration failed. This is likely due to a race condition " + "during branching. Now rolling back and re-attempting building " + "the branched experiment." + ) + raise RaceCondition("There was a race condition during branching.") from e + + return branched_experiment + + +def _get_branching_status_string(conflicts, branching_arguments): + experiment_brancher = ExperimentBranchBuilder( + conflicts, enabled=False, **branching_arguments + ) + branching_prompt = BranchingPrompt(experiment_brancher) + return branching_prompt.get_status() + + def _branch_experiment(experiment, conflicts, version, branching_arguments): """Create a new branch experiment with adapters for the given conflicts""" experiment_brancher = ExperimentBranchBuilder(conflicts, **branching_arguments) @@ -720,6 +736,7 @@ def build_from_args(cmdargs): :func:`orion.core.io.experiment_builder.build` for more information on experiment creation. 
""" + cmd_config = get_cmd_config(cmdargs) if "name" not in cmd_config: diff --git a/src/orion/core/io/resolve_config.py b/src/orion/core/io/resolve_config.py index 40058a3ac..019127c66 100644 --- a/src/orion/core/io/resolve_config.py +++ b/src/orion/core/io/resolve_config.py @@ -99,14 +99,10 @@ def fetch_config_from_cmdargs(cmdargs): ) cmdargs_config["worker.max_trials"] = cmdargs.pop("worker_trials") - mappings = dict( - experiment=dict(exp_max_broken="max_broken", exp_max_trials="max_trials"), - worker=dict(worker_max_broken="max_broken", worker_max_trials="max_trials"), - ) - mappings = dict( experiment=dict(max_broken="exp_max_broken", max_trials="exp_max_trials"), worker=dict(max_broken="worker_max_broken", max_trials="worker_max_trials"), + evc=dict(enable="enable_evc"), ) global_config = orion.core.config.to_dict() diff --git a/src/orion/core/utils/__init__.py b/src/orion/core/utils/__init__.py index 4489a3a59..b4697f168 100644 --- a/src/orion/core/utils/__init__.py +++ b/src/orion/core/utils/__init__.py @@ -25,6 +25,25 @@ def nesteddict(): return defaultdict(nesteddict) +def float_to_digits_list(number): + """Convert a float into a list of digits, without conserving exponant""" + # Get rid of scientific-format exponant + str_number = str(number) + str_number = str_number.split("e")[0] + + res = [int(ele) for ele in str_number if ele.isdigit()] + + # Remove trailing 0s in front + while len(res) > 1 and res[0] == 0: + res.pop(0) + + # Remove training 0s at end + while len(res) > 1 and res[-1] == 0: + res.pop(-1) + + return res + + def get_all_subclasses(parent): """Get set of subclasses recursively""" subclasses = set() diff --git a/src/orion/core/worker/consumer.py b/src/orion/core/worker/consumer.py index 50132e102..9485b26e8 100644 --- a/src/orion/core/worker/consumer.py +++ b/src/orion/core/worker/consumer.py @@ -89,6 +89,7 @@ def __init__( if interrupt_signal_code is None: interrupt_signal_code = orion.core.config.worker.interrupt_signal_code + # NOTE: If ignore_code_changes is None, we can assume EVC is enabled. if ignore_code_changes is None: ignore_code_changes = orion.core.config.evc.ignore_code_changes @@ -235,9 +236,6 @@ def _consume(self, trial, workdirname): return results_file def _validate_code_version(self): - if self.ignore_code_changes: - return - old_config = self.experiment.configuration new_config = copy.deepcopy(old_config) new_config["metadata"]["VCS"] = infer_versioning_metadata( @@ -248,10 +246,17 @@ def _validate_code_version(self): from orion.core.evc.conflicts import CodeConflict conflicts = list(CodeConflict.detect(old_config, new_config)) - if conflicts: + if conflicts and not self.ignore_code_changes: raise BranchingEvent( f"Code changed between execution of 2 trials:\n{conflicts[0]}" ) + elif conflicts: + log.warning( + "Code changed between execution of 2 trials. Enable EVC with option " + "`ignore_code_changes` set to False to raise an error when trials are executed " + "with different versions. 
For more information, see documentation at " + "https://orion.readthedocs.io/en/stable/user/config.html#experiment-version-control" + ) # pylint: disable = no-self-use def execute_process(self, cmd_args, environ): diff --git a/src/orion/core/worker/experiment.py b/src/orion/core/worker/experiment.py index 8cbe623a9..2109b7ab5 100644 --- a/src/orion/core/worker/experiment.py +++ b/src/orion/core/worker/experiment.py @@ -16,6 +16,7 @@ from orion.core.evc.adapters import BaseAdapter from orion.core.evc.experiment import ExperimentNode +from orion.core.io.database import DuplicateKeyError from orion.core.utils.exceptions import UnsupportedOperation from orion.core.utils.flatten import flatten from orion.core.utils.singleton import update_singletons @@ -240,11 +241,13 @@ def reserve_trial(self, score_handle=None): self.fix_lost_trials() + self.duplicate_pending_trials() + selected_trial = self._storage.reserve_trial(self) log.debug("reserved trial (trial: %s)", selected_trial) return selected_trial - def fix_lost_trials(self): + def fix_lost_trials(self, with_evc_tree=True): """Find lost trials and set them to interrupted. A lost trial is defined as a trial whose heartbeat as not been updated since two times @@ -254,7 +257,18 @@ def fix_lost_trials(self): """ self._check_if_writable() - trials = self._storage.fetch_lost_trials(self) + + if self._node is not None and with_evc_tree: + for experiment in self._node.root: + if experiment.item is self: + continue + + # Ugly hack to allow resetting parent's lost trials. + experiment.item._mode = "w" + experiment.item.fix_lost_trials(with_evc_tree=False) + experiment.item._mode = "r" + + trials = self.fetch_lost_trials(with_evc_tree=False) for trial in trials: log.debug("Setting lost trial %s status to interrupted...", trial.id) @@ -265,6 +279,43 @@ def fix_lost_trials(self): except FailedUpdate: log.debug("failed") + def duplicate_pending_trials(self): + """Find pending trials in EVC and duplicate them in current experiment. + + An experiment cannot execute trials from parent experiments otherwise some trials + may have been executed in different environements of different experiment although they + belong to the same experiment. Instead, trials that are pending in parent and child + experiment are copied over to current experiment so that it can be reserved and executed. + The parent or child experiment will only see their original copy of the trial, and + the current experiment will only see the new copy of the trial. + """ + self._check_if_writable() + evc_pending_trials = self._select_evc_call( + with_evc_tree=True, function="fetch_pending_trials" + ) + exp_pending_trials = self._select_evc_call( + with_evc_tree=False, function="fetch_pending_trials" + ) + + exp_trials_ids = set( + trial.compute_trial_hash(trial, ignore_experiment=True) + for trial in exp_pending_trials + ) + + for trial in evc_pending_trials: + if ( + trial.compute_trial_hash(trial, ignore_experiment=True) + in exp_trials_ids + ): + continue + + trial.experiment = self.id + # Danger danger, race conditions! + try: + self._storage.register_trial(trial) + except DuplicateKeyError: + log.debug("Race condition while trying to duplicate trial %s", trial.id) + # pylint:disable=unused-argument def update_completed_trial(self, trial, results_file=None): """Inform database about an evaluated `trial` with results. 
@@ -354,6 +405,24 @@ def fetch_trials_by_status(self, status, with_evc_tree=False): """ return self._select_evc_call(with_evc_tree, "fetch_trials_by_status", status) + def fetch_pending_trials(self, with_evc_tree=False): + """Fetch all trials with status new, interrupted or suspended + + Trials are sorted based on `Trial.submit_time` + + :return: list of `Trial` objects + """ + return self._select_evc_call(with_evc_tree, "fetch_pending_trials") + + def fetch_lost_trials(self, with_evc_tree=False): + """Fetch all reserved trials that are lost (old heartbeat) + + Trials are sorted based on `Trial.submit_time` + + :return: list of `Trial` objects + """ + return self._select_evc_call(with_evc_tree, "fetch_lost_trials") + def fetch_noncompleted_trials(self, with_evc_tree=False): """Fetch non-completed trials of this `Experiment` instance. diff --git a/src/orion/core/worker/strategy.py b/src/orion/core/worker/strategy.py index 1b2a26ea8..7a9df2690 100644 --- a/src/orion/core/worker/strategy.py +++ b/src/orion/core/worker/strategy.py @@ -142,9 +142,11 @@ def configuration(self): def observe(self, points, results): """See BaseParallelStrategy.observe""" super(MaxParallelStrategy, self).observe(points, results) - self.max_result = max( + results = [ result["objective"] for result in results if result["objective"] is not None - ) + ] + if results: + self.max_result = max(results) def lie(self, trial): """See BaseParallelStrategy.lie""" @@ -175,9 +177,10 @@ def observe(self, points, results): objective_values = [ result["objective"] for result in results if result["objective"] is not None ] - self.mean_result = sum(value for value in objective_values) / float( - len(objective_values) - ) + if objective_values: + self.mean_result = sum(value for value in objective_values) / float( + len(objective_values) + ) def lie(self, trial): """See BaseParallelStrategy.lie""" diff --git a/src/orion/core/worker/transformer.py b/src/orion/core/worker/transformer.py index 9bb0fc58c..d9c618565 100644 --- a/src/orion/core/worker/transformer.py +++ b/src/orion/core/worker/transformer.py @@ -690,16 +690,12 @@ def shape(self): @property def cardinality(self): """Wrap original :class:`orion.algo.space.Dimension` capacity""" - if self.type == "real": - return Real.get_cardinality(self.shape, self.interval()) - elif self.type == "integer": + # May be a discretized real, must reduce cardinality + if self.type == "integer": return Integer.get_cardinality(self.shape, self.interval()) - elif self.type == "categorical": - return Categorical.get_cardinality(self.shape, self.interval()) - elif self.type == "fidelity": - return Fidelity.get_cardinality(self.shape, self.interval()) - else: - raise RuntimeError(f"No cardinality can be computed for type `{self.type}`") + + # Else we don't care what transformation is. + return self.original_dimension.cardinality class ReshapedDimension(TransformedDimension): diff --git a/src/orion/executor/single_backend.py b/src/orion/executor/single_backend.py new file mode 100644 index 000000000..42b300cb2 --- /dev/null +++ b/src/orion/executor/single_backend.py @@ -0,0 +1,29 @@ +# -*- coding: utf-8 -*- +""" +Executor without parallelism for debugging +========================================== + +""" +import functools + +from orion.executor.base import BaseExecutor + + +class SingleExecutor(BaseExecutor): + """Single thread executor + + Simple executor for debugging. No parameters. + + The submitted functions are wrapped with ``functools.partial`` + which are then executed in ``wait()``. 
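+
+    A minimal usage sketch (illustrative only; it relies on nothing beyond the
+    ``submit`` and ``wait`` methods defined below)::
+
+        executor = SingleExecutor()
+        future = executor.submit(sum, [1, 2, 3])
+        assert executor.wait([future]) == [6]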
+ + """ + + def __init__(self, n_workers=1, **config): + super(SingleExecutor, self).__init__(n_workers=1) + + def wait(self, futures): + return [future() for future in futures] + + def submit(self, function, *args, **kwargs): + return functools.partial(function, *args, **kwargs) diff --git a/src/orion/plotting/backend_plotly.py b/src/orion/plotting/backend_plotly.py index 8ff75dcd3..b056dd765 100644 --- a/src/orion/plotting/backend_plotly.py +++ b/src/orion/plotting/backend_plotly.py @@ -173,12 +173,11 @@ def reformat_competitions(experiments): ): competitions = [] remaining = True - i = 0 n_competitions = len(next(iter(experiments.values()))) for ith_competition in range(n_competitions): competition = {} for name in experiments.keys(): - competition[name] = experiments[name][i] + competition[name] = experiments[name][ith_competition] competitions.append(competition) elif isinstance(experiments, dict): competitions = experiments @@ -636,7 +635,7 @@ def get_objective_name(experiments): name=name, ) if "best_var" in exp_data: - dy = exp_data["best_var"] + dy = numpy.sqrt(exp_data["best_var"]) fig.add_scatter( x=list(x) + list(x)[::-1], y=list(y - dy) + list(y + dy)[::-1], diff --git a/src/orion/testing/__init__.py b/src/orion/testing/__init__.py index 4e7441c70..e03e7fe26 100644 --- a/src/orion/testing/__init__.py +++ b/src/orion/testing/__init__.py @@ -8,6 +8,7 @@ """ # pylint: disable=protected-access +import contextlib import copy import datetime import os @@ -170,6 +171,15 @@ def utcnow(cls): return default_datetime() +@contextlib.contextmanager +def mocked_datetime(monkeypatch): + """Make ``datetime.datetime.utcnow()`` return an arbitrary date.""" + with monkeypatch.context() as m: + m.setattr(datetime, "datetime", MockDatetime) + + yield MockDatetime + + class AssertNewFile: def __init__(self, filename): self.filename = filename diff --git a/src/orion/testing/evc.py b/src/orion/testing/evc.py new file mode 100644 index 000000000..0cb8e3cc0 --- /dev/null +++ b/src/orion/testing/evc.py @@ -0,0 +1,86 @@ +import contextlib +import copy + +from orion.client import build_experiment, get_experiment + + +@contextlib.contextmanager +def disable_duplication(monkeypatch): + def stub(self): + pass + + with monkeypatch.context() as m: + m.setattr( + "orion.core.worker.experiment.Experiment.duplicate_pending_trials", stub + ) + + yield + + +def generate_trials(exp, trials): + """Generate trials for each item in trials. + + Items of trials can be either dictionary of valid hyperparameters based on exp.space and status + or `None`. + + If status not provided, 'new' is used by default. + + For items that are `None`, trials are suggested with exp.suggest(). + """ + for trial_config in trials: + trial_config = copy.deepcopy(trial_config) + status = trial_config.pop("status", None) if trial_config else None + if trial_config: + trial = exp.insert(params=trial_config) + else: + with exp.suggest() as trial: + # Releases suggested trial when leaving with-clause. 
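+                # ``trial`` stays bound after the with-block, so the status
+                # update below can still reference it.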
+ pass + + if status is not None: + exp._experiment._storage.set_trial_status( + trial, + status, + heartbeat=trial.submit_time if status == "reserved" else None, + ) + + +def build_root_experiment(space=None, trials=None): + """Build a root experiment and generate trials.""" + if space is None: + space = {"x": "uniform(0, 100)", "y": "uniform(0, 100)", "z": "uniform(0, 100)"} + if trials is None: + trials = [{"x": i, "y": i * 2, "z": i ** 2} for i in range(4)] + + root = build_experiment(name="root", max_trials=len(trials), space=space) + + generate_trials(root, trials) + + +def build_child_experiment(space=None, trials=None, name="child", parent="root"): + """Build a child experiment by branching from `parent` and generate trials.""" + if trials is None: + trials = [None for i in range(6)] + + max_trials = get_experiment(parent).max_trials + len(trials) + + child = build_experiment( + name=name, + space=space, + max_trials=max_trials, + branching={"branch_from": parent, "enable": True}, + ) + assert child.name == name + assert child.version == 1 + + generate_trials(child, trials) + + +def build_grand_child_experiment(space=None, trials=None): + """Build a grand-child experiment by branching from `child` and generate trials.""" + if trials is None: + trials = [None for i in range(5)] + + build_child_experiment( + space=space, trials=trials, name="grand-child", parent="child" + ) diff --git a/tests/conftest.py b/tests/conftest.py index 574bc2840..a7ef281d9 100644 --- a/tests/conftest.py +++ b/tests/conftest.py @@ -1,6 +1,7 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- """Common fixtures and utils for unittests and functional tests.""" +import datetime import getpass import os import tempfile @@ -21,7 +22,7 @@ from orion.core.worker.trial import Trial from orion.storage.base import Storage, get_storage, setup_storage from orion.storage.legacy import Legacy -from orion.testing import OrionState +from orion.testing import OrionState, mocked_datetime # So that assert messages show up in tests defined outside testing suite. 
pytest.register_assert_rewrite("orion.testing") @@ -390,3 +391,10 @@ def storage(setup_pickleddb_database): def with_user_userxyz(monkeypatch): """Make ``getpass.getuser()`` return ``'userxyz'``.""" monkeypatch.setattr(getpass, "getuser", lambda: "userxyz") + + +@pytest.fixture() +def random_dt(monkeypatch): + """Make ``datetime.datetime.utcnow()`` return an arbitrary date.""" + with mocked_datetime(monkeypatch) as datetime: + yield datetime.utcnow() diff --git a/tests/functional/algos/test_algos.py b/tests/functional/algos/test_algos.py index 5e54897c3..4538dc7f7 100644 --- a/tests/functional/algos/test_algos.py +++ b/tests/functional/algos/test_algos.py @@ -132,6 +132,14 @@ def test_cardinality_stop(algorithm): assert len(trials) == 16 assert trials[-1].status == "completed" + discrete_space["x"] = "loguniform(0.1, 1, precision=1)" + exp = workon(rosenbrock, discrete_space, algorithms=algorithm, max_trials=30) + print(exp.space.cardinality) + + trials = exp.fetch_trials() + assert len(trials) == 10 + assert trials[-1].status == "completed" + @pytest.mark.parametrize( "algorithm", algorithm_configs.values(), ids=list(algorithm_configs.keys()) @@ -217,7 +225,7 @@ def test_with_evc(algorithm): space=space_with_fidelity, algorithms=algorithm, max_trials=30, - branching={"branch_from": "exp"}, + branching={"branch_from": "exp", "enable": True}, ) assert exp.version == 2 @@ -277,7 +285,7 @@ def test_parallel_workers(algorithm): name=name, space=space_with_fidelity, algorithms=algorithm, - branching={"branch_from": name}, + branching={"branch_from": name, "enable": True}, ) assert exp.version == 2 diff --git a/tests/functional/backward_compatibility/test_versions.py b/tests/functional/backward_compatibility/test_versions.py index bbf827bc9..b249099b8 100644 --- a/tests/functional/backward_compatibility/test_versions.py +++ b/tests/functional/backward_compatibility/test_versions.py @@ -244,7 +244,6 @@ def test_db_upgrade(self): experiments = storage.fetch_experiments({}) assert "version" in experiments[0] - assert "priors" in experiments[0]["metadata"] def test_db_test(self): """Verify db test command""" diff --git a/tests/functional/backward_compatibility/versions.txt b/tests/functional/backward_compatibility/versions.txt index 61c46d278..64056e0cd 100644 --- a/tests/functional/backward_compatibility/versions.txt +++ b/tests/functional/backward_compatibility/versions.txt @@ -7,3 +7,4 @@ 0.1.12 0.1.13 0.1.14 +0.1.15 diff --git a/tests/functional/branching/test_branching.py b/tests/functional/branching/test_branching.py index fefa246cc..2f7e965d8 100644 --- a/tests/functional/branching/test_branching.py +++ b/tests/functional/branching/test_branching.py @@ -5,6 +5,7 @@ import os import pytest +import yaml import orion.core.cli import orion.core.io.experiment_builder as experiment_builder @@ -33,6 +34,36 @@ def init_full_x(setup_pickleddb_database, monkeypatch): orion.core.cli.main("insert -n {name} script -x=0".format(name=name).split(" ")) +@pytest.fixture +def init_no_evc(monkeypatch): + """Add y dimension but overwrite original""" + monkeypatch.chdir(os.path.dirname(os.path.abspath(__file__))) + name = "full_x" + branch = "wont_exist" + orion.core.cli.main( + ( + "hunt --init-only -n {branch} --branch-from {name} --cli-change-type noeffect " + "./black_box_with_y.py " + "-x~uniform(-10,10) " + "-y~+uniform(-10,10,default_value=1)" + ) + .format(name=name, branch=branch) + .split(" ") + ) + orion.core.cli.main( + "insert -n {name} script -x=1 -y=1".format(name=name).split(" ") + ) + 
orion.core.cli.main( + "insert -n {name} script -x=-1 -y=1".format(name=name).split(" ") + ) + orion.core.cli.main( + "insert -n {name} script -x=1 -y=-1".format(name=name).split(" ") + ) + orion.core.cli.main( + "insert -n {name} script -x=-1 -y=-1".format(name=name).split(" ") + ) + + @pytest.fixture def init_full_x_full_y(init_full_x): """Add y dimension to original""" @@ -41,6 +72,7 @@ def init_full_x_full_y(init_full_x): orion.core.cli.main( ( "hunt --init-only -n {branch} --branch-from {name} --cli-change-type noeffect " + "--enable-evc " "./black_box_with_y.py " "-x~uniform(-10,10) " "-y~+uniform(-10,10,default_value=1)" @@ -69,7 +101,9 @@ def init_half_x_full_y(init_full_x_full_y): branch = "half_x_full_y" orion.core.cli.main( ( - "hunt --init-only -n {branch} --branch-from {name} ./black_box_with_y.py " + "hunt --init-only -n {branch} --branch-from {name} " + "--enable-evc " + "./black_box_with_y.py " "-x~+uniform(0,10) " "-y~uniform(-10,10,default_value=1)" ) @@ -91,7 +125,9 @@ def init_full_x_half_y(init_full_x_full_y): branch = "full_x_half_y" orion.core.cli.main( ( - "hunt --init-only -n {branch} --branch-from {name} ./black_box_with_y.py " + "hunt --init-only -n {branch} --branch-from {name} " + "--enable-evc " + "./black_box_with_y.py " "-x~uniform(-10,10) " "-y~+uniform(0,10,default_value=1)" ) @@ -114,6 +150,7 @@ def init_full_x_rename_y_z(init_full_x_full_y): orion.core.cli.main( ( "hunt --init-only -n {branch} --branch-from {name} --cli-change-type noeffect " + "--enable-evc " "./black_box_with_z.py -x~uniform(-10,10) -y~>z -z~uniform(-10,10,default_value=1)" ) .format(name=name, branch=branch) @@ -141,6 +178,7 @@ def init_full_x_rename_half_y_half_z(init_full_x_half_y): orion.core.cli.main( ( "hunt --init-only -n {branch} --branch-from {name} --cli-change-type noeffect " + "--enable-evc " "./black_box_with_z.py -x~uniform(-10,10) -y~>z -z~uniform(0,10,default_value=1)" ) .format(name=name, branch=branch) @@ -162,6 +200,7 @@ def init_full_x_rename_half_y_full_z(init_full_x_half_y): orion.core.cli.main( ( "hunt --init-only -n {branch} --branch-from {name} --cli-change-type noeffect " + "--enable-evc " "./black_box_with_z.py " "-x~uniform(-10,10) -y~>z " "-z~+uniform(-10,10,default_value=1)" @@ -191,6 +230,7 @@ def init_full_x_remove_y(init_full_x_full_y): orion.core.cli.main( ( "hunt --init-only -n {branch} --branch-from {name} --cli-change-type noeffect " + "--enable-evc " "./black_box.py " "-x~uniform(-10,10) -y~-" ) @@ -213,6 +253,7 @@ def init_full_x_full_y_add_z_remove_y(init_full_x_full_y): orion.core.cli.main( ( "hunt --init-only -n {branch} --branch-from {name} --cli-change-type noeffect " + "--enable-evc " "./black_box.py -x~uniform(-10,10) " "-z~uniform(-20,10,default_value=0)" ) @@ -235,6 +276,7 @@ def init_full_x_remove_z(init_full_x_rename_y_z): orion.core.cli.main( ( "hunt --init-only -n {branch} --branch-from {name} --cli-change-type noeffect " + "--enable-evc " "./black_box.py " "-x~uniform(-10,10) -z~-" ) @@ -257,6 +299,7 @@ def init_full_x_remove_z_default_4(init_full_x_rename_y_z): orion.core.cli.main( ( "hunt --init-only -n {branch} --branch-from {name} --cli-change-type noeffect " + "--enable-evc " "./black_box.py " "-x~uniform(-10,10) -z~-4" ) @@ -280,6 +323,7 @@ def init_full_x_new_algo(init_full_x): ( "hunt --init-only -n {branch} --branch-from {name} " "--algorithm-change --config new_algo_config.yaml " + "--enable-evc " "./black_box.py -x~uniform(-10,10)" ) .format(name=name, branch=branch) @@ -295,12 +339,13 @@ def 
init_full_x_new_algo(init_full_x): @pytest.fixture def init_full_x_new_cli(init_full_x): - """Remove z from full x full z and give a default value of 4""" + """Change commandline call""" name = "full_x" branch = "full_x_new_cli" orion.core.cli.main( ( "hunt --init-only -n {branch} --branch-from {name} --cli-change-type noeffect " + "--enable-evc " "./black_box_new.py -x~uniform(-10,10) --a-new argument" ) .format(name=name, branch=branch) @@ -320,7 +365,9 @@ def init_full_x_ignore_cli(init_full_x): name = "full_x_with_new_opt" orion.core.cli.main( ( - "hunt --init-only -n {name} --config orion_config.yaml ./black_box_new.py " + "hunt --init-only -n {name} --config orion_config.yaml " + "--enable-evc " + "./black_box_new.py " "-x~uniform(-10,10)" ) .format(name=name) @@ -331,7 +378,9 @@ def init_full_x_ignore_cli(init_full_x): orion.core.cli.main( ( "hunt --init-only -n {name} --non-monitored-arguments a-new " - "--config orion_config.yaml ./black_box_new.py " + "--config orion_config.yaml " + "--enable-evc " + "./black_box_new.py " "-x~uniform(-10,10) --a-new argument" ) .format(name=name) @@ -341,6 +390,37 @@ def init_full_x_ignore_cli(init_full_x): orion.core.cli.main("insert -n {name} script -x=-1.2".format(name=name).split(" ")) +@pytest.fixture +def init_full_x_new_config(init_full_x, tmp_path): + """Add configuration script""" + name = "full_x" + branch = "full_x_new_config" + + config_file = tmp_path / "config.yaml" + config_file.write_text( + yaml.dump( + {"new_arg": "some-value", "y": "orion~uniform(-10, 10, default_value=0)"} + ) + ) + + orion.core.cli.main( + ( + "hunt --enable-evc --init-only -n {branch} --branch-from {name} " + "--cli-change-type noeffect " + "--config-change-type unsure " + "./black_box_new.py -x~uniform(-10,10) --config {config_file}" + ) + .format(name=name, branch=branch, config_file=config_file) + .split(" ") + ) + orion.core.cli.main( + "insert -n {branch} script -x=1.2 -y=2".format(branch=branch).split(" ") + ) + orion.core.cli.main( + "insert -n {branch} script -x=-1.2 -y=3".format(branch=branch).split(" ") + ) + + @pytest.fixture def init_entire( init_half_x_full_y, # 1.1.1 @@ -372,11 +452,33 @@ def test_init(init_full_x): experiment = experiment_builder.load(name="full_x") assert experiment.refers["adapter"].configuration == [] + assert experiment.space.configuration == {"/x": "uniform(-10, 10)"} pairs = get_name_value_pairs(experiment.fetch_trials()) assert pairs == ((("/x", 0),),) +def test_no_evc_overwrite(setup_pickleddb_database, init_no_evc): + """Test that the experiment config is overwritten if --enable-evc is not passed""" + storage = get_storage() + assert len(get_storage().fetch_experiments({})) == 1 + experiment = experiment_builder.load(name="full_x") + + assert experiment.refers["adapter"].configuration == [] + assert experiment.space.configuration == { + "/x": "uniform(-10, 10)", + "/y": "uniform(-10, 10, default_value=1)", + } + + pairs = get_name_value_pairs(experiment.fetch_trials()) + assert pairs == ( + (("/x", 1), ("/y", 1)), + (("/x", -1), ("/y", 1)), + (("/x", 1), ("/y", -1)), + (("/x", -1), ("/y", -1)), + ) + + def test_full_x_full_y(init_full_x_full_y): """Test if full x full y is properly initialized and can fetch original trial""" experiment = experiment_builder.load(name="full_x_full_y") @@ -729,15 +831,15 @@ def test_run_entire_full_x_full_y(init_entire): orion.core.cli.main( ( - "-vv hunt --max-trials 20 --pool-size 1 -n full_x_full_y " + "-vv hunt --max-trials 30 --pool-size 1 -n full_x_full_y " "./black_box_with_y.py " 
"-x~uniform(-10,10) " "-y~uniform(-10,10,default_value=1)" ).split(" ") ) - assert len(experiment.fetch_trials(with_evc_tree=True)) == 39 - assert len(experiment.fetch_trials()) == 20 + assert len(experiment.fetch_trials(with_evc_tree=True)) == 30 + assert len(experiment.fetch_trials(with_evc_tree=False)) == 30 def test_run_entire_full_x_full_y_no_args(init_entire): @@ -748,11 +850,11 @@ def test_run_entire_full_x_full_y_no_args(init_entire): assert len(experiment.fetch_trials()) == 4 orion.core.cli.main( - ("-vv hunt --max-trials 20 --pool-size 1 -n full_x_full_y").split(" ") + ("-vv hunt --max-trials 30 --pool-size 1 -n full_x_full_y").split(" ") ) - assert len(experiment.fetch_trials(with_evc_tree=True)) == 39 - assert len(experiment.fetch_trials()) == 20 + assert len(experiment.fetch_trials(with_evc_tree=True)) == 30 + assert len(experiment.fetch_trials(with_evc_tree=False)) == 30 def test_new_algo(init_full_x_new_algo): @@ -770,8 +872,8 @@ def test_new_algo(init_full_x_new_algo): ("-vv hunt --max-trials 20 --pool-size 1 -n full_x_new_algo").split(" ") ) - assert len(experiment.fetch_trials(with_evc_tree=True)) == 21 - assert len(experiment.fetch_trials()) == 20 + assert len(experiment.fetch_trials(with_evc_tree=True)) == 20 + assert len(experiment.fetch_trials(with_evc_tree=False)) == 20 def test_new_algo_not_resolved(init_full_x, capsys): @@ -781,7 +883,9 @@ def test_new_algo_not_resolved(init_full_x, capsys): error_code = orion.core.cli.main( ( "hunt --init-only -n {branch} --branch-from {name} --config new_algo_config.yaml " - "--manual-resolution ./black_box.py -x~uniform(-10,10)" + "--manual-resolution " + "--enable-evc " + "./black_box.py -x~uniform(-10,10)" ) .format(name=name, branch=branch) .split(" ") @@ -800,7 +904,9 @@ def test_ignore_cli(init_full_x_ignore_cli): orion.core.cli.main( ( "hunt --init-only -n {name} --non-monitored-arguments a-new " - "--manual-resolution ./black_box.py -x~uniform(-10,10)" + "--manual-resolution " + "--enable-evc " + "./black_box.py -x~uniform(-10,10)" ) .format(name=name) .split(" ") @@ -814,7 +920,9 @@ def test_new_code_triggers_code_conflict(capsys): error_code = orion.core.cli.main( ( "hunt --init-only -n {name} " - "--manual-resolution ./black_box.py -x~uniform(-10,10)" + "--manual-resolution " + "--enable-evc " + "./black_box.py -x~uniform(-10,10)" ) .format(name=name) .split(" ") @@ -832,7 +940,7 @@ def test_new_code_triggers_code_conflict_with_name_only(capsys): """Test that a different git hash is generating a child, even if cmdline is not passed""" name = "full_x" error_code = orion.core.cli.main( - ("hunt --init-only -n {name} " "--manual-resolution") + ("hunt --init-only -n {name} --manual-resolution --enable-evc") .format(name=name) .split(" ") ) @@ -852,7 +960,9 @@ def test_new_code_ignores_code_conflict(): error_code = orion.core.cli.main( ( "hunt --worker-max-trials 2 -n {name} --ignore-code-changes " - "--manual-resolution ./black_box.py -x~uniform(-10,10)" + "--manual-resolution " + "--enable-evc " + "./black_box.py -x~uniform(-10,10)" ) .format(name=name) .split(" ") @@ -865,7 +975,9 @@ def test_new_orion_version_triggers_conflict(capsys): """Test that a different git hash is generating a child""" name = "full_x" error_code = orion.core.cli.main( - ("hunt --init-only -n {name} --manual-resolution").format(name=name).split(" ") + ("hunt --init-only -n {name} --manual-resolution --enable-evc") + .format(name=name) + .split(" ") ) assert error_code == 1 @@ -890,8 +1002,8 @@ def test_new_cli(init_full_x_new_cli): ("-vv hunt 
--max-trials 20 --pool-size 1 -n full_x_new_cli").split(" ") ) - assert len(experiment.fetch_trials(with_evc_tree=True)) == 21 - assert len(experiment.fetch_trials()) == 20 + assert len(experiment.fetch_trials(with_evc_tree=True)) == 20 + assert len(experiment.fetch_trials(with_evc_tree=False)) == 20 @pytest.mark.usefixtures("init_full_x") @@ -899,13 +1011,146 @@ def test_no_cli_no_branching(): """Test that no branching occurs when using same code and not passing cmdline""" name = "full_x" error_code = orion.core.cli.main( - ("hunt --init-only -n {name} " "--manual-resolution") + ("hunt --init-only -n {name} --manual-resolution --enable-evc") .format(name=name) .split(" ") ) assert error_code == 0 +def test_new_script(init_full_x, monkeypatch): + """Test that experiment can branch with new script path even if previous is not present""" + + name = "full_x" + experiment = experiment_builder.load(name=name) + + # Mess with DB to change script path + metadata = experiment.metadata + metadata["user_script"] = "oh_oh_idontexist.py" + metadata["user_args"][0] = "oh_oh_idontexist.py" + metadata["parser"]["parser"]["arguments"][0][1] = "oh_oh_idontexist.py" + get_storage().update_experiment(experiment, metadata=metadata) + + orion.core.cli.main( + ( + "hunt --enable-evc --init-only -n {name} --config orion_config.yaml ./black_box.py " + "-x~uniform(-10,10) --some-new args" + ) + .format(name=name) + .split(" ") + ) + + new_experiment = experiment_builder.load(name=name) + assert new_experiment.version == experiment.version + 1 + + assert new_experiment.refers["adapter"].configuration == [ + {"change_type": "break", "of_type": "commandlinechange"} + ] + + +def test_new_config(init_full_x_new_config, monkeypatch): + """Test experiment branching with new config""" + experiment = experiment_builder.load(name="full_x_new_config") + + assert experiment.refers["adapter"].configuration == [ + {"change_type": "noeffect", "of_type": "commandlinechange"}, + { + "of_type": "dimensionaddition", + "param": {"name": "/y", "type": "real", "value": 0}, + }, + {"change_type": "unsure", "of_type": "scriptconfigchange"}, + ] + + assert len(experiment.fetch_trials(with_evc_tree=True)) == 3 + assert len(experiment.fetch_trials()) == 2 + + +def test_missing_config(init_full_x_new_config, monkeypatch): + """Test that experiment can branch with new config if previous is not present""" + name = "full_x_new_config" + experiment = experiment_builder.load(name=name) + + # Mess with DB to change config path + metadata = experiment.metadata + bad_config_file = "ho_ho_idontexist.yaml" + config_file = metadata["parser"]["file_config_path"] + metadata["parser"]["file_config_path"] = bad_config_file + metadata["parser"]["parser"]["arguments"][2][1] = bad_config_file + metadata["user_args"][3] = bad_config_file + get_storage().update_experiment(experiment, metadata=metadata) + + orion.core.cli.main( + ( + "hunt --enable-evc --init-only -n {name} " + "--cli-change-type noeffect " + "--config-change-type unsure " + "./black_box_new.py -x~uniform(-10,10) --config {config_file}" + ) + .format(name=name, config_file=config_file) + .split(" ") + ) + + new_experiment = experiment_builder.load(name=name) + assert new_experiment.version == experiment.version + 1 + + assert new_experiment.refers["adapter"].configuration == [ + {"change_type": "noeffect", "of_type": "commandlinechange"} + ] + + +def test_missing_and_new_config(init_full_x_new_config, monkeypatch): + """Test that experiment can branch with new config if previous is not present, 
with correct + diff. + """ + name = "full_x_new_config" + experiment = experiment_builder.load(name=name) + + # Mess with DB to change config path + metadata = experiment.metadata + bad_config_file = "ho_ho_idontexist.yaml" + config_file = metadata["parser"]["file_config_path"] + metadata["parser"]["file_config_path"] = bad_config_file + metadata["parser"]["parser"]["arguments"][2][1] = bad_config_file + metadata["user_args"][3] = bad_config_file + + with open(config_file, "w") as f: + f.write( + yaml.dump( + { + "new_arg": "some-new-value", + "y": "orion~uniform(-10, 20, default_value=0)", + } + ) + ) + + get_storage().update_experiment(experiment, metadata=metadata) + + orion.core.cli.main( + ( + "hunt --enable-evc --init-only -n {name} " + "--cli-change-type noeffect " + "--config-change-type unsure " + "./black_box_new.py -x~uniform(-10,10) --config {config_file}" + ) + .format(name=name, config_file=config_file) + .split(" ") + ) + + new_experiment = experiment_builder.load(name=name) + assert new_experiment.version == experiment.version + 1 + + assert new_experiment.refers["adapter"].configuration == [ + { + "name": "/y", + "new_prior": "uniform(-10, 20, default_value=0)", + "of_type": "dimensionpriorchange", + "old_prior": "uniform(-10, 10, default_value=0)", + }, + {"change_type": "noeffect", "of_type": "commandlinechange"}, + {"change_type": "unsure", "of_type": "scriptconfigchange"}, + ] + + def test_auto_resolution_does_resolve(init_full_x_full_y, monkeypatch): """Test that auto-resolution does resolve all conflicts""" # Patch cmdloop to avoid autoresolution's prompt @@ -917,7 +1162,9 @@ def test_auto_resolution_does_resolve(init_full_x_full_y, monkeypatch): # experiment orion.core.cli.main( ( - "hunt --init-only -n {branch} --branch-from {name} ./black_box_with_y.py " + "hunt --init-only -n {branch} --branch-from {name} " + "--enable-evc " + "./black_box_with_y.py " "-x~uniform(0,10) " "-w~choices(['a','b'])" ) @@ -956,7 +1203,9 @@ def test_auto_resolution_with_fidelity(init_full_x_full_y, monkeypatch): # experiment orion.core.cli.main( ( - "hunt --init-only -n {branch} --branch-from {name} ./black_box_with_y.py " + "hunt --init-only -n {branch} --branch-from {name} " + "--enable-evc " + "./black_box_with_y.py " "-x~uniform(0,10) " "-w~fidelity(1,10)" ) @@ -991,14 +1240,18 @@ def test_init_w_version_from_parent_w_children( monkeypatch.chdir(os.path.dirname(os.path.abspath(__file__))) execute( "hunt --init-only -n experiment --config orion_config.yaml " + "--enable-evc " "./black_box.py -x~normal(0,1)" ) execute( - "hunt --init-only -n experiment ./black_box.py -x~normal(0,1) -y~+normal(0,1)" + "hunt --init-only -n experiment " + "--enable-evc " + "./black_box.py -x~normal(0,1) -y~+normal(0,1)" ) execute( "hunt --init-only -n experiment -v 1 " + "--enable-evc " "./black_box.py -x~normal(0,1) -y~+normal(0,1) -z~normal(0,1)", assert_code=1, ) @@ -1014,13 +1267,18 @@ def test_init_w_version_from_exp_wout_child(setup_pickleddb_database, monkeypatc monkeypatch.chdir(os.path.dirname(os.path.abspath(__file__))) execute( "hunt --init-only -n experiment --config orion_config.yaml " + "--enable-evc " "./black_box.py -x~normal(0,1)" ) execute( - "hunt --init-only -n experiment ./black_box.py -x~normal(0,1) -y~+normal(0,1)" + "hunt --init-only -n experiment " + "--enable-evc " + "./black_box.py -x~normal(0,1) -y~+normal(0,1)" ) execute( - "hunt --init-only -n experiment -v 2 ./black_box.py " + "hunt --init-only -n experiment -v 2 " + "--enable-evc " + "./black_box.py " "-x~normal(0,1) 
-y~+normal(0,1) -z~+normal(0,1)" ) @@ -1033,13 +1291,18 @@ def test_init_w_version_gt_max(setup_pickleddb_database, monkeypatch): monkeypatch.chdir(os.path.dirname(os.path.abspath(__file__))) execute( "hunt --init-only -n experiment --config orion_config.yaml " + "--enable-evc " "./black_box.py -x~normal(0,1)" ) execute( - "hunt --init-only -n experiment ./black_box.py -x~normal(0,1) -y~+normal(0,1)" + "hunt --init-only -n experiment " + "--enable-evc " + "./black_box.py -x~normal(0,1) -y~+normal(0,1)" ) execute( - "hunt --init-only -n experiment -v 2000 ./black_box.py " + "hunt --init-only -n experiment -v 2000 " + "--enable-evc " + "./black_box.py " "-x~normal(0,1) -y~+normal(0,1) -z~+normal(0,1)" ) @@ -1052,14 +1315,18 @@ def test_init_check_increment_w_children(setup_pickleddb_database, monkeypatch): monkeypatch.chdir(os.path.dirname(os.path.abspath(__file__))) execute( "hunt --init-only -n experiment --config orion_config.yaml " + "--enable-evc " "./black_box.py -x~normal(0,1)" ) execute( - "hunt --init-only -n experiment --branch-to experiment_2 ./black_box.py " + "hunt --init-only -n experiment --branch-to experiment_2 " + "--enable-evc " + "./black_box.py " "-x~normal(0,1) -y~+normal(0,1)" ) execute( - "hunt --init-only -n experiment ./black_box.py -x~normal(0,1) -z~+normal(0,1)" + "hunt --init-only -n experiment --enable-evc " + "./black_box.py -x~normal(0,1) -z~+normal(0,1)" ) exp = get_storage().fetch_experiments({"name": "experiment", "version": 2}) @@ -1071,13 +1338,18 @@ def test_branch_from_selected_version(setup_pickleddb_database, monkeypatch): monkeypatch.chdir(os.path.dirname(os.path.abspath(__file__))) execute( "hunt --init-only -n experiment --config orion_config.yaml " + "--enable-evc " "./black_box.py -x~normal(0,1)" ) execute( - "hunt --init-only -n experiment ./black_box.py -x~normal(0,1) -y~+normal(0,1)" + "hunt --init-only -n experiment " + "--enable-evc " + "./black_box.py -x~normal(0,1) -y~+normal(0,1)" ) execute( - "hunt --init-only -n experiment --version 1 -b experiment_2 ./black_box.py " + "hunt --init-only -n experiment --version 1 -b experiment_2 " + "--enable-evc " + "./black_box.py " "-x~normal(0,1) -z~+normal(0,1)" ) diff --git a/tests/functional/commands/conftest.py b/tests/functional/commands/conftest.py index 1bc64047a..03b09b1e7 100644 --- a/tests/functional/commands/conftest.py +++ b/tests/functional/commands/conftest.py @@ -179,6 +179,7 @@ def two_experiments(monkeypatch, storage): [ "hunt", "--init-only", + "--enable-evc", "-n", "test_double_exp", "--branch-to", @@ -204,7 +205,7 @@ def family_with_trials(two_experiments): x["value"] = x_value y["value"] = x_value trial = Trial(experiment=exp.id, params=[x], status=status) - x["value"] = x_value + x["value"] = x_value + 0.5 # To avoid duplicates trial2 = Trial(experiment=exp2.id, params=[x, y], status=status) x_value += 1 Database().write("trials", trial.to_dict()) @@ -239,6 +240,7 @@ def three_experiments_family(two_experiments, storage): [ "hunt", "--init-only", + "--enable-evc", "-n", "test_double_exp", "--branch-to", @@ -260,7 +262,7 @@ def three_family_with_trials(three_experiments_family, family_with_trials): x_value = 0 for status in Trial.allowed_stati: - x["value"] = x_value + x["value"] = x_value + 0.75 # To avoid duplicates z["value"] = x_value * 100 trial = Trial(experiment=exp.id, params=[x, z], status=status) x_value += 1 @@ -274,6 +276,7 @@ def three_experiments_family_branch(two_experiments, storage): [ "hunt", "--init-only", + "--enable-evc", "-n", "test_double_exp_child", 
"--branch-to", @@ -302,7 +305,7 @@ def three_family_branch_with_trials( x_value = 0 for status in Trial.allowed_stati: - x["value"] = x_value + x["value"] = x_value + 0.25 # To avoid duplicates y["value"] = x_value * 10 z["value"] = x_value * 100 trial = Trial(experiment=exp.id, params=[x, y, z], status=status) @@ -317,6 +320,7 @@ def two_experiments_same_name(one_experiment, storage): [ "hunt", "--init-only", + "--enable-evc", "-n", "test_single_exp", "./black_box.py", @@ -336,6 +340,7 @@ def three_experiments_family_same_name(two_experiments_same_name, storage): [ "hunt", "--init-only", + "--enable-evc", "-n", "test_single_exp", "-v", @@ -359,6 +364,7 @@ def three_experiments_branch_same_name(two_experiments_same_name, storage): [ "hunt", "--init-only", + "--enable-evc", "-n", "test_single_exp", "-b", @@ -379,6 +385,7 @@ def three_experiments_same_name(two_experiments_same_name, storage): [ "hunt", "--init-only", + "--enable-evc", "-n", "test_single_exp", "./black_box.py", @@ -397,6 +404,7 @@ def three_experiments_same_name_with_trials(two_experiments_same_name, storage): [ "hunt", "--init-only", + "--enable-evc", "-n", "test_single_exp", "./black_box.py", @@ -416,7 +424,7 @@ def three_experiments_same_name_with_trials(two_experiments_same_name, storage): z = {"name": "/z", "type": "real"} x_value = 0 for status in Trial.allowed_stati: - x["value"] = x_value + x["value"] = x_value + 0.1 # To avoid duplicates y["value"] = x_value * 10 z["value"] = x_value * 100 trial = Trial(experiment=exp.id, params=[x], status=status) diff --git a/tests/functional/commands/test_insert_command.py b/tests/functional/commands/test_insert_command.py index 4c2e070a3..b6f7c4562 100644 --- a/tests/functional/commands/test_insert_command.py +++ b/tests/functional/commands/test_insert_command.py @@ -222,6 +222,7 @@ def test_insert_with_version(storage, monkeypatch, script_path): [ "hunt", "--init-only", + "--enable-evc", "-n", "experiment", "-c", diff --git a/tests/functional/commands/test_status_command.py b/tests/functional/commands/test_status_command.py index 6dc9129c5..6edafe5fe 100644 --- a/tests/functional/commands/test_status_command.py +++ b/tests/functional/commands/test_status_command.py @@ -501,12 +501,12 @@ def test_two_related_w_a_wout_c(family_with_trials, capsys): ======================== id status -------------------------------- ----------- - 890b4f07685ed020f5d9e28cac9316e1 broken - ff81ff46da5ffe6bd623fb38a06df993 completed - 78a3e60699eee1d0b9bc51a049168fce interrupted - 13cd454155748351790525e3079fb620 new - d1b7ecbd3621de9195a42c76defb6603 reserved - 33d6208ef03cb236a8f3b567665c357d suspended + 2b08bdbad673e60fef739b7f162d4120 broken + af45e26f349f9b186e5c05a91d854fb5 completed + b334ea9e2c86873ddb18b206cf72cc27 interrupted + 16b024079173ca3903eb956c478afa3d new + 17ce012a15a7398d3e7703d0c13e21c2 reserved + d4721fe7f50df1fe3ba60424df6dec67 suspended """ @@ -537,12 +537,12 @@ def test_three_unrelated_w_a_wout_c(three_experiments_with_trials, capsys): ======================== id status -------------------------------- ----------- - 890b4f07685ed020f5d9e28cac9316e1 broken - ff81ff46da5ffe6bd623fb38a06df993 completed - 78a3e60699eee1d0b9bc51a049168fce interrupted - 13cd454155748351790525e3079fb620 new - d1b7ecbd3621de9195a42c76defb6603 reserved - 33d6208ef03cb236a8f3b567665c357d suspended + 2b08bdbad673e60fef739b7f162d4120 broken + af45e26f349f9b186e5c05a91d854fb5 completed + b334ea9e2c86873ddb18b206cf72cc27 interrupted + 16b024079173ca3903eb956c478afa3d new + 17ce012a15a7398d3e7703d0c13e21c2 
reserved + d4721fe7f50df1fe3ba60424df6dec67 suspended test_single_exp-v1 @@ -585,24 +585,24 @@ def test_three_related_w_a_wout_c(three_family_with_trials, capsys): ======================== id status -------------------------------- ----------- - 890b4f07685ed020f5d9e28cac9316e1 broken - ff81ff46da5ffe6bd623fb38a06df993 completed - 78a3e60699eee1d0b9bc51a049168fce interrupted - 13cd454155748351790525e3079fb620 new - d1b7ecbd3621de9195a42c76defb6603 reserved - 33d6208ef03cb236a8f3b567665c357d suspended + 2b08bdbad673e60fef739b7f162d4120 broken + af45e26f349f9b186e5c05a91d854fb5 completed + b334ea9e2c86873ddb18b206cf72cc27 interrupted + 16b024079173ca3903eb956c478afa3d new + 17ce012a15a7398d3e7703d0c13e21c2 reserved + d4721fe7f50df1fe3ba60424df6dec67 suspended test_double_exp_child2-v1 ========================= id status -------------------------------- ----------- - 1c238040d6b6d8423d99a08551fe0998 broken - 2c13424a9212ab92ea592bdaeb1c13e9 completed - a2680fbda1faa9dfb94946cf25536f44 interrupted - abbda454d0577ded5b8e784a9d6d5abb new - df58aa8fd875f129f7faa84eb15ca453 reserved - 71657e86bad0f2e8b06098a64cb883b6 suspended + f736224f9687f86c493a004696abd95b broken + 2def838a2eb199820f283e1948e7c37a completed + 75752e1ba3c9007e42616249087a7fef interrupted + 5f5e1c8d886ef0b0c0666d6db7bf1723 new + 2623a01bd2483a5e18fac9bc3dfbdee2 reserved + 2eecad70c53bb52c99efad36f2d9502f suspended """ @@ -633,24 +633,24 @@ def test_three_related_branch_w_a_wout_c(three_family_branch_with_trials, capsys ======================== id status -------------------------------- ----------- - 890b4f07685ed020f5d9e28cac9316e1 broken - ff81ff46da5ffe6bd623fb38a06df993 completed - 78a3e60699eee1d0b9bc51a049168fce interrupted - 13cd454155748351790525e3079fb620 new - d1b7ecbd3621de9195a42c76defb6603 reserved - 33d6208ef03cb236a8f3b567665c357d suspended + 2b08bdbad673e60fef739b7f162d4120 broken + af45e26f349f9b186e5c05a91d854fb5 completed + b334ea9e2c86873ddb18b206cf72cc27 interrupted + 16b024079173ca3903eb956c478afa3d new + 17ce012a15a7398d3e7703d0c13e21c2 reserved + d4721fe7f50df1fe3ba60424df6dec67 suspended test_double_exp_grand_child-v1 ============================== id status -------------------------------- ----------- - e374d8f802aed52c07763545f46228a7 broken - f9ee14ff9ef0b95ed7a24860731c85a9 completed - 3f7dff101490727d5fa0efeb36ca6366 interrupted - 40838d46dbf7778a3cb51b7a09118391 new - b7860a18b2700cce4e8009cde543975c reserved - cd406126bc350ad82ac77c75174cc8a2 suspended + e1c929d9c4d48eca4dcd463690e4096d broken + e8eec526a7f7fdea5e4a30d969ec69ae completed + e88f1d26158efb393ae1278c6ef115fe interrupted + 4510dc7d16a692c7415dd2898faced9f new + 523881db96da0de9dd972ef8f3545f81 reserved + f967423d15c50ddf88c242f511997ff7 suspended """ @@ -853,7 +853,7 @@ def test_two_related_w_ac(family_with_trials, capsys): cbb766d729294f77f0ca86ff2bf72707 completed ca6576848f17201852225d816fb71fcc interrupted 28097ba31dbdffc0aa265c6bc5c98b0f new -4c409da13bdc93c54f6997797c296356 new +cd16dd40955335aae3bd40371e636b71 new adbe6c400cd1e667696e28fbecd000a0 reserved 5679af6c6bb54aa8042043008ab2bc1f suspended @@ -878,7 +878,7 @@ def test_three_unrelated_w_ac(three_experiments_with_trials, capsys): cbb766d729294f77f0ca86ff2bf72707 completed ca6576848f17201852225d816fb71fcc interrupted 28097ba31dbdffc0aa265c6bc5c98b0f new -4c409da13bdc93c54f6997797c296356 new +cd16dd40955335aae3bd40371e636b71 new adbe6c400cd1e667696e28fbecd000a0 reserved 5679af6c6bb54aa8042043008ab2bc1f suspended @@ -915,8 +915,8 @@ def 
test_three_related_w_ac(three_family_with_trials, capsys): cbb766d729294f77f0ca86ff2bf72707 completed ca6576848f17201852225d816fb71fcc interrupted 28097ba31dbdffc0aa265c6bc5c98b0f new -4c409da13bdc93c54f6997797c296356 new -b97518f91e006cd4a2805657c596b11c new +cd16dd40955335aae3bd40371e636b71 new +8d5652cba225224d6702107e97a53cd9 new adbe6c400cd1e667696e28fbecd000a0 reserved 5679af6c6bb54aa8042043008ab2bc1f suspended @@ -941,8 +941,8 @@ def test_three_related_branch_w_ac(three_family_branch_with_trials, capsys): cbb766d729294f77f0ca86ff2bf72707 completed ca6576848f17201852225d816fb71fcc interrupted 28097ba31dbdffc0aa265c6bc5c98b0f new -4c409da13bdc93c54f6997797c296356 new -5183ee9c28601cc78c0a148a386df9f9 new +cd16dd40955335aae3bd40371e636b71 new +17fab2503ac14ae55e207c7cca1b8f1f new adbe6c400cd1e667696e28fbecd000a0 reserved 5679af6c6bb54aa8042043008ab2bc1f suspended diff --git a/tests/functional/configuration/test_all_options.py b/tests/functional/configuration/test_all_options.py index d6a360dcd..56d64195b 100644 --- a/tests/functional/configuration/test_all_options.py +++ b/tests/functional/configuration/test_all_options.py @@ -3,6 +3,8 @@ import datetime import os import random +import shutil +import tempfile from contextlib import contextmanager import pytest @@ -25,6 +27,30 @@ ) +def with_storage_fork(func): + """Copy PickledDB to a tmp adress and work in the tmp path within the func execution. + + Functions decorated with this decorator should only be called after the storage has been + initialized. + """ + + def call(*args, **kwargs): + + with tempfile.NamedTemporaryFile(delete=True) as tmp_file: + storage = get_storage() + old_path = storage._db.host + storage._db.host = tmp_file.name + shutil.copyfile(old_path, tmp_file.name) + + rval = func(*args, **kwargs) + + storage._db.host = old_path + + return rval + + return call + + class ConfigurationTestSuite: """Test suite for the configuration groups""" @@ -113,7 +139,7 @@ def test_db_config(self, tmp_path): with self.setup_db_config(tmp_path): self.check_db_config() - @pytest.mark.usefixtures("with_user_userxyz") + @pytest.mark.usefixtures("with_user_userxyz", "version_XYZ") def test_local_config(self, tmp_path, monkeypatch): """Test that local config overrides db/global config""" update_singletons() @@ -513,7 +539,7 @@ def check_local_config(self, tmp_path, conf_file, monkeypatch): def check_cmd_args_config(self, tmp_path, conf_file, monkeypatch): """Check that cmdargs configuration overrides global/envvars/local configuration""" - command = f"hunt --worker-max-trials 0 -c {conf_file} --branch-from test-name" + command = f"hunt --worker-max-trials 0 -c {conf_file} --branch-from test-name --enable-evc" command += " " + " ".join( "--{} {}".format(name, value) for name, value in self.cmdargs.items() ) @@ -751,6 +777,7 @@ class TestEVCConfig(ConfigurationTestSuite): config = { "evc": { + "enable": False, "auto_resolution": False, "manual_resolution": True, "non_monitored_arguments": ["test", "one"], @@ -764,6 +791,7 @@ class TestEVCConfig(ConfigurationTestSuite): } env_vars = { + "ORION_EVC_ENABLE": "true", "ORION_EVC_MANUAL_RESOLUTION": "", "ORION_EVC_NON_MONITORED_ARGUMENTS": "test:two:others", "ORION_EVC_IGNORE_CODE_CHANGES": "", @@ -776,9 +804,10 @@ class TestEVCConfig(ConfigurationTestSuite): local = { "evc": { + "enable": False, "manual_resolution": True, "non_monitored_arguments": ["test", "local"], - "ignore_code_changes": True, + "ignore_code_changes": False, "algorithm_change": True, "code_change_type": "break", 
"cli_change_type": "break", @@ -788,9 +817,10 @@ class TestEVCConfig(ConfigurationTestSuite): } cmdargs = { + "enable-evc": True, "manual-resolution": False, "non-monitored-arguments": "test:cmdargs", - "ignore-code-changes": False, + "ignore-code-changes": True, "algorithm-change": False, "code-change-type": "noeffect", "cli-change-type": "unsure", @@ -820,37 +850,54 @@ def check_global_config(self, tmp_path, monkeypatch): assert orion.core.config.to_dict()["evc"] == self.config["evc"] name = "global-test" - command = ( - f"hunt --worker-max-trials 0 -n {name} python {script} -x~uniform(0,1)" - ) + command = f"hunt --enable-evc --worker-max-trials 0 -n {name} python {script} -x~uniform(0,1)" assert orion.core.cli.main(command.split(" ")) == 0 - # Test that manual_resolution is True and it branches when changing cli + # Test that manual_resolution is True and it branches when changing cli (thus crash) assert orion.core.cli.main(f"{command} --cli-change ".split(" ")) == 1 command = "hunt --auto-resolution " + command[5:] - command = self._check_cli_change( - name, command, version=1, change_type="noeffect" - ) + self._check_enable(name, command.replace(" --enable-evc", ""), enabled=False) + + self._check_cli_change(name, command, change_type="noeffect") + self._check_non_monitored_arguments( - name, command, version=2, non_monitored_arguments=["test", "one"] + name, command, non_monitored_arguments=["test", "one"] ) + self._check_script_config_change( - tmp_path, name, command, version=2, change_type="noeffect" - ) - self._check_code_change( - monkeypatch, - name, - command, - version=3, - mock_ignore_code_changes=None, - ignore_code_changes=self.config["evc"]["ignore_code_changes"], - change_type=self.config["evc"]["code_change_type"], + tmp_path, name, command, change_type="noeffect" ) + # EVC not enabled, code change should be ignored even if option is set to True + assert self.config["evc"]["enable"] is False + with monkeypatch.context() as m: + self._check_code_change( + m, + name, + command.replace("--enable-evc ", ""), + mock_ignore_code_changes=True, + ignore_code_changes=True, + change_type=self.config["evc"]["code_change_type"], + enable_evc=False, + ) + + # EVC is enabled, option should be honored + with monkeypatch.context() as m: + self._check_code_change( + m, + name, + command, + mock_ignore_code_changes=None, + ignore_code_changes=self.config["evc"]["ignore_code_changes"], + change_type=self.config["evc"]["code_change_type"], + enable_evc=True, + ) + def check_env_var_config(self, tmp_path, monkeypatch): """Check that env vars overrides global configuration""" + assert orion.core.config.evc.enable assert not orion.core.config.evc.manual_resolution assert not orion.core.config.evc.ignore_code_changes assert not orion.core.config.evc.algorithm_change @@ -870,24 +917,42 @@ def check_env_var_config(self, tmp_path, monkeypatch): ) assert orion.core.cli.main(command.split(" ")) == 0 - # TODO: Anything to test still??? 
- command = self._check_cli_change(name, command, version=1, change_type="unsure") - command = self._check_non_monitored_arguments( - name, command, version=2, non_monitored_arguments=["test", "two", "others"] - ) - self._check_script_config_change( - tmp_path, name, command, version=2, change_type="unsure" - ) + self._check_enable(name, command, enabled=True) - self._check_code_change( - monkeypatch, - name, - command, - version=3, - mock_ignore_code_changes=None, - ignore_code_changes=bool(self.env_vars["ORION_EVC_IGNORE_CODE_CHANGES"]), - change_type=self.env_vars["ORION_EVC_CODE_CHANGE"], + self._check_cli_change(name, command, change_type="unsure") + self._check_non_monitored_arguments( + name, command, non_monitored_arguments=["test", "two", "others"] ) + self._check_script_config_change(tmp_path, name, command, change_type="unsure") + + # Enable EVC, ignore_code_changes is False + with monkeypatch.context() as m: + self._check_code_change( + m, + name, + command, + mock_ignore_code_changes=None, + ignore_code_changes=bool( + self.env_vars["ORION_EVC_IGNORE_CODE_CHANGES"] + ), + change_type=self.env_vars["ORION_EVC_CODE_CHANGE"], + enable_evc=True, + ) + + # Disable EVC, ignore_code_changes is True for Consumer + os.environ["ORION_EVC_ENABLE"] = "" + with monkeypatch.context() as m: + self._check_code_change( + m, + name, + command, + mock_ignore_code_changes=None, + ignore_code_changes=bool( + self.env_vars["ORION_EVC_IGNORE_CODE_CHANGES"] + ), + change_type=self.env_vars["ORION_EVC_CODE_CHANGE"], + enable_evc=False, + ) def check_db_config(self): """No Storage config in DB, no test""" @@ -897,7 +962,7 @@ def check_local_config(self, tmp_path, conf_file, monkeypatch): """Check that local configuration overrides global/envvars configuration""" name = "local-test" command = ( - f"hunt --worker-max-trials 0 -n {name} -c {conf_file} " + f"hunt --enable-evc --worker-max-trials 0 -n {name} -c {conf_file} " f"python {script} -x~uniform(0,1)" ) @@ -908,61 +973,75 @@ def check_local_config(self, tmp_path, conf_file, monkeypatch): command = "hunt --auto-resolution " + command[5:] - command = self._check_cli_change( - name, command, version=1, change_type=self.local["evc"]["cli_change_type"] + self._check_enable(name, command.replace(" --enable-evc", ""), enabled=False) + + self._check_cli_change( + name, command, change_type=self.local["evc"]["cli_change_type"] ) - command = self._check_non_monitored_arguments( + self._check_non_monitored_arguments( name, command, - version=2, non_monitored_arguments=self.local["evc"]["non_monitored_arguments"], ) self._check_script_config_change( tmp_path, name, command, - version=2, change_type=self.local["evc"]["config_change_type"], ) - self._check_code_change( - monkeypatch, - name, - command, - version=3, - mock_ignore_code_changes=True, - ignore_code_changes=self.local["evc"]["ignore_code_changes"], - change_type=self.local["evc"]["code_change_type"], - ) + + # enabled evc, ignore code changes so to True + with monkeypatch.context() as m: + self._check_code_change( + m, + name, + command, + mock_ignore_code_changes=False, + ignore_code_changes=self.local["evc"]["ignore_code_changes"], + change_type=self.local["evc"]["code_change_type"], + enable_evc=True, + ) + + # disabled evc, ignore code changes so to True + with monkeypatch.context() as m: + self._check_code_change( + m, + name, + command.replace("--enable-evc ", ""), + mock_ignore_code_changes=False, + ignore_code_changes=self.local["evc"]["ignore_code_changes"], + 
change_type=self.local["evc"]["code_change_type"], + enable_evc=False, + ) def check_cmd_args_config(self, tmp_path, conf_file, monkeypatch): """Check that cmdargs configuration overrides global/envvars/local configuration""" name = "cmd-test" command = ( f"hunt --worker-max-trials 0 -c {conf_file} -n {name} " + "--enable-evc " "--auto-resolution " - "--non-monitored-arguments test:cmdargs " - "--code-change-type noeffect " - "--cli-change-type unsure " - "--config-change-type break " f"python {script} -x~uniform(0,1)" ) assert orion.core.cli.main(command.split(" ")) == 0 - command = self._check_cli_change( - name, command, version=1, change_type=self.cmdargs["cli-change-type"] + self._check_enable(name, command, enabled=True) + + self._check_cli_change( + name, + command="hunt --cli-change-type unsure " + command[5:], + change_type=self.cmdargs["cli-change-type"], ) - command = self._check_non_monitored_arguments( + self._check_non_monitored_arguments( name, - command, - version=2, + command="hunt --non-monitored-arguments test:cmdargs " + command[5:], non_monitored_arguments=self.cmdargs["non-monitored-arguments"].split(":"), ) self._check_script_config_change( tmp_path, name, - command, - version=2, + command="hunt --config-change-type break " + command[5:], change_type=self.cmdargs["config-change-type"], ) @@ -977,68 +1056,95 @@ def mock_local(cmdargs): monkeypatch.setattr(orion.core.io.resolve_config, "fetch_config", mock_local) # Check that ignore_code_changes is rightly False - self._check_code_change( - monkeypatch, - name, - command, - version=3, - mock_ignore_code_changes=False, - ignore_code_changes=False, - change_type=self.cmdargs["code-change-type"], - ) - - command = "hunt --ignore-code-changes " + command[5:] - - # Check that ignore_code_changes is now True - self._check_code_change( - monkeypatch, - name, - command, - version=4, - mock_ignore_code_changes=True, - ignore_code_changes=True, - change_type=self.cmdargs["code-change-type"], - ) + with monkeypatch.context() as m: + self._check_code_change( + m, + name, + command="hunt --code-change-type noeffect " + command[5:], + mock_ignore_code_changes=False, + ignore_code_changes=False, + change_type=self.cmdargs["code-change-type"], + enable_evc=True, + ) + + # Check that ignore_code_changes is now True because --ignore-code-changes was added + with monkeypatch.context() as m: + self._check_code_change( + m, + name, + command="hunt --ignore-code-changes --code-change-type noeffect " + + command[5:], + mock_ignore_code_changes=True, + ignore_code_changes=True, + change_type=self.cmdargs["code-change-type"], + enable_evc=True, + ) + + # Check that ignore_code_changes is forced to True in consumer + # even if --ignore-code-changes is not passed + with monkeypatch.context() as m: + self._check_code_change( + m, + name, + command.replace("--enable-evc ", ""), + mock_ignore_code_changes=False, + ignore_code_changes=False, + change_type=self.cmdargs["code-change-type"], + enable_evc=False, + ) + + @with_storage_fork + def _check_enable(self, name, command, enabled): + command += " --cli-change " + experiment = get_experiment(name) + if enabled: + assert orion.core.cli.main(command.split(" ")) == 0 + assert get_experiment(name).version == experiment.version + 1 + else: + assert orion.core.cli.main(command.split(" ")) == 0 + assert get_experiment(name).version == experiment.version - def _check_cli_change(self, name, command, version, change_type): - command += " --cli-change" + @with_storage_fork + def _check_cli_change(self, name, 
command, change_type): + command += " --cli-change " + experiment = get_experiment(name) # Test that manual_resolution is False and it branches when changing cli assert orion.core.cli.main(command.split(" ")) == 0 - experiment = get_experiment(name, version=version + 1) - assert experiment.version == version + 1 - assert experiment.refers["adapter"].configuration[0] == { + new_experiment = get_experiment(name) + + assert new_experiment.version == experiment.version + 1 + assert new_experiment.refers["adapter"].configuration[0] == { "of_type": "commandlinechange", "change_type": change_type, } - return command - - def _check_non_monitored_arguments( - self, name, command, version, non_monitored_arguments - ): + @with_storage_fork + def _check_non_monitored_arguments(self, name, command, non_monitored_arguments): for argument in non_monitored_arguments: command += f" --{argument} " + experiment = get_experiment(name) # Test that cli change with non-monitored args do not cause branching assert orion.core.cli.main(command.split(" ")) == 0 - experiment = get_experiment(name, version=version + 1) - assert experiment.version == version - - return command + assert get_experiment(name).version == experiment.version + @with_storage_fork def _check_code_change( self, monkeypatch, name, command, - version, mock_ignore_code_changes, ignore_code_changes, change_type, + enable_evc, ): + """Check if code changes are correctly ignored during experiment build and by consumer + between two trial executions. + """ # Test that code change is handled with 'no-effect' def fixed_dictionary(user_script): @@ -1064,27 +1170,40 @@ def mock_detect(old_config, new_config, branching_config=None): assert ( branching_config["ignore_code_changes"] is mock_ignore_code_changes ) - branching_config["ignore_code_changes"] = False + # branching_config["ignore_code_changes"] = False return detect(old_config, new_config, branching_config) monkeypatch.setattr( orion.core.evc.conflicts.CodeConflict, "detect", mock_detect ) - assert orion.core.cli.main(command.split(" ")) == 0 - self._check_consumer({"ignore_code_changes": ignore_code_changes}) - experiment = get_experiment(name, version=version + 1) - assert experiment.version == version + 1 - assert experiment.refers["adapter"].configuration[0] == { - "of_type": "codechange", - "change_type": change_type, - } + experiment = get_experiment(name) + + assert orion.core.cli.main(command.split(" ")) == 0 + self._check_consumer( + { + "ignore_code_changes": ( + (enable_evc and ignore_code_changes) or not enable_evc + ) + } + ) - monkeypatch.undo() + new_experiment = get_experiment(name) + if enable_evc and not ignore_code_changes: + assert new_experiment.version == experiment.version + 1 + assert new_experiment.refers["adapter"].configuration[0] == { + "of_type": "codechange", + "change_type": change_type, + } + elif enable_evc: # But code change ignored, so no branching event. 
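+            # Unlike the `else` case below where the EVC is disabled entirely, the EVC
+            # is active here; the version stays the same only because code changes are
+            # explicitly ignored during conflict detection.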
+ assert new_experiment.version == experiment.version + else: + assert new_experiment.version == experiment.version + + @with_storage_fork + def _check_script_config_change(self, tmp_path, name, command, change_type): - def _check_script_config_change( - self, tmp_path, name, command, version, change_type - ): + experiment = get_experiment(name) # Test that config change is handled with 'break' with self.setup_user_script_config(tmp_path) as user_script_config: @@ -1092,11 +1211,11 @@ def _check_script_config_change( command += f" --config {user_script_config}" assert orion.core.cli.main(command.split(" ")) == 0 - experiment = get_experiment(name, version=version + 1) - assert experiment.version == version + 1 - print(experiment.refers["adapter"].configuration) - assert len(experiment.refers["adapter"].configuration) == 2 - assert experiment.refers["adapter"].configuration[1] == { + new_experiment = get_experiment(name) + + assert new_experiment.version == experiment.version + 1 + assert len(new_experiment.refers["adapter"].configuration) == 2 + assert new_experiment.refers["adapter"].configuration[1] == { "of_type": "scriptconfigchange", "change_type": change_type, } diff --git a/tests/functional/core/worker/test_experiment_functional.py b/tests/functional/core/worker/test_experiment_functional.py new file mode 100644 index 000000000..bd20b8b77 --- /dev/null +++ b/tests/functional/core/worker/test_experiment_functional.py @@ -0,0 +1,165 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- +"""Collection of functional tests for :mod:`orion.core.worker.experiment`.""" +import logging + +from orion.client import build_experiment, get_experiment +from orion.core.io.database import DuplicateKeyError +from orion.core.worker.trial import Trial +from orion.testing import mocked_datetime +from orion.testing.evc import ( + build_child_experiment, + build_grand_child_experiment, + build_root_experiment, + disable_duplication, +) + +SPACE = {"x": "uniform(0, 100)"} +N_PENDING = 3 # new, interrupted and suspended + + +def generate_trials_list(level, stati=Trial.allowed_stati): + return [ + {"status": trial_status, "x": i + len(stati) * level} + for i, trial_status in enumerate(stati) + ] + + +status = [] + + +def build_evc_tree(levels): + build_root_experiment(space=SPACE, trials=generate_trials_list(levels[0])) + names = ["root", "parent", "experiment", "child", "grand-child"] + for level, (parent, name) in zip(levels[1:], zip(names, names[1:])): + build_child_experiment( + space=SPACE, name=name, parent=parent, trials=generate_trials_list(level) + ) + + +def test_duplicate_pending_trials(storage, monkeypatch): + """Test that only pending trials are duplicated""" + with disable_duplication(monkeypatch): + build_evc_tree(list(range(5))) + + for exp in ["root", "parent", "experiment", "child", "grand-child"]: + assert len(get_experiment(name=exp).fetch_trials(with_evc_tree=False)) == len( + Trial.allowed_stati + ) + + experiment = build_experiment(name="experiment") + experiment._experiment.duplicate_pending_trials() + + for exp in ["root", "parent", "child", "grand-child"]: + assert len(get_experiment(name=exp).fetch_trials(with_evc_tree=False)) == len( + Trial.allowed_stati + ) + + assert ( + len(experiment.fetch_trials(with_evc_tree=False)) + == len(Trial.allowed_stati) + N_PENDING * 4 + ) + + +def test_duplicate_closest_duplicated_pending_trials(storage, monkeypatch): + """Test that only closest duplicated pending trials are duplicated""" + with disable_duplication(monkeypatch): + build_evc_tree([0, 
0, 1, 2, 2]) + + for exp in ["root", "parent", "experiment", "child", "grand-child"]: + assert len(get_experiment(name=exp).fetch_trials(with_evc_tree=False)) == len( + Trial.allowed_stati + ) + + experiment = build_experiment(name="experiment") + experiment._experiment.duplicate_pending_trials() + + for exp in ["root", "parent", "child", "grand-child"]: + assert len(get_experiment(name=exp).fetch_trials(with_evc_tree=False)) == len( + Trial.allowed_stati + ) + + assert ( + len(experiment.fetch_trials(with_evc_tree=False)) + == len(Trial.allowed_stati) + N_PENDING * 2 + ) + + +def test_duplicate_only_once(storage, monkeypatch): + """Test that trials may not be duplicated twice""" + with disable_duplication(monkeypatch): + build_evc_tree(list(range(5))) + + for exp in ["root", "parent", "experiment", "child", "grand-child"]: + assert len(get_experiment(name=exp).fetch_trials(with_evc_tree=False)) == len( + Trial.allowed_stati + ) + + experiment = build_experiment(name="experiment") + experiment._experiment.duplicate_pending_trials() + + for exp in ["root", "parent", "child", "grand-child"]: + assert len(get_experiment(name=exp).fetch_trials(with_evc_tree=False)) == len( + Trial.allowed_stati + ) + + assert ( + len(experiment.fetch_trials(with_evc_tree=False)) + == len(Trial.allowed_stati) + N_PENDING * 4 + ) + + experiment._experiment.duplicate_pending_trials() + + for exp in ["root", "parent", "child", "grand-child"]: + assert len(get_experiment(name=exp).fetch_trials(with_evc_tree=False)) == len( + Trial.allowed_stati + ) + + assert ( + len(experiment.fetch_trials(with_evc_tree=False)) + == len(Trial.allowed_stati) + N_PENDING * 4 + ) + + +def test_duplicate_race_conditions(storage, monkeypatch, caplog): + """Test that duplication does not raise an error during race conditions.""" + with disable_duplication(monkeypatch): + build_evc_tree(list(range(2))) + + experiment = build_experiment(name="parent") + + def register_race_condition(trial): + raise DuplicateKeyError("Race condition!") + + monkeypatch.setattr( + experiment._experiment._storage, "register_trial", register_race_condition + ) + + assert len(experiment.fetch_trials(with_evc_tree=False)) == len(Trial.allowed_stati) + + with caplog.at_level(logging.DEBUG): + experiment._experiment.duplicate_pending_trials() + + assert "Race condition while trying to duplicate trial" in caplog.text + + +def test_fix_lost_trials_in_evc(storage, monkeypatch): + """Test that lost trials from parents can be fixed as well. + + `fix_lost_trials` is tested more carefully in experiment's unit-tests (without the EVC). 
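+    Here we only verify that the single reserved trial of each node in the EVC tree is
+    no longer reserved after `fix_lost_trials` is called on the middle experiment.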
+ """ + with disable_duplication(monkeypatch), mocked_datetime(monkeypatch): + build_evc_tree(list(range(5))) + + for exp_name in ["root", "parent", "experiment", "child", "grand-child"]: + exp = get_experiment(name=exp_name) + assert len(exp.fetch_trials(with_evc_tree=False)) == len(Trial.allowed_stati) + assert len(exp.fetch_trials_by_status("reserved", with_evc_tree=False)) == 1 + + experiment = build_experiment(name="experiment") + experiment._experiment.fix_lost_trials() + + for exp_name in ["root", "parent", "experiment", "child", "grand-child"]: + exp = get_experiment(name=exp_name) + assert len(exp.fetch_trials(with_evc_tree=False)) == len(Trial.allowed_stati) + assert len(exp.fetch_trials_by_status("reserved", with_evc_tree=False)) == 0 diff --git a/tests/functional/demo/test_demo.py b/tests/functional/demo/test_demo.py index 88a45be30..f9ffed13e 100644 --- a/tests/functional/demo/test_demo.py +++ b/tests/functional/demo/test_demo.py @@ -355,7 +355,7 @@ def test_workon(): assert len(params) == 1 px = params["/x"] assert isinstance(px, float) - assert (px - 34.56789) < 5 + assert (px - 34.56789) < 20 def test_stress_unique_folder_creation(storage, monkeypatch, tmpdir, capfd): diff --git a/tests/functional/serving/test_trials_resource.py b/tests/functional/serving/test_trials_resource.py index a7a4f0eaa..d64b35d59 100644 --- a/tests/functional/serving/test_trials_resource.py +++ b/tests/functional/serving/test_trials_resource.py @@ -55,11 +55,12 @@ def add_experiment(**kwargs): """Adds experiment to the dummy orion instance""" base_experiment.update(copy.deepcopy(kwargs)) experiment_builder.build( - branching=dict(branch_from=base_experiment["name"]), **base_experiment + branching=dict(branch_from=base_experiment["name"], enable=True), + **base_experiment ) -def add_trial(experiment: int, status: str = None, **kwargs): +def add_trial(experiment: int, status: str = None, value=10, **kwargs): """ Add trials to the dummy orion instance @@ -79,6 +80,7 @@ def add_trial(experiment: int, status: str = None, **kwargs): kwargs["status"] = status base_trial.update(copy.deepcopy(kwargs)) + base_trial["params"][0]["value"] = value get_storage().register_trial(Trial(**base_trial)) @@ -180,9 +182,10 @@ def test_trials_for_all_versions(self, client): add_experiment(name="a", version=2, _id=2) add_experiment(name="a", version=3, _id=3) - add_trial(experiment=1, id_override="00") - add_trial(experiment=2, id_override="01") - add_trial(experiment=3, id_override="02") + # Specify values to avoid duplicates + add_trial(experiment=1, id_override="00", value=1) + add_trial(experiment=2, id_override="01", value=2) + add_trial(experiment=3, id_override="02", value=3) # Happy case default response = client.simulate_get("/trials/a?ancestors=true") @@ -255,10 +258,10 @@ def test_trials_by_from_specific_version_by_status_with_ancestors(self, client): add_experiment(name="a", version=2, _id=3) add_experiment(name="a", version=3, _id=4) - add_trial(experiment=1, id_override="00", status="completed") - add_trial(experiment=3, id_override="01", status="broken") - add_trial(experiment=3, id_override="02", status="completed") - add_trial(experiment=2, id_override="03", status="completed") + add_trial(experiment=1, id_override="00", value=1, status="completed") + add_trial(experiment=3, id_override="01", value=2, status="broken") + add_trial(experiment=3, id_override="02", value=3, status="completed") + add_trial(experiment=2, id_override="03", value=4, status="completed") response = client.simulate_get( 
"/trials/a?ancestors=true&version=2&status=completed" diff --git a/tests/unittests/algo/test_space.py b/tests/unittests/algo/test_space.py index 27c70f828..d5749eb9f 100644 --- a/tests/unittests/algo/test_space.py +++ b/tests/unittests/algo/test_space.py @@ -317,6 +317,50 @@ def test_cast_array(self): dim = Real("yolo", "uniform", -3, 4) assert np.all(dim.cast(np.array(["1", "2"])) == np.array([1.0, 2.0])) + def test_basic_cardinality(self): + """Brute force test for a simple cardinality use case""" + dim = Real("yolo", "reciprocal", 0.043, 2.3, precision=2) + order_0012 = np.arange(43, 99 + 1) + order_010 = np.arange(10, 99 + 1) + order_23 = np.arange(10, 23 + 1) + assert dim.cardinality == sum(map(len, [order_0012, order_010, order_23])) + + @pytest.mark.parametrize( + "prior_name,min_bound,max_bound,precision,cardinality", + [ + ("uniform", 0, 10, 2, np.inf), + ("reciprocal", 1e-10, 1e-2, None, np.inf), + ("reciprocal", 0.1, 1, 2, 90 + 1), + ("reciprocal", 0.1, 1.2, 2, 90 + 2 + 1), + ("reciprocal", 0.1, 1.25, 2, 90 + 2 + 1), + ("reciprocal", 1e-4, 1e-2, 2, 90 * 2 + 1), + ("reciprocal", 1e-5, 1e-2, 2, 90 + 90 * 2 + 1), + ("reciprocal", 5.234e-3, 1.5908e-2, 2, (90 - 52) + 15 + 1), + ("reciprocal", 5.234e-3, 1.5908e-2, 4, (9 * 10 ** 3 - 5234) + 1590 + 1), + ( + "reciprocal", + 5.234e-5, + 1.5908e-2, + 4, + (9 * 10 ** 3 * 3 - 5234) + 1590 + 1, + ), + ("uniform", 1e-5, 1e-2, 2, np.inf), + ("uniform", -3, 4, 3, np.inf), + ], + ) + def test_cardinality( + self, prior_name, min_bound, max_bound, precision, cardinality + ): + """Check whether cardinality is correct""" + dim = Real( + "yolo", prior_name, min_bound, max_bound, precision=precision, shape=None + ) + assert dim.cardinality == cardinality + dim = Real( + "yolo", prior_name, min_bound, max_bound, precision=precision, shape=(2, 3) + ) + assert dim.cardinality == cardinality ** (2 * 3) + class TestInteger(object): """Test methods of a `Integer` object.""" diff --git a/tests/unittests/algo/test_tpe.py b/tests/unittests/algo/test_tpe.py index 8437aadae..ebefb5b63 100644 --- a/tests/unittests/algo/test_tpe.py +++ b/tests/unittests/algo/test_tpe.py @@ -755,11 +755,6 @@ def sample(self, num): assert exc.match(f"Failed to sample in interval \({low}, {high}\)") - def test_int_data(self, mocker, num, attr): - if num > 0: - pytest.skip("See https://github.com/Epistimio/orion/issues/600") - super(TestTPE, self).test_int_data(mocker, num, attr) - def test_is_done_cardinality(self): # TODO: Support correctly loguniform(discrete=True) # See https://github.com/Epistimio/orion/issues/566 diff --git a/tests/unittests/benchmark/test_benchmark.py b/tests/unittests/benchmark/test_benchmark.py index 7cbcba4bb..517576a1f 100644 --- a/tests/unittests/benchmark/test_benchmark.py +++ b/tests/unittests/benchmark/test_benchmark.py @@ -233,6 +233,8 @@ def test_execute(self, study): name = "benchmark007_AverageResult_RosenBrock_0_0" experiment = experiment_builder.build(name) + assert len(experiment.fetch_trials()) == study.task.max_trials + assert experiment is not None @pytest.mark.usefixtures("version_XYZ") diff --git a/tests/unittests/client/test_client.py b/tests/unittests/client/test_client.py index 29bd0ce27..e940cfda6 100644 --- a/tests/unittests/client/test_client.py +++ b/tests/unittests/client/test_client.py @@ -266,7 +266,9 @@ def test_create_experiment_hit_branch(self): """Test creating a differing experiment that cause branching.""" with OrionState(experiments=[config]): experiment = create_experiment( - config["name"], space={"y": "uniform(0, 10)"} + 
config["name"], + space={"y": "uniform(0, 10)"}, + branching={"enable": True}, ) assert experiment.name == config["name"] @@ -289,7 +291,11 @@ def test_create_experiment_race_condition(self, monkeypatch): """ with OrionState(experiments=[config]): parent = create_experiment(config["name"]) - child = create_experiment(config["name"], space={"y": "uniform(0, 10)"}) + child = create_experiment( + config["name"], + space={"y": "uniform(0, 10)"}, + branching={"enable": True}, + ) def insert_race_condition(self, query): is_auto_version_query = query == { @@ -315,7 +321,9 @@ def insert_race_condition(self, query): ) experiment = create_experiment( - config["name"], space={"y": "uniform(0, 10)"} + config["name"], + space={"y": "uniform(0, 10)"}, + branching={"enable": True}, ) assert insert_race_condition.count == 1 @@ -326,7 +334,11 @@ def test_create_experiment_race_condition_broken(self, monkeypatch): """Test that two or more race condition leads to raise""" with OrionState(experiments=[config]): parent = create_experiment(config["name"]) - child = create_experiment(config["name"], space={"y": "uniform(0, 10)"}) + child = create_experiment( + config["name"], + space={"y": "uniform(0, 10)"}, + branching={"enable": True}, + ) def insert_race_condition(self, query): is_auto_version_query = query == { @@ -350,7 +362,11 @@ def insert_race_condition(self, query): ) with pytest.raises(RaceCondition) as exc: - create_experiment(config["name"], space={"y": "uniform(0, 10)"}) + create_experiment( + config["name"], + space={"y": "uniform(0, 10)"}, + branching={"enable": True}, + ) assert insert_race_condition.count == 2 assert "There was a race condition during branching and new version" in str( @@ -361,10 +377,17 @@ def test_create_experiment_hit_manual_branch(self): """Test creating a differing experiment that cause branching.""" new_space = {"y": "uniform(0, 10)"} with OrionState(experiments=[config]): - create_experiment(config["name"], space=new_space) + create_experiment( + config["name"], space=new_space, branching={"enable": True} + ) with pytest.raises(BranchingEvent) as exc: - create_experiment(config["name"], version=1, space=new_space) + create_experiment( + config["name"], + version=1, + space=new_space, + branching={"enable": True}, + ) assert "Configuration is different and generates" in str(exc.value) diff --git a/tests/unittests/client/test_experiment_client.py b/tests/unittests/client/test_experiment_client.py index e502a7f49..c518f6f25 100644 --- a/tests/unittests/client/test_experiment_client.py +++ b/tests/unittests/client/test_experiment_client.py @@ -104,6 +104,12 @@ def test_experiment_fetch_trials_by_status(): ) +def test_experiment_fetch_pending_trials(): + """Test compliance of client and experiment `fetch_pending_trials()`""" + with create_experiment(config, base_trial) as (cfg, experiment, client): + compare_trials(experiment.fetch_pending_trials(), client.fetch_pending_trials()) + + def test_experiment_fetch_non_completed_trials(): """Test compliance of client and experiment `fetch_noncompleted_trials()`""" with create_experiment(config, base_trial) as (cfg, experiment, client): diff --git a/tests/unittests/core/conftest.py b/tests/unittests/core/conftest.py index e0411d868..cc419a108 100644 --- a/tests/unittests/core/conftest.py +++ b/tests/unittests/core/conftest.py @@ -15,7 +15,7 @@ from orion.core.io.convert import JSONConverter, YAMLConverter from orion.core.io.space_builder import DimensionBuilder from orion.core.worker.trial import Trial -from orion.testing import 
MockDatetime, default_datetime +from orion.testing import MockDatetime TEST_DIR = os.path.dirname(os.path.abspath(__file__)) YAML_SAMPLE = os.path.join(TEST_DIR, "sample_config.yml") @@ -149,13 +149,6 @@ def with_user_dendi(monkeypatch): monkeypatch.setattr(getpass, "getuser", lambda: "dendi") -@pytest.fixture() -def random_dt(monkeypatch): - """Make ``datetime.datetime.utcnow()`` return an arbitrary date.""" - monkeypatch.setattr(datetime, "datetime", MockDatetime) - return default_datetime() - - dendi_exp_config = dict( name="supernaedo2-dendi", space={ diff --git a/tests/unittests/core/evc/test_experiment_tree.py b/tests/unittests/core/evc/test_experiment_tree.py new file mode 100644 index 000000000..4c2b11b69 --- /dev/null +++ b/tests/unittests/core/evc/test_experiment_tree.py @@ -0,0 +1,551 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- +"""Collection of tests for :mod:`orion.core.evc.experiment`.""" + +import pytest + +from orion.client import build_experiment, get_experiment +from orion.core.evc.adapters import Adapter, CodeChange +from orion.core.evc.experiment import ExperimentNode +from orion.testing.evc import ( + build_child_experiment, + build_grand_child_experiment, + build_root_experiment, + disable_duplication, +) + +ROOT_SPACE_WITH_DEFAULTS = { + "x": "uniform(0, 100, default_value=0)", + "y": "uniform(0, 100, default_value=2)", + "z": "uniform(0, 100, default_value=4)", +} + +CHILD_SPACE_WITH_DEFAULTS = { + "x": "uniform(0, 100, default_value=0)", + "y": "uniform(0, 100, default_value=2)", +} + +GRAND_CHILD_SPACE_WITH_DEFAULTS = { + "x": "uniform(0, 100, default_value=0)", +} + + +CHILD_SPACE_DELETION = { + "x": "uniform(0, 100)", + "y": "uniform(0, 100)", +} + +GRAND_CHILD_SPACE_DELETION = { + "x": "uniform(0, 100)", +} + + +CHILD_SPACE_PRIOR_CHANGE = { + "x": "uniform(0, 8)", + "y": "uniform(0, 8)", + "z": "uniform(0, 8)", +} + +GRAND_CHILD_SPACE_PRIOR_CHANGE = { + "x": "uniform(0, 3)", + "y": "uniform(0, 3)", + "z": "uniform(0, 3)", +} + + +CHILD_TRIALS_DUPLICATES = [{"x": i, "y": i * 2, "z": i ** 2} for i in range(2, 8)] + +GRAND_CHILD_TRIALS_DUPLICATES = [ + {"x": i, "y": i * 2, "z": i ** 2} for i in list(range(1, 4)) + list(range(8, 10)) +] + + +CHILD_TRIALS_DELETION = [{"x": i, "y": i * 2} for i in range(4, 10)] + +GRAND_CHILD_TRIALS_DELETION = [{"x": i} for i in range(10, 15)] + + +CHILD_TRIALS_PRIOR_CHANGE = [{"x": i, "y": i / 2, "z": i / 4} for i in range(1, 8)] + +GRAND_CHILD_TRIALS_PRIOR_CHANGE = [ + {"x": i * 2 / 10, "y": i / 10, "z": i / 20} for i in range(1, 10) +] + + +def generic_tree_test( + experiment_name, + parent_name=None, + grand_parent_name=None, + children_names=tuple(), + grand_children_names=tuple(), + node_trials=0, + parent_trials=0, + grand_parent_trials=0, + children_trials=tuple(), + grand_children_trials=tuple(), + total_trials=0, +): + """Test fetching of trials from experiments in the EVC tree. + + Parameters + ---------- + experiment_name: str + The name of the experiment that will be the main node for the tests. + parent_name: str or None + The name of the parent experiment, this will be used to fetch the trials from the parent + experiment directly (not in EVC) for comparison. + grand_parent_name: str or None + The name of the grand parent experiment, this will be used to fetch the trials from the + grand parent experiment directly (not in EVC) for comparison. 
+    children_names: list of str
+        The names of the children experiments, this will be used to fetch the trials from the
+        children experiments directly (not in EVC) for comparison.
+    grand_children_names: list of str
+        The names of the grand children experiments, this will be used to fetch the trials from the
+        grand children experiments directly (not in EVC) for comparison. All grand children names
+        may be included in the list even though they are associated with different children.
+    node_trials: int
+        The number of trials that should be fetched from the current node experiment.
+    parent_trials: int
+        The number of trials that should be fetched from the parent experiment (not using EVC tree).
+    grand_parent_trials: int
+        The number of trials that should be fetched from the grand parent experiment
+        (not using EVC tree).
+    children_trials: list of int
+        The number of trials that should be fetched from each child experiment (not using EVC tree).
+    grand_children_trials: list of int
+        The number of trials that should be fetched from each grand child experiment
+        (not using EVC tree).
+    total_trials: int
+        The number of trials that should be fetched from the current node experiment when fetching
+        recursively from the EVC tree. This may not be equal to the sum of all trials in parent and
+        children experiments depending on the adapters.
+
+    """
+
+    experiment = get_experiment(experiment_name)
+    exp_node = experiment.node
+
+    assert exp_node.item.name == experiment_name
+
+    num_nodes = 1
+
+    if parent_name:
+        assert exp_node.parent.item.name == parent_name
+        num_nodes += 1
+    if grand_parent_name:
+        assert exp_node.parent.parent.item.name == grand_parent_name
+        num_nodes += 1
+
+    assert len(exp_node.children) == len(children_names)
+    if children_names:
+        assert [child.item.name for child in exp_node.children] == children_names
+        num_nodes += len(children_names)
+
+    if grand_children_names:
+        grand_children = sum([child.children for child in exp_node.children], [])
+        assert [child.item.name for child in grand_children] == grand_children_names
+        num_nodes += len(grand_children_names)
+
+    assert len(list(exp_node.root)) == num_nodes
+
+    assert len(experiment.fetch_trials()) == node_trials
+    if parent_name:
+        assert len(exp_node.parent.item.fetch_trials()) == parent_trials
+    if grand_parent_name:
+        assert len(exp_node.parent.parent.item.fetch_trials()) == grand_parent_trials
+
+    if children_names:
+        assert [
+            len(child.item.fetch_trials()) for child in exp_node.children
+        ] == children_trials
+
+    if grand_children_names:
+        grand_children = sum([child.children for child in exp_node.children], [])
+        assert [
+            len(child.item.fetch_trials()) for child in grand_children
+        ] == grand_children_trials
+
+    for trial in experiment.fetch_trials(with_evc_tree=True):
+        print(
+            trial,
+            trial.compute_trial_hash(trial, ignore_lie=True, ignore_experiment=True),
+        )
+
+    assert len(experiment.fetch_trials(with_evc_tree=True)) == total_trials
+
+    all_ids = [trial.id for trial in experiment.fetch_trials(with_evc_tree=True)]
+    exp_ids = [trial.id for trial in experiment.fetch_trials(with_evc_tree=False)]
+
+    # Ensure that all trials of the experiment itself are fetched when fetching from the EVC tree.
+    # It could happen that some trials are missing if duplicates are incorrectly filtered out
+    # from the current node instead of from the parent or child.
+ assert set(exp_ids) - set(all_ids) == set() + + +parametrization = { + "no-adapter-parent": ( + {}, + {}, + None, + dict( + experiment_name="child", + parent_name="root", + node_trials=6, + parent_trials=4, + total_trials=10, + ), + ), + "no-adapter-children": ( + {}, + {}, + None, + dict( + experiment_name="root", + children_names=["child"], + node_trials=4, + children_trials=[6], + total_trials=10, + ), + ), + "no-adapter-parent-children": ( + {}, + {}, + {}, + dict( + experiment_name="child", + parent_name="root", + children_names=["grand-child"], + node_trials=6, + parent_trials=4, + children_trials=[5], + total_trials=15, + ), + ), + "no-adapter-parent-parent": ( + {}, + {}, + {}, + dict( + experiment_name="grand-child", + parent_name="child", + grand_parent_name="root", + node_trials=5, + parent_trials=6, + grand_parent_trials=4, + total_trials=15, + ), + ), + "no-adapter-children-children": ( + {}, + {}, + {}, + dict( + experiment_name="root", + children_names=["child"], + grand_children_names=["grand-child"], + node_trials=4, + children_trials=[6], + grand_children_trials=[5], + total_trials=15, + ), + ), + "duplicates-parent": ( + {}, + dict(trials=CHILD_TRIALS_DUPLICATES), + None, + dict( + experiment_name="child", + parent_name="root", + node_trials=6, + parent_trials=4, + total_trials=8, + ), + ), + "duplicates-children": ( + {}, + dict(trials=CHILD_TRIALS_DUPLICATES), + None, + dict( + experiment_name="root", + children_names=["child"], + node_trials=4, + children_trials=[6], + total_trials=8, + ), + ), + "duplicates-parent-children": ( + {}, + dict(trials=CHILD_TRIALS_DUPLICATES), + dict(trials=GRAND_CHILD_TRIALS_DUPLICATES), + dict( + experiment_name="child", + parent_name="root", + children_names=["grand-child"], + node_trials=6, + parent_trials=4, + children_trials=[5], + total_trials=6 + + 1 # Only 1 trial from root + + 1 # 1 trial from grand_child with i=1 + + 2, # 2 trials from grand_child with i>=8, + ), + ), + "duplicates-parent-parent": ( + {}, + dict(trials=CHILD_TRIALS_DUPLICATES), + dict(trials=GRAND_CHILD_TRIALS_DUPLICATES), + dict( + experiment_name="grand-child", + parent_name="child", + grand_parent_name="root", + node_trials=5, + parent_trials=6, + grand_parent_trials=4, + total_trials=5 + + 4 # 4 trials from `child` experiment (parent) + + 1, # 1 trial from `root` experiment (grand-parent) + ), + ), + "duplicates-children-children": ( + {}, + dict(trials=CHILD_TRIALS_DUPLICATES), + dict(trials=GRAND_CHILD_TRIALS_DUPLICATES), + dict( + experiment_name="root", + children_names=["child"], + grand_children_names=["grand-child"], + node_trials=4, + children_trials=[6], + grand_children_trials=[5], + total_trials=4 + + 4 # 4 trials from `child` experiment + + 2, # 2 trials from `grand-child` experiment + ), + ), + "deletion-with-default-forward": ( + dict(space=ROOT_SPACE_WITH_DEFAULTS), + dict(space=CHILD_SPACE_WITH_DEFAULTS), + None, + dict( + experiment_name="child", + parent_name="root", + node_trials=9, + parent_trials=4, + total_trials=10, + ), + ), + "deletion-with-default-backward": ( + dict(space=ROOT_SPACE_WITH_DEFAULTS), + dict(space=CHILD_SPACE_WITH_DEFAULTS), + None, + dict( + experiment_name="root", + children_names=["child"], + node_trials=4, + children_trials=[9], + total_trials=13, + ), + ), + "deletion-without-default-forward": ( + dict(), + dict(space=CHILD_SPACE_DELETION), + None, + dict( + experiment_name="child", + parent_name="root", + node_trials=10, + parent_trials=4, + total_trials=10, + ), + ), + "deletion-without-default-backward": ( 
+ dict(), + dict(space=CHILD_SPACE_DELETION), + None, + dict( + experiment_name="root", + children_names=["child"], + node_trials=4, + children_trials=[10], + total_trials=4, + ), + ), + "deletion-with-default-forward-forward": ( + dict(space=ROOT_SPACE_WITH_DEFAULTS), + dict(space=CHILD_SPACE_WITH_DEFAULTS, trials=CHILD_TRIALS_DELETION), + dict(space=GRAND_CHILD_SPACE_WITH_DEFAULTS), + dict( + experiment_name="grand-child", + parent_name="child", + grand_parent_name="root", + node_trials=15, + parent_trials=6, + grand_parent_trials=4, + total_trials=15, + ), + ), + "deletion-with-default-forward-backward": ( + dict(space=ROOT_SPACE_WITH_DEFAULTS), + dict(space=CHILD_SPACE_WITH_DEFAULTS, trials=CHILD_TRIALS_DELETION), + dict(space=GRAND_CHILD_SPACE_WITH_DEFAULTS), + dict( + experiment_name="child", + parent_name="root", + children_names=["grand-child"], + node_trials=6, + parent_trials=4, + children_trials=[15], + total_trials=6 + 1 + 15, + ), + ), + "deletion-with-default-backward-backward": ( + dict(space=ROOT_SPACE_WITH_DEFAULTS), + dict(space=CHILD_SPACE_WITH_DEFAULTS, trials=CHILD_TRIALS_DELETION), + dict(space=GRAND_CHILD_SPACE_WITH_DEFAULTS), + dict( + experiment_name="root", + children_names=["child"], + grand_children_names=["grand-child"], + node_trials=4, + children_trials=[6], + grand_children_trials=[15], + total_trials=4 + 6 + 15, + ), + ), + "deletion-without-default-forward-forward": ( + dict(), + dict(space=CHILD_SPACE_DELETION, trials=CHILD_TRIALS_DELETION), + dict(space=GRAND_CHILD_SPACE_DELETION), + dict( + experiment_name="grand-child", + parent_name="child", + grand_parent_name="root", + node_trials=15, + parent_trials=6, + grand_parent_trials=4, + total_trials=15, + ), + ), + "deletion-without-default-forward-backward": ( + dict(), + dict(space=CHILD_SPACE_DELETION, trials=CHILD_TRIALS_DELETION), + dict(space=GRAND_CHILD_SPACE_DELETION), + dict( + experiment_name="child", + parent_name="root", + children_names=["grand-child"], + node_trials=6, + parent_trials=4, + children_trials=[15], + total_trials=6, + ), + ), + "deletion-without-default-backward-backward": ( + dict(), + dict(space=CHILD_SPACE_DELETION, trials=CHILD_TRIALS_DELETION), + dict(space=GRAND_CHILD_SPACE_DELETION), + dict( + experiment_name="root", + children_names=["child"], + grand_children_names=["grand-child"], + node_trials=4, + children_trials=[6], + grand_children_trials=[15], + total_trials=4, + ), + ), + "prior-change-forward": ( + dict(), + dict(space=CHILD_SPACE_PRIOR_CHANGE, trials=CHILD_TRIALS_PRIOR_CHANGE), + None, + dict( + experiment_name="child", + parent_name="root", + node_trials=len(CHILD_TRIALS_PRIOR_CHANGE), + parent_trials=4, + total_trials=len(CHILD_TRIALS_PRIOR_CHANGE) + 4 - 1, # One is out of bound + ), + ), + "prior-change-backward": ( + dict(), + dict(space=CHILD_SPACE_PRIOR_CHANGE, trials=CHILD_TRIALS_PRIOR_CHANGE), + None, + dict( + experiment_name="root", + children_names=["child"], + node_trials=4, + children_trials=[len(CHILD_TRIALS_PRIOR_CHANGE)], + total_trials=len(CHILD_TRIALS_PRIOR_CHANGE) + 4, # They are all included + ), + ), + "prior-change-forward-forward": ( + dict(), + dict(space=CHILD_SPACE_PRIOR_CHANGE, trials=CHILD_TRIALS_PRIOR_CHANGE), + dict( + space=GRAND_CHILD_SPACE_PRIOR_CHANGE, trials=GRAND_CHILD_TRIALS_PRIOR_CHANGE + ), + dict( + experiment_name="grand-child", + parent_name="child", + grand_parent_name="root", + node_trials=len(GRAND_CHILD_TRIALS_PRIOR_CHANGE), + parent_trials=len(CHILD_TRIALS_PRIOR_CHANGE), + grand_parent_trials=4, + 
total_trials=len(GRAND_CHILD_TRIALS_PRIOR_CHANGE) + + sum(trial["x"] <= 3 for trial in CHILD_TRIALS_PRIOR_CHANGE) + + 2, # Only 2 of root trials are compatible with grand-child space + ), + ), + "prior-change-backward-forward": ( + dict(), + dict(space=CHILD_SPACE_PRIOR_CHANGE, trials=CHILD_TRIALS_PRIOR_CHANGE), + dict( + space=GRAND_CHILD_SPACE_PRIOR_CHANGE, trials=GRAND_CHILD_TRIALS_PRIOR_CHANGE + ), + dict( + experiment_name="child", + parent_name="root", + children_names=["grand-child"], + node_trials=len(CHILD_TRIALS_PRIOR_CHANGE), + parent_trials=4, + children_trials=[len(GRAND_CHILD_TRIALS_PRIOR_CHANGE)], + total_trials=len(GRAND_CHILD_TRIALS_PRIOR_CHANGE) + + len(CHILD_TRIALS_PRIOR_CHANGE) # All trials are compatible + + 3, # Only 3 of root trials are compatible with grand-child space + ), + ), + "prior-change-backward-backward": ( + dict(), + dict(space=CHILD_SPACE_PRIOR_CHANGE, trials=CHILD_TRIALS_PRIOR_CHANGE), + dict( + space=GRAND_CHILD_SPACE_PRIOR_CHANGE, trials=GRAND_CHILD_TRIALS_PRIOR_CHANGE + ), + dict( + experiment_name="root", + children_names=["child"], + grand_children_names=["grand-child"], + node_trials=4, + children_trials=[len(CHILD_TRIALS_PRIOR_CHANGE)], + grand_children_trials=[len(GRAND_CHILD_TRIALS_PRIOR_CHANGE)], + total_trials=len(GRAND_CHILD_TRIALS_PRIOR_CHANGE) + + len(CHILD_TRIALS_PRIOR_CHANGE) + + 4, # All trials are compatible + ), + ), +} + + +@pytest.mark.parametrize( + "root, child, grand_child, test_kwargs", + list(parametrization.values()), + ids=list(parametrization.keys()), +) +def test_evc_fetch_adapters( + monkeypatch, storage, root, child, grand_child, test_kwargs +): + """Test the recursive fetch of trials in the EVC tree.""" + with disable_duplication(monkeypatch): + build_root_experiment(**root) + build_child_experiment(**child) + if grand_child is not None: + build_grand_child_experiment(**grand_child) + generic_tree_test(**test_kwargs) diff --git a/tests/unittests/core/io/interactive_commands/test_branching_prompt.py b/tests/unittests/core/io/interactive_commands/test_branching_prompt.py index e84766065..e76103d18 100644 --- a/tests/unittests/core/io/interactive_commands/test_branching_prompt.py +++ b/tests/unittests/core/io/interactive_commands/test_branching_prompt.py @@ -81,7 +81,7 @@ def conflicts( @pytest.fixture def branch_builder(conflicts): """Generate the experiment branch builder""" - return ExperimentBranchBuilder(conflicts, {"manual_resolution": True}) + return ExperimentBranchBuilder(conflicts, manual_resolution=True) @pytest.fixture diff --git a/tests/unittests/core/io/test_experiment_builder.py b/tests/unittests/core/io/test_experiment_builder.py index 2f863cf4c..3af8ddcc3 100644 --- a/tests/unittests/core/io/test_experiment_builder.py +++ b/tests/unittests/core/io/test_experiment_builder.py @@ -3,6 +3,7 @@ """Example usage and tests for :mod:`orion.core.io.experiment_builder`.""" import copy import datetime +import logging import pytest @@ -322,20 +323,20 @@ def test_build_from_args_no_hit(config_file, random_dt, script_path, new_config) exp = experiment_builder.build_from_args(cmdargs) - assert exp.name == cmdargs["name"] - assert exp.configuration["refers"] == { - "adapter": [], - "parent_id": None, - "root_id": exp._id, - } - assert exp.metadata["datetime"] == random_dt - assert exp.metadata["user"] == "dendi" - assert exp.metadata["user_script"] == cmdargs["user_args"][0] - assert exp.metadata["user_args"] == cmdargs["user_args"] - assert exp.pool_size == 1 - assert exp.max_trials == 100 - assert exp.max_broken == 5 - 
assert exp.algorithms.configuration == {"random": {"seed": None}} + assert exp.name == cmdargs["name"] + assert exp.configuration["refers"] == { + "adapter": [], + "parent_id": None, + "root_id": exp._id, + } + assert exp.metadata["datetime"] == random_dt + assert exp.metadata["user"] == "dendi" + assert exp.metadata["user_script"] == cmdargs["user_args"][0] + assert exp.metadata["user_args"] == cmdargs["user_args"] + assert exp.pool_size == 1 + assert exp.max_trials == 100 + assert exp.max_broken == 5 + assert exp.algorithms.configuration == {"random": {"seed": None}} @pytest.mark.usefixtures( @@ -453,22 +454,22 @@ def test_build_no_hit(config_file, random_dt, script_path): name, space=space, max_trials=max_trials, max_broken=max_broken ) - assert exp.name == name - assert exp.configuration["refers"] == { - "adapter": [], - "parent_id": None, - "root_id": exp._id, - } - assert exp.metadata == { - "datetime": random_dt, - "user": "tsirif", - "orion_version": "XYZ", - } - assert exp.configuration["space"] == space - assert exp.max_trials == max_trials - assert exp.max_broken == max_broken - assert not exp.is_done - assert exp.algorithms.configuration == {"random": {"seed": None}} + assert exp.name == name + assert exp.configuration["refers"] == { + "adapter": [], + "parent_id": None, + "root_id": exp._id, + } + assert exp.metadata == { + "datetime": random_dt, + "user": "tsirif", + "orion_version": "XYZ", + } + assert exp.configuration["space"] == space + assert exp.max_trials == max_trials + assert exp.max_broken == max_broken + assert not exp.is_done + assert exp.algorithms.configuration == {"random": {"seed": None}} def test_build_no_commandline_config(): @@ -546,7 +547,9 @@ def test_build_from_args_without_cmd(old_config_file, script_path, new_config): assert exp.algorithms.configuration == new_config["algorithms"] -@pytest.mark.usefixtures("with_user_tsirif", "version_XYZ") +@pytest.mark.usefixtures( + "with_user_tsirif", "version_XYZ", "mock_infer_versioning_metadata" +) class TestExperimentVersioning(object): """Create new Experiment with auto-versioning.""" @@ -566,12 +569,46 @@ def test_new_experiment_w_version(self, space): assert exp.version == 1 + def test_experiment_overwritten_evc_disabled(self, parent_version_config, caplog): + """Build an existing experiment with different config, overwritting previous config.""" + parent_version_config.pop("version") + with OrionState(experiments=[parent_version_config]): + + with caplog.at_level(logging.WARNING): + + exp = experiment_builder.build(name=parent_version_config["name"]) + assert "Running experiment in a different state" not in caplog.text + + assert exp.version == 1 + assert exp.configuration["algorithms"] == {"random": {"seed": None}} + + with caplog.at_level(logging.WARNING): + + exp = experiment_builder.build( + name=parent_version_config["name"], algorithms="gradient_descent" + ) + assert "Running experiment in a different state" in caplog.text + + assert exp.version == 1 + assert list(exp.configuration["algorithms"].keys())[0] == "gradient_descent" + + caplog.clear() + with caplog.at_level(logging.WARNING): + + exp = experiment_builder.load(name=parent_version_config["name"]) + assert "Running experiment in a different state" not in caplog.text + + assert exp.version == 1 + assert list(exp.configuration["algorithms"].keys())[0] == "gradient_descent" + def test_backward_compatibility_no_version(self, parent_version_config): """Branch from parent that has no version field.""" parent_version_config.pop("version") with 
OrionState(experiments=[parent_version_config]): exp = experiment_builder.build( - name=parent_version_config["name"], space={"y": "uniform(0, 10)"} + name=parent_version_config["name"], + space={"y": "uniform(0, 10)"}, + branching={"enable": True}, ) assert exp.version == 2 @@ -854,7 +891,7 @@ def test_new_child_with_branch(self): child_name = "child" child = experiment_builder.build( - name=name, branching={"branch_to": child_name} + name=name, branching={"branch_to": child_name, "enable": True} ) assert child.name == child_name @@ -864,7 +901,7 @@ def test_new_child_with_branch(self): child_name = "child2" child = experiment_builder.build( - name=child_name, branching={"branch_from": name} + name=child_name, branching={"branch_from": name, "enable": True} ) assert child.name == child_name @@ -878,14 +915,19 @@ def test_no_increment_when_child_exist(self): with OrionState(experiments=[], trials=[]): parent = experiment_builder.build(name=name, space=space) - child = experiment_builder.build(name=name, space={"x": "loguniform(1,10)"}) + child = experiment_builder.build( + name=name, space={"x": "loguniform(1,10)"}, branching={"enable": True} + ) assert child.name == parent.name assert parent.version == 1 assert child.version == 2 with pytest.raises(BranchingEvent) as exc_info: experiment_builder.build( - name=name, version=1, space={"x": "loguniform(1,10)"} + name=name, + version=1, + space={"x": "loguniform(1,10)"}, + branching={"enable": True}, ) assert "Configuration is different and generates a branching" in str( exc_info.value @@ -900,7 +942,9 @@ def test_race_condition_wout_version(self, monkeypatch): with OrionState(experiments=[], trials=[]): parent = experiment_builder.build(name, space=space) - child = experiment_builder.build(name=name, space={"x": "loguniform(1,10)"}) + child = experiment_builder.build( + name=name, space={"x": "loguniform(1,10)"}, branching={"enable": True} + ) assert child.name == parent.name assert parent.version == 1 assert child.version == 2 @@ -939,7 +983,11 @@ def insert_race_condition_1(self, query): ) with pytest.raises(RaceCondition) as exc_info: - experiment_builder.build(name=name, space={"x": "loguniform(1,10)"}) + experiment_builder.build( + name=name, + space={"x": "loguniform(1,10)"}, + branching={"enable": True}, + ) assert "There was likely a race condition during version" in str( exc_info.value ) @@ -968,7 +1016,11 @@ def insert_race_condition_2(self, query): ) with pytest.raises(RaceCondition) as exc_info: - experiment_builder.build(name=name, space={"x": "loguniform(1,10)"}) + experiment_builder.build( + name=name, + space={"x": "loguniform(1,10)"}, + branching={"enable": True}, + ) assert "There was a race condition during branching." 
in str(exc_info.value) def test_race_condition_w_version(self, monkeypatch): @@ -985,7 +1037,9 @@ def test_race_condition_w_version(self, monkeypatch): with OrionState(experiments=[], trials=[]): parent = experiment_builder.build(name, space=space) - child = experiment_builder.build(name=name, space={"x": "loguniform(1,10)"}) + child = experiment_builder.build( + name=name, space={"x": "loguniform(1,10)"}, branching={"enable": True} + ) assert child.name == parent.name assert parent.version == 1 assert child.version == 2 @@ -1025,7 +1079,10 @@ def insert_race_condition_1(self, query): with pytest.raises(BranchingEvent) as exc_info: experiment_builder.build( - name=name, version=1, space={"x": "loguniform(1,10)"} + name=name, + version=1, + space={"x": "loguniform(1,10)"}, + branching={"enable": True}, ) assert "Configuration is different and generates" in str(exc_info.value) @@ -1054,7 +1111,10 @@ def insert_race_condition_2(self, query): with pytest.raises(RaceCondition) as exc_info: experiment_builder.build( - name=name, version=1, space={"x": "loguniform(1,10)"} + name=name, + version=1, + space={"x": "loguniform(1,10)"}, + branching={"enable": True}, ) assert "There was a race condition during branching." in str(exc_info.value) diff --git a/tests/unittests/core/io/test_resolve_config.py b/tests/unittests/core/io/test_resolve_config.py index 754df42cf..fd31cca2f 100644 --- a/tests/unittests/core/io/test_resolve_config.py +++ b/tests/unittests/core/io/test_resolve_config.py @@ -289,6 +289,7 @@ def mocked_config(file_object): # Test evc subconfig evc_config = config.pop("evc") + assert evc_config.pop("enable") is orion.core.config.evc.enable assert evc_config.pop("auto_resolution") == orion.core.config.evc.auto_resolution assert ( evc_config.pop("manual_resolution") == orion.core.config.evc.manual_resolution diff --git a/tests/unittests/core/test_strategy.py b/tests/unittests/core/test_strategy.py index d679a12e8..609815fd3 100644 --- a/tests/unittests/core/test_strategy.py +++ b/tests/unittests/core/test_strategy.py @@ -52,6 +52,7 @@ def corrupted_trial(): def test_handle_corrupted_trials(caplog, strategy, corrupted_trial): """Verify that corrupted trials are handled properly""" with caplog.at_level(logging.WARNING, logger="orion.core.worker.strategy"): + Strategy(strategy).observe([corrupted_trial], [{"objective": 1}]) lie = Strategy(strategy).lie(corrupted_trial) match = "Trial `{}` has an objective but status is not completed".format( @@ -64,9 +65,10 @@ def test_handle_corrupted_trials(caplog, strategy, corrupted_trial): @pytest.mark.parametrize("strategy", strategies) -def test_handle_uncorrupted_trials(caplog, strategy, incomplete_trial): +def test_handle_uncompleted_trials(caplog, strategy, incomplete_trial): """Verify that no warning is logged if trial is valid""" with caplog.at_level(logging.WARNING, logger="orion.core.worker.strategy"): + Strategy(strategy).observe([incomplete_trial], [{"objective": None}]) Strategy(strategy).lie(incomplete_trial) assert "Trial `{}` has an objective but status is not completed" not in caplog.text diff --git a/tests/unittests/core/test_transformer.py b/tests/unittests/core/test_transformer.py index 5c8d9041d..53d90f08a 100644 --- a/tests/unittests/core/test_transformer.py +++ b/tests/unittests/core/test_transformer.py @@ -1095,11 +1095,15 @@ def test_reshape(self, rspace): def test_cardinality(self, dim2): """Check cardinality of reshaped space""" space = Space() - space.register(Real("yolo0", "uniform", 0, 2, shape=(2, 2))) + 
space.register(Real("yolo0", "reciprocal", 0.1, 1, precision=1, shape=(2, 2))) space.register(dim2) rspace = build_required_space(space, shape_requirement="flattened") - assert rspace.cardinality == numpy.inf + assert rspace.cardinality == (10 ** (2 * 2)) * 4 + + space = Space() + space.register(Real("yolo0", "uniform", 0, 2, shape=(2, 2))) + space.register(dim2) rspace = build_required_space( space, type_requirement="integer", shape_requirement="flattened" diff --git a/tests/unittests/core/test_utils.py b/tests/unittests/core/utils/test_utils.py similarity index 69% rename from tests/unittests/core/test_utils.py rename to tests/unittests/core/utils/test_utils.py index 3733d79ba..348f827b0 100644 --- a/tests/unittests/core/test_utils.py +++ b/tests/unittests/core/utils/test_utils.py @@ -4,7 +4,7 @@ import pytest -from orion.core.utils import Factory +from orion.core.utils import Factory, float_to_digits_list def test_factory_subclasses_detection(): @@ -55,3 +55,21 @@ class Random(Base): pass assert type(MyFactory(of_type="random")) is Random + + +@pytest.mark.parametrize( + "number,digits_list", + [ + (float("inf"), []), + (0.0, [0]), + (0.00001, [1]), + (12.0, [1, 2]), + (123000.0, [1, 2, 3]), + (10.0001, [1, 0, 0, 0, 0, 1]), + (1e-50, [1]), + (5.32156e-3, [5, 3, 2, 1, 5, 6]), + ], +) +def test_float_to_digits_list(number, digits_list): + """Test that floats are correctly converted to list of digits""" + assert float_to_digits_list(number) == digits_list diff --git a/tests/unittests/core/worker/test_consumer.py b/tests/unittests/core/worker/test_consumer.py index 9ecb97216..b95718271 100644 --- a/tests/unittests/core/worker/test_consumer.py +++ b/tests/unittests/core/worker/test_consumer.py @@ -1,6 +1,7 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- """Collection of tests for :mod:`orion.core.worker.consumer`.""" +import logging import os import signal import subprocess @@ -69,16 +70,15 @@ def test_trial_working_dir_is_changed(config): assert trial.working_dir == con.working_dir + "/exp_" + trial.id -@pytest.mark.usefixtures("storage") -def test_code_changed(config, monkeypatch): - """Check that trial has its working_dir attribute changed.""" +def setup_code_change_mock(config, monkeypatch, ignore_code_changes): + """Mock create experiment and trials, and infer_versioning_metadata""" exp = experiment_builder.build(**config) trial = tuple_to_trial((1.0,), exp.space) exp.register_trial(trial, status="reserved") - con = Consumer(exp) + con = Consumer(exp, ignore_code_changes=ignore_code_changes) def code_changed(user_script): return dict( @@ -91,6 +91,26 @@ def code_changed(user_script): monkeypatch.setattr(consumer, "infer_versioning_metadata", code_changed) + return con, trial + + +@pytest.mark.usefixtures("storage") +def test_code_changed_evc_disabled(config, monkeypatch, caplog): + """Check that trial has its working_dir attribute changed.""" + + con, trial = setup_code_change_mock(config, monkeypatch, ignore_code_changes=True) + + with caplog.at_level(logging.WARNING): + con(trial) + assert "Code changed between execution of 2 trials" in caplog.text + + +@pytest.mark.usefixtures("storage") +def test_code_changed_evc_enabled(config, monkeypatch): + """Check that trial has its working_dir attribute changed.""" + + con, trial = setup_code_change_mock(config, monkeypatch, ignore_code_changes=False) + with pytest.raises(BranchingEvent) as exc: con(trial) diff --git a/tests/unittests/core/worker/test_experiment.py b/tests/unittests/core/worker/test_experiment.py index f44d652a2..3fc36e9e1 
100644 --- a/tests/unittests/core/worker/test_experiment.py +++ b/tests/unittests/core/worker/test_experiment.py @@ -428,6 +428,22 @@ def test_fetch_all_trials(): assert trials == cfg.trials +def test_fetch_pending_trials(): + """Fetch a list of the trials that are pending + + trials.status in ['new', 'interrupted', 'suspended'] + """ + pending_stati = ["new", "interrupted", "suspended"] + stati = pending_stati + ["completed", "broken", "reserved"] + with OrionState(trials=generate_trials(stati)) as cfg: + exp = Experiment("supernaekei", mode="x") + exp._id = cfg.trials[0]["experiment"] + + trials = exp.fetch_pending_trials() + assert len(trials) == 3 + assert set(trial.status for trial in trials) == set(pending_stati) + + def test_fetch_non_completed_trials(): """Fetch a list of the trials that are not completed @@ -571,6 +587,8 @@ def test_experiment_pickleable(): read_only_methods = [ "algorithms", "configuration", + "fetch_lost_trials", + "fetch_pending_trials", "fetch_noncompleted_trials", "fetch_trials", "fetch_trials_by_status", @@ -598,6 +616,7 @@ def test_experiment_pickleable(): "register_trial", "set_trial_status", "update_completed_trial", + "duplicate_pending_trials", ] execute_only_methods = [ "reserve_trial", diff --git a/tests/unittests/core/worker/test_producer.py b/tests/unittests/core/worker/test_producer.py index ae5456076..0e2d349a4 100644 --- a/tests/unittests/core/worker/test_producer.py +++ b/tests/unittests/core/worker/test_producer.py @@ -625,7 +625,9 @@ def test_original_seeding(producer): def test_evc(monkeypatch, producer): """Verify that producer is using available trials from EVC""" experiment = producer.experiment - new_experiment = build(experiment.name, algorithms="random") + new_experiment = build( + experiment.name, algorithms="random", branching={"enable": True} + ) # Replace parent with hacked exp, otherwise parent ID does not match trials in DB # and fetch_trials() won't return anything. @@ -652,7 +654,9 @@ def update_naive_algo(trials): def test_evc_duplicates(monkeypatch, producer): """Verify that producer wont register samples that are available in parent experiment""" experiment = producer.experiment - new_experiment = build(experiment.name, algorithms="random") + new_experiment = build( + experiment.name, algorithms="random", branching={"enable": True} + ) # Replace parent with hacked exp, otherwise parent ID does not match trials in DB # and fetch_trials() won't return anything. 
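Note: `test_fetch_pending_trials` above defines a pending trial as one whose status is `new`, `interrupted` or `suspended`. The sketch below illustrates that behaviour using only `fetch_trials_by_status()`, an accessor already listed among the read-only methods in these tests; the helper name is hypothetical and is not Orion's actual implementation.

PENDING_STATUS = ["new", "interrupted", "suspended"]

def fetch_pending_trials_sketch(experiment):
    """Gather trials whose status still allows execution (illustration only)."""
    trials = []
    for status in PENDING_STATUS:
        # Collect trials of `experiment` having the given status.
        trials.extend(experiment.fetch_trials_by_status(status))
    return trials
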
diff --git a/tests/unittests/plotting/test_plotly_backend.py b/tests/unittests/plotting/test_plotly_backend.py index be995dc30..32b4fb0ed 100644 --- a/tests/unittests/plotting/test_plotly_backend.py +++ b/tests/unittests/plotting/test_plotly_backend.py @@ -755,7 +755,7 @@ def test_list_of_experiments(self, monkeypatch): experiment, ): child = orion.client.create_experiment( - experiment.name, branching={"branch_to": "child"} + experiment.name, branching={"branch_to": "child", "enable": True} ) plot = rankings([experiment, child]) @@ -774,7 +774,8 @@ def test_list_of_experiments_name_conflict(self, monkeypatch): experiment, ): child = orion.client.create_experiment( - experiment.name, branching={"branch_to": experiment.name} + experiment.name, + branching={"branch_to": experiment.name, "enable": True}, ) assert child.name == experiment.name assert child.version == experiment.version + 1 @@ -962,7 +963,7 @@ def test_list_of_experiments(self, monkeypatch): experiment, ): child = orion.client.create_experiment( - experiment.name, branching={"branch_to": "child"} + experiment.name, branching={"branch_to": "child", "enable": True} ) plot = regrets([experiment, child]) @@ -981,7 +982,8 @@ def test_list_of_experiments_name_conflict(self, monkeypatch): experiment, ): child = orion.client.create_experiment( - experiment.name, branching={"branch_to": experiment.name} + experiment.name, + branching={"branch_to": experiment.name, "enable": True}, ) assert child.name == experiment.name assert child.version == experiment.version + 1
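
These plotting tests follow the same opt-in pattern as the rest of the updated suite: branching into a child experiment now requires `enable: True` in the `branching` argument. A minimal usage sketch, assuming a configured Orion storage and using placeholder experiment names and spaces:

from orion.client import create_experiment

# First creation of the experiment; no branching options are needed here.
parent = create_experiment("example", space={"x": "uniform(0, 10)"})

# Re-creating the same experiment with a different space only branches
# into a new version when EVC branching is explicitly enabled.
child = create_experiment(
    "example",
    space={"x": "uniform(0, 10)", "y": "uniform(0, 10)"},
    branching={"enable": True},
)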