add hint for html visualization in docs and examples
jhj0411jhj committed Mar 8, 2023
1 parent f270af1 commit 7474330
Showing 64 changed files with 574 additions and 304 deletions.
5 changes: 1 addition & 4 deletions README.md
@@ -281,12 +281,9 @@ We also provide **HTML Visualization**. Enable it by setting additional options
`visualization`=`basic`/`advanced` and `auto_open_html=True`(optional) in `Optimizer`:

```python
opt = Optimizer(
...,
opt = Optimizer(...,
visualization='advanced', # or 'basic'. For 'advanced', run 'pip install "openbox[extra]"' first
auto_open_html=True, # open the visualization page in your browser automatically
task_id='example_task',
logging_dir='logs',
)
history = opt.run()
```
5 changes: 1 addition & 4 deletions README_zh_CN.md
@@ -273,12 +273,9 @@ if __name__ == "__main__":
`visualization`=`basic`/`advanced` and `auto_open_html=True` (optional) to enable this feature:

```python
opt = Optimizer(
...,
opt = Optimizer(...,
visualization='advanced', # or 'basic'. For 'advanced', run 'pip install "openbox[extra]"' first
auto_open_html=True, # open the visualization page in your browser automatically
task_id='example_task',
logging_dir='logs',
)
history = opt.run()
```
2 changes: 1 addition & 1 deletion docs/en/articles/openbox_LightGBM.md
@@ -70,7 +70,7 @@ x_train, x_val, y_train, y_val = train_test_split(X, y, test_size=0.2, stratify=

def objective_function(config):
# convert Configuration to dict
params = config.get_dictionary()
params = config.get_dictionary().copy()

# fit model
model = LGBMClassifier(**params)
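The `.copy()` added here is a defensive copy: mutating the dictionary returned by `get_dictionary()` directly could leak edits back into the configuration's internal state. A minimal sketch of the pattern, with plain dicts standing in for the `Configuration` object (not the real openbox API):

```python
# Plain dicts stand in for an openbox Configuration here; the point is
# only the defensive-copy pattern, not the real get_dictionary() API.
internal = {'n_estimators': 100, 'learning_rate': 0.1}

params = internal.copy()      # shallow copy, as in the diff above
params['n_estimators'] = 500  # edits stay local to the copy

assert internal['n_estimators'] == 100  # original left untouched
```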
2 changes: 1 addition & 1 deletion docs/en/articles/openbox_XGBoost.md
@@ -73,7 +73,7 @@ x_train, x_val, y_train, y_val = train_test_split(X, y, test_size=0.2, stratify=

def objective_function(config):
# convert Configuration to dict
params = config.get_dictionary()
params = config.get_dictionary().copy()

# fit model
model = XGBClassifier(**params, use_label_encoder=False)
2 changes: 1 addition & 1 deletion docs/en/articles/openbox_intro.md
@@ -249,7 +249,7 @@ x_train, x_val, y_train, y_val = train_test_split(X, y, test_size=0.2, stratify=

def objective_function(config):
# convert Configuration to dict
params = config.get_dictionary()
params = config.get_dictionary().copy()

# fit model
model = LGBMClassifier(**params)
63 changes: 42 additions & 21 deletions docs/en/examples/multi_objective.md
@@ -32,14 +32,14 @@ def objective_function(config: sp.Configuration):
return result
```

After evaluation, the objective function returns a <font color=#FF0000>**dict (Recommended)**.</font>
After evaluation, the objective function returns a `dict` **(Recommended)**.
The result dictionary should contain:

+ **'objectives'**: A **list/tuple** of **objective values (to be minimized)**.
In this example, we have two objectives so the tuple contains two values.
+ `'objectives'`: A **list/tuple** of **objective values (to be minimized)**.
In this example, we have two objectives so the tuple contains two values.

+ **'constraints**': A **list/tuple** of **constraint values**.
If the problem is not constrained, return **None** or do not include this key in the dict.
+ `'constraints'`: A **list/tuple** of **constraint values**.
If the problem is not constrained, return **None** or do not include this key in the dictionary.
Non-positive constraint values (**"<=0"**) imply feasibility.
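As a concrete sketch, a hypothetical two-objective function returning this dictionary could look like the following (toy formulas, not the benchmark used in this example):

```python
# Hypothetical objective: the formulas are toy examples; only the shape
# of the returned dict matches the convention described above.
def objective_function(config: dict) -> dict:
    x1, x2 = config['x1'], config['x2']
    return {
        'objectives': (x1 ** 2 + x2, (x1 - 1) ** 2 + x2),  # both minimized
        'constraints': None,  # unconstrained: None, or omit the key
    }

result = objective_function({'x1': 1.0, 'x2': 0.5})
print(result['objectives'])  # (1.5, 0.5)
```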

## Optimization
@@ -60,46 +60,55 @@ opt = Optimizer(
ref_point=prob.ref_point,
task_id='mo',
random_state=1,
# Have a try on the new HTML visualization feature!
# visualization='advanced', # or 'basic'. For 'advanced', run 'pip install "openbox[extra]"' first
# auto_open_html=True, # open the visualization page in your browser automatically
)
history = opt.run()
```

Here we create a <font color=#FF0000>**Optimizer**</font> instance, and pass the objective function
Here we create an `Optimizer` instance, and pass the objective function
and the search space to it.
The other parameters are:

+ **num_objectives** and **num_constraints** set how many objectives and constraints the objective function will return.
In this example, **num_objectives=2**.
+ `num_objectives` and `num_constraints` set how many objectives and constraints the objective function will return.
In this example, `num_objectives=2`.

+ **max_runs=50** means the optimization will take 50 rounds (optimizing the objective function 50 times).
+ `max_runs=50` means the optimization will take 50 rounds (optimizing the objective function 50 times).

+ **surrogate_type='gp'**. For mathematical problem, we suggest using Gaussian Process (**'gp'**) as Bayesian surrogate
model. For practical problems such as hyperparameter optimization (HPO), we suggest using Random Forest (**'prf'**).
+ `surrogate_type='gp'`. For mathematical problems, we suggest using a Gaussian Process (`'gp'`) as the Bayesian surrogate
model. For practical problems such as hyperparameter optimization (HPO), we suggest using a Random Forest (`'prf'`).

+ **acq_type='ehvi'**. Use **EHVI(Expected Hypervolume Improvement)** as Bayesian acquisition function. For problems with more than 3 objectives, please
use **MESMO('mesmo')** or **USEMO('usemo')**.
+ `acq_type='ehvi'`. Use **EHVI (Expected Hypervolume Improvement)** as the Bayesian acquisition function. For problems with more than 3 objectives, please
use **MESMO** (`'mesmo'`) or **USEMO** (`'usemo'`).

+ **acq_optimizer_type='random_scipy'**. For mathematical problems, we suggest using **'random_scipy'** as
+ `acq_optimizer_type='random_scipy'`. For mathematical problems, we suggest using `'random_scipy'` as
acquisition function optimizer. For practical problems such as hyperparameter optimization (HPO), we suggest
using **'local_random'**.
using `'local_random'`.

+ **initial_runs** sets how many configurations are suggested by **init_strategy** before the optimization loop.
+ `initial_runs` sets how many configurations are suggested by `init_strategy` before the optimization loop.

+ **init_strategy='sobol'** sets the strategy to suggest the initial configurations.
+ `init_strategy='sobol'` sets the strategy to suggest the initial configurations.

+ **ref_point** specifies the reference point, which is the upper bound on the objectives used for computing
+ `ref_point` specifies the reference point, which is the upper bound on the objectives used for computing
hypervolume. If using EHVI method, a reference point must be provided. In practice, the reference point can be
set 1) using domain knowledge to be slightly worse than the upper bound of objective values, where the upper bound is
the maximum acceptable value of interest for each objective, or 2) using a dynamic reference point selection strategy.

+ **task_id** is set to identify the optimization process.
+ `task_id` is set to identify the optimization process.

Then, <font color=#FF0000>**opt.run()**</font> is called to start the optimization process.
+ `visualization`: `'none'`, `'basic'` or `'advanced'`.
See {ref}`HTML Visualization <visualization/visualization:HTML Visualization>`.

+ `auto_open_html`: whether to open the visualization page in your browser automatically.
See {ref}`HTML Visualization <visualization/visualization:HTML Visualization>`.

Then, `opt.run()` is called to start the optimization process.

## Visualization

Since we optimize both objectives at the same time, we get a Pareto front as the result.
Call <font color=#FF0000>**opt.get_history().plot_pareto_front()**</font> to plot the pareto front.
Call `opt.get_history().plot_pareto_front()` to plot the Pareto front.
Please note that `plot_pareto_front` only works when the number of objectives is 2 or 3.

@@ -123,3 +132,15 @@
```python
plt.show()
```

<img src="../../imgs/plot_hypervolume_zdt2.png" width="60%" class="align-center">

**(New Feature!)**
Call `history.visualize_html()` to visualize the optimization process in an HTML page.
For `show_importance` and `verify_surrogate`, run `pip install "openbox[extra]"` first.
See {ref}`HTML Visualization <visualization/visualization:HTML Visualization>` for more details.

```python
history.visualize_html(open_html=True, show_importance=True,
verify_surrogate=True, optimizer=opt)
```

<img src="../../imgs/visualization/html_example_mo.jpg" width="80%" class="align-center">
59 changes: 40 additions & 19 deletions docs/en/examples/multi_objective_with_constraint.md
@@ -38,13 +38,13 @@ def objective_function(config: sp.Configuration):
return result
```

After evaluation, the objective function returns a <font color=#FF0000>**dict (Recommended)**.</font>
After evaluation, the objective function returns a `dict` **(Recommended)**.
The result dictionary should contain:

+ **'objectives'**: A **list/tuple** of **objective values (to be minimized)**.
In this example, we have two objectives so the tuple contains two values.
+ `'objectives'`: A **list/tuple** of **objective values (to be minimized)**.
In this example, we have two objectives so the tuple contains two values.

+ **'constraints**': A **list/tuple** of **constraint values**.
+ `'constraints'`: A **list/tuple** of **constraint values**.
Non-positive constraint values (**"<=0"**) imply feasibility.
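A hypothetical constrained counterpart (again toy formulas, merely illustrating the convention) could be sketched as:

```python
# Hypothetical constrained objective: two objectives, two constraints.
# Constraint values <= 0 mean the configuration is feasible.
def objective_function(config: dict) -> dict:
    x1, x2 = config['x1'], config['x2']
    return {
        'objectives': (x1, (1.0 + x2) / x1),
        'constraints': (6.0 - 9.0 * x1 - x2,   # feasible iff <= 0
                        1.0 - 9.0 * x1 + x2),
    }

res = objective_function({'x1': 1.0, 'x2': 2.0})
feasible = all(c <= 0 for c in res['constraints'])
print(res, feasible)
```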

## Optimization
@@ -65,46 +65,55 @@ opt = Optimizer(
ref_point=prob.ref_point,
task_id='moc',
random_state=1,
# Have a try on the new HTML visualization feature!
# visualization='advanced', # or 'basic'. For 'advanced', run 'pip install "openbox[extra]"' first
# auto_open_html=True, # open the visualization page in your browser automatically
)
history = opt.run()
```

Here we create a <font color=#FF0000>**Optimizer**</font> instance, and pass the objective function
Here we create an `Optimizer` instance, and pass the objective function
and the search space to it.
The other parameters are:

+ **num_objectives** and **num_constraints** set how many objectives and constraints the objective function will return.
In this example, **num_objectives=2** and **num_constraints=2**.
+ `num_objectives` and `num_constraints` set how many objectives and constraints the objective function will return.
In this example, `num_objectives=2` and `num_constraints=2`.

+ **max_runs=100** means the optimization will take 100 rounds (optimizing the objective function 100 times).
+ `max_runs=100` means the optimization will take 100 rounds (optimizing the objective function 100 times).

+ **surrogate_type='gp'**. For mathematical problem, we suggest using Gaussian Process (**'gp'**) as Bayesian surrogate
model. For practical problems such as hyperparameter optimization (HPO), we suggest using Random Forest (**'prf'**).
+ `surrogate_type='gp'`. For mathematical problems, we suggest using a Gaussian Process (`'gp'`) as the Bayesian surrogate
model. For practical problems such as hyperparameter optimization (HPO), we suggest using a Random Forest (`'prf'`).

+ **acq_type='ehvic'**. Use **EHVIC(Expected Hypervolume Improvement with Constraint)**
+ `acq_type='ehvic'`. Use **EHVIC (Expected Hypervolume Improvement with Constraint)**
as the Bayesian acquisition function.

+ **acq_optimizer_type='random_scipy'**. For mathematical problems, we suggest using **'random_scipy'** as
+ `acq_optimizer_type='random_scipy'`. For mathematical problems, we suggest using `'random_scipy'` as
acquisition function optimizer. For practical problems such as hyperparameter optimization (HPO), we suggest
using **'local_random'**.
using `'local_random'`.

+ **initial_runs** sets how many configurations are suggested by **init_strategy** before the optimization loop.
+ `initial_runs` sets how many configurations are suggested by `init_strategy` before the optimization loop.

+ **init_strategy='sobol'** sets the strategy to suggest the initial configurations.
+ `init_strategy='sobol'` sets the strategy to suggest the initial configurations.

+ **ref_point** specifies the reference point, which is the upper bound on the objectives used for computing
+ `ref_point` specifies the reference point, which is the upper bound on the objectives used for computing
hypervolume. If using EHVI method, a reference point must be provided. In practice, the reference point can be
set 1) using domain knowledge to be slightly worse than the upper bound of objective values, where the upper bound is
the maximum acceptable value of interest for each objective, or 2) using a dynamic reference point selection strategy.

+ **task_id** is set to identify the optimization process.
+ `task_id` is set to identify the optimization process.

Then, <font color=#FF0000>**opt.run()**</font> is called to start the optimization process.
+ `visualization`: `'none'`, `'basic'` or `'advanced'`.
See {ref}`HTML Visualization <visualization/visualization:HTML Visualization>`.

+ `auto_open_html`: whether to open the visualization page in your browser automatically.
See {ref}`HTML Visualization <visualization/visualization:HTML Visualization>`.

Then, `opt.run()` is called to start the optimization process.

## Visualization

Since we optimize both objectives at the same time, we get a Pareto front as the result.
Call <font color=#FF0000>**opt.get_history().plot_pareto_front()**</font> to plot the pareto front.
Call `opt.get_history().plot_pareto_front()` to plot the Pareto front.
Please note that `plot_pareto_front` only works when the number of objectives is 2 or 3.

@@ -128,3 +137,15 @@
```python
plt.show()
```

<img src="../../imgs/plot_hypervolume_constr.png" width="60%" class="align-center">

**(New Feature!)**
Call `history.visualize_html()` to visualize the optimization process in an HTML page.
For `show_importance` and `verify_surrogate`, run `pip install "openbox[extra]"` first.
See {ref}`HTML Visualization <visualization/visualization:HTML Visualization>` for more details.

```python
history.visualize_html(open_html=True, show_importance=True,
verify_surrogate=True, optimizer=opt)
```

<img src="../../imgs/visualization/html_example_moc.jpg" width="80%" class="align-center">