
Releases: optimizationBenchmarking/documentation-technical-intro-slides

Presentation on 2016-07-20 at BO-BBOB, GECCO in Denver, CO, USA


This is the presentation given at the Genetic and Evolutionary Computation Conference (GECCO 2016) in the Hyatt Regency Denver Tech Center in Denver, Colorado, USA. The presentation was on Wednesday, 2016-07-20, from 15:05 to 15:30 in room Wind Star B as the third talk of Session III of the workshop Bi-Objective Black Box Optimization Benchmarking 2016 (BO-BBOB 2016).

1. Contained Files

This release consists of the following files:

  1. technical-intro-slides.pdf is the set of slides that I presented at GECCO.
  2. evaluatorGui.jar is the version of the software used for demonstration: Release 0.8.7 of the evaluator GUI.
  3. maxsat-example-report.pdf is the example report generated during the demo, for the MAX-SAT problem.
  4. maxsat-example-report-sources.zip contains the LaTeX sources of the example report (see above).
  5. all-example-data.zip contains all presently available example data sets.
  6. all-example-report-sources.zip contains all the reports that can be generated with the software used for demonstration from the above data sets.

2. Abstract

Optimization algorithms have become a standard tool in many application areas such as management, logistics, engineering, design, chemistry, and medicine. They provide close-to-optimal approximate solutions for computationally hard problems within feasible time. The field has grown and evolved over the past 50 years and has several top-level journals dedicated to it. Research in optimization focuses on reducing algorithm runtime and increasing result quality. For such research to succeed and for publications to have true impact on the real world, we need to be able to

  • analyze the performance of an algorithm, to
  • analyze the influence of different features of an optimization problem on its hardness, and to
  • compare the performance of different algorithms in a fair and sound fashion.

This presentation is a technical introduction, mainly intended for an audience who is already familiar with optimization and benchmarking. It is relatively short (shorter than our general introduction found at https://optimizationbenchmarking.github.io/introSlides.html) and centered around a hands-on example of how to use our software.

Many optimization methods are anytime algorithms: they start with a (usually bad) guess at the solution and improve their approximation quality step by step. All evolutionary algorithms, all local search algorithms (such as Simulated Annealing and Tabu Search), all swarm intelligence methods for optimization (such as ant colony and particle swarm optimization), CMA-ES, and memetic algorithms belong to this class, but so do several exact and deterministic methods such as branch and bound, just to name a few.
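To make the anytime idea concrete, the following minimal sketch (not part of the released software; the class name, bit-string problem, objective function, and runtime budget are placeholder assumptions) shows a simple hill climber treated as an anytime algorithm: it keeps a best-so-far solution and logs (elapsed time, quality) pairs, so it can be stopped at any moment and still return an approximation.

```java
// Minimal illustrative sketch of an anytime algorithm: a random hill climber
// that keeps a best-so-far solution and logs every improvement as an
// (elapsed time, quality) pair.  Problem, objective, and budget are placeholders.
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class AnytimeHillClimber {

  /** one logged improvement: elapsed milliseconds and objective value */
  record LogPoint(long elapsedMs, double quality) {}

  public static void main(String[] args) {
    Random rnd = new Random(42);
    int n = 100;                        // hypothetical bit-string problem size
    boolean[] best = new boolean[n];
    double bestQuality = evaluate(best);
    List<LogPoint> log = new ArrayList<>();

    long start = System.currentTimeMillis();
    long budgetMs = 1000;               // runtime budget: stop after one second

    while (System.currentTimeMillis() - start < budgetMs) {
      boolean[] candidate = best.clone();
      candidate[rnd.nextInt(n)] ^= true;          // flip one random bit
      double quality = evaluate(candidate);
      if (quality < bestQuality) {                // minimization: smaller is better
        best = candidate;
        bestQuality = quality;
        log.add(new LogPoint(System.currentTimeMillis() - start, quality));
      }
    }
    // `log` now holds the whole runtime behavior of this single run
    log.forEach(p -> System.out.println(p.elapsedMs() + "\t" + p.quality()));
  }

  /** hypothetical objective: number of zero bits (a trivial OneMax-style function) */
  static double evaluate(boolean[] x) {
    int zeros = 0;
    for (boolean b : x) if (!b) zeros++;
    return zeros;
  }
}
```

The logged improvement points are exactly the kind of per-run progress information that an evaluation of anytime algorithms is based on.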

The comparison and evaluation of anytime algorithms must consider the whole runtime behavior of the algorithms in order to avoid misleading conclusions. Performance data also has to be gathered from multiple independent runs on multiple different benchmark instances. It is easy to see that a thorough analysis and comparison of optimization algorithms is complicated and cumbersome. We present open-source software which can do this for you: you gather the data from your experiments, the software analyzes it. Our goal is to support researchers and practitioners as much as possible by automating the evaluation of experimental results. The software does not require any programming, just your benchmarking data. It imposes no limits on the type of algorithms to be compared or the type of problems on which they are benchmarked. It produces human-readable conclusions and reports in either XHTML or LaTeX format. You can freely select and configure different diagram types and group your data according to different aspects to get a better understanding of the behavior of your algorithm. Figures are styled for direct re-use in journals such as IEEE Transactions or conference proceedings such as LNCS. The software is dockerized, meaning that you can apply it directly with minimal installation effort.
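As a purely illustrative sketch of the data-gathering side, the snippet below writes one progress log per (benchmark instance, run) pair. The directory layout, file names, instance names, and the tab-separated "runtime, quality" lines are assumptions made for illustration only; they are not the exact input format required by the evaluator, which is documented on the project website.

```java
// Illustrative sketch only: storing benchmarking data as one progress log per
// (instance, run) pair.  Layout and format are assumptions, not the tool's spec.
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;

public class GatherRuns {
  public static void main(String[] args) throws IOException {
    String[] instances = { "instance-A", "instance-B" };  // hypothetical benchmark instances
    int runsPerInstance = 30;                             // multiple independent runs

    for (String instance : instances) {
      Path dir = Path.of("results", "myAlgorithm", instance);
      Files.createDirectories(dir);
      for (int run = 1; run <= runsPerInstance; run++) {
        try (PrintWriter out = new PrintWriter(
            Files.newBufferedWriter(dir.resolve("run_" + run + ".txt")))) {
          // in a real experiment, the optimizer would write one line per
          // improvement: consumed runtime (e.g., FEs or ms) and best quality so far
          out.println("1\t100.0");
          out.println("50\t42.0");
          out.println("973\t17.5");
        }
      }
    }
  }
}
```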

We demonstrate the utility of this software on the example of the investigation of six primitive heuristics on the Maximum Satisfiability Problem (MAX-SAT). Similar examples are provided for download for numerical optimization and the Traveling Salesman Problem.

More information can be found at http://optimizationbenchmarking.github.io/.

References

  • Thomas Weise, Raymond Chiong, Ke Tang, Jörg Lässig, Shigeyoshi Tsutsui, Wenxiang Chen, Zbigniew Michalewicz, and Xin Yao. Benchmarking Optimization Algorithms: An Open Source Framework for the Traveling Salesman Problem. IEEE Computational Intelligence Magazine (CIM), 9(3):40-52, August 2014. doi:10.1109/MCI.2014.2326101. Featured article and selected paper at the website of the IEEE Computational Intelligence Society (http://cis.ieee.org/).
  • Thomas Weise, Yuezhong Wu, Raymond Chiong, Ke Tang, and Jörg Lässig. Global versus Local Search: The Impact of Population Sizes on Evolutionary Algorithm Performance. Journal of Global Optimization. Accepted 12 February 2016, published online first 23 February 2016. doi:10.1007/s10898-016-0417-5.

3. Short Bio of Presenter Dr. Thomas Weise

Dr. Thomas Weise received the Diplom-Informatiker degree from Chemnitz University of Technology, Germany, in 2005 and a Ph.D. in Computer Science from the University of Kassel, Germany, in 2009. In the same year, he joined the University of Science and Technology of China (USTC) as a PostDoc. He is now a member of the USTC-Birmingham Joint Research Institute in Intelligent Computation and Its Applications (UBRI), and since 2011 he has been an Associate Professor in the same group.

Dr. Weise has made contributions to the fields of optimization, logistics planning, and evolutionary computation. He has authored or co-authored more than 80 publications in international journals and conferences. He is the author of the book Global Optimization Algorithms – Theory and Application, which has been cited more than 600 times according to Google Scholar. He is also the developer of the open-source software optimizationBenchmarking. This software supports researchers in the fields of optimization and machine learning by mining performance information from large data sets or log files gathered from their experiments. The predecessor of this system, the TSP Suite, was also developed by him.

More information about Dr. Weise can be found on his personal website http://www.it-weise.de.