GenOpt

Generalization-based contest in global optimization

NOTE: This page refers to the past edition of GENOPT, held in 2016 in conjunction with the LION10 conference. For the current competition, please refer to the GENOPT Homepage.

GENOPT 2016 is now finished. The winners are listed in the final leaderboard.


Genopt award ceremony ("high jump and biathlon") at LION 10, Ischia, June 2016.


Genopt award ceremony ("target shooting") at NUMTA 2016, Pizzo Calabro, June 2016.

Special session at the LION10 conference (Ischia Island, Italy, 29 May - 1 June, 2016).

Organizers:

  • Roberto Battiti, Head of LIONlab for "Machine Learning and Intelligent Optimization", University of Trento (Italy);
  • Yaroslav Sergeyev, Head of Numerical Calculus Laboratory, DIMES, University of Calabria (Italy);
  • Mauro Brunato, LIONlab, University of Trento (Italy);
  • Dmitri Kvasov, DIMES, University of Calabria (Italy).

While comparing results on benchmark functions is a widely used practice to demonstrate the competitiveness of global optimization algorithms, fixed benchmarks can lead to a negative data mining process: the motivated researcher can "persecute" algorithm choices and parameters until the final design "confesses" positive results on the specific benchmark.

To avoid this negative data mining effect, the GENOPT contest is based on randomized function generators with fixed statistical characteristics but individual variation of the generated instances.

The generators are available to the participants to test offline and online tuning schemes, but the final competition is based on random seeds communicated in the last phase.

A dashboard reflects the current ranking of the participants, who are encouraged to exchange preliminary results and opinions.

The final "generalization" ranking is going be confirmed in the last competition phase.

The GENOPT manifesto

The document detailing the motivations and rules of the GENOPT challenge (aka the GENOPT Manifesto, version Feb 16, 2016) is available for download.

Schedule

The public phase of the competition is over. Competitors with at least one entry on the leaderboard will be invited to submit their final results.
  • March 15 at 23:59:59 GMT: public phase ends; existing competitors have one week to make a final submission.
  • March 22 at 23:59:59 GMT: competition ends; winners in the different categories are determined and asked to submit a paper describing their approach and detailed results (papers are reviewed under the normal LION rules, but with a submission deadline of April 7).
  • April 30: paper acceptance decisions are communicated to authors.
  • LION10 conference, 29 May - 1 June, 2016: reviewed and accepted papers are presented, and competition winners are publicly recognized.
  • After LION: a special issue of a good-quality journal will be dedicated to the winning and reviewed papers.

Participating and submitting

Benchmark function library

Functions to be optimized are made available as binary libraries with wrappers for various languages and platforms. Usage examples are provided in the zip file and below.

Each run creates a report file, which you can then submit to the GENOPT website for ranking on the leaderboard.

The library is written in C. Other languages can link the libraries directly (e.g., Fortran) or access them through wrappers (Java, MATLAB). The available combinations of language and platform are shown in Table 1.

Table 1. Language/platform matrix

Language        | Windows (native) | Windows (MinGW) | Windows (Cygwin) | Linux          | Mac OS X (Intel only)
                | 32- and 64-bit   | 32- and 64-bit  | 32- and 64-bit   | 32- and 64-bit | 32- and 64-bit
----------------+------------------+-----------------+------------------+----------------+----------------------
C/C++           | Yes              | Yes             | Yes              | Yes            | Yes
Fortran         | Yes (G95, Lahey) | Yes (GNU, G95)  | Yes (GNU, G95)   | Yes            | Yes (GNU, G95)
Java            | Yes              | No              | No               | Yes            | Yes (64-bit only)
MATLAB/Octave   | Yes              | Untested        | Yes              | Yes            | Yes
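
As a rough illustration of what a C caller looks like, the sketch below initializes one benchmark instance and evaluates it at a single point; it would be compiled and linked against the appropriate platform library from Table 1. Only genopt_init and its two integer arguments are documented on this page; the dimension query (genopt_dim) and the evaluation call (genopt_evaluate) are hypothetical placeholder names, so check the headers and usage examples shipped in the zip file for the actual declarations.

    /* Minimal usage sketch. genopt_init is documented on this page;
     * genopt_dim and genopt_evaluate are HYPOTHETICAL placeholders for
     * the real evaluation interface -- see the headers in the zip file. */
    #include <stdio.h>

    extern void   genopt_init(int function_type, int seed);
    extern int    genopt_dim(void);                  /* hypothetical */
    extern double genopt_evaluate(const double *x);  /* hypothetical */

    int main(void)
    {
        genopt_init(0, 1);        /* function type 0, seed 1 */

        int n = genopt_dim();
        double x[64] = {0.0};     /* assumes n <= 64 in this sketch */
        printf("f(origin) = %g (dimension %d)\n", genopt_evaluate(x), n);
        return 0;
    }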

If you would like libraries for another platform, or a wrapper for another language, please contribute to the Genopt Forum. Volunteers are particularly welcome! We are also considering suggestions for additional benchmark functions. Ideally, benchmarks should be designed with controllable parameters to answer specific scientific questions, e.g., about the relationship between problem structure and optimal (possibly self-tuned) algorithms, about scalability to large dimensionality, etc.

Download

The current version is genopt-20160221.zip.
MD5 sum: 863e72c0e07cb0ef5c005b24443719f6

Documentation

All documentation is also included in the zip file.

Submitting your Results

Warning: as of March 16, public submission is closed. The leaderboard page now contains the entire history and the winners.

  • Please make sure that your code is linked with the latest version of the GenOpt libraries provided above.
  • The initialization function (genopt_init in the C and Fortran code, Genopt.init in Java, genopt in MATLAB) takes two integer numbers:
    - a function type index, which in the submission must vary between 0 and 17 inclusive, and
    - an integer seed, to be varied between 1 and 100 inclusive.
  • Run your optimization algorithm for every function type from 0 to 17 inclusive and for every seed from 1 to 100 inclusive, with 1,000,000 evaluations per run. You can set the 1,000,000-evaluation limit by calling the appropriate function in the GenOpt library, or have your algorithm stop shortly after the limit is reached (a driver sketch is given at the end of this section).
  • Every run will generate a report file, for a total of 18x100=1800 files. Compress all report files as a ZIP file.
  • Login (if you don't have your credentials, please send us a message via the contact form above on this page) and upload your ZIP file.
    You can choose any name for your submission (your registration name and email are not public).
    We suggest a default name in this form: participant-algorithm-number so that you will be able to submit different runs for different algorithms.
  • When the upload is complete, the Leaderboard Page will open with your new submission highlighted.
Submissions are ranked by combining different evaluation criteria. For more details, including the description of the functions being optimized and the ranking methodology, refer to the GENOPT Manifesto.
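
To make the procedure above concrete, a full submission driver could be structured as in the sketch below. The genopt_init signature and the 0-17 / 1-100 ranges come from the instructions above; genopt_set_max_evaluations, genopt_dim and genopt_evaluate are hypothetical placeholder names (see the documentation in the zip file for the real calls), the search domain [-1, 1]^n is an assumption, and the random search in the inner loop merely stands in for your own algorithm.

    /* Submission driver sketch: 18 function types x 100 seeds, with a
     * 1,000,000-evaluation budget per run. genopt_init is documented
     * above; genopt_set_max_evaluations, genopt_dim and genopt_evaluate
     * are HYPOTHETICAL names for the real library calls, and the domain
     * [-1, 1]^n is assumed. Random search is only a placeholder. */
    #include <stdlib.h>

    extern void   genopt_init(int function_type, int seed);
    extern void   genopt_set_max_evaluations(long limit); /* hypothetical */
    extern int    genopt_dim(void);                       /* hypothetical */
    extern double genopt_evaluate(const double *x);       /* hypothetical */

    #define BUDGET 1000000L

    int main(void)
    {
        for (int ftype = 0; ftype <= 17; ++ftype) {
            for (int seed = 1; seed <= 100; ++seed) {
                genopt_init(ftype, seed);
                genopt_set_max_evaluations(BUDGET); /* or stop manually */

                int n = genopt_dim();
                double x[64];                       /* assumes n <= 64 */
                for (long eval = 0; eval < BUDGET; ++eval) {
                    for (int i = 0; i < n; ++i)     /* random point in the */
                        x[i] = 2.0 * rand() / RAND_MAX - 1.0; /* assumed domain */
                    genopt_evaluate(x);
                }
                /* each (ftype, seed) run yields one report file:
                 * 18 x 100 = 1800 files in total */
            }
        }
        return 0;
    }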