

Generalization-based contest in global optimization

If you are interested in participating, please send us your name and email address. You will be added to the mailing list and receive a password for the submission page and for contributing to the discussion in the GenOpt Forum.
You can also use this form to send us any short message related to GenOpt.

Your email will be used only for communications related to GenOpt.

Special session at the LION10 conference (Ischia Island, Italy, 29 May - 1 June, 2016).


Organizers

  • Roberto Battiti, Head of LIONlab for "Machine Learning and Intelligent Optimization", University of Trento (Italy);
  • Yaroslav Sergeyev, Head of Numerical Calculus Laboratory, DIMES, University of Calabria (Italy);
  • Mauro Brunato, LIONlab, University of Trento (Italy);
  • Dmitri Kvasov, DIMES, University of Calabria (Italy).

While comparing results on benchmark functions is a widely used practice to demonstrate the competitiveness of global optimization algorithms, fixed benchmarks can lead to a negative data-mining process: a motivated researcher can "persecute" the algorithm's choices and parameters until the final design "confesses" positive results for the specific benchmark.

To avoid this negative data-mining effect, the GENOPT contest will be based on randomized function generators with fixed statistical characteristics but individual variation among the generated instances.

The generators will be made available to the participants to test offline and online tuning schemes, but the final competition will be based on random seeds communicated in the last phase.

A dashboard will reflect the current ranking of the participants, who are encouraged to exchange preliminary results and opinions.

The final "generalization" ranking will be confirmed in the last competition phase.

The GENOPT manifesto

A preliminary warmup document about the GENOPT challenge is available (Dec 4, 2015).


Timeline

  • Dec 16: first function generators (see below), supporting website and forum made available to participants;
  • Jan 27: submission system and leaderboard open;
  • March 15: competition ends; winners for the different categories are determined and asked to submit a paper describing their approach and detailed results (papers are reviewed under the normal LION rules, with a submission deadline of March 31);
  • April 30: decisions about paper acceptance communicated to the authors;
  • 29 May - 1 June, 2016: LION10 conference; reviewed and accepted papers are presented, and competition winners are publicly recognized;
  • After LION: a special issue of a good-quality journal dedicated to the winning and reviewed papers.

Participating and submitting

Benchmark function library

Functions to be optimized are made available as binary libraries with wrappers for various languages and platforms. Usage examples are provided in the zip file and below.

A report file will be created, which you can then submit to the GENOPT website to be ranked on the leaderboard.

The library is written in C. Other languages can link the libraries directly (e.g., Fortran) or access them through wrappers (Java, MATLAB). The available combinations of language and platform are shown in Table 1.

Table 1. Language/platform matrix

Language              | Windows (native) | Windows (MinGW) | Windows (Cygwin) | Linux            | Mac OS X
                      | 32- and 64-bit   | 32- and 64-bit  | 32- and 64-bit   | 32- and 64-bit   | 32- and 64-bit, Intel only
C/C++                 | Yes              | Yes             | Yes              | Yes              | Yes
Fortran (GNU and G95) | Yes (G95 only)   | Yes             | Yes              | Yes              | Yes
Java                  | Yes              | No              | No               | Yes              | Yes (64-bit only)
MATLAB/Octave         | Untested         | Untested        | Yes              | Yes              | Yes

If you would like libraries for another platform, or a wrapper for another language, please contribute to the GenOpt Forum. Volunteers are particularly welcome! We are also considering suggestions for additional benchmark functions. Ideally, benchmarks should be designed with controllable parameters that answer specific scientific questions, e.g., about the relationship between problem structure and optimal (possibly self-tuned) algorithms, or about scalability to high dimensionality.


MD5 sum of the current version: 7be4243834ad3d73d50ab654eec82d97


All documentation is also included in the zip file.

Submission of the Results

  • Please make sure that your code is linked with the latest version of the GenOpt libraries provided above.
  • Execute your algorithm on the following set of function instances provided by the GenOpt library, for 30 runs of 100,000 evaluations each. You can set the 100,000 evaluation limit by calling the appropriate function in the GenOpt library, or have your algorithm stop shortly after the limit is reached.
    Table 2. Function instances for result submission

    Name                                     | GenOpt ID | Dimension
    Shekel Order 5                           | 4         | 4
    Shekel Order 7                           | 5         | 4
    Shekel Order 10                          | 6         | 4
    GKLS (continuously differentiable)       | 9         | 10
    GKLS (continuously differentiable)       | 9         | 30
    GKLS (twice continuously differentiable) | 10        | 10
    GKLS (twice continuously differentiable) | 10        | 30
    GKLS (non-differentiable)                | 11        | 10
    GKLS (non-differentiable)                | 11        | 30
  • Every run of every instance will generate a report file, for a total of 600 files. Compress all report files as a ZIP file.
  • Use the Upload Form to submit your ZIP File.
    Note that you can choose any name for your submission, so you can leave your own name out of it.
    However, you must have your login credentials; if you don't have them, please send us a message via the contact form above on this page.
  • When the upload is complete, the Leaderboard Page will open with your new submission highlighted.


Ranking

  • For each of the 20 function instances, submissions are ranked by their median best value (across the 30 runs) after 1,000, 10,000, and 100,000 evaluations, resulting in 3 rankings per function instance.
  • The overall ranking is the average of the resulting 60 partial rankings.