Generalization-based contest in global optimization
To see the results of the previous edition, please refer to the GENOPT 2016 page.
If you are interested in the GENOPT competition, please send us your name and email address.
You will be added to the mailing list and receive a password.
You can also use this form to send us any short message related to GENOPT.
Special session at the LION11 conference (Nizhny Novgorod, Russia, June 19-21, 2017).
GENOPT organizers:
- Roberto Battiti, Head of LIONlab for "Machine Learning and Intelligent Optimization", University of Trento (Italy) and "Lobachevsky" University of Nizhny Novgorod (Russia);
- Yaroslav Sergeyev, Head of Numerical Calculus Laboratory, DIMES, University of Calabria (Italy) and "Lobachevsky" University of Nizhny Novgorod (Russia);
- Mauro Brunato, LIONlab, University of Trento (Italy);
- Dmitri Kvasov, DIMES, University of Calabria (Italy) and "Lobachevsky" University of Nizhny Novgorod (Russia).
GENOPT website maintainer:
- Andrea Mariello, LIONlab, University of Trento (Italy).
While comparing results on benchmark functions is a widely used practice to demonstrate the competitiveness of global optimization algorithms, fixed benchmarks can lead to a negative data mining process: a motivated researcher can "persecute" the algorithmic choices and parameters until the final algorithm "confesses" positive results on the specific benchmark.
To avoid this negative data mining effect, the GENOPT contest is based on randomized function generators with fixed statistical characteristics but individual variation among the generated instances.
The generators are available to the participants to test offline and online tuning schemes, but the final competition is based on random seeds communicated in the last phase.
A dashboard reflects the current ranking of the participants, who are encouraged to exchange preliminary results and opinions.
The final "generalization" ranking will be confirmed in the last competition phase.
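The randomized-generator idea can be illustrated with a minimal sketch (hypothetical; the actual GENOPT generators and their parameter distributions differ): each seed deterministically produces one function instance, while the distributions from which the instance's parameters are drawn stay fixed, so all instances share the same statistical characteristics.

```python
import math
import random

def make_instance(seed, dim=2, n_components=10):
    """Build one randomized test function to be minimized: the negative
    of a sum of Gaussian 'hills' whose centers, widths, and heights are
    drawn from fixed distributions. Every seed yields a different
    instance with the same statistical characteristics.
    (Illustrative sketch only, not the GENOPT generator.)"""
    rng = random.Random(seed)  # instance-local generator, reproducible
    centers = [[rng.uniform(-1, 1) for _ in range(dim)]
               for _ in range(n_components)]
    widths = [rng.uniform(0.1, 0.5) for _ in range(n_components)]
    heights = [rng.uniform(0.5, 1.0) for _ in range(n_components)]

    def f(x):
        # Sum of positive Gaussian bumps, negated so that the
        # global optimum is a minimum.
        total = 0.0
        for c, w, h in zip(centers, widths, heights):
            d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
            total += h * math.exp(-d2 / (2.0 * w * w))
        return -total

    return f

# Same seed -> identical instance; the final competition seeds are
# only communicated in the last phase, so tuning cannot overfit them.
f_a = make_instance(42)
f_b = make_instance(42)
print(f_a([0.0, 0.0]) == f_b([0.0, 0.0]))  # True
```

Competitors could tune their methods offline on instances from seeds of their own choice, while the final ranking uses fresh, previously unknown seeds.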
The GENOPT manifesto
The document detailing the motivations and rules of the GENOPT challenge
(aka the GENOPT Manifesto, version Dec 31, 2016) is available for download.
- March 31 at 23:59:59 GMT: public phase ends; existing competitors will have one week to make a final submission.
  - Update 1: existing competitors can retrieve the seed for the final submission from the leaderboard page after logging in.
  - Update 2: detailed instructions for the final submission have been sent by email to the contestants.
- April 7 at 23:59:59 GMT: competition ends; winners in the different categories are determined and asked to submit a paper describing their approach and detailed results (papers are reviewed under the normal LION rules, with a submission deadline of April 21).
- May 19: decisions about paper acceptance are communicated to the authors.
- LION11 conference (June 19-21, 2017): reviewed and accepted papers are presented, and competition winners are publicly recognized.
- After LION: a special issue of a good-quality journal will be dedicated to the winning and reviewed papers.