Generalization-based contest in global optimization
NOTE: This page refers to the past edition of GENOPT, held in 2016 in conjunction with the LION10 conference.
For the current competition, please refer to the GENOPT Homepage.
GENOPT 2016 is now finished.
The winners are listed in the final leaderboard.
GENOPT award ceremony ("high jump and biathlon") at LION 10, Ischia, June 2016.
GENOPT award ceremony ("target shooting") at NUMTA 2016, Pizzo Calabro, June 2016.
Special session in the LION10 conference (Ischia Island, Italy, 29 May - 1 June, 2016).
- Roberto Battiti, Head of LIONlab for "Machine Learning and Intelligent Optimization", University of Trento (Italy);
- Yaroslav Sergeyev, Head of Numerical Calculus Laboratory, DIMES, University of Calabria (Italy);
- Mauro Brunato, LIONlab, University of Trento (Italy);
- Dmitri Kvasov, DIMES, University of Calabria (Italy).
While comparing results on benchmark functions is a widely used practice to demonstrate the competitiveness of global optimization algorithms, fixed benchmarks can lead to a negative data-mining process: the motivated researcher can "persecute" algorithm choices and parameters until the final design "confesses" positive results on the specific benchmark.
To avoid this negative data-mining effect, the GENOPT contest is based on randomized function generators with fixed statistical characteristics but individual variation among the generated instances.
The generators are available to the participants to test offline and online tuning schemes, but the final competition is based on random seeds communicated in the last phase.
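As an illustration, the following is a minimal sketch (in Python, assuming NumPy) of the general idea of a seeded test-function generator; it is not the official GENOPT generator or its API, and the function family, dimension, and seed values are purely hypothetical. The statistical characteristics (dimension, number of basins, depth range) are fixed, while each seed produces a different instance.

    # Minimal sketch, not the official GENOPT generator: a seeded instance
    # generator with fixed statistical characteristics.
    import numpy as np

    def make_instance(seed, dim=10, n_minima=20):
        """Return a callable objective built from randomly placed quadratic basins."""
        rng = np.random.default_rng(seed)
        centers = rng.uniform(-1.0, 1.0, size=(n_minima, dim))  # basin locations
        depths = rng.uniform(0.5, 1.0, size=n_minima)           # basin depths

        def objective(x):
            x = np.asarray(x)
            # Each basin contributes a shifted paraboloid; the objective value
            # at x is the lowest (deepest) contribution.
            values = np.sum((x - centers) ** 2, axis=1) - depths
            return float(np.min(values))

        return objective

    # Tuning can be done on freely chosen seeds; the final ranking would use
    # seeds announced only in the last phase (the seed below is hypothetical).
    f_train = make_instance(seed=42)
    f_final = make_instance(seed=20160529)
    print(f_train(np.zeros(10)), f_final(np.zeros(10)))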
A dashboard reflects the current ranking of the participants, who are encouraged to exchange preliminary results and opinions.
The final "generalization" ranking is going be confirmed in the last competition phase.
The GENOPT manifesto
The document detailing the motivations and rules of the GENOPT challenge (aka the GENOPT Manifesto, version Feb 16, 2016) is available for download.
The public phase of the competition is over. Competitors with at least one entry on the leaderboard will be invited to submit their final results.
- March 15 at 23:59:59 GMT: public phase ends; existing competitors will have one week to make a final submission.
- March 22 at 23:59:59 GMT: competition ends; winners in the different categories are determined and asked to submit a paper describing their approach and detailed results (papers are reviewed under the normal LION rules, but with a submission deadline of April 7).
- April 30: decisions about paper acceptance are communicated to authors.
- LION10 conference (29 May - 1 June, 2016): reviewed and accepted papers are presented, and competition winners are publicly recognized.
- After LION: a special issue of a good-quality journal dedicated to the results obtained, based on the winning and reviewed papers.