1. the complete title of one (or more) paper(s) published in the open literature describing the work that the author claims describes a human-competitive result;
Title: “AutoMH: Automatically Create Evolutionary Metaheuristic Algorithms Using Reinforcement Learning”.
2. the name, complete physical mailing address, e-mail address, and phone number of EACH author of EACH paper(s);
Name: Boris Almonacid;
Address: Volcán San Jose street 34 Recreo Alto, Postal Code 2581018, Viña del Mar, Chile;
E-mail: boris.almonacid@globalchange.science;
Preferred e-mail: boris.almonacid.g@mail.pucv.cl
Phone: +56 966545082
3. the name of the corresponding author (i.e., the author to whom notices will be sent concerning the competition);
Boris Almonacid, boris.almonacid@globalchange.science
4. the abstract of the paper(s);
Machine learning research has been able to solve problems in multiple domains. Machine learning represents an open area of research for solving optimisation problems. The optimisation problems can be solved using a metaheuristic algorithm, which can find a solution in a reasonable amount of time. However, the time required to find an appropriate metaheuristic algorithm that has suitable configurations to solve a set of optimisation problems properly presents a problem.
The proposal described in this article contemplates an approach that automatically creates metaheuristic algorithms given a set of optimisation problems. These metaheuristic algorithms are created by modifying their logical structure via the execution of an evolutionary process. This process employs an extension of the reinforcement learning approach that considers multi-agents in their environment, and a learning agent composed of an analysis process and a process of modification of the algorithms. The approach succeeded in creating a metaheuristic algorithm that managed to solve different continuous domain optimisation problems from the experiments performed. The implications of this work are immediate because they describe a basis for the generation of metaheuristic algorithms in an online evolution.
5. a list containing one or more of the eight letters (A, B, C, D, E, F, G, or H) that correspond to the criteria (see above) that the author claims that the work satisfies;
(B) The result is equal to or better than a result that was accepted as a new scientific result at the time when it was published in a peer-reviewed scientific journal.
(D) The result is publishable in its own right as a new scientific result independent of the fact that the result was mechanically created.
(F) The result is equal to or better than a result that was considered an achievement in its field at the time it was first discovered.
(G) The result solves a problem of indisputable difficulty in its field.
6. a statement stating why the result satisfies the criteria that the contestant claims (see examples of statements of human-competitiveness as a guide to aid in constructing this part of the submission);
(B) The result is equal to or better than a result that was accepted as a new scientific result at the time when it was published in a peer-reviewed scientific journal.
The AutoMH framework has allowed, through an online evolution process, the automatic generation of viable evolutionary metaheuristic algorithms that are capable of solving a portfolio of optimisation problems posed by the user.
The algorithm generated by the AutoMH framework has proven capable of solving optimisation problems with equal or superior performance compared to the 14 evolutionary algorithms considered in the paper. Additionally, the evolutionary algorithm created by AutoMH exhibited a concise search trajectory, emphasising solution intensification over space exploration, which leads to more efficient and directed convergence. The convergence plots further show that it converges faster and more robustly than the other evolutionary algorithms.
(D) The result is publishable in its own right as a new scientific result independent of the fact that the result was mechanically created.
The paper focuses on a highly active topic in AI, namely the intersection of machine learning and optimisation.
More precisely, it addresses the problem of the automatic creation of meta-heuristics through Reinforcement Learning (RL).
The author proposes a new framework that exploits a set of agents in charge of executing actions on a set of optimization problems.
The learning phase is controlled by a meta-agent that will learn the actions to perform, based on the results of the different agents.
These actions correspond to the instructions that will guide the search process for each agent.
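As a rough illustration of the loop described above (a minimal sketch, not the paper's actual implementation), the code below models a population of agents, each defined by a sequence of search instructions applied to a toy optimisation problem, with a learning step that analyses the agents' results and modifies the worst agent's instruction list. All operator and function names here are hypothetical stand-ins for AutoMH's instruction set.

```python
import random

def sphere(x):
    """Toy continuous benchmark: minimum 0 at the origin."""
    return sum(v * v for v in x)

# Candidate search instructions an agent can carry. These operators are
# illustrative stand-ins for AutoMH's actual instruction set.
ACTIONS = {
    "small_step": lambda x: [v + random.uniform(-0.1, 0.1) for v in x],
    "big_step": lambda x: [v + random.uniform(-1.0, 1.0) for v in x],
    "shrink": lambda x: [0.9 * v for v in x],
}

def run_agent(program, dim=5, iters=200):
    """Execute an agent's instruction sequence as a greedy improvement loop."""
    best = [random.uniform(-5, 5) for _ in range(dim)]
    best_f = sphere(best)
    for _ in range(iters):
        cand = best
        for name in program:            # apply the agent's instructions in order
            cand = ACTIONS[name](cand)
        f = sphere(cand)
        if f < best_f:                  # keep only improvements
            best, best_f = cand, f
    return best_f

def evolve(n_agents=4, rounds=10, seed=1):
    """Learning-agent loop: score all agents (analysis), then rewrite the
    worst agent's instruction list (modification)."""
    random.seed(seed)
    names = list(ACTIONS)
    pop = [[random.choice(names) for _ in range(3)] for _ in range(n_agents)]
    for _ in range(rounds):
        scored = sorted(pop, key=run_agent)                  # analysis step
        mutant = scored[-1][:]
        mutant[random.randrange(len(mutant))] = random.choice(names)  # modification step
        pop = scored[:-1] + [mutant]
    return min(run_agent(p) for p in pop)
```

The sketch keeps only the structure relevant to the claim: multiple agents acting in a shared environment, and a meta-level learner that improves the agents' programs from their observed fitness.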
(F) The result is equal to or better than a result that was considered an achievement in its field at the time it was first discovered.
The research makes significant contributions to the field of machine learning for optimisation, specifically in integrating reinforcement learning to solve optimisation problems.
The AutoMH framework, based on reinforcement learning, enables the automatic generation of viable evolutionary metaheuristic algorithms that perform equal to or better than the 14 evolutionary algorithms considered in the paper:
Bat Algorithm (BAT): Yang, X.S. A new metaheuristic bat-inspired algorithm. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); Springer: Berlin/Heidelberg, Germany, 2010; pp. 65–74.
Cuckoo Search (CS): Yang, X.S.; Deb, S. Cuckoo search via Lévy flights. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; pp. 210–214.
Differential Evolution (DE): Price, K.; Storn, R.M.; Lampinen, J.A. Differential Evolution: A Practical Approach to Global Optimization; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006.
FireFly Algorithm (FFA): Yang, X.S. Firefly Algorithms for Multimodal Optimization. In Stochastic Algorithms: Foundations and Applications; Springer: Berlin/Heidelberg, Germany, 2009; pp. 169–178.
Genetic Algorithm (GA): Whitley, D. A genetic algorithm tutorial. Stat. Comput. 1994, 4, 65–85.
Grey Wolf Optimiser (GWO): Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
Harris Hawks Optimization (HHO): Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872.
Jaya algorithm (JAYA): Rao, R. Jaya: A simple and new optimization algorithm for solving constrained and unconstrained optimization problems. Int. J. Ind. Eng. Comput. 2016, 7, 19–34.
Moth-Flame Optimization (MFO): Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249.
Multi-Verse Optimiser (MVO): Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-Verse Optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2015, 27, 495–513.
Particle Swarm Optimisation (PSO): Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995.
Sine Cosine Optimization Algorithm (SCA): Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133.
Salp Swarm Algorithm (SSA): Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191.
Whale Optimization Algorithm (WOA): Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
(G) The result solves a problem of indisputable difficulty in its field.
The research focuses on solving two problems:
(1) Design of metaheuristic algorithms: Designing effective metaheuristic algorithms is a non-trivial task that requires domain knowledge, experimentation, and experience.
The author's approach seeks to automate this process by creating metaheuristic algorithms through reinforcement learning, which addresses the difficulty of algorithm design.
It is recognized in the research that finding an appropriate metaheuristic algorithm with adequate configurations to solve a set of optimization problems is a problem in itself.
(2) Continuous Domain Optimization Problems: Continuous domain optimization problems often involve optimizing variables that can take on any real value within a specified range.
Such problems can be challenging due to the infinite search space and the need for carefully designed optimisation techniques.
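As a minimal concrete example of such a continuous-domain problem, the sketch below minimises the Rastrigin function, a standard multimodal benchmark of the kind used in the paper's experiments. The solver shown is a generic (1+1) evolution strategy chosen purely for illustration, not the algorithm generated by AutoMH.

```python
import math
import random

def rastrigin(x):
    """Rastrigin function: a classic multimodal continuous benchmark.
    Global minimum 0 at the origin; many local minima elsewhere."""
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

def one_plus_one_es(f, dim=2, sigma=0.5, iters=2000, seed=42):
    """Minimal (1+1) evolution strategy over a real-valued search space:
    perturb the current point with Gaussian noise and keep improvements."""
    rng = random.Random(seed)
    x = [rng.uniform(-5.12, 5.12) for _ in range(dim)]
    fx = f(x)
    for _ in range(iters):
        y = [v + rng.gauss(0, sigma) for v in x]
        fy = f(y)
        if fy <= fx:
            x, fx = y, fy
    return fx
```

Because every coordinate may take any real value in its range, the search space is uncountably infinite; the solver can only sample it, which is why the design of the sampling strategy (the metaheuristic) matters so much.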
7. a full citation of the paper (that is, author names; title, publication date; name of journal, conference, or book in which article appeared; name of editors, if applicable, of the journal or edited book; publisher name; publisher city; page numbers, if applicable);
Almonacid, B. (2022). AutoMH: Automatically Create Evolutionary Metaheuristic Algorithms Using Reinforcement Learning. Entropy, 24(7), 957.
DOI: https://doi.org/10.3390/e24070957
8. a statement either that "any prize money, if any, is to be divided equally among the co-authors" OR a specific percentage breakdown as to how the prize money, if any, is to be divided among the co-authors;
The article has just one author.
9. a statement stating why the authors expect that their entry would be the "best," and
The author expects his approach to be the "best" because it offers the automatic generation of metaheuristic algorithms, employs an evolutionary process, incorporates multi-agent reinforcement learning, includes analysis and modification processes, has demonstrated successful results, and has implications for online evolution. These features distinguish the approach from existing methods and make it a strong contender for solving optimisation problems effectively.
10. An indication of the general type of genetic or evolutionary computation used, such as GA (genetic algorithms), GP (genetic programming), ES (evolution strategies), EP (evolutionary programming), LCS (learning classifier systems), GI (genetic improvement), GE (grammatical evolution), GEP (gene expression programming), DE (differential evolution), etc.
AutoMH (Automatic Metaheuristic) combined with RL (Reinforcement Learning)
11. The date of publication of each paper. If the date of publication is not on or before the deadline for submission, but instead, the paper has been unconditionally accepted for publication and is “in press” by the deadline for this competition, the entry must include a copy of the documentation establishing that the paper meets the "in press" requirement.
Date of publication: 10 July 2022