Entry to the 18th Annual Humies Awards for Human-Competitive Results

-----------------------------------------------------------

1. The complete title of one (or more) paper(s) published in the open literature describing the work that the author claims describes a human-competitive result

'Discovering Representations for Black-box Optimization'

-----------------------------------------------------------

2. The name, complete physical mailing address, e-mail address, and phone number of EACH author of EACH paper(s)

The following provides the contact information for each of the authors involved in the work presented for consideration.

Adam Gaier (Corresponding Author)
Gerhard-Samuel-Strasse 12, 53129 Bonn, Germany
Email Address: adam.gaier@autodesk.com

Alexander Asteroth
Bonn-Rhein-Sieg University of Applied Sciences
Grantham-Allee 20, 53757 Sankt Augustin, Germany
Email Address: alexander.asteroth@h-brs.de

Jean-Baptiste Mouret
Inria Nancy - Grand Est
615 Rue du Jardin-Botanique, 54600 Villers-lès-Nancy, France
Email Address: jean-baptiste.mouret@inria.fr

-----------------------------------------------------------

3. The name of the corresponding author (i.e., the author to whom notices will be sent concerning the competition)

The corresponding author is Adam Gaier.

-----------------------------------------------------------

4. The abstract of the paper(s)

The encoding of solutions in black-box optimization is a delicate, handcrafted balance between expressiveness and domain knowledge -- between exploring a wide variety of solutions, and ensuring that those solutions are useful. Our main insight is that this process can be automated by generating a dataset of high-performing solutions with a quality diversity algorithm (here, MAP-Elites), then learning a representation with a generative model (here, a Variational Autoencoder) from that dataset. Our second insight is that this representation can be used to scale quality diversity optimization to higher dimensions -- but only if we carefully mix solutions generated with the learned representation and those generated with traditional variation operators. We demonstrate these capabilities by learning a low-dimensional encoding for the inverse kinematics of a thousand-joint planar arm. The results show that learned representations make it possible to solve high-dimensional problems with orders of magnitude fewer evaluations than the standard MAP-Elites, and that, once solved, the produced encoding can be used for rapid optimization of novel, but similar, tasks. The presented techniques not only scale up quality diversity algorithms to high dimensions, but show that black-box optimization encodings can be automatically learned, rather than hand designed.
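To make the first stage described above concrete, the sketch below shows MAP-Elites building an archive of diverse, high-performing solutions. It is an illustrative sketch only, not the authors' implementation: the toy 20-joint planar arm, the grid resolution, the mutation scale, and all other specifics are assumptions chosen for readability.

    import numpy as np

    # Toy domain (assumed for illustration): planar arm with N unit-length links.
    # Behavior descriptor: (x, y) position of the end effector.
    # Fitness: smoothness of the pose (low variance of joint angles).
    N_JOINTS = 20        # the paper scales a similar arm to on the order of 1000 joints
    GRID = 32            # behavior space discretized into a GRID x GRID archive
    N_EVALS = 50_000

    def evaluate(genome):
        angles = np.cumsum(genome)                   # absolute link angles
        x = np.cos(angles).sum() / N_JOINTS          # end-effector x, in [-1, 1]
        y = np.sin(angles).sum() / N_JOINTS          # end-effector y, in [-1, 1]
        return -np.var(genome), (x, y)               # smoother poses score higher

    def to_cell(behavior):
        # map a behavior descriptor to an archive cell index
        idx = ((np.asarray(behavior) + 1.0) / 2.0 * GRID).astype(int)
        return tuple(np.clip(idx, 0, GRID - 1))

    rng = np.random.default_rng(0)
    fitness_grid = np.full((GRID, GRID), -np.inf)
    genome_grid = np.zeros((GRID, GRID, N_JOINTS))

    for i in range(N_EVALS):
        if i < 1000:                                              # random bootstrap
            genome = rng.uniform(-np.pi, np.pi, N_JOINTS)
        else:                                                     # mutate a random elite
            filled = np.argwhere(np.isfinite(fitness_grid))
            parent = genome_grid[tuple(filled[rng.integers(len(filled))])]
            genome = parent + rng.normal(0.0, 0.1, N_JOINTS)
        fitness, behavior = evaluate(genome)
        cell = to_cell(behavior)
        if fitness > fitness_grid[cell]:                          # per-cell elitism
            fitness_grid[cell] = fitness
            genome_grid[cell] = genome

    # The filled cells form the dataset of high-performing solutions used to
    # train the generative model (a VAE in the paper) in the second stage.
    archive = genome_grid[np.isfinite(fitness_grid)]

The paper's full algorithm additionally mixes this kind of mutation with variation through the learned representation; that interplay is omitted here for brevity.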
-----------------------------------------------------------

5. A list containing one or more of the eight letters (A, B, C, D, E, F, G, or H) that correspond to the criteria (see above) that the author claims that the work satisfies

(G) The result solves a problem of indisputable difficulty in its field.

-----------------------------------------------------------

6. A statement stating why the result satisfies the criteria that the contestant claims (see examples of statements of human-competitiveness as a guide to aid in constructing this part of the submission)

G ("The result solves a problem of indisputable difficulty in its field.")

Encodings are of such critical importance to black-box optimization that the design of the representation, rather than the search algorithm, is often the focus of research and the key to solving a problem. Yet the creation of these encodings has resisted automation -- instead they are painstakingly hand-crafted for each domain and specific application. Our technique provides a way of automating this process by exploiting the capabilities of evolutionary optimization to produce a variety of high-performing solutions, and of generative machine learning models to uncover patterns in data and generalize their underlying factors. Generative models trained on data produced by evolution can then be used as learned representations fit to a class of problem.

In our test case of solving inverse kinematics for high-dimensional robot arms (with hundreds of joints), a black-box optimizer (CMA-ES) searching the computer-designed representation, rather than a human-designed one, found solutions that 1) were more than an order of magnitude more accurate for the same evaluation budget, and 2) followed constraints (smoothness) without any explicit reward for doing so.

The ability to implicitly follow constraints is perhaps even more impactful than the ability to find higher-performing solutions. A computer-generated representation which learns to encode only legal solutions removes the need for explicit and algorithm-specific constraint-handling mechanisms.

-----------------------------------------------------------

7. A full citation of the paper (that is, author names; publication date; name of journal, conference, technical report, thesis, book, or book chapter; name of editors, if applicable, of the journal or edited book; publisher name; publisher city; page numbers, if applicable)

Gaier, A., Asteroth, A., & Mouret, J.-B. (2020, June). Discovering representations for black-box optimization. In Proceedings of the 2020 Genetic and Evolutionary Computation Conference (GECCO '20) (pp. 103-111). ACM, New York, NY, USA.

-----------------------------------------------------------

8. A statement either that "any prize money, if any, is to be divided equally among the co-authors" OR a specific percentage breakdown as to how the prize money, if any, is to be divided among the co-authors

Any prize money, if any, is to be divided equally among the co-authors.

-----------------------------------------------------------

9. A statement stating why the authors expect that their entry would be the "best"

- By discovering encodings for optimization, our approach automates the portion of black-box optimization which requires the greatest human effort.

- Automating the discovery of encodings shifts the paradigm of representation design from specifying the details of an encoding to specifying its desired characteristics -- and so embodies the goal of automation: freeing humans from tedious work to focus on larger goals.

- Encodings created with our technique are not limited to later use by evolutionary algorithms, but can be searched by any optimization algorithm. As such, the potential for sharing the fruits of such 'representation mining' is far-reaching, as practitioners in any field can gain from using the discovered encodings.
- Other researchers [1,2] have already shown that our approach allows quality diversity systems to scale to the complexity and number of parameters of reinforcement learning domains -- demonstrating that our technique is broadly applicable and that its core ideas are easily reproducible.

[1] Rakicevic, N., Cully, A., & Kormushev, P. (2020). Policy Manifold Search for Improving Diversity-based Neuroevolution. In Beyond Backpropagation: Novel Ideas for Training Neural Architectures Workshop at NeurIPS 2020.

[2] Rakicevic, N., Cully, A., & Kormushev, P. (2021). Policy Manifold Search: Exploring the Manifold Hypothesis for Diversity-based Neuroevolution. In Proceedings of the Genetic and Evolutionary Computation Conference 2021.

-----------------------------------------------------------

10. An indication of the general type of genetic or evolutionary computation used, such as GA (genetic algorithms), GP (genetic programming), ES (evolution strategies), EP (evolutionary programming), LCS (learning classifier systems), GE (grammatical evolution), GEP (gene expression programming), DE (differential evolution), etc.

A Quality Diversity (QD) algorithm was used to create the dataset of diverse and high-performing solutions, which serves as training data for a machine learning algorithm (a generative model). The resulting encoding can then be searched with other black-box optimization techniques -- in our paper an evolution strategy (ES) was used. A minimal illustrative sketch of this second stage is included at the end of this entry.

-----------------------------------------------------------

11. The date of publication of each paper. If the date of publication is not on or before the deadline for submission, but instead, the paper has been unconditionally accepted for publication and is "in press" by the deadline for this competition, the entry must include a copy of the documentation establishing that the paper meets the "in press" requirement.

The paper was published on 25 June 2020.
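As referenced in item 10, the sketch below illustrates the second stage: a variational autoencoder is trained on the genomes of the QD archive, and an evolution strategy then searches the low-dimensional latent space, decoding each latent code into a full joint-angle solution before evaluation. It is illustrative only, not the authors' released code: the toy arm kinematics, the network sizes, the random stand-in for the archive, and the use of the third-party PyTorch and 'cma' packages are assumptions.

    import numpy as np
    import torch
    import torch.nn as nn
    import cma   # assumption: the third-party 'cma' package (pip install cma)

    class VAE(nn.Module):
        # small variational autoencoder over full-dimensional genomes
        def __init__(self, n_params, n_latent):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(n_params, 128), nn.ReLU())
            self.mu = nn.Linear(128, n_latent)
            self.logvar = nn.Linear(128, n_latent)
            self.dec = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(),
                                     nn.Linear(128, n_params))

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
            return self.dec(z), mu, logvar

    def train_vae(genomes, n_latent=10, epochs=500, lr=1e-3):
        x = torch.tensor(genomes, dtype=torch.float32)
        vae = VAE(x.shape[1], n_latent)
        opt = torch.optim.Adam(vae.parameters(), lr=lr)
        for _ in range(epochs):
            recon, mu, logvar = vae(x)
            rec = ((recon - x) ** 2).sum(dim=1).mean()       # reconstruction error
            kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1).mean()
            loss = rec + kl
            opt.zero_grad()
            loss.backward()
            opt.step()
        return vae

    def ik_error(genome, target):
        # toy planar-arm forward kinematics with unit links (illustrative only)
        angles = np.cumsum(genome)
        tip = np.array([np.cos(angles).sum(), np.sin(angles).sum()]) / len(genome)
        return float(np.linalg.norm(tip - np.asarray(target)))

    def search_latent(vae, n_latent, target, max_evals=2000):
        # CMA-ES searches the learned low-dimensional encoding; the decoder maps
        # each latent code back to a full-dimensional joint-angle solution
        es = cma.CMAEvolutionStrategy(n_latent * [0.0], 0.5)
        evals = 0
        while not es.stop() and evals < max_evals:
            zs = es.ask()
            with torch.no_grad():
                genomes = [vae.dec(torch.tensor(z, dtype=torch.float32)).numpy()
                           for z in zs]
            es.tell(zs, [ik_error(g, target) for g in genomes])
            evals += len(zs)
        return np.asarray(es.result.xbest)

    # random stand-in for the archive produced by the quality diversity stage
    genomes = np.random.uniform(-np.pi, np.pi, size=(500, 20))
    vae = train_vae(genomes, n_latent=10)
    best_z = search_latent(vae, n_latent=10, target=(0.5, 0.5))
    with torch.no_grad():
        best_genome = vae.dec(torch.tensor(best_z, dtype=torch.float32)).numpy()

Because the decoder is trained only on high-performing, smooth archive solutions, candidates sampled from this latent space tend to respect those regularities without explicit constraint handling, which is the behavior described in item 6.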