1. Title of the Publication

Unlocking the Potential of Global Human Expertise

2. Author Information

Elliot Meyerson^1 - elliot.meyerson@cognizant.com
Olivier Francon^1 - olivier.francon@cognizant.com
Darren Sargent^1 - darren.sargent@cognizant.com
Babak Hodjat^1 - babak@cognizant.com
Risto Miikkulainen^1,2 - risto@cs.utexas.edu

^1 Cognizant AI Lab, San Francisco, CA, USA
^2 The University of Texas at Austin, Austin, TX, USA

3. Corresponding Author

Elliot Meyerson, elliot.meyerson@cognizant.com

4. Abstract

Solving societal problems on a global scale requires the collection and processing of ideas and methods from diverse sets of international experts. As the number and diversity of human experts increase, so does the likelihood that elements in this collective knowledge can be brought together to discover novel and better solutions. However, it is difficult to identify, combine, and refine complementary information in an increasingly large and diverse knowledge base. This paper argues that evolutionary AI can play a crucial role in this process. An evolutionary AI framework, termed RHEA, fills this role by distilling knowledge from diverse models created by human experts into equivalent neural networks, which are then recombined and refined in a population-based search. The framework was implemented in a formal synthetic domain, demonstrating that it is transparent and systematic. It was then applied to the results of the XPRIZE Pandemic Response Challenge, in which over 100 teams of experts across 23 countries submitted models based on diverse methodologies to predict COVID-19 cases and suggest non-pharmaceutical intervention policies for 235 nations, states, and regions across the globe. Building upon this expert knowledge, by recombining and refining the 169 resulting policy suggestion models, RHEA discovered a broader and more effective set of policies than either AI or human experts alone, as evaluated based on real-world data. The results thus suggest that AI can play a crucial role in realizing the potential of human expertise in global problem-solving.
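To make the approach concrete before turning to the criteria, the following is a minimal, illustrative sketch of the distill-recombine-refine loop summarized in the abstract. All names, sizes, the toy two-objective evaluation, and the simple Pareto-based selection are placeholders invented for this sketch; the paper's actual implementation distills the 169 submitted policy models into neural networks and refines them with the population-based search described in the publication.

    # Minimal, illustrative sketch of a RHEA-style distill / recombine / refine loop.
    # All names, sizes, and the toy two-objective evaluation below are placeholders
    # invented for this sketch; they are not the paper's actual code or data.
    import numpy as np

    rng = np.random.default_rng(0)
    N_WEIGHTS = 32      # flattened weights of one distilled policy network
    N_EXPERTS = 10      # number of expert-submitted models to distill
    POP_SIZE = 40
    GENERATIONS = 50

    def distill_expert(expert_id):
        """Stand-in for distilling one expert model into an equivalent neural network
        (in RHEA, a network is trained to imitate the expert's policy; here we simply
        return a random weight vector)."""
        return rng.normal(size=N_WEIGHTS)

    def evaluate(weights):
        """Toy surrogate for two conflicting objectives, (predicted cases,
        intervention stringency), both minimized. The real framework instead rolls
        each policy network through an epidemiological predictor."""
        stringency = float(np.mean(np.abs(weights)))
        cases = float(1.0 / (1e-3 + stringency) + 0.1 * np.sum(weights ** 2))
        return cases, stringency

    def dominates(a, b):
        """True if objective vector a Pareto-dominates b (minimization)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(population, scores):
        """Return the non-dominated members of the population."""
        return [population[i] for i, s in enumerate(scores)
                if not any(dominates(t, s) for j, t in enumerate(scores) if j != i)]

    def crossover(parent_a, parent_b):
        """Uniform crossover of two weight vectors."""
        mask = rng.random(N_WEIGHTS) < 0.5
        return np.where(mask, parent_a, parent_b)

    def mutate(weights, rate=0.1, scale=0.1):
        """Add Gaussian noise to a random subset of weights."""
        return weights + (rng.random(N_WEIGHTS) < rate) * rng.normal(scale=scale, size=N_WEIGHTS)

    # 1) Distill: seed the population with networks distilled from the expert models.
    population = [distill_expert(i) for i in range(N_EXPERTS)]

    # 2) Recombine and refine: population-based multi-objective search.
    for gen in range(GENERATIONS):
        scores = [evaluate(w) for w in population]
        elites = pareto_front(population, scores)[:POP_SIZE]   # keep non-dominated policies
        children = []
        while len(elites) + len(children) < POP_SIZE:
            i, j = rng.choice(len(elites), size=2)
            children.append(mutate(crossover(elites[i], elites[j])))
        population = elites + children

    final_front = pareto_front(population, [evaluate(w) for w in population])
    print("Final Pareto front size:", len(final_front))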
5. Criteria the author claims the work satisfies

(B) The result is equal to or better than a result that was accepted as a new scientific result at the time when it was published in a peer-reviewed scientific journal.

(C) The result is equal to or better than a result that was placed into a database or archive of results maintained by an internationally recognized panel of scientific experts.

(D) The result is publishable in its own right as a new scientific result independent of the fact that the result was mechanically created.

(G) The result solves a problem of indisputable difficulty in its field.

(H) The result holds its own or wins a regulated competition involving human contestants (in the form of either live human players or human-written computer programs).

6. Why the result satisfies the criteria

The main argument is that RHEA was evaluated against all human-developed systems submitted to a large-scale international competition (H): the XPRIZE Pandemic Response Challenge (https://www.xprize.org/challenge/pandemicresponse). The competition had a $500K prize fund; over 100 teams participated across 23 countries; entries were evaluated by a panel of internationally recognized judges; and a diverse set of organizations provided support and partnership, including Oxford University, the United Nations ITU, AWS, and the City of Los Angeles. This broad international engagement supports the well-accepted view that effective pandemic response is a problem of indisputable difficulty (G).

The results of the XPRIZE competition were placed in a database (C), and it was from this database that the human-developed submissions were sourced for RHEA. RHEA not only outperformed each team's submissions individually, but also outperformed their *union*, i.e., the combined Pareto front across all human-developed submissions. This result was achieved not only on the metric used in the competition (Fig. 1g), but also across several other key metrics (Figs. 1e-1i). Note that RHEA was not a participant in the official competition: our team assisted XPRIZE (https://www.xprize.org) in designing and developing the competition in order to solicit diverse, high-quality solutions from human experts. RHEA was then applied to unlock latent potential in this global human expertise.

Several of the human-developed submissions to the XPRIZE have been published in peer-reviewed journals (B). The final result of RHEA, i.e., the neural networks in the final Pareto front representing optimized intervention policy strategies, is also publishable in its own right as a new scientific result (D): in this process, RHEA discovered several features of effective pandemic response policies, such as swing, separability, focus, agility, and periodicity, which can be seen as an independent result in the social sciences (Fig. 4).

7. Full Citation

Elliot Meyerson, Olivier Francon, Darren Sargent, Babak Hodjat, and Risto Miikkulainen. 2024. "Unlocking the Potential of Global Human Expertise". Advances in Neural Information Processing Systems (NeurIPS), vol. 37, pp. 119227-119259.

8. Prize money, if any, will be divided equally among the co-authors.

9. Why this entry could be the "best"

First, RHEA outperformed the 169 human-developed policy models from diverse human-expert teams submitted to a large-scale, well-funded international competition. Moreover, it did not just outperform each human solution individually, but also their *union*, i.e., the combined Pareto front of all human submissions (this comparison is sketched schematically at the end of this section).

Second, the justification extends beyond measurable performance in that RHEA represents a fundamentally new kind of Humies winner. By unlocking the potential of diverse human expertise and attributing the value back to humans, RHEA explicitly highlights the value of continued human-driven idea development instead of foreshadowing its obsolescence. All prior Humies winners rely on foundational ideas from humans: the priors that human researchers bring to developing Humies-winning systems are grounded in a deep knowledge of what might work, which in turn is based on what humans have tried before. The power of evolution comes from its ability to recombine and refine such prior ideas in an automated and scalable way. RHEA makes this process explicit by soliciting diverse expertise from humans and then letting evolution recombine and refine it as optimally as possible. Evolution as a computational tool is uniquely suited to this end, because it explicitly recombines and refines existing solutions rather than averaging them into an opaque statistical mush, as happens in standard machine learning. Indeed, by tracing back through evolutionary trajectories, the results show that some submitted human ideas that were initially low-performing end up making outsized contributions to the final optimized Pareto front. RHEA is thus able to *unlock latent potential* in human expertise, potential that would likely be overlooked by a human designer.

Third, unlike many Humies winners, which are heavily tuned to a specific application, RHEA is a general framework: it can be used to create solutions that exceed human performance in many further domains in which diverse human expertise can be gathered.
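To make the *union* comparison in Section 6 and in the first point above concrete: the claim is that RHEA's final Pareto front dominates the Pareto front computed over all human-submitted policies pooled together, not merely each team's front in isolation. The sketch below shows the form of that comparison using two illustrative objectives (here labeled cases and stringency); the numerical values are invented for illustration and are not the competition data.

    # Schematic of the "union" comparison described above. The (cases, stringency)
    # values are invented for illustration only; they are not the competition data.
    def dominates(a, b):
        """a Pareto-dominates b: no worse in all objectives, strictly better in one
        (both objectives minimized)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(points):
        """Non-dominated subset of a set of objective vectors."""
        return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

    # Hypothetical (cases, stringency) trade-off points from three human teams and RHEA.
    team_a = [(9.0, 1.0), (6.0, 3.0)]
    team_b = [(8.0, 2.0), (6.5, 3.5)]
    team_c = [(7.0, 2.5), (5.0, 5.0)]
    rhea   = [(8.5, 0.9), (6.5, 1.8), (5.5, 2.8), (4.5, 4.5)]

    # The baseline is not any single team but the Pareto front of ALL submissions pooled.
    human_union_front = pareto_front(team_a + team_b + team_c)

    # RHEA "outperforms the union" if every point on that combined front is dominated
    # by some point on RHEA's front.
    outperforms = all(any(dominates(r, h) for r in rhea) for h in human_union_front)
    print("Combined human Pareto front:", sorted(human_union_front))
    print("RHEA dominates the combined front:", outperforms)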
10. Type of Evolutionary Computation Used

Genetic Algorithm (GA), Neuroevolution (NE)

11. Publication Date

October 30, 2024

Sincerely,
Elliot