1. Paper title: A search-based framework for automatic generation of testing environments for cyber-physical systems; AmbieGen tool at the SBST 2022 Tool Competition

2. Author contact details:
Dmytro Humeniuk, dmytro.humeniuk@polymtl.ca, 5719 Northmount Ave., Montréal, H3S 3H4, Canada, +514 238 9320
Giuliano Antoniol, giuliano.antoniol@polymtl.ca, Polytechnique Montréal, 2500 Chem. de Polytechnique, Montréal, QC H3T 1J4, Canada, +1 514 652 6588
Foutse Khomh, foutse.khomh@polymtl.ca, Polytechnique Montréal, 2500 Chem. de Polytechnique, Montréal, QC H3T 1J4, Canada, +1 514 651 5589

3. Corresponding author: Dmytro Humeniuk, dmytro.humeniuk@polymtl.ca, Polytechnique Montréal, Montréal, Canada

4. First paper abstract:
Background: Many modern cyber–physical systems incorporate computer vision technologies, complex sensors, and advanced control software, allowing them to interact with the environment autonomously. Examples include drone swarms, self-driving vehicles, and autonomous robots. Testing such systems poses numerous challenges: not only must the system inputs be varied, but the surrounding environment must also be accounted for. A number of tools have been developed to test a system model against possible inputs falsifying its requirements. However, they are not directly applicable to autonomous cyber–physical systems, as the inputs to their models are generated while operating in a virtual environment.
Aims: In this paper, we aim to design a search-based framework, named AmbieGen, for generating diverse fault-revealing test scenarios for autonomous cyber–physical systems. The scenarios represent an environment in which an autonomous agent operates. The framework should be applicable to generating different types of environments.
Methods: To generate the test scenarios, we leverage the NSGA-II algorithm with two objectives. The first objective evaluates the deviation of the observed system behaviour from its expected behaviour.
The second objective is the test case diversity, calculated as the Jaccard distance from a reference test case. To guide the first objective, we use a simplified system model rather than the full model. The full model is used to run the system in the simulation environment and can take substantial time to execute (several minutes for one scenario). The simplified model is derived from the full model and can be used to approximate the results of the full model without running the simulation.
Results: We evaluate AmbieGen on three scenario generation case studies: a smart thermostat, a robot obstacle avoidance system, and a vehicle lane-keeping assist system. For all the case studies, our approach outperforms the available baselines in fault revealing as well as in several other metrics, such as the diversity of the revealed faults and the proportion of valid test scenarios.
Conclusion: AmbieGen could find scenarios revealing failures for all three autonomous agents considered in our case studies. We compared three configurations of AmbieGen: based on a single-objective genetic algorithm, a multi-objective algorithm, and random search. Both the single- and multi-objective configurations outperform random search. The multi-objective configuration can find individuals of the same quality as the single-objective one while producing more unique test scenarios within the same time budget. Our framework can be used to generate virtual environments of different types and complexity and to reveal the system's faults early in the design stage.

Second paper abstract:
AmbieGen is a tool for generating test cases for cyber-physical systems (CPS). In the context of the SBST 2022 CPS tool competition, it has been adapted to generate virtual roads for testing a car lane-keeping assist system. AmbieGen leverages a two-objective NSGA-II algorithm to produce the test cases.
It has achieved the highest final score, accounting for test case efficiency, effectiveness, and diversity, in both testing configurations.

5. Relevant criteria: B, C, D, H

6. Statement:
(B) The result is equal to or better than a result that was accepted as a new scientific result at the time when it was published in a peer-reviewed scientific journal.
In our third case study, we generated test scenarios for a self-driving car's LKAS system. We used the test generation infrastructure provided by the SBST 2021 tool competition (https://sbst21.github.io/) and followed the same evaluation set-up used in the competition. According to the obtained results, given a time budget of 2 hours, our approach could reveal the same number of failures as the state-of-the-art tool Frenetic [1], the best-performing tool at that competition. AmbieGen could also achieve the same test case diversity while producing a higher proportion of valid test cases. The results of the competition are outlined in [2] and our results are presented in [6].
(C) The result is equal to or better than a result that was placed into a database or archive of results maintained by an internationally recognized panel of scientific experts.
We submitted our tool, AmbieGen, to the second edition of the cyber-physical systems testing competition, SBST 2022. According to the results of the competition, our tool achieved the highest final score among 5 other competitors. The final score is based on the coverage, test generation efficiency, and test generation effectiveness scores. Our tool revealed 90 failures on average given a 2-hour time budget, compared to 44 for the next best performing tool, FRENETICV. Our tool also achieved the highest test case diversity score, 0.35, compared to 0.276 for the next best performing tool. The description of our tool, with a focus on this competition, will be published in [3]. The results of the competition will be available at the following reference [4].
The competition result archive is maintained by the SBST workshop committee in the following repository [5], which is updated yearly.
(D) The result is publishable in its own right as a new scientific result independent of the fact that the result was mechanically created.
The full description of our AmbieGen approach was published in the Information and Software Technology journal [6], and the replication package is available at the following link [7]. Our submission to the SBST 2022 competition was accepted for publication in the proceedings of SBST 2022 as [3]. The difference between the scores obtained by our tool in publications [6] and [3] is due to differences in the evaluation configuration, adapted to the competition rules.
(H) The result holds its own or wins a regulated competition involving human contestants (in the form of either live human players or human-written computer programs).
Our tool AmbieGen (https://github.com/dgumenyuk/tool-competition-av) was submitted to the SBST 2022 tool competition [8] (the competition report is cited as [4]) and won the competition among 5 other submitted tools. An excerpt of the announcement of the competition winners is available at this link: https://youtu.be/Vwxu6TtzBYs?t=19520

7. First paper:
Dmytro Humeniuk, Foutse Khomh, Giuliano Antoniol, A search-based framework for automatic generation of testing environments for cyber–physical systems, Information and Software Technology, Volume 149, 2022, 106936, ISSN 0950-5849, https://doi.org/10.1016/j.infsof.2022.106936 (https://www.sciencedirect.com/science/article/pii/S0950584922000866).
Second paper:
Dmytro Humeniuk, Giuliano Antoniol, and Foutse Khomh. 2022. AmbieGen tool at the SBST 2022 Tool Competition. In The 15th Search-Based Software Testing Workshop (SBST '22), May 9, 2022, Pittsburgh, PA, USA. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3526072.3527531

8. Any prize money, if any, is to be divided equally among the co-authors.

9.
"Best" statement:
* We developed a state-of-the-art approach for testing a vehicle LKAS system using evolutionary search.
* Our tool produces a set of diverse road topologies that maximize the difficulty for the autonomous vehicle agent. Testing such agents with our tool will now be more effective.
* Our tool significantly reduces the human effort needed to design test scenarios for a vehicle LKAS system.

10. Methods used: GA (genetic algorithms), NSGA-II multi-objective genetic algorithm

11. The date of publication of each paper:
[6] Published online on 5 May 2022.
[3] To be published in the proceedings of the SBST 2022 workshop.

References
1. E. Castellano, A. Cetinkaya, C. H. Thanh, S. Klikovits, X. Zhang and P. Arcaini, "Frenetic at the SBST 2021 Tool Competition," 2021 IEEE/ACM 14th International Workshop on Search-Based Software Testing (SBST), 2021, pp. 36-37, doi: 10.1109/SBST52555.2021.00016.
2. S. Panichella, A. Gambi, F. Zampetti and V. Riccio, "SBST Tool Competition 2021," 2021 IEEE/ACM 14th International Workshop on Search-Based Software Testing (SBST), 2021, pp. 20-27, doi: 10.1109/SBST52555.2021.00011.
3. Dmytro Humeniuk, Giuliano Antoniol, and Foutse Khomh. 2022. AmbieGen tool at the SBST 2022 Tool Competition. In The 15th Search-Based Software Testing Workshop (SBST '22), May 9, 2022, Pittsburgh, PA, USA. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3526072.3527531
4. Alessio Gambi, Gunel Jahangirova, Vincenzo Riccio, and Fiorella Zampetti. 2022. SBST Tool Competition 2022. In The 15th Search-Based Software Testing Workshop (SBST '22), May 9, 2022, Pittsburgh, PA, USA. ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/3526072.3527538
5. Online resource: https://github.com/se2p/tool-competition-av
6.
Dmytro Humeniuk, Foutse Khomh, Giuliano Antoniol, A search-based framework for automatic generation of testing environments for cyber–physical systems, Information and Software Technology, Volume 149, 2022, 106936, ISSN 0950-5849, https://doi.org/10.1016/j.infsof.2022.106936 (https://www.sciencedirect.com/science/article/pii/S0950584922000866).
7. Online resource: https://github.com/dgumenyuk/Environment_generation.git
8. Online resource: https://sbst22.github.io/
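Appendix: the two search objectives named in the abstracts above (deviation of observed from expected behaviour, and test case diversity as a Jaccard distance from a reference test case) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; the function names, the scenario encoding as attribute sets, and the max-absolute-error deviation metric are all assumptions for illustration.

```python
def jaccard_distance(a, b):
    """Diversity objective: 1 - |A ∩ B| / |A ∪ B| between the
    discretised attributes of two test scenarios (higher = more diverse)."""
    sa, sb = set(a), set(b)
    if not sa and not sb:
        return 0.0
    return 1.0 - len(sa & sb) / len(sa | sb)


def deviation_objective(observed, expected):
    """Fault-revealing objective: deviation of the (simplified) model's
    observed behaviour from the expected behaviour. Here: max absolute
    error over a trace; the real metric depends on the case study."""
    return max(abs(o - e) for o, e in zip(observed, expected))


# Toy example: a candidate road scenario and a reference scenario,
# each encoded as a set of road-segment attributes (hypothetical encoding).
reference = {"straight", "left_30", "right_60"}
candidate = {"straight", "left_90", "right_60", "left_30"}

diversity = jaccard_distance(candidate, reference)
deviation = deviation_objective([0.1, 0.5, 1.2], [0.0, 0.0, 0.0])

print(round(diversity, 2))  # 0.25: candidate shares 3 of 4 distinct attributes
print(deviation)            # 1.2
```

In an NSGA-II setting (e.g. as described in the abstracts), both quantities would be maximised jointly, so the Pareto front trades off scenarios that provoke large behavioural deviations against scenarios that differ most from those already found.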