1. Titles of the papers:
   a. Paper a: Evolving Neural Architecture Using One Shot Model.
   b. Paper b: Neural Architecture Search using Progressive Evolution.
   c. Paper c: Novelty Driven Evolutionary Neural Architecture Search (poster).
2. Authors: All three papers have the same two authors.
   a. Nilotpal Sinha, nilotpalsinha.cs06g@nctu.edu.tw, nightstorm1990@gmail.com, EC223B, National Chiao Tung University, 1001 University Road, East Dist., Hsinchu City 300, Taiwan (R.O.C.)
   b. Kuan-Wen Chen, kuanwen@cs.nctu.edu.tw, EC223B, National Chiao Tung University, 1001 University Road, East Dist., Hsinchu City 300, Taiwan (R.O.C.)
3. Corresponding author:
   a. Nilotpal Sinha, nilotpalsinha.cs06g@nctu.edu.tw, nightstorm1990@gmail.com
4. Abstracts:
   a. Paper a: Evolving Neural Architecture Using One Shot Model
      Previous evolution-based architecture search methods require high computational resources, resulting in long search times. In this work, we propose a novel way of applying a simple genetic algorithm to the neural architecture search problem, called EvNAS (Evolving Neural Architecture using One Shot Model), which reduces the search time significantly while still achieving better results than previous evolution-based methods. The architectures are represented by the architecture parameters of a one shot model, which results in weight sharing among the given population of architectures and also weight inheritance from one generation of architectures to the next. We use the accuracy of a partially trained architecture on validation data as a prediction of its fitness in order to reduce the search time. We also propose a decoding technique for the architecture parameters which diverts the majority of the gradient information towards the given architecture and also improves the fitness prediction of the given architecture from the one shot model during the search process. EvNAS searches for an architecture on CIFAR-10 in 3.83 GPU days on a single GPU with a top-1 test error of 2.47%, which is then transferred to CIFAR-100 and ImageNet, achieving a top-1 error of 16.37% and a top-5 error of 7.4%, respectively.
   b. Paper b: Neural Architecture Search using Progressive Evolution
      Vanilla neural architecture search using evolutionary algorithms (EA) involves evaluating each architecture by training it from scratch, which is extremely time-consuming. This can be reduced by using a supernet to estimate the fitness of every architecture in the search space, owing to its weight-sharing nature. However, the estimated fitness is very noisy due to the co-adaptation of the operations in the supernet. In this work, we propose a method called pEvoNAS wherein the whole neural architecture search space is progressively reduced to smaller search space regions containing good architectures. This is achieved by using a trained supernet for architecture evaluation during the architecture search with a genetic algorithm to find search space regions with good architectures. Upon reaching the final reduced search space, the supernet is then used to search for the best architecture in that search space using evolution. The search is also enhanced by using weight inheritance, wherein the supernet for the smaller search space inherits its weights from the previously trained supernet for the bigger search space. Experimentally, pEvoNAS gives better results on CIFAR-10 and CIFAR-100 while using significantly less computational resources as compared to previous EA-based methods.
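   To make the mechanism shared by papers a and b more concrete, the following is a minimal, illustrative sketch of a genetic algorithm whose fitness values come from a shared supernet/one shot model instead of training every candidate from scratch. It is not the authors' implementation: the operation names, edge count, genetic operators, and the toy stand-in evaluator (supernet_eval / toy_eval) are placeholder assumptions used only to show the structure of the idea.

      import random

      # Hypothetical search-space description: each of N_EDGES edges in a cell
      # picks one of the candidate operations below (names are illustrative only).
      OPS = ["skip_connect", "sep_conv_3x3", "sep_conv_5x5", "max_pool_3x3", "none"]
      N_EDGES = 14

      def random_architecture():
          # An individual is simply a choice of one operation index per edge.
          return [random.randrange(len(OPS)) for _ in range(N_EDGES)]

      def estimate_fitness(arch, supernet_eval):
          # Fitness is the validation accuracy of the architecture as estimated by a
          # shared, partially trained supernet; no training from scratch is involved.
          # supernet_eval is a stand-in for the expensive-to-build one shot model.
          return supernet_eval(arch)

      def mutate(arch, p=0.1):
          # Point mutation: re-sample an edge's operation with probability p.
          return [random.randrange(len(OPS)) if random.random() < p else op
                  for op in arch]

      def evolve(supernet_eval, pop_size=20, generations=30, tournament=3):
          population = [random_architecture() for _ in range(pop_size)]
          for _ in range(generations):
              scored = [(estimate_fitness(a, supernet_eval), a) for a in population]
              next_pop = []
              for _ in range(pop_size):
                  # Tournament selection followed by mutation.
                  parent = max(random.sample(scored, tournament))[1]
                  next_pop.append(mutate(parent))
              population = next_pop
          return max((estimate_fitness(a, supernet_eval), a) for a in population)[1]

      if __name__ == "__main__":
          # Toy stand-in for the supernet: prefers separable convolutions.
          toy_eval = lambda arch: sum(OPS[i].startswith("sep_conv") for i in arch) / N_EDGES
          print(evolve(toy_eval))

   The point of the sketch is that estimate_fitness reuses the supernet's shared weights, so each evaluation is cheap; this is what makes evolutionary search feasible on a single GPU.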
   c. Paper c: Novelty Driven Evolutionary Neural Architecture Search
      Evolutionary algorithm (EA) based neural architecture search (NAS) involves evaluating each architecture by training it from scratch, which is extremely time-consuming. This can be reduced by using a supernet to estimate the fitness of an architecture, owing to the weight sharing among all architectures in the search space. However, the estimated fitness is very noisy due to the co-adaptation of the operations in the supernet, which results in NAS methods getting trapped in local optima. In this paper, we propose a method called NEvoNAS wherein the NAS problem is posed as a multi-objective problem with two objectives: (i) maximize architecture novelty, and (ii) maximize architecture fitness/accuracy. The novelty search is used for maintaining a diverse set of solutions at each generation, which helps avoid local optimum traps, while the architecture fitness is calculated using the supernet. NSGA-II is used for finding the Pareto optimal front for the NAS problem, and the best architecture in the Pareto front is returned as the searched architecture. Experimentally, NEvoNAS gives better results on two different search spaces while using significantly less computational resources as compared to previous EA-based methods.
5. List of criteria:
   a. Paper a (Evolving Neural Architecture Using One Shot Model): B, D, E, F, G
   b. Paper b (Neural Architecture Search using Progressive Evolution): B, D, E, F, G
   c. Paper c (Novelty Driven Evolutionary Neural Architecture Search): B, D, E, F, G
6. Statement of why: The biggest difficulty in the neural architecture search problem is that it is very computationally expensive, i.e. it normally requires multiple GPUs to perform the search.
   a. Paper a: Evolving Neural Architecture Using One Shot Model
      B, F: The method discovers deep CNN architectures while reducing the search time significantly as compared to previous evolution-based methods.
      D, E: The method discovers architectures that beat human-designed architectures such as ResNet (2016), DenseNet (2017) and ShuffleNet (2018) on the computer vision classification task for the CIFAR-10 dataset.
      G: The method performs the search using a single GPU, thus reducing the computation cost significantly. This is done through the use of the one shot model/supernet.
   b. Paper b: Neural Architecture Search using Progressive Evolution
      B, F: The method further reduces the search time, as compared to paper a, for discovering deep CNN architectures while maintaining the same performance.
      D, E: The method discovers architectures that beat human-designed architectures such as ResNet (2016), DenseNet (2017) and ShuffleNet (2018) on the computer vision classification task for the CIFAR-10 dataset.
      G: The method also performs the search using a single GPU and additionally addresses the poor correlation problem of the one shot model/supernet. This further reduces the search time as compared to paper a.
   c. Paper c: Novelty Driven Evolutionary Neural Architecture Search
      B, F: The method further reduces the search time, as compared to papers a and b, for discovering deep CNN architectures while maintaining the same performance.
      D, E: The method discovers architectures that beat human-designed architectures such as ResNet (2016), DenseNet (2017) and ShuffleNet (2018) on the computer vision classification task for the CIFAR-10 dataset.
      G: The method also performs the search using a single GPU and likewise addresses the poor correlation problem of the one shot model/supernet. This further reduces the search time as compared to papers a and b.
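As a supplement to the abstract of paper c (item 4.c), the following is a minimal sketch of its two-objective formulation: maximize novelty, measured here as the mean Hamming distance to the k most similar architectures seen so far, and maximize supernet-estimated fitness, then keep the non-dominated (Pareto) set. The paper itself uses NSGA-II; this sketch only extracts a Pareto front, and the encoding, constants, and toy fitness function are illustrative assumptions rather than the paper's code.

    import random

    N_EDGES, N_OPS, K_NEAREST = 14, 5, 5

    def random_architecture():
        return tuple(random.randrange(N_OPS) for _ in range(N_EDGES))

    def novelty(arch, archive, k=K_NEAREST):
        # Novelty = mean Hamming distance to the k most similar architectures
        # already seen (the archive); higher means less-explored territory.
        if not archive:
            return float(N_EDGES)
        dists = sorted(sum(a != b for a, b in zip(arch, other)) for other in archive)
        return sum(dists[:k]) / min(k, len(dists))

    def pareto_front(candidates):
        # Keep candidates that are not dominated on (novelty, fitness); NSGA-II
        # adds non-dominated sorting ranks, crowding distance and elitism on top.
        front = []
        for i, (n1, f1, a1) in enumerate(candidates):
            dominated = any(n2 >= n1 and f2 >= f1 and (n2 > n1 or f2 > f1)
                            for j, (n2, f2, _) in enumerate(candidates) if j != i)
            if not dominated:
                front.append((n1, f1, a1))
        return front

    if __name__ == "__main__":
        # Toy stand-in for supernet-estimated fitness (illustrative only).
        toy_fitness = lambda arch: sum(op == 1 for op in arch) / N_EDGES
        archive, population = [], [random_architecture() for _ in range(20)]
        scored = [(novelty(a, archive), toy_fitness(a), a) for a in population]
        archive.extend(population)
        best = max(pareto_front(scored), key=lambda t: t[1])  # best fitness on the front
        print(best)

As described in the abstract, the novelty objective maintains diversity in the population, which is what helps the search avoid the local optima caused by noisy supernet estimates.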
7. Full citations:
   a. Paper a: Nilotpal Sinha, Kuan-Wen Chen, “Evolving Neural Architecture Using One Shot Model”, ACM SIGEVO Genetic and Evolutionary Computation Conference (GECCO) 2021.
      @inproceedings{sinha2021evolving,
        title={Evolving neural architecture using one shot model},
        author={Sinha, Nilotpal and Chen, Kuan-Wen},
        booktitle={Proceedings of the Genetic and Evolutionary Computation Conference},
        pages={910--918},
        year={2021}
      }
   b. Paper b: Nilotpal Sinha, Kuan-Wen Chen, “Neural Architecture Search using Progressive Evolution”, ACM SIGEVO Genetic and Evolutionary Computation Conference (GECCO) 2022. (in press)
   c. Paper c: Nilotpal Sinha, Kuan-Wen Chen, “Novelty Driven Evolutionary Neural Architecture Search”, ACM SIGEVO Genetic and Evolutionary Computation Conference (GECCO) 2022. (in press)
8. Any prize money, if any, is to be divided equally among the co-authors.
9. Why the authors expect that their entry would be the "best":
   a. Paper a: Evolving Neural Architecture Using One Shot Model
      In this method, we reduce the search time significantly as compared to previous evolution-based methods, which would take more than 40 days. Our method performed the search in 4 days using a single GPU while outperforming previous methods. Our method also showed how to use a one shot model with evolutionary search.
   b. Paper b: Neural Architecture Search using Progressive Evolution
      Here, we use the one shot model/supernet to select a promising smaller search space within the bigger search space, instead of using the supernet directly to pick the best architecture. This method reduces the search time further (1.3 days) while still maintaining the same performance as paper a.
   c. Paper c: Novelty Driven Evolutionary Neural Architecture Search
      This paper shows how to use novelty search in the neural architecture search (NAS) problem. To the best of our knowledge, novelty search had not previously been used for the NAS problem, as it is a divergent algorithm: novelty search never converges to a solution but only explores the search space. We use this exploration characteristic of novelty search to obtain a diverse set of solutions in a single run using a single GPU, thus reducing the search time further (0.35 days). Note that the performance remains the same as in papers a and b.
   All the architecture searches are performed for deep CNNs, which is computationally expensive for evolutionary methods. All our methods show different ways to reduce this computational bottleneck while providing better results than previous evolution-based methods.
10. All three papers use GA (genetic algorithms).
11. Date of publication:
    a. Paper a (Evolving Neural Architecture Using One Shot Model): 26 June 2021
    b. Paper b (Neural Architecture Search using Progressive Evolution): in press, GECCO 2022
    c. Paper c (Novelty Driven Evolutionary Neural Architecture Search): in press, GECCO 2022
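Finally, to illustrate the progressive search-space reduction described for paper b (items 4.b and 9.b), here is a minimal sketch under assumed details: a toy GA searches the current restricted space, the operations that occur most often in the best architectures found are kept for each edge, and the search repeats on the shrunken space. Weight inheritance between supernets is only noted in a comment; the operation list, stage count, and toy evaluator are assumptions for illustration and not the paper's implementation.

    import random
    from collections import Counter

    OPS = ["skip_connect", "sep_conv_3x3", "sep_conv_5x5", "max_pool_3x3", "none"]
    N_EDGES = 14

    def ga_search(space, estimate_fitness, pop=20, gens=10):
        # Toy GA over a *restricted* space: each edge samples only from its
        # remaining candidate operations. Returns the top architectures found.
        def sample():
            return tuple(random.choice(space[e]) for e in range(N_EDGES))
        population = [sample() for _ in range(pop)]
        for _ in range(gens):
            population.sort(key=estimate_fitness, reverse=True)
            parents = population[:pop // 2]
            population = parents + [
                tuple(random.choice(space[e]) if random.random() < 0.1 else op
                      for e, op in enumerate(random.choice(parents)))
                for _ in range(pop - len(parents))]
        return sorted(population, key=estimate_fitness, reverse=True)[:5]

    def progressively_reduce(estimate_fitness, stages=3, keep=2):
        # Shrink each edge's candidate set to the operations occurring most often
        # in the best architectures of the previous stage. In pEvoNAS the supernet
        # for the reduced space would also inherit weights from the previous one here.
        space = {e: list(OPS) for e in range(N_EDGES)}
        for _ in range(stages):
            top = ga_search(space, estimate_fitness)
            for e in range(N_EDGES):
                counts = Counter(arch[e] for arch in top)
                space[e] = [op for op, _ in counts.most_common(keep)]
        return ga_search(space, estimate_fitness)[0]

    if __name__ == "__main__":
        # Toy stand-in for supernet-estimated fitness (illustrative only).
        toy_eval = lambda arch: sum(op.startswith("sep_conv") for op in arch) / N_EDGES
        print(progressively_reduce(toy_eval))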